Pamela Geller, Breitbart News: Google’s DeepMind – Sharia-Compliant Artificial Intelligence

Google DeepMind is the next generation of artificial intelligence.

Tesla Chief Executive Elon Musk believes that artificial intelligence (AI) will be a threat to people, asking, “Should that be controlled by a few people at Google with no oversight?” And who are they?

 ‘Mark my words — A.I. is far more dangerous than nukes’

Musk warned that artificial intelligence could be our biggest existential threat and believes there should be some regulatory oversight at the national and international level.

Read the whole thing and share widely.

Technological control of our world – how far could it go? And who’s at the top?

A very few people at the top are setting their own agendas for manipulating the world and for what we see of the world.

Many people are well aware that tech giants such as Facebook and Google have immense power over how we receive and share information online. There are quite rightly great and increasing concerns about how these mammoth corporations access, manipulate and profit from our data, and how they shape and control what information comes our way.

A recent report from the UK’s House of Lords also shared the common concerns of many industry insiders and commentators about the dominance of a few tech behemoths. Many worry about free speech online, the policing of “hateful” speech, the manipulation of search results and recommendations, and more. Techniques are being developed to disrupt “extremists” online, with disquietingly loose notions of who counts as an extremist. We must remain acutely aware of this to understand the fight for the information battle-space, and it’s important to connect the dots to understand fully what kind of potential threat we are facing.

These giants do not simply control our world – they control how we even see what world is there to see. They can use powerful psychological nudges to manipulate people. They can mold how we relate to each other. Technology use can even change how the human brain develops.

New technology is a great hope for humanity. With it we can reach isolated communities and individuals, we can spread knowledge and hope, we can organize. But in turn, it can organize us. As artificial intelligence (AI) and machine learning develops and is embedded in tech, its powers will expand, and so, too, will the dangers.

You could say that the human race is currently the subject of an experiment where billions of rats are sitting in little labs pressing levers on all sorts of devices, all over the world. The docile rats have even bought their own equipment.

We are the rats.

But who are the experimenters?

There is currently a great push to examine the ethical issues involved in AI. This is to be welcomed, but there is a critical question about the power that many companies hold in this area, and how far such companies can be held accountable. The concern that a few tech giants dominate is heightened by the concern felt by some insiders that some powerful people in some of these companies are pushing particular agendas of their own. Witness the recent allegations raised by James Damore about the culture at Google. If these allegations have any foundation, there may be serious implications for how this technology is being developed.

In 2014, Google bought the London-based AI company DeepMind for a reported $400 million. At the time, shareholders were promised that DeepMind would provide ethical oversight. So should we stop worrying?

DeepMind’s location means it’s well placed to cream off the best tech talent from Cambridge, London, and Oxford. It has three founders. Demis Hassabis has the usual geek-entrepreneur profile of spectacular and early success in the field, as does Shane Legg. The more perplexing figure is Mustafa Suleyman, the best friend of Hassabis’ younger brother. He dropped out of a degree in philosophy and theology at Oxford in his second year to help set up the Muslim Youth Helpline. Then, at age 22, he was appointed to give policy advice on human rights to the then-Mayor of London, Ken Livingstone. Why Ken couldn’t find someone who had actually finished a degree, had worked in human rights law, or had more than 22 years of life experience to offer, one can only guess. Suleyman was involved in the UK branch of Reos Partners, a mediation organization, and then helped set up DeepMind. It is said that he was an entrepreneur at school, reselling sweets to other kids from an early age. Perhaps that explains his acumen.

“He had always been the ‘well-spoken interlocutor’ at home, helping parse his father’s broken English. As DeepMind’s 30-year-old co-founder and head of applied AI, he’s responsible for integrating the company’s technology across Google’s products — and ensuring clear communication among the top engineers,” Wired writes. Among his tasks at DeepMind, Suleyman has overseen a team looking at YouTube-recommendation personalization – a powerful way of manipulating people used by those who are officially tasked with disruption online.

And he’s in charge of ethics and safety. There were rumblings for years about the invisibility of any work on ethics at DeepMind, and although such work has now started, one can still wonder about Suleyman’s approach to overseeing it.

Suleyman said earlier this year that “there is an emerging consensus that it is the responsibility of those developing new technologies to help address the effects of inequality, injustice and bias.” But these are very broad-brush aims, and somewhat different from each other. There are laws in place that mean certain forms of discrimination must be avoided, for instance – but “addressing inequality” is rather vague. Everything hangs on how these aims are interpreted; accepting “responsibility” can sometimes amount to “seizing the reins” and pushing your own strategies to the front. Suleyman does go on to add, “progress in this area also requires the creation of new mechanisms for decision-making and voicing that include the public directly.” So that could be good, although Suleyman is by no means the only person to say this – indeed, he’s rather late to the table in issuing such a comment.

So how does Suleyman see his own involvement? As overseeing ethics so that the public will be directly involved? Troublingly, he does not seem to understand the difference between ethical oversight of an area and social activism. The latter approach pushes particular agendas for social change, and Suleyman is quite right to see that social activism coupled with far-reaching technological change gives a uniquely powerful mix. He has said: “As someone who started out as a social activist, I can see many examples of people working in tech who are genuinely driven to improve the world.” But it all depends upon what you see as an improvement. And as Suleyman and countless others have pointed out, we can use AI to combat bias, or to incorporate our own biases.

Indeed, DeepMind has been involved in research in conjunction with the Royal Free Hospital, which the Information Commissioner found to be in breach of the Data Protection Act in how it handed over data from 1.6 million patients to DeepMind. The report did not directly criticize DeepMind, because it was the Royal Free that was responsible for the curation of its patients’ data. Nonetheless, DeepMind said, “In our determination to achieve quick impact when this work started in 2015, we underestimated the complexity of the NHS and of the rules around patient data, as well as the potential fears about a well-known tech company working in health.” An artificial intelligence company having trouble understanding complex rules? We have it from their own mouths – a rush to action before judgment. This is what happens when ethics are driven by social activism.

DeepMind, shallow heart?

An appreciation of ethics requires many things, including the ability to think clearly, consistently, and without bias. Suleyman’s involvement in setting up the Muslim Youth Helpline is presented as forming part of the experience that seemingly qualifies him for the job. Naturally, a helpline for troubled young people can be a great benefit, and specific services to particular client groups can be valuable. But the Muslim Youth Helpline appears to try to do more than one thing – to offer something like counseling, and to offer culturally and religiously appropriate responses. Their website states that they are a faith and culturally sensitive organization, and although they do not offer religious advice, “as a faith and culturally sensitive service our volunteers are trained to use hadith and Quranic ayah to give words of comfort where appropriate to the client.”

But it’s easy to find many hadith and passages from the Quran that would be very far from comforting for young people with troubles relating to sex, drugs, sexual identity, gender identity, worries about their beliefs, and even their identity as Muslims. How are the comforting hadith and ayah chosen? This suggests treading a fine line between drawing upon Islamic beliefs and supporting distressed young people. One must suspect a certain cognitive dissonance, and perhaps the same kind of doublethink that blurs the distinction between ethics and activism.

Put some of this together. People working in AI, and in computing technology more generally, can influence what we see online. They can show us things, they can distract us. They can block information, they can silence voices. They can develop algorithms which contain bias, or which eliminate bias – depending, of course, on what is seen as “bias.” They can nudge us to behave in various ways. They can analyze data that can reveal a staggering amount about us. They seem to be claiming the ability to decide who the good guys are, who the bad guys are, what voices are dissent that needs to be crushed. They may be working with governments, as well as in social media companies. And much of this is being carried out within large, extremely rich, extremely powerful corporations, where a few people at the top are setting their own agendas for manipulating the world and for what we see of the world.

15 Comments
old003
5 years ago

Oh goody. A drop-out, well-spoken interlocutor, steeped in the book of sewage, at the controls of the worldwide thought process. What can go wrong?

CrashOverride
5 years ago
Reply to  old003

we get a book entitled “the koran”?

Suresh
5 years ago

Nothing new. After facebook, twitter, Google joins the Islamofascist gang to suppress conservative free speech: http://tinyurl.com/lgp28rs

saudi/qatar/OIC own part of twitter and Fox Network, fund CNN and MSNBC, buy out politicians and bureaucrats in the education dept to push islam in schools/colleges. Easiest way to brainwash and take over a country and shut down free speech!

Sunshine Kid
5 years ago
Reply to  Suresh

Self-driving cars still have accidents.

Walt Parkman
5 years ago

Why should Google and Deep Mind have all the fun? You can make your own autonomous weaponry with very fine AI. The new Nvidia Xavier kit is perfect. $1300. It comes with Nvidia Isaac, which contains an especially neat VR robot training system. Isaac SIM is based on Epic Unreal Engine. The neural network training is done with Musk’s Open AI Gym. Myself, I am making a neat little robot guard dog. As for Google, they are not the only game in town.

robert v g
5 years ago

Methinks Musk lost his mind.

scherado
5 years ago

“The neural network training is done with Musk’s Open AI Gym.”

What is Musk’s Open AI Gym?

Sing On
5 years ago

No surprise here. Remember this brilliant Google decision (http://www.newsweek.com/google-wont-make-ai-murder-people-will-still-help-us-military-965678) made by Sundar Pichai, probably with the advice of Google software developers in India, a country with one of the world’s largest muslim populations? There have to be laws and regulations around this so that “US” software companies cannot become Indian ones and betray the US military. And unless the software helps the US military kill terrorists, it is essentially useless. Sorry, but as I’ve been consistently stating, offshoring to India has only allowed Indian nationalists to control US technology, and it is detrimental to the national security of the US.

R. Arandas
5 years ago

Reminds me of 1984…and how Big Brother is watching you all the time.

Sunshine Kid
5 years ago

The one thing about artificial intelligence is that it has no soul. It does as the programmer intended, only more efficiently. This does not mean “better” by any means, because artificial intelligence has no ability to actually know right from wrong. It is programmed to follow guidelines laid out for it, but cannot begin to know the difference between good and evil. And anything mankind produces has a portion of evil in it, purposely put there or not.

Alleged-Comment
5 years ago

Yes, you should be afraid if AI stands for Arrogance and Ignorance.

Ichabod Crain
5 years ago

“Witness the recent allegations raised by James Damore about the culture at Google. If these allegations have any foundation, there may be serious implications for how this technology is being developed.”
If these allegations have any foundation… Obviously they do. We have endless evidence of that now. There is no “if” about it.

scherado
5 years ago

I took an introductory AI programming class in college and will, occasionally, read articles on this subject. In the class, we used the interpretive form of the programming language PROLOG to write the required software for the class assignments.

Look, again, at that last sentence you just read.

It’s not easy to understand, much less (or more?) explain, Artificial Intelligence: what it is and how it is distinct from “conventional” software (a.k.a. programs, what we now call “apps,” which is an abbreviation of “application programs”).

Make no mistake: the entire world of technology is now run by, and relies on, conventional software.

Now, ask yourself: when a CEO wants to implement (automate) a policy in the product – let’s take facepalm (facebook) as our product, the thing we’re selling – what difference does it make whether the policy is implemented using conventional software or the techniques of artificial intelligence?

If someone decides to permit an AI program (app) to make decisions, then the problem of confirming that the software does what it is intended to do is one and the same no matter what software language or coding techniques are chosen to implement the task.
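
For instance, here is a toy Python sketch (hypothetical names and a made-up policy, not anyone’s real code) of one content policy implemented first as a hand-written rule and then behind a stand-in “model” interface. The confirmation step at the end is the same either way:

# Toy sketch: one content policy, two implementations, one verification step.

BANNED_TERMS = {"rifle", "ammunition"}  # hypothetical policy list

def rule_based_flag(post_text: str) -> bool:
    """Conventional software: an explicit, hand-written rule."""
    return any(term in post_text.lower() for term in BANNED_TERMS)

class ToyModel:
    """Stand-in for a trained classifier (the 'AI' implementation)."""
    def score(self, post_text: str) -> float:
        # A real model would be learned from data; this toy just mimics the rule.
        return 1.0 if rule_based_flag(post_text) else 0.0

def model_based_flag(post_text: str, model: ToyModel = ToyModel()) -> bool:
    """'AI' version: a classifier's score stands in for the rule."""
    return model.score(post_text) > 0.5

def verify(flag_fn, labelled_posts) -> float:
    """The confirmation problem is identical for either implementation:
    compare the output against what the policy intended."""
    correct = sum(flag_fn(text) == expected for text, expected in labelled_posts)
    return correct / len(labelled_posts)

if __name__ == "__main__":
    tests = [("buy this rifle now", True), ("buy this guitar now", False)]
    print(verify(rule_based_flag, tests))   # 1.0 on this toy set
    print(verify(model_based_flag, tests))  # same check, same answer

Either way, the hard part is agreeing on what the policy should be and confirming that the software actually enforces it.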

IzlamIsTyranny
5 years ago

“If you want a vision of the future, imagine a boot stamping on a human face – forever.” – George Orwell (https://www.brainyquote.com/quotes/george_orwell_159438)
If you want a vision of the Islamic future, imagine a boot stamping on a human face — forever. With apologies to George Orwell.
Imagine a worldwide Islamic totalitarian theocracy being enforced w/21st century technology.
