
Google’s Dreams

Written by Damien Williams

At this point, everyone reading this has likely seen examples of the Google DeepDream Inceptionism project in various news outlets. But for those who have managed to remain unfamiliar with it: DeepDream is basically what results when an advanced artificial neural network, trained on a slew of images, is tasked with amplifying the patterns it detects and so producing strange images of its own. So far as it goes, this is rather unsurprising if we think of it as a next step. DeepDream draws on both Google X, the lab whose neural net famously taught itself to recognize what a cat was, and DeepMind, which Google acquired in 2014.
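To make that mechanism concrete, here is a minimal sketch of the Inceptionism idea as the researchers described it: run an image through a trained network and use gradient ascent on the pixels to amplify whatever an intermediate layer responds to. This is my own illustration, assuming PyTorch and torchvision are installed; the layer choice, step size, iteration count, and simplified preprocessing are illustrative guesses, not Google's published parameters.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a GoogLeNet/Inception model of the kind DeepDream was built on.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

# Capture the activations of one intermediate layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(target=output)
)

preprocess = T.Compose([T.Resize(224), T.ToTensor()])
img = preprocess(Image.open("input.jpg")).unsqueeze(0).requires_grad_(True)

for _ in range(20):
    model(img)
    # Gradient ascent: nudge the pixels so the chosen layer fires harder,
    # amplifying whatever patterns the network already "sees" in the image.
    loss = activations["target"].norm()
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```

The notable design point is that nothing new is being painted in from outside: the loop only exaggerates features the trained network was already primed to find, which is why DeepDream output is so saturated with eyes and dog faces.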

I say this is unsurprising because it follows a standard model from developmental education: emulate and remember more or less concurrently, in order to reinforce what’s been learned, and then create something somewhat new, but still pretty similar to the original. In the terminology of developmental psychology, this process is regarded as essential to the mental growth of an individual. That fits, because DeepDream is just one component of a project on which Google has spent a great deal of time and money. From buying Boston Dynamics, to starting its collaboration with NASA on the QuAIL project, to developing DeepMind and its natural-language voice search, Google has been steadily working toward the development of what we will call, for reasons detailed elsewhere, an autonomous generated intelligence.

In some instances, Google appears to be using the principles of developmental psychology and early childhood education, but these seem to apply to rote learning more than to the emotional development we would seek to encourage in a human child. And while I am very concerned with the question of what it means to create and be responsible for a non-biological offspring, I have to ask: Do we really want Google (or Facebook, or Microsoft) to be that child’s primary caretaker? Should a future, superintelligent, vastly interconnected, differently-conscious machine mind be inculcated with what a multi-billion-dollar multinational corporation considers “morals”?


Google is supposedly building its own ethics board, but all available reporting suggests that it will be internally funded, that there are no clear rules on its oversight or authority, and, most importantly, that it does not yet exist. It has been over a year and a half since Google’s acquisition of DeepMind and the accompanying announcement that a (contractually required) ethics board would be established. As recently as his appearance at Playfair Capital’s AI2015 conference, DeepMind’s Mustafa Suleyman would say only that details of the board would be released “in due course.” Meanwhile, DeepMind’s algorithms have already been put into use, and even distributed to the public, and Google has been seeking patents on neural networks and other AI architecture.

All of this prompts questions: What kinds of recommendations is this board likely to make? Which moral frameworks is it even considering as its starting parameters? The philosophical field of ethics is vast and complex, and the accelerating pace of technology only intensifies that complexity. To that end, knowing that Google supposedly has an ethics board isn’t enough; real transparency is required as well.


While many have said that Google long ago swept its old “Don’t Be Evil” motto under the rug, that might be an oversimplification. When considering how anyone moves into James-Bond-esque supervillain territory, it’s prudent to remember one of the central tenets of good storytelling: the villain never thinks they’re the villain. Cinderella’s stepmother and stepsisters, Elphaba, Jafar, Javert, Satan, Hannibal Lecter (sorry, friends), Bull Connor, the slave-holding Southern states of the late 1850s: none of these figures, whom we now regard with clear and rightly earned scorn, ever thought of themselves as being in the wrong. Every person who undertakes an action, for any reason, is intimately tied to the reasoning that brought them to it, and so perceiving that their own actions might be “wrong” or “evil” takes a great deal of special effort.

“But,” you say, “can’t all of those people claim that this rule applies to everyone but them?” And thus, as in any first-year philosophy class, there arises the messy ambiguity of moral relativism: maybe everything you believe is just as good as everything anybody else believes, because everyone has their reasons, their upbringing, their culture… Stop. Don’t fall for it. While our individual personal experiences can’t be mapped perfectly onto anyone else’s, that does not mean that all judgments based on those experiences are created equal.

Rarely, if ever, has a country’s general populace been asked outright whether it wants to undertake a thoroughly draconian series of personal and social restrictions in the name of “safety.” More often, the people are cajoled, pushed, or influenced into believing that this was the path they wanted all along, and the cajoling, pushing, and influencing is done by people who, piece by piece, remodeled their own idealistic vision to accommodate “harsher realities.” And so it is with Google. Do you think they started off wanting to invade everyone’s privacy with passive audio reception backdoored into two major Chrome distributions? That they were just itching to get big enough as a company to become the de facto law of their own California town? No, I would bet not. But I can paint you a picture of how these situations came to be.

Note: this next part is pure reconstruction on my part.

Take the passive audio backdoor Google built into devices running Chrome: it is, again, highly unlikely that this was the intention from the start. The likelier path is that they wanted a way for anyone, anywhere, to activate voice search simply by speaking a few keywords. A fine goal, and one that takes us a step closer to the super-shiny future we all think we’re somehow owed. But the fact remains that in the 21st century, when the biggest audio phreak on the planet is the US government and everyone else is playing catch-up, Google should have known better.
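To see why “activate by speaking keywords” and “always listening” are the same thing, consider a minimal sketch of how any wake-word feature has to be shaped. This is my own illustration in plain Python, not Google’s code; `read_audio_frame` and `matches_hotword` are hypothetical stand-ins for real microphone capture and a trained keyword-spotting model.

```python
import random

def read_audio_frame():
    # Hypothetical: a real implementation would pull ~30 ms of microphone
    # audio here. We simulate a frame as a list of random samples.
    return [random.random() for _ in range(480)]

def matches_hotword(frame):
    # Hypothetical stand-in for a keyword-spotting model ("OK Google").
    return False

def listen(max_frames=100):
    # The crux: EVERY frame of ambient audio flows through this loop,
    # hotword or not. What happens to the non-matching frames is exactly
    # what a closed-source module asks you to take on faith.
    for _ in range(max_frames):
        frame = read_audio_frame()
        if matches_hotword(frame):
            print("hotword detected: would hand off to voice search")

listen()
```

The structure itself is the privacy problem: detection cannot happen without continuous capture, so the user’s trust reduces entirely to what the listening code does with everything it hears.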


In the words of Harold Finch: “…any exploit is a total exploit. The tiniest crack becomes a flood.” And so it is. The mechanism is left open in the source, and then a black-box module arrives that seems to function just like the original, but, being a black box, there’s no real way to know. Any Chromium user was potentially dealing with all of their background conversation being listened to and archived, without any input or oversight from the user. As the Guardian reports, the functionality has since been removed from Chromium, so if you see anything of the sort in your version, you know to go ahead and get rid of it. But while Google has walked back this one particular policy, its other social-control measures remain fully in place.

Last month, Yasha Levine reported on claims that Google has been harassing the homeless population of Venice Beach. Reportedly, teams of security personnel have been rousting people from their encampments and preventing them from using long-held public facilities. Not only is this pretty disgusting, it also directly contravenes Los Angeles County law. Now, in keeping with our analysis, I don’t think the management types over at Google woke up one day and thought to themselves, “You know what’s wonderful? Kicking people when they’re down.” What I think happened is similar to whatever reasoning let Google execs decide that moving into the Bay Area and shuttling their employees to the Valley was a good idea.

Your techs don’t like the high cost of living in the Valley? Move ’em into the Bay and bus ’em on in! Never mind that this will skyrocket rents and force people out of their homes. Other techs uncomfortable having to see homeless people on their daily constitutional? Kick those hobos out! Never mind that it’s against the law, and that the people you’re upending are literally trying their very best to live their lives. Because it’s all for the Greater Good, isn’t it? All to make the world a better place, a place where we can all have natural-language voice-to-text, and robot butlers, and military AI and robotics contracts to keep us safe…!

Or, more likely, a publicly traded corporation’s profit motives will almost always edge out any moral motives, as long as it’s operating within a capitalist system.

Tim Maly has suggested that hostile-to-human-needs AGI already exists, in the form of feral capitalism. Given the complexity and speed of the “needs” of profit structures, there are multiple vectors of pressure and competing intentions within a corporation, all vying for control. The functions of these incorporated systems are at least as complex as those of a human mind, but the interaction of their components takes place at such a level of abstraction from human understanding that we hesitate to call it “consciousness.” So while, from the human perspective, Google is not “alive,” what becomes clear is that corporations, like fire or a virus, seek to sustain and reproduce themselves. And the overlap between those corporate drives for profit and the residual drives for moral good in the people who comprise them seems to play out as just a touch paternalistic.


Google’s shareholders want profit, and Google’s cheerleaders want to see it as a force for good in the world. Executives and team leaders within Google want these things and more, and may have any number of different ideas about how to achieve them. So, with all of these desires and intentions at play in “Google’s mind,” what does it matter if we have to displace a few people in the process of getting programmers where they need to be? Were those really the kind of people we wanted in the world we’re building? Won’t we all be better off when we can just give them menial upkeep tasks to do, to make them productive? Well, first we have to build that world. And you can’t make an omelette without transhifting some strategic paradigm disruptions.

Google is not of one mind about anything, any more than any other group of people working toward a single goal (“…to organize the world’s information and make it universally accessible and useful”), or any one person with an end goal but no plan. But Google’s learning curve affects the whole world, and its opacity and seeming lack of self-reflection set a terrible precedent. This is just part of why Google, or Facebook, or Microsoft, or any corporate entity, ought not to be the one in charge of rearing a machine intelligence. They may not think they’re evil, and their human components might even have the best of intentions, but if we’re honestly looking at bringing a new kind of mind into this world, it needs much better examples to follow. Opening up Google’s ethics board to considerations over and above AGI and machine consciousness, and being more transparent about what those AGI considerations contain, would be a good place to start. Requiring that board to engage with sociologists and psychologists, and to think more clearly about the deeper impact of its policies on those who are most often marginalized, is an important further step.

A new kind of mind needs examples that see the lives, experiences, and presence of disenfranchised groups of people in “our” spaces not merely as problems to be solved, but as opportunities for new ways of understanding the world. New minds need teachers who understand that while relativism isn’t the end of inquiry, it can be a beginning: an entryway into thinking about needs and ways of living other than our own.

Google (and Facebook, and Microsoft…) claim to want these same teachers. They could stand to do a better job of listening to their recommendations.

About the author

Damien Williams

Damien has worked in various venues to explore the intersections of pop-culture, science, philosophy, and the academic study of the occult. He's written several articles and given multiple presentations on autonomous created intelligences as presented in comics, movies, television, animation, and literature, and their impact on and in society.
