A recent op-ed in The New York Times by Bruce Brown and Alan Davidson discusses the antitrust decision concerning Google and presents the decision as a “victory” for “free speech on the Internet”.
“Advocates of aggressive action against Google saw the computer algorithms behind search as a utility that should be heavily regulated like the gas or electricity that flows into our homes. But search engines need to make choices about what results are most relevant to a query, just as a news editor must decide which stories deserve to be on the front page. Requiring “search neutrality” would have placed the government in the business of policing the speech of the Internet’s information providers. To quote Justice Black, it would have made search engines publish those results “’which their ‘reason’ tells them should not be published.'”
And given the way in which the authors have constructed their arguments, it probably is a victory for “free speech advocates” in the sense of not creating a precedent which could in turn allow for such interventions as “regulat(ing) the content of Amazon’s book recommendations, the locations on Bing’s maps, the news stories that trend on Facebook and Twitter, and many other online expressions of social and political importance”. These are clearly not outcomes that would pass Justice Black’s test of “editorial reasonableness”.
I’m wondering, though, whether the issue concerning Google is misplaced when it is included under matters of free speech/free expression. Whether a search algorithm propelling a robotic process of information selection is covered by free speech “rights” is something for legal scholars to ponder at their leisure.
I’m wondering whether it wouldn’t be better to “investigate” Google for possible “freedom of thought” violations rather than for issues concerning “freedom of speech”… Google has the potential for much more serious impacts on our capacity to know (or not know) certain things than on what we can say or not say…
Think about Google’s ambition to be the cataloguer and access portal for all human knowledge–to, as they describe their mission, “organize the world’s information and make it universally accessible and useful”. This is not about providing information as a publisher or website developer; rather, it is about providing a “means” (a lens, a framework, a methodology) through which all human knowledge/information can be accessed.
The question then is what is the status of that “means”…
Wikipedia defines “freedom of thought” as follows: “freedom of thought…is the freedom of an individual to hold or consider a fact, viewpoint, or thought, independent of others’ viewpoints”. Given that Google provides the technological lens through which the vast majority of those seeking information or knowledge online undertake their searches, the methodologies/algorithms employed by Google are by definition in a position to restrict (or direct) an individual’s opportunity to “consider a fact, viewpoint, or thought, independent of others’ (in this case Google’s algorithms’) viewpoints”.
In fact, Google’s algorithms have to be understood at the level of “epistemology”, i.e. from the perspective of their role (in fact, their intervention) in framing our underlying “knowledge, understanding, justified belief about the nature of the world”.
If we choose to accept Google’s ambition at face value, and if our information-seeking behaviour corresponds with Google’s ambitions, then the primary lens through which we pursue our knowledge, understanding, and justified belief about the nature of the world (or at least that which can be accommodated within a digitized, Internet-enabled framework) is through/by means of Google’s algorithm(s).
Perhaps George Orwell’s concept of the “unperson” (from the novel Nineteen Eighty-Four but subsequently applied to Soviet actions against perceived dissidents) might be revealing here…
“In the George Orwell book Nineteen Eighty-Four, an Unperson is someone who has been vaporized. Vaporization is when a person is murdered by being turned into vapors. Not only has an unperson been killed; they have also been erased from society, the present, the universe, and existence. Such a person would be taken out of books, photographs, and articles so that no trace of them is found in the present anywhere – no record of them would be found. The point of this was that such a person would be gone from all citizens’ memories, even friends and family. There is no Newspeak word for what happened to unpeople, therefore it is thoughtcrime to say an unperson’s name or think of unpeople. This is like the Stalinist Soviet Party erasing people from photographs after death; this is an example of “real” unpeople.”
Google, with some 84% of global search activity (70% in the US), can through its algorithms “unperson” someone or an idea, for example by dropping their reference down several pages in the search ranking, or by (even inadvertently) “disappearing” something from the search engine completely…
Google has already indicated that it “tweaks” its algorithms for a variety of purposes (including commercial ones). What or who is to prevent it from, at some time in the future, tweaking its results to favour one religion over another, one political or ethical stance over another, or one perspective on scientific knowledge over another; and who at this point is in a position to say that it isn’t doing so already?
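As a purely illustrative sketch (not Google’s actual, proprietary system), a toy ranking function shows how even a single small, opaque “tweak” can bury a result several positions down, effectively “unpersoning” it for searchers who never scroll past the top results. All scores, site names, and the demotion rule below are invented for the illustration:

```python
# Toy illustration of ranking demotion. Nothing here reflects Google's
# real algorithms; the scores and demotion multipliers are hypothetical.

def rank(documents, relevance, demotions):
    """Sort documents by relevance score, applying hidden demotion multipliers.

    demotions maps a document to a multiplier < 1.0; documents absent from
    the mapping keep their full relevance score.
    """
    def score(doc):
        return relevance[doc] * demotions.get(doc, 1.0)
    return sorted(documents, key=score, reverse=True)

docs = ["site-a", "site-b", "site-c", "site-d"]
relevance = {"site-a": 0.90, "site-b": 0.85, "site-c": 0.80, "site-d": 0.40}

# With no tweak, site-a leads the results.
print(rank(docs, relevance, {}))
# ['site-a', 'site-b', 'site-c', 'site-d']

# One opaque multiplier (0.5 applied to site-a) pushes it below the fold,
# and nothing in the output reveals that a tweak was applied.
print(rank(docs, relevance, {"site-a": 0.5}))
# ['site-b', 'site-c', 'site-a', 'site-d']
```

The point of the sketch is that the demotion is invisible to the user: the results page looks identical in form, and only someone with access to the `demotions` mapping could tell that the ordering was adjusted.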
Tom Lowenhaupt
January 6, 2013
Over the years the power of Google to control civic discourse has been a concern to my organization, Connecting.nyc Inc., a NYS not-for-profit advocating for the development of the .nyc TLD as a public interest resource.
Regarding the concepts of “right to know” and “right to say,” the following is clipped from our Transparent Search wiki page – http://www.coactivate.org/projects/campaign-for.nyc/transparent-search.
“Such placements need to be carefully reviewed by fairness rules of what might be called ‘search journalism.’ Here in the U.S. the First Amendment to our Constitution says “Congress shall make no law … abridging the freedom of speech, or of the press…” and poses an apparent block to any search as speech regulation.
“But it might be argued that there are parallels between the impact of technology on the interpretation of the First Amendment and on the Second Amendment. Americans are all too familiar with the decades-long controversy about that Amendment’s guarantee of a citizen’s right to bear arms: Did the Founding Fathers intend that citizens be allowed to own and use powerful automatic weapons? The corresponding First Amendment question might be: Were big data, big money, and search imagined by the Founding Fathers?”
“So for the immediate future, regulation of search journalism is unlikely. And with the creation of a transparent “search.nyc” vital to our city’s effective operation, a trusted entity must be identified to oversee its development.”
Sorry to clip, post, and run but the sun is nearing the horizon and I’ve made promises. I’ll return to follow the conversation.
Best,
Tom Lowenhaupt
Michael Gurstein
January 6, 2013
Thanks Tom, a really interesting and useful example of search engines and the “right to know”.
Mike
shawna
January 7, 2013
This idea of investigating Google for ‘freedom of thought’ violations is interesting, but I wonder what such an investigation would look like, and whether you would find the evidence you were looking for. Google may hold a great deal of power as the lens through which 84% of the world search the web, but it may not actually be abusing that power. At least, not yet. I think it is worth discussing, not only within civil society, but with Google and others, what responsibilities go along with being the access portal for all human knowledge.
Steel Hoppers
January 10, 2013
I agree with Shawna. How can anyone prove that Google violates “freedom of thought”? A person may be swayed by what he or she sees, but the fact remains that the same person is also responsible for criticizing and sifting through what he or she chooses to believe.
Michael Gurstein
January 10, 2013
Thanks Shawna, good observations. The problem is that it would be very difficult to know whether Google is deliberately skewing its search algorithms to favour certain knowledge frameworks over others. And the real issue may be not that it is doing so deliberately but rather that it is doing so inadvertently, in directions and with consequences that couldn’t easily be determined, if at all. Can or should a private corporation be allowed to have such potential power, even if we are convinced that it is never likely to use it?
Michael Gurstein
January 10, 2013
Hi Steel, what makes you think people will be able to know what they don’t know, or to criticize something when they don’t have sufficient information to know what they should be sifting for or criticizing?
Tony Roberts
January 11, 2013
Mike, thanks for this post. You ask the alarming question: what is to prevent Google results favouring one religious, political or ethical stance over another? My response was to wonder whether it is not inevitable that they already do! The content of internet web-pages is not neutral, nor is search-engine technology, so internet search results cannot be neutral either. The internet is not full of ‘truth’ – it is full of the opinions of authors – that is, full of their religious, political and ethical stances. The billions of web-pages that a search engine crawls over reflect currently dominant religious, political and ethical values. The content of the internet represents the religious, political and ethical stances of that sector of people in the relatively privileged position of being able to publish their opinions.

Added to this divide is the effect of a search engine’s ranking values. Search-engine choices are not determined by mechanically generated code that is somehow ‘neutral’; they are determined by human-authored code which reflects human values and corporate biases (including Google’s admission that it ‘tweaks’ search algorithms for commercial reasons and ‘other’ considerations). If this analysis holds true, then Google results already both favour and reproduce the dominant discourse, as well as reflecting the corporation’s interests; in doing this they will inevitably be favouring dominant religious, political or ethical stances over minority ones. Maybe in this context ‘search engine neutrality’ is an illusory goal?