“Is Google Like Gas or Like Steel?” Neither, It Is Like Nernst’s Third Law of Thermodynamics (or the Nicene Creed…!)

Posted on January 5, 2013

A recent op-ed in The New York Times by Bruce Brown and Alan Davidson discusses the antitrust decision concerning Google and presents it as a “victory” for “free speech on the Internet”.

“Advocates of aggressive action against Google saw the computer algorithms behind search as a utility that should be heavily regulated like the gas or electricity that flows into our homes. But search engines need to make choices about what results are most relevant to a query, just as a news editor must decide which stories deserve to be on the front page. Requiring ‘search neutrality’ would have placed the government in the business of policing the speech of the Internet’s information providers. To quote Justice Black, it would have made search engines publish those results ‘which their “reason” tells them should not be published.’”

Given the way in which the authors have constructed their argument, it probably is a victory for “free speech advocates”, in the sense that it avoids creating a precedent that could in turn allow for such interventions as “regulat[ing] the content of Amazon’s book recommendations, the locations on Bing’s maps, the news stories that trend on Facebook and Twitter, and many other online expressions of social and political importance”. Clearly these are not outcomes that would pass Justice Black’s test of “editorial reasonableness”.

I’m wondering, though, whether the issue concerning Google is misplaced when filed under matters of free speech/free expression. Whether a search algorithm propelling a robotic process of information selection should be covered by free speech “rights” is something for legal scholars to ponder at their leisure.

I’m wondering whether it wouldn’t be better to “investigate” Google for possible “freedom of thought” violations rather than for issues concerning “freedom of speech”… Google has the potential for much more serious impacts on our capacity to know (or not know) certain things than on what we can or cannot say…

Think about Google’s ambition to be the cataloguer of, and access portal to, all human knowledge: to, as they describe their mission, “organize the world’s information and make it universally accessible and useful”. This is not about providing information as a publisher or website developer; rather, it is about providing a “means” (a lens, a framework, a methodology) through which all human knowledge/information can be accessed.

The question, then, is: what is the status of that “means”…

Wikipedia defines “freedom of thought” as follows: “freedom of thought…is the freedom of an individual to hold or consider a fact, viewpoint, or thought, independent of others’ viewpoints”. Given that Google provides the technological lens through which the vast majority of those seeking information or knowledge online undertake their searches, the methodologies/algorithms Google employs are by definition in a position to restrict (or direct) an individual’s opportunity to “consider a fact, viewpoint, or thought, independent of others’ (in this case, Google’s algorithms’) viewpoints”.

In fact, Google’s algorithms have to be understood at the level of “epistemology”, i.e. from the perspective of their role (indeed, their intervention) in framing our underlying “knowledge, understanding, justified belief about the nature of the world”.

If we choose to accept Google’s ambition at face value, and if our information-seeking behaviour corresponds to that ambition, then the primary lens through which we pursue our knowledge, understanding, and justified belief about the nature of the world (or at least that portion of it which can be accommodated within a digitized, Internet-enabled framework) is, by means of Google’s algorithm(s).

Perhaps George Orwell’s concept of the “unperson” (from the novel Nineteen Eighty-Four, but subsequently applied to Soviet actions against perceived dissidents) might be revealing here…

“In the George Orwell book Nineteen Eighty-Four, an Unperson is someone who has been vaporized. Vaporization is when a person is murdered by being turned into vapors. Not only has an unperson been killed; they have also been erased from society, the present, the universe, and existence. Such a person would be taken out of books, photographs, and articles so that no trace of them is found in the present anywhere – no record of them would be found. The point of this was that such a person would be gone from all citizens’ memories, even friends and family. There is no Newspeak word for what happened to unpeople, therefore it is thoughtcrime to say an unperson’s name or think of unpeople. This is like the Stalinist Soviet Party erasing people from photographs after death; this is an example of “real” unpeople.”

Google, which handles some 84% of global search activity (70% in the US), can through its algorithms “unperson” a person or an idea: for example, by dropping their reference down several pages in the search ranking, or by (even inadvertently) “disappearing” something from the search engine completely…

Google has already indicated that they “tweak” their algorithms for a variety of purposes (including commercial ones). Who or what is to prevent them from, some time in the future, tweaking their results to favour one religion over another, one political or ethical stance over another, or one perspective on scientific knowledge over another? And who, at this point, is in a position to say that they aren’t doing so already?
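To make the mechanism concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical (the scores, the topic labels, the demotion factor, the names); it reflects nothing of Google’s actual ranking code. It simply shows how a single, invisible scoring tweak can push the most relevant result onto a later page without deleting it:

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float  # hypothetical relevance score; higher is better
    topic: str        # hypothetical topic label

def rank(results, demoted_topic=None, demotion_factor=0.7):
    """Sort results by score, quietly scaling down a disfavoured topic."""
    def score(r):
        s = r.relevance
        if demoted_topic is not None and r.topic == demoted_topic:
            s *= demotion_factor  # the "tweak": one invisible line
        return s
    return sorted(results, key=score, reverse=True)

# 40 alternating results, relevance descending from 1.00 to 0.61
results = [
    Result(url=f"site-{i}.example", relevance=1.0 - i * 0.01,
           topic="A" if i % 2 else "B")
    for i in range(40)
]

target = results[0].url  # the most relevant result overall (topic "B")
neutral = [r.url for r in rank(results)]
tweaked = [r.url for r in rank(results, demoted_topic="B")]

print("neutral rank:", neutral.index(target) + 1)  # 1  -> top of page one
print("tweaked rank:", tweaked.index(target) + 1)  # 16 -> onto page two
```

At ten results per page, the demoted result has not been removed; it has simply become, for most searchers, invisible. That, in miniature, is the “unperson” worry raised above.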