On an e-list that discusses Internet Governance, someone just pointed to an article breathlessly headlined “U.S. Ranks Second in Internet Freedom, Behind Estonia”, pointing to a report on “Freedom on the Net” produced from “research” conducted by Freedom House. Evidently these reports have some resonance, since this one turned up some 550,000 “results” on a Google search! Notably, according to the acknowledgements, “this publication was made possible by the generous financial contributions of the U.S. State Department’s Bureau of Democracy, Human Rights, and Labor (DRL), the U.S. Agency for International Development (USAID), Google, and Yahoo.”
The report has two interrelated elements. The first is a series of somewhat useful national case studies documenting the current state of play, at a national level, for certain elements related to Freedom House’s traditional concern with “freedom of expression”. The second, and very much more problematic, is an “Index of Freedom on the Net”; it is this index that provides the basis for the headline quoted above and, one expects, for most of the interest underlying the huge number of Google results noted above.
Freedom on the Net aims to measure each country’s level of internet and digital media freedom. Each country receives a numerical score from 0 (the most free) to 100 (the least free), which serves as the basis for an internet freedom status designation of Free (0-30 points), Partly Free (31-60 points), or Not Free (61-100 points).
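The banding described in the passage above is simple enough to sketch in a few lines of code. This is only an illustration of the report's stated score-to-status mapping, not code from the report itself; the function name is my own:

```python
def freedom_status(score: int) -> str:
    """Map a 0-100 'Freedom on the Net' score to the report's status
    designation (0 = most free, 100 = least free), using the band
    boundaries quoted above: Free (0-30), Partly Free (31-60),
    Not Free (61-100)."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 30:
        return "Free"
    if score <= 60:
        return "Partly Free"
    return "Not Free"
```

Note that the designation changes discretely at the 30/31 and 60/61 boundaries, so a single point of subjective scoring difference can move a country between categories.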
For those with an interest, it is worth taking the time to download the report (662 pages) and go to the “Methodology” section on page 640 and following.
The “methodology” appears to be as follows:
- A series of questions, associated somehow with a very high-level definition of “freedom” loosely derived from the UDHR, has somehow been formulated (the actual relationship and development process is not explained); no evidence of pre-testing or independent assessment of these questions is provided;
- These questions include multiple sub-questions, some of which are mutually exclusive and most of which require significant and in many cases somewhat technical and/or formal definitions, since in almost all instances the actual determination of a result is largely based on a subjective assessment of a local set of circumstances (e.g. “Do the authorities regularly monitor websites, blogs, and chat rooms, or the content of e-mail and mobile text messages, including via deep-packet inspection?”);
- These “questions” are in turn given for assessment to “experts” (no justification is given for how these “experts” are chosen, and nothing is said about their specific areas of expertise, their independence relative to the subject matter, or their standing among their peers, as evidenced for example by peer-reviewed publications in the field);
- These “experts” are in turn required to assign a single numerical score to each question for their designated countries. These scores are then compiled on a national basis and provided to a series of meetings of Freedom House staff and a range of Freedom House-selected “local experts, scholars, and civil society representatives from the countries under study” for a preliminary assignment of an “Internet Freedom” score. (No details are provided on who these participants are, what the assessment process involves, or whether there is any independent, i.e. non-Freedom-House-selected, participation or review.)
- A score is then assigned to each country.
- The outcome of this “comprehensive study” is then presented to Freedom House staff, who do “a final review of all scores to ensure their comparative reliability and integrity”. And this becomes the basis for the graphs, tables, and various comparisons that flow from it…
Please note that there is no referencing in the methodology section (at least in the most recent report); nor is there an indication of any independent (peer) review, verification or assessment at any stage in the process.
Sadly, if not surprisingly, this “methodology” wouldn’t pass muster in any reputable Master’s (let alone Ph.D.) program that I have had any experience with and would be laughed out of the room in any peer reviewed publication or independent research funding program.
As a case study in using pseudo-science as a way to manifest and justify researcher bias, or as an exercise in applied ideology, it might, I think, be quite useful, and I would recommend it to any suitable undergraduate program in social science methodology as a case study in how not to do independent research.
What the index does tell us, for anyone interested, is which countries the good folks at Freedom House “like” (in the Facebook sense) and which they don’t: a good way of developing a Christmas list of those who have been naughty and nice (according to Freedom House). However, what if anything it tells us about “Freedom on the Net”, or anything remotely related to that, is rather more questionable.
As a research study, which this claims to be (the term “research” is used five times in the first three paragraphs of the acknowledgements), this should be an embarrassment to both the host organization and the funders.
In fact, the idea of establishing an index of transgressions including by governments and by the major Internet corporations against various elements of the UN Declaration on Human Rights is an excellent idea. However, for such an index to be meaningful it would have to be developed with a degree of methodological rigour, institutional (and ideological) independence and based on as international a range of inclusion and participation as the Internet itself.
One guesses that the US, with its continuing extension into ever broader areas of on-line surveillance and its persecution of those such as Aaron Swartz on the basis of outmoded conceptions of property rights in digitized knowledge, might not rate quite so high on such an independent and somewhat more obviously value-free index. The ratings for folks in the corporate sector, including the funders of the Freedom House study, might be interesting to assess as well.