About this Resource

The growing interest in international surveys, and the desire of governments to make comparisons between countries, makes it all the more important to ensure that the results are accurate. Some recent examples illustrate the political fallout when results look implausible.

Example 2: International Adult Literacy Survey (IALS)

In 1994 nine countries took part in the first International Adult Literacy Survey (IALS). The results showed large differences between countries in the distribution of skills. In Sweden, only 8% of the population were found to be at the lowest literacy level, compared with 14% in Germany, 17% in Canada and 21% in the United States. Both France and Poland were found to have much higher proportions of their population at the lowest level: 41% and 43% respectively.

The French Government withdrew its results from the publication in the absence of any plausible explanation of why the rates were so much higher in France than in neighbouring countries. This made front-page news in France on two fronts: whether the results were right, and the role of literacy in modern French society. The long-running story led to a methodological review of the survey, which found that the results for France, as well as those for some other countries, gave cause for concern from a number of perspectives, most notably translation, sampling, survey procedures and scoring. France went on to develop an alternative method for measuring literacy, l'Enquête Information et Vie Quotidienne (IVQ) (the Information and Daily Life survey).

Example 3: Programme for International Student Assessment (PISA)

In the literacy surveys of 15-year-olds conducted under the Programme for International Student Assessment (PISA) in 2000, German students performed at a much lower level than their counterparts in neighbouring countries. This again led to headline news stories and debate in the media. This time, however, the discussion focussed almost entirely on the poor standards and what to do about them, rather than on whether the statistics were correct.

Example 4: UK participation in PISA 2003

The third example relates to the UK's participation in PISA 2003, when the UK narrowly missed the minimum response rates required. As it was anticipated that the response rate would be difficult to achieve, additional information was collected during fieldwork so that bias in the sample could be examined. The UK was the only country able to demonstrate the existence and extent of bias in its sample. Despite this, the UK results were not included in the main analyses because they were deemed to have failed the minimum quality criteria.

Events such as these, where national pride is at stake, mean that international comparisons are subject to much more scrutiny than previously. It is important that your analysis is sound, particularly when comparing the relative performance of different countries.

The University of Manchester; Mimas; ESRC; RDI

Countries and Citizens: Unit 3 Making cross-national comparisons using micro data by Siobhan Carey, Department for International Development is licensed under a Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England & Wales Licence.