Accessing respondents is a key concern in online questionnaires. As Coomber (1997) has highlighted, there is little point in setting up an online questionnaire and passively 'waiting' for eligible respondents to find the site: more active enrolment is needed to encourage users to complete an online survey. As Fricker (2008) reports, web-based recruitment can be fruitful, but it can also be fraught with difficulties: avoiding accusations of spamming, accessing hard-to-reach users such as 'lurkers' in discussion boards, and even dealing with aggressive messages or attacks from hackers. The significance of having any relevant website providers 'on your side' cannot be overestimated. This access issue is becoming increasingly important. As use of the internet in the general population increases and the novelty of responding to online questionnaires wears off, getting users to complete online questionnaires is becoming more problematic, while 'survey fatigue' is increasingly an issue regardless of mode (Witte, 2009). Online users are becoming wise to the fact that they are paying for the privilege of being 'over-surveyed' (McDonald and Adam 2003). The result is that online users are intolerant of unsolicited communications, and invitations to participate in research are increasingly considered 'spamming' (Harris 1997), with the consequence that online surveys often have lower response rates than onsite surveys. Witmer et al. (1999), for instance, report response rates of 10% or lower being common for online surveys, while Shih and Fan (2007) report that, in a mixed-mode survey, response rates were 14% higher for mail surveys than for web-based ones.
A further issue of concern when using online questionnaires is that they present serious sampling problems for a study based in the quantitative tradition. There is no central registry, or master database, from which to create an accurate sampling frame, nor is there any way of discerning how many users are logging on from a particular computer or how many accounts or memberships a particular individual might have. This means that random sampling, and hence a representative sample, is not possible, which places serious limitations on the inferential value of web surveys, especially those focusing on 'broad and diffuse populations' (Couper, 2007, p. S88). Internet surveys on the whole, therefore, attempt to select a sub-set of users to participate, whether through non-probability sampling or through self-selection. Coomber (1997) has suggested that online self-selection is suitable when researching a particular group of internet users, especially when connecting with groups that are not bound to a particular area but that share a common interest (O'Lear 1996, 210). So while self-selection clearly limits the scope of the results where broad sample representativeness is required, it is valuable for reaching marginal groups or where the researcher is conducting an interpretive investigation. Moreover, it must be noted that self-selection occurs in many conventional surveying situations and is not unique to online research.
There is, however, divergent opinion as to whether the internet provides an inherently biased sample population for quantitative studies. Research has documented that in the internet's early years its users tended to be predominantly male, white, first-world residents under 35 years old, while those with lower educational levels or incomes, those living in rural areas, and black or Hispanic people were underrepresented (Mann and Stewart 2000). Some argue that access to the internet is still highly unevenly distributed, both socially and spatially (Janelle and Hodge 2000; Warf 2001). Indeed, according to Silver (2000), the digital divide has continued to grow in America, fast becoming a 'racial ravine', suggesting a biased internet user population. Hewson et al. (2003), however, are more optimistic. They argue that, overall, the evidence suggests that the internet user population now represents a vast and diverse section of the general population and is rapidly moving beyond the select group of technologically proficient male professionals who were once largely predominant. Dodd (1998, 63), for example, argues that the internet's broad scope can actually improve representativeness, as many population groups that are usually difficult to contact may be easier to access via the internet, while Litvin and Kar (2001) show that the sample characteristics of conventional and electronic methods are converging, with electronically solicited samples becoming more like random paper-based samples as uptake of the internet increases. Indeed, recent research suggests that because online surveys can often survey an entire population of a particular group, rather than a sample, they can reduce or eliminate the effects of sampling error altogether (Umbach, 2004).
Moreover, samples can also be weighted to reduce bias so if a certain demographic group is underrepresented, its responses can be counted more heavily (Best and Krueger 2004).
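The weighting idea can be sketched in a few lines. This is a minimal illustration of post-stratification weighting for a single demographic variable; the population shares (e.g. from census data) and the shares observed among respondents are invented figures, not data from any study cited here.

```python
# Post-stratification weighting -- a minimal sketch with invented figures.
population_share = {"under_35": 0.45, "35_and_over": 0.55}  # e.g. census data
sample_share     = {"under_35": 0.70, "35_and_over": 0.30}  # observed in responses

# A respondent's weight is population share / sample share, so the
# underrepresented group ("35_and_over" here) counts more heavily.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

def weighted_mean(responses):
    """responses: list of (group, value) pairs from individual respondents."""
    total = sum(weights[g] * v for g, v in responses)
    return total / sum(weights[g] for g, _ in responses)
```

In this sketch the over-sampled under-35 group receives a weight below 1 and the under-sampled older group a weight above 1, which is exactly the 'counted more heavily' adjustment Best and Krueger describe.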
However, when the online researcher is working in a more qualitative tradition, purposive sampling of key online communities or individuals with particular interests can be a particularly useful strategy. Snowball sampling can also be useful: one individual is contacted for a specific piece of research and recommends further contacts, so that recruitment gains momentum, or 'snowballs', as the research progresses. Similarly, in these cases, occurrences such as the 'pass-along effect', whereby links to a questionnaire are forwarded beyond the sampling frame outside the control of the researcher, need not be negative and may, with careful consideration, be intentionally incorporated into the sampling strategy (Norman and Russell, 2006).
A further issue relating to online questionnaires is verifying the identity of participants and the reliability of their responses. Often it is quite simply not possible to verify the identity of respondents, so some respondents may be 'spoofs' or may play with their online identity in completing the research (Roberts and Parks 2001). Online research also does not enable the researcher to assess the reliability of responses. As Hewson et al. (2003, 44) state: '…when materials are administered via a computer terminal rather than in person, the researcher is less able to judge the extent to which the responses are sincere and genuine, the conditions under which the questionnaire was answered and the state of the participants at the time of participation (for example, intoxicated, distracted, and so on)…'. While this remains an irresolvable issue of online research at present, it is not unique to virtual methods: incorrectly completed questionnaires, unreliable responses and non-verifiable identities may also feature in onsite surveys. Moreover, in conducting online community research, how necessary is it to 'prove' the offline identity of participants anyway? Taylor (1999, 443) argues that this depends on the initial research question and that '…the acceptance of online life as a thing in itself' is important. Indeed, it is increasingly recognised that online textual personae cannot be separated from the offline physical persons who construct them, and they are commonly based on offline identities in any case (Valentine 2001). Additionally, recent research suggests that the anonymity of participants can play a positive role in the research process, reducing researcher bias and proving particularly useful for embarrassing and sensitive topics (Hewson et al. 2003). But identity verification raises issues for research participants too, as it may be harder for them to verify the researcher's identity.
As Raghuram (personal communication, 2005) notes: 'One danger of online questionnaires is that questionnaires involve trust and it may be harder to build up trust when you are not face-to-face. Physically seeing a face can give you a sense of reassurance and in itself provides a forum for communication.' One way to overcome this problem is to have a dedicated project website in which the identity of the researchers and their institutional affiliation can be verified (see Madge and O'Connor, 2002).
According to Jeavons (1998), response rates show no relationship to gender, age or education level. However, response rates to online questionnaires do drop off rapidly: Crawford et al. (2001) suggest that if people are going to complete a web survey, they will do so in the first few hours or days of receiving it. Response rates can, however, be increased by follow-up reminders. Crawford et al. (2001) propose that a single reminder email can double the number of respondents, while Schaefer and Dillman (1998) found that four repeated contacts yielded the highest response rate. To improve response rates, online questionnaire formats should be simple: complex graphics, grid questions, open-ended questions and requests to supply email addresses all reduce response rates (Jeavons 1998; Knapp and Heidingsfelder 2001; Porter and Whitcomb 2003a), although Deutskens et al. (2004) suggest that enhancing online questionnaires with visual elements can lead to a higher response rate. Response rates can also be improved with carefully worded introductory letters or emails which include details of the estimated time to complete the survey and a statement indicating that the respondent is part of a small group chosen to participate in the study (Porter and Whitcomb 2003b). Short surveys (a maximum of ten minutes) improve response rates (Crawford et al. 2001), as do those that request personal information at the start of the questionnaire rather than the end (Frick et al. 2001). The type of internet connection and the hardware and software used to access the internet will also affect response rates, while an emphasis on anonymity or confidentiality has also been found to increase them (Michaelidou and Dibb, 2006). Trust in the sponsor of a web survey also has a direct positive impact on response rates (Fang et al., 2009). Best and Krueger (2004) further suggest that introducing a 'social presence' will discourage item non-response.
To avoid particular questions being left uncompleted, messages can be inserted to express gratitude, emphasise importance or describe progress. Displaying a missing-data message when an item has not been completed has also been found to reduce item non-response. In summary, then, to maximise response rates:
- Send introductory letter outlining project and estimated time needed to complete the questionnaire;
- Include an institutionally sanctioned website to validate researchers' identity;
- Provide clear instructions on how to complete the questionnaire;
- Request personal information at the start of the questionnaire rather than the end;
- Use simple questionnaire format and avoid unnecessary graphics;
- Avoid grid questions, open-ended questions and requests for email addresses;
- Design survey so it takes approximately 10 minutes to complete;
- Do not include more than 15 questions;
- Send one or two follow-up reminders;
- Include 'social presence' or missing data messages to reduce item non-response;
- Emphasise confidentiality.
The evidence on whether incentives improve response rates is mixed. Some researchers suggest that incentives have no effect on response rates (Cook et al. 2000), while others report some improvement when an incentive is introduced (Bosnjak and Tuten 2001). There are three main types of incentive: cash equivalents paid through web-based companies, gift certificates from popular retailers, and lotteries promising financial or product rewards. Deutskens et al. (2004) suggest that vouchers are the best incentive for long questionnaires, while lotteries are more efficient in short surveys. Bennett (2000) proposes that the incentive must be relevant to the audience, while Birnholtz et al. (2004) suggest that cash is generally a superior incentive to gifts for an online survey, even with technologically sophisticated respondents; this may be due to the perceived limitations, delayed payoff or reduced visibility of online gift certificates. Bosnjak and Tuten (2003) also show that pre-paid incentives have no advantage over post-paid incentives in terms of willingness to participate or actual completion rates. Finally, it must be remembered that voucher incentives have the potential to skew responses towards people who like shopping in the shops where the vouchers can be used. This also raises issues of internationalisation, as many vouchers are place-specific and cannot be used beyond national boundaries. Su et al. (2008) found that the effects of incentives for online questionnaires are largely consistent with those for more 'traditional' surveys: for example, material incentives are most effective, prepaid incentives are marginally more effective than promised incentives, and conditional incentives significantly reduce response rates compared to unconditional ones, with only a slight positive impact on response quality.
Clearly, though, a decision on whether incentives are appropriate and what form of incentive is most suitable can only be made according to the particular context of research and the target population.
It is clear, therefore, that although the online questionnaire has great potential for reaching specific groups that are difficult to access by conventional means, and for drawing on a very large worldwide pool of respondents, it also has the potential to privilege the views of those with computer access. This is especially the case if the research is presented uncritically, without reference to the sampling procedure. Findings from online questionnaires are indicative: they should be read with caution and analysed with acceptance of the likely sample bias (even though its degree cannot be measured). Thus, according to Wakeford (2000, 33): 'The quantity of information that may be generated, and the speed at which responses can be collected, can result in pleasing piles of data - but we should be wary of being seduced by sheer quantity; data is only useful if it is representative of the larger population.' This is clearly the case at present, but recent research hints that sampling may become a less significant issue in the virtual environment. Riva et al. (2003), for example, report no significant differences between responses to the same questionnaire from online participants and from those completing a paper survey, even when the online sample was not controlled. As problems of coverage bias and unfamiliarity subside, as the tools for conducting online questionnaires grow in sophistication, and as research on how best to employ the method progresses, the use of online questionnaires is likely to proliferate.
Following Best and Krueger (2004), there are five main stages in drawing a sample for an online questionnaire: specifying the target population, designing the sampling frame, selecting the sampling method, determining the sample size, and initiating contact procedures.
The target population will be informed by the aims of the research. Issues of non-response bias must be considered, as respondents who answer an online questionnaire may have very different attitudes or demographic characteristics from those who do not. This is particularly the case because some social groups may be underrepresented among internet users, including people of limited financial resources, members of some ethnic groups, older people and those with lower educational levels (Umbach, 2004). Additionally, specifying the target population will depend upon the specific internet medium to be used, as different numbers and types of people use different services: for example, 89% of internet users use email and 81% use the web, but only 18% use mailing lists (Best and Krueger 2004). List users are also more likely than email and web users to be white, employed, married and parents (Best and Krueger 2004).
After specifying the target population, the sampling frame must be designed. The sampling frame is used to identify and locate suitable respondents. Developing a sampling frame for online questionnaires can be more difficult than for onsite questionnaires, as specific computers and their users cannot be identified or located in advance, and it will also depend upon the specific service being used (Best and Krueger 2004). For email, the addresses of potential respondents must be discerned, but there is no single comprehensive directory of email addresses, so they must be obtained from the holder, an associated user, or an organisation that compiles email lists for internal or external purposes. For mailing lists, web portals or searches must be used to locate an appropriate list, and the researcher must then subscribe in order to communicate with its users, bearing in mind the associated ethical issues.
Researchers must then determine how members of the sampling frame will be selected, using probabilistic or non-probabilistic sampling methods. Probabilistic sampling ensures that each member of the sampling frame has an equal chance of being selected. This can be problematic in online research: there is no central registry, or master database, from which to create an accurate sampling frame, nor any way of discerning how many users are logging on from a particular computer or how many accounts or memberships a particular individual might have. Probabilistic sampling is therefore only possible when the target population is restricted to a group of users that can be fully identified and contacted, for example when a complete list of email addresses can be obtained for schools or trade associations. Non-probabilistic sampling, whereby a sub-set of users is selected, is more common in online questionnaires. Coomber (1997) has suggested that online self-selection is suitable when researching a particular group of internet users, whilst O'Lear (1996) suggested that this is particularly useful when connecting with groups that are not bound to a particular area but share a common interest. Generalisation from such specific user groups can be problematic, but some attempts have been made to address this through post-stratification weighting (see Taylor et al. 2001) and propensity scoring (see Miller and Panjikaran 2001). Whatever the sampling method selected, its limitations must be clearly stated and taken into account in any analysis and conclusions, as is also good practice in research using onsite questionnaires.
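For the one case where probabilistic online sampling is feasible - a fully enumerable list of email addresses, such as a school's or trade association's - a simple random draw can be sketched as follows. The addresses are invented placeholders; note that deduplicating addresses cannot detect one individual using several different addresses, which is precisely the limitation discussed above.

```python
# Simple random sampling from a complete frame -- a sketch.
import random

def draw_sample(frame, n, seed=None):
    """Draw n addresses without replacement; after removing exact
    duplicates, every address in the frame has an equal chance of
    selection. (One person with several distinct addresses remains
    undetectable from the frame alone.)"""
    rng = random.Random(seed)
    unique = sorted(set(frame))  # drop exact duplicate addresses
    return rng.sample(unique, n)

frame = ["a@assoc.example", "b@assoc.example",
         "c@assoc.example", "a@assoc.example"]  # note the duplicate
sample = draw_sample(frame, 2, seed=1)
```

A fixed seed is used here only to make the illustration reproducible; in a real draw the seed would be left unset.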
The sample size will be determined by the particular research question, and will also depend on the desired number of cases, the extent of invalid contact and the projected cooperation of respondents (Best and Krueger 2004). Witmer et al. (1999) report response rates of 10% or lower being common for online surveys, and this must be built into decisions about sample size.
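Those figures can be folded into a back-of-envelope calculation of how many invitations to send. The 10% response rate comes from Witmer et al. as quoted above; the 90% valid-contact rate is an illustrative assumption, to be replaced with a study's own estimate.

```python
# How many invitations to send -- a back-of-envelope sketch.
import math

def invitations_needed(desired_cases, response_rate=0.10, valid_contact_rate=0.90):
    """Invitations required so that, after invalid addresses and
    non-response, roughly desired_cases completed questionnaires remain."""
    return math.ceil(desired_cases / (valid_contact_rate * response_rate))

# e.g. to obtain 200 completed responses at a 10% response rate
# with 10% of addresses invalid:
invitations_needed(200)  # -> 2223
```

The point of the sketch is simply that low online response rates multiply required contact lists by an order of magnitude, which is why the sampling frame must be large enough before fieldwork begins.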
Care must be taken when initiating contact procedures, and it is important to have ethical clearance for the research. Respondents should have given their informed consent, and the questionnaire should follow equal opportunity guidelines. The Higher Education and Research Organisation in the UK (HERO) webpage on Professional Ethics and Equal Opportunities provides an overview of the key issues along with a range of relevant links.
According to Best and Krueger (2004) there are three main methods of recruitment for online questionnaires. The choice will depend on the sampling strategy and the aims of the research:
The online questionnaire is posted on a website which visitors view and complete using a web browser. Respondents can be recruited by placing a hypertext link on the home page if the website receives heavy traffic. A programme can be installed on the web server to randomly deliver the survey to people who visit the home page, but using this strategy makes estimation of the sampling frame difficult, thus precluding measurement of response rate and non-response bias. If the webpage does not receive sufficient traffic, then invitations can be sent out via email, postal mail, advertisements in the media, posting on frequently used online services, or using in-house directories of online addresses. Online advertisements can also be posted on frequently visited websites inviting respondents with specified characteristics (e.g. age, sexuality and occupation).
Various locations can be selected, including entry portal sites (e.g. Microsoft Network), sponsored search engines and directories (e.g. Yahoo!) and sponsored content sites (e.g. Amazon.com), but formal requests must be made and this may be costly. Advertisements can be embedded (displayed to all users visiting a particular web page, occupying some proportion of it) or intercept (appearing in a separate browser window, including pop-ups and floating advertisements). Care must be taken, as users may have software designed to prevent intercept advertisements from being displayed. Features of an advertisement that encourage users to visit the website include simple intrinsic appeals ('Contribute to an important study') and a stationary background rather than moving images. The advertisement then links directly to the main website for formal recruitment. Such approaches are useful for recruiting large, diverse non-probabilistic samples and can also be useful for targeting particular groups through appeals on specialist websites (see example).
How an Online Support Network Affects the Experience of Living with a Food Allergy - Dr. Neil Coulson and Dr. Rebecca Knibb, University of Derby.
A request was made to place the following advertisement on the homepage of an online support group website, FAST (Food Allergy Survivors Together).
By following the link, users were given the following simple message:
There are currently two exciting opportunities to participate in online research. Please show your support for this research by responding to the short questionnaires.
News Release for All Members
Posted: January 14, 2005
This is a survey/research project that you are invited to participate in. It is being conducted by Dr. Neil Coulson and Dr. Rebecca Knibb at the University of Derby, England, UK. Please select the link below.
The email message contains an embedded hyperlink to a website hosting the online questionnaire.
- There is no central registry of email addresses from which to generate probabilistic samples.
- Email lists can be obtained from public databases, web portals and individual websites, but these can be outdated and incomplete, and are often search-driven, so can be time-consuming to compile.
- Email addresses can be purchased, but the companies selling them may have obtained permission without users' knowledge, so ethical problems can arise.
- Email solicitations can be considered 'spamming', so direct permission should be sought where possible and all research should include institutional legitimisation.
- Emails should be sent directly to a single recipient: more than one address should never be listed in the 'to' or 'cc' field, since all recipients will see the entire list. The 'bcc' function can be used to send a single message to multiple recipients without revealing their email addresses to one another.
- Include a valid email address in the 'from' field, or recipients may consider your message 'spam'.
- The 'subject' field must be precise and attract users to participate in the study.
- Emails with attachments are less likely to be opened, owing to virus threats.
- The email message must be compelling, brief and clear to get people to respond, including the aims of the study, the research procedure, how the respondent's name and address were obtained, the researcher's details, institutional affiliation, etc.
- Hyperlinks and graphics (including institutional logos) should be used sparingly and audio and video should be omitted.
- Informed consent must be obtained.
- Provide the URL that will take people directly to the online questionnaire.
- Inform the recipients how to contact the researchers if they have a problem or question.
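The 'bcc' practice in the checklist above can be sketched with Python's standard library. The sender address, SMTP host and body text are placeholders, not details from any real study; the key point is that recipient addresses travel in the delivery envelope, never in visible 'To' or 'Cc' headers.

```python
# One recruitment message to many recipients via 'bcc' -- a sketch.
import smtplib
from email.message import EmailMessage

def build_message(sender, subject, body):
    """Compose the recruitment email. The visible 'To' header carries only
    the researcher's own address; real recipients are never listed here."""
    msg = EmailMessage()
    msg["From"] = sender      # a valid, identifiable researcher address
    msg["To"] = sender        # sent to self; targets go in the envelope
    msg["Subject"] = subject  # precise and inviting, per the checklist
    msg.set_content(body)     # plain text, no attachments
    return msg

def send_bcc(msg, recipients, host="smtp.example.org"):
    """Deliver one message to many recipients 'bcc'-style: addresses are
    passed as envelope recipients, so no recipient sees the others."""
    with smtplib.SMTP(host) as server:
        server.send_message(msg, to_addrs=recipients)
```

Because no 'Bcc' header is ever written to the message itself, nothing about the recipient list can leak even if the message is forwarded.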
The Association of American Geographers 'Internationalizing Geography in Higher Education' study.
The following email was sent to members of the association to request participation.
Subject: Internationalizing Geography in Higher Education
The Association of American Geographers invites your participation in a new study, “Internationalizing Geography in Higher Education,” to examine international education and collaboration in the discipline of geography. Partial funding for this research project comes from the National Science Foundation and the American Council on Education.
We are gathering information from geographers who teach at higher education institutions outside the United States to consider their perceptions of international collaboration and its value for improving geography education and research. We are also investigating the extent to which these geographers support the goals of international and global learning in their courses.
The results of this study will help professional associations, academic departments, and higher education institutions develop resources and programs to facilitate scholarly and educational collaborations among geographers worldwide. Your perspective is important to us regardless of whether you are currently or formerly have been involved in international collaborative work.
We ask that you take a few minutes of your time to complete the survey available here:
When you first visit this site, you must create a username and password to access the questionnaire online -- your AAG membership number (if you have one) will not work. Alternatively, you can download the survey and return a hard copy to us by mail or fax. In either case, we would appreciate receiving your responses by Friday, April 1, 2005.
The survey can be completed in approximately 20 minutes and you can save your work online to complete at a later time.
This research has been reviewed by Texas State University’s Institutional Review Board (IRB) under the Online Center for Global Geography Education project [Tel: 01-512-245-2314]. All records of the content of the survey will be held strictly confidential and neither you nor your department will be identified by name in the final report. You are under no obligation to participate in the study. Your completing and returning the questionnaire will be taken as evidence of your willingness to participate and your consent to have the information used for the purposes of the study. If you decide to participate, you are free to withdraw at any time with no penalty. A copy of the report will be mailed to participating individuals upon request after the completion of the study.
Thank you in advance for your participation. Please direct any questions regarding the survey to Michael Solem at firstname.lastname@example.org.
Dr. Michael Solem, Educational Affairs Director
Waverly Ray, Research Assistant
A message is posted to a mailing list requesting participants for the online questionnaire.
- Useful for non-probabilistic samples of specific sub-groups of users.
- Public lists can be searched via mailing list archive sites.
- Private lists must be obtained through individual contact and formal permission to organisations concerned, for example relevant businesses or educational institutions.
- Researchers must subscribe to the list to request formal recruitment via the list administrator or moderator.
- Sensitivity and respect should be displayed towards members of the list, and only one or two follow-up postings (repeat requests) should be made to increase response rates - more than this may be considered spam.
- The appeal for participants should be compelling, brief and clear to get people to respond, including the aims of the study, the research procedure, the researcher's details, institutional affiliation and why this specific mailing group has been targeted.
- Respondents must be instructed to return any emails to the researcher, not the whole mailing list.
- Information about participant demographics must be collected in order to assess the nature of the sample obtained.
Women’s experiences of infertility - Dr Nicola Illingworth, University of Stirling
Four potential online support and discussion groups were identified. Two of these groups were known to the researcher previously via an exploratory study one year earlier. A further two groups were identified by members of these groups (snowballing) by either passing my details on or advising me of an active group to contact.
The initial correspondence with the first two groups is shown below as an example.
Group 1 (UK site): first contact August, 2001.
Details concerning the nature of the current research and previous experience/research in this field were forwarded to the group moderator. Access was agreed by the group moderator within 48 hours. Following Rosenthal (1975), a series of repeat calls for participants was posted via the site Bulletin Board at 7-day intervals (four in total).
Access request to the group moderator:
I am a PhD student based at the University of Stirling, Scotland conducting research into women's experiences of infertility and treatment processes. I would like to request permission to access this discussion group for my current research.
During 1999/2000, I conducted a small-scale pilot study in this field and subsequently received a further 3 years funding for doctoral research in this area. I am particularly interested in talking to women experiencing all stages of infertility and the treatment process - combining experiences pre-treatment, during treatment and post-treatment. My previous research used email communication as a contact method, receiving very positive feedback from participants. Likewise, this current research will also be conducted primarily using the internet - either in the form of one-to-one e-mail interviews or diary-keeping. Anonymity and confidentiality will be strictly maintained at all times.
If you would like further discussion and information regarding my past/current research, please contact me either by phone or by e-mail.
Many thanks and look forward to hearing from you
Department of Applied Social Science
University of Stirling
Tel: 01786 466305
Group moderator response:
Thank you for your request to join this site. Access is confirmed. Membership details will be forwarded shortly.
Infertility Research Call (Bulletin Board):
My name is Nicola Illingworth. I am a research student, based at the University of Stirling, Scotland, conducting research exploring women’s experiences of infertility and the treatment process. During 1999/2000, I conducted a small-scale pilot study in this field and subsequently received a further 3 years funding for doctoral research in this area. I am particularly interested in talking to women experiencing all stages of infertility and the treatment process – combining experiences pre-treatment, during treatment and post-treatment.
Research participation involves completion of an initial email questionnaire and subsequent participation in either a one-to-one email interview or diary-keeping through treatment stages. No technical expertise – other than access to an email account - is required to participate – and I will offer help and advice if needed. Anonymity and confidentiality will be strictly maintained at all times.
If you would like to take part in this research and/or would like more information, please contact me at:
Group 2 (International site): first contact February, 2002.
Access was confirmed by the group moderator after 7 days.
This was a larger site, to which a series of research calls were posted (as above) via the site Bulletin Board and more specialised discussion groups.
Although there had been a high level of response when the researcher had accessed this site previously, on this occasion, response was minimal.
A search of related discussion sites revealed a number of negative comments about more recent research conducted using this site. In particular, several comments suggested that researchers had only revealed themselves after monitoring group discussions for lengthy periods – perhaps a contributing factor to the low response.
The following case studies give an indication of how sampling issues were dealt with in three studies employing online methods.
The Cyberparents Research Project
An internet-based research project was initiated to examine how, why and in what ways new parents use the internet as an information source about parenting and as a form of social support. The project focused on one pioneer UK parenting website: Babyworld.
Online questionnaire type
A web-based questionnaire survey was used to identify general patterns of use of Babyworld. The questionnaire survey was created using the HTML editor 'Adobe GoLive 4.0' and followed a similar format to traditional self-completion postal questionnaires, the main difference being that the survey form was set up online. In order to administer the questionnaire, a series of webpages was developed. All pages included the University of Leicester crest to show institutional affiliation, to give the project credibility and to ensure that participants could verify our status. The website included a homepage with a brief introduction to the project, which was linked to further pages entitled 'meet the researchers' and 'more about the project'.
In our research several hyperlinks were created between the questionnaire, the Cyberparents website and the Babyworld website. The links from Babyworld to the research webpages were made at the suggestion of the website providers and positioned strategically in prime locations on the Babyworld home page and on the most used pages of the website. This was the only mechanism used to elicit responses. It is significant to note that without the agreement and co-operation of the website providers in placing these strategic hypertext links, the survey would almost certainly not have been successful, since it would have been impossible to recruit these specific online community members in any other way. Thus the issue of access to online communities and website providers is crucial when conducting online research.
As we did not have access to a central registry, or master database, from which to create an accurate sampling frame, random sampling or gaining a representative sample was not possible. We therefore used online self-selection, as we were researching a particular group of internet users – mothers with newborn babies using a specific website. We were quite clear in writing up the results of our research that our findings were limited to this self-selected sample.
In our research it was not possible to verify the identity of respondents, but the questionnaire was so specific to being a new parent and a user of the Babyworld website that it would have been difficult, if not impossible, to complete without a detailed working knowledge of the website. However, this does not diminish the possibility that some respondents may have been 'spoofs', or may have played with their online identity when completing the questionnaire.
No incentives were used in our research. As we were also both mothers of newborn babies, survey respondents suggested that our commonality of experience as new mothers had encouraged them to complete the questionnaire. However, we were uneasy that respondents 'lost out' by paying for the internet connection time needed to complete the questionnaire.
Empirical Evidence Regarding the Folk Psychological Concept of Belief
The aim was to examine aspects of the folk psychological concept of belief, in particular to test some claims made within the folk psychology debate about the nature of this concept, and to explore further factors which may influence people's judgments in this area.
Online questionnaire type
The questionnaire was administered to participants by email, and consisted of a text file which was cut and pasted into the body of an email message. The text contained a short passage, followed by a question about this passage. Emails were sent to participants from the researcher’s university account, and participants were asked to respond by replying to that same email address.
Participants were initially recruited via psychology undergraduate seminars; however, this proved time-consuming and did not generate a large number of respondents. Fifteen respondents had been acquired in this manner before the internet sampling method was adopted. Internet participants were recruited by posting participation requests to a number of USENET newsgroups with which the researcher was familiar. The posting invited people to respond by emailing the researcher if they were interested in participating or wanted to find out more about the study. 158 people emailed to express an interest in taking part; of these, 135 took part in the study (i.e. received and returned the questionnaire) within a few weeks of the initial posting. This level of response was impressive compared with the previous method of recruiting undergraduates via seminar classes. Two key factors likely played a role: firstly, this study was carried out in the very early days of internet-mediated research, so participation may have been enhanced by novelty value; secondly, the newsgroups targeted were likely to reach potential participants for whom the research topic had high issue salience, further encouraging participation.
One problem which emerged during recruitment concerned a harsh response from one newsgroup moderator, who emailed the researcher stating that the posting was inappropriate and had been removed. Clearly, it is good practice to request permission from newsgroup moderators prior to posting a participation request; failure to do this was an oversight in this study.
Related to the point above concerning issue salience, the internet sample obtained contained a substantial proportion of respondents working within academia, a fair number of them with backgrounds in psychology and cognate disciplines. Given the nature of this research (to test everyday commonsense intuitions), a less specialist sample would have been ideal. Of the final sample of 140 participants (12 of whom were the undergraduates recruited via seminars), 21 were excluded from the final data set because they were not considered naïve to the topic under investigation. There would thus appear to be a trade-off between acquiring larger sample sizes and reducing bias due to issue salience when using internet sampling methods similar to those employed here. Individual research contexts and goals will dictate the appropriate balance, but it should be borne in mind that it may be necessary to reduce the final data set subsequently, as was the case in the current study.
Participants’ identity was not known beyond the address of the email account from which they responded. For the purposes of this study, further identity verification was not necessary, since there were no restrictions on who could take part beyond the requirement of being naïve to the topic under investigation (participants also needed to be over 18 years of age in order to give informed consent, though enforcing this is a general problem for internet-mediated research).
It was necessary, however, to track which version of the questionnaire each participant had viewed (there were several conditions in this study), so that returned responses could be linked to the correct study condition. In most cases this was not problematic, since respondents typically replied from the same email address and appended their answers to the original email containing the questionnaire. However, a few participants either responded from a different email account, sent their answers without the original questionnaire appended, or both. Where both occurred, it would not have been possible to determine which version of the questionnaire the responses referred to, had the participants not thought to highlight that they were responding from a different account (which they did in all cases) and to quote their alternative email address. Fortunately, the researcher had kept a separate record of which version was sent to each email address; otherwise, matching responses to conditions would have been problematic even where a respondent replied from the same email account but deleted the original questionnaire. This issue highlights a possible disadvantage of email-based internet research methods compared with, for example, web-based questionnaires, where the researcher has more control over the format in which data is returned.
The number of responses obtained was good, and they came in quickly. A measure of the actual response rate was not possible, since the sampling frame (how many people read the participation request) could not be determined. However, it is worth noting that of the 158 initial inquiries, 135 people went on to actually complete and return the questionnaire sent to them (i.e. an 85% return rate).
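The distinction drawn above matters: with an unknown sampling frame, only a return rate (completions as a share of initial inquiries) can be computed, not a true response rate. The figures reported in this case study can be checked with a short calculation (a minimal sketch):

```python
# Return rate: completed questionnaires as a share of initial inquiries.
# Figures as reported in this case study.
inquiries = 158   # people who emailed to express interest
completed = 135   # people who received and returned the questionnaire

return_rate = completed / inquiries
print(f"Return rate: {return_rate:.0%}")  # -> Return rate: 85%
```

Note that this 85% is a rate among self-selected inquirers; the number of people who merely read the newsgroup posting remains unknown.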
No incentives were offered in this study.
A Critical Geography of UK Biotechnology
This research aims to identify, interrogate and critically evaluate the innovative geographies of UK biotechnology, which is centred on the Golden Triangle. Biotechnology in the UK is a relatively new phenomenon, less than 30 years old, but its development has been haphazard due to the experimentalist politics and uncertain governance of the UK government. Furthermore, the unconsolidated management strategies and ambiguous ambitions of academia, industry and the government often result in an unresolved conflict between public and private interests.
Online questionnaire type
The online survey was used as a precursor to interviewing and focus groups. The results from the questionnaires helped identify key actors with relevance to specific aspects of the research project. None of the questionnaire results is intended to be representative of UK biotechnology activity; they are for reference purposes only.
Recruitment was via an introductory email explaining the rationale of the research and covering ethical and confidentiality issues. Email addresses were obtained from a variety of sources, from which a database of institutions and organizations directly and indirectly involved with UK biotechnology was created.
Sampling issues
There was no predetermined sampling frame. The email containing the survey was sent to as many people as possible, so as to identify those directly relevant to the different aspects of my research.
The identity of respondents was not an issue, as the survey was sent only to a specific list. The survey requested respondents' contact details, which were used for subsequent contact.
The response rate was c.25%, which was considered positive for a mass-mailed survey. Not all of the responses were followed up, as some were not deemed relevant; conversely, some of those who did not respond to the survey were subsequently interviewed. Those who were interviewed but had not responded mainly explained that the survey had been 'lost' among other tasks and then forgotten. The majority of responses arrived within 24 hours of the email request being sent, which appears to support the reasons for not submitting given by those later interviewed.
One individual specifically requested to be 'interviewed' by email due to the demands on their time during the working week. The interview consisted of six emails asking a series of questions, with each response informing the subsequent question, either for clarification or to explore alternative topics. The clear benefit of this type of interview was that it was written, and each email permitted reflection on responses prior to further correspondence.
No incentives were offered directly, although a number of individuals and organizations have requested copies of the final PhD thesis and any associated publications.