Exploring online research methods - Incorporating TRI-ORM

Implementation: Piloting, evaluation and analysis


Piloting

Piloting is of heightened significance online because a researcher who acts too quickly can cause real damage: a poorly composed message can be sent around the globe at the flick of a switch. According to Hewson et al. (2002), thorough pre-study testing of electronic materials directly reduces risks of all kinds once the study is under way.

Prior to distributing the online questionnaire, all aspects of design must be piloted, preferably with different types of potential respondent and on different types of computer. Navigation, spelling, typographical errors, appearance and readability must all be checked. Usability testing can also be conducted for web-based questionnaires; this involves checking that the website performs the function for which it was designed with the minimum of user frustration, time and effort (Pearrow 2000). Nielsen (2000) suggests that testing five users will reveal 85% of a website's usability problems. At its simplest, usability testing involves asking users to do something and observing how they manage the task. During the tests, participants are asked to verbalise their thoughts by 'thinking out loud'. The researcher should not give directions or judgements as to how the participant is using the web-based survey. Problems with page design, navigation, content and links should be noted, and any problems can be remedied before the final questionnaire is distributed. This approach can also be combined with the use of evaluation questionnaires in pilot studies (see example below).

Example

The following example of a paper-based questionnaire for a pilot study has been provided by researchers involved in an ESRC project on gender stereotypes (Prof Constantine Sedikides, University of Southampton; Dr Alison Lenton, University of Edinburgh). In order to carry out the studies connected to this project they established the [External Link - opens in a new window]Social Psychology Web-lab.

This particular study was designed to investigate the structure of implicit gender stereotypes by having participants categorise words according to whether or not they apply to women or men in general. Participants were also asked to rate the status of each word and to answer some questionnaires concerning individual differences with respect to their own gender identity, sexist tendencies, susceptibility to social desirability, etc. It was hypothesised that the female stereotype is both broader and less rigidly held than the male one. The study, entitled 'gender representations', can be found on the project website under 'previous studies'.

The pilot study, with approximately 20 participants, was conducted directly before the study went online. As a result of the piloting, adjustments were made to virtually every aspect of the study: instructions were clarified, the answering scale for the categorisation task was thoroughly redesigned, and the specific timing of the task (e.g. the fixation cross before each trial) was adjusted.

Pilot studies seem particularly helpful for complex tasks and for tasks that involve timing aspects of the participant's behaviour. For every study since then, participants have been asked to provide feedback via an open-ended question at the very end of the study. This means that participants' difficulties, concerns and comments can be taken into account on an ongoing basis within a given study (e.g. if feedback from the first few participants reveals that they did not understand a particular set of instructions, the instructions can be changed so that future participants have a better understanding of the tasks before them) as well as between studies (e.g. some of the same scales tend to be used across all studies, and participant feedback, along with the actual data, might reveal something about the utility of this practice).

'Feedback on Web-study' (title shown next to the logo of the Social Psychology Web-lab)

Thank you very much for your participation. The study you have just completed is a test version of an experiment we will run on the Internet in the coming months. The purpose of the experiment is to examine the way in which stereotypes of men and women are structured. This is done by looking at the way participants in the study associate words with the concepts of "maleness" and "femaleness".

In order to finalise the set-up of our study, we would like to ask you to reflect on your impressions and experiences while completing the different tasks. The basic structure of the final version will be similar to the one you have just completed – except that it won’t require participants to make the confidence ratings.

Given that the upcoming study could be improved in light of your feedback, we would appreciate it if you could offer your comments on each of the following:

* initial instructions (e.g. with respect to clarity, length, language)

 

* categorisation task (e.g. with respect to timing, task difficulty, the ease of operating the answering scale, the clarity of the interspersed instructions)

 

* ratings of word status (e.g. with respect to the operation of the answering scale, length of the task, clarity of instructions)

 


* The time the fixation cross ('+') was shown before the target word (the word that should be categorised) appeared was 1.5 seconds. When you do not have to do the confidence ratings in between the categorisations, do you think the fixation cross should be shown…

 

Shorter     1 [ ]     2 [ ]     3 [ ]     4 [ ]     5 [ ]     Longer

 

* You might have found that one of your trials 'timed out' (i.e. you did not respond quickly enough). The time from the moment the target word was shown until the timeout was two seconds in this version. What do you think about this timing? The time for categorising the words should be…

Shorter     1 [ ]     2 [ ]     3 [ ]     4 [ ]     5 [ ]     Longer

 

* How tiring did you find the study?

not at all tiring     1 [ ]     2 [ ]     3 [ ]     4 [ ]     5 [ ]     very tiring

 

* How interesting did you find the study?

not at all interesting     1 [ ]     2 [ ]     3 [ ]     4 [ ]     5 [ ]     very interesting

 

Thanks for your help!

 


Evaluation

Prior to analysis and interpretation, the researcher must evaluate the success of the online questionnaire. According to Denscombe (2003, 158-159), this will depend on the ability of the online questionnaire to:

  1. Provide full information
  2. Supply accurate information
  3. Achieve a decent response rate
  4. Uphold ethical principles.

It is also worth considering the following questions, adapted from Thurlow et al. (2004):

  1. Research breadth: Have you received a reasonable number of correctly completed online questionnaires to allow you to address your research question?
  2. Research quality: Are you satisfied that your online questionnaire has been of sufficient quality to produce a robust set of data?
  3. Good examples: Have you found relevant case studies and examples with which to explore your research question?
  4. Own ideas: Have you been original in your thinking and translated this into the research design for the online questionnaire?


Analysis

Once the success of the online questionnaire has been evaluated, the results can be analysed as soon as they are received.

Where a web form has been used to automatically populate a database, the data can be manipulated and exported once collected.
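As a rough illustration of this route (the table name, columns and sample rows below are hypothetical, not taken from any particular survey system), a short Python sketch of exporting responses collected in a database to a comma-separated-values file that spreadsheet or statistical packages can read:

```python
import csv
import sqlite3

# Minimal sketch: assume the web form has automatically populated a
# 'responses' table (table and column names are invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (respondent_id INTEGER, q1 TEXT, q2 INTEGER)")
conn.executemany(
    "INSERT INTO responses VALUES (?, ?, ?)",
    [(1, "yes", 4), (2, "no", 2)],
)

# Export the collected data as comma-separated values for import into a
# spreadsheet or statistical analysis package.
rows = conn.execute("SELECT respondent_id, q1, q2 FROM responses").fetchall()
with open("responses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["respondent_id", "q1", "q2"])
    writer.writerows(rows)
```

The same pattern applies to any database the form writes to; only the connection line changes.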

Where a web form has been created that emails results directly to an email address, the emails can be gathered into a single folder (manually, or automatically by setting an email 'rule'). They can then be exported in a suitable format (such as comma-separated values) directly into spreadsheet or statistical analysis packages. This process is made easier by 'import/export wizards', which guide the user through the process (see the image below for an example of MS Outlook's import/export wizard).

Image of MS Outlook's import/export wizard dialogue box
The Microsoft Outlook import/export wizard dialogue box
Description

The dialogue box shows the option 'Select an action to perform' with a range of import and export options (the selected option is 'Export to a file'). Under the options, a description of the selected option is given (in this case, 'Export Outlook information to a file for use in other programs'). Lastly, there are 'Next', 'Back' and 'Cancel' buttons.
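The emailed-results route can also be scripted rather than done through a wizard. The Python sketch below assumes (purely for illustration; the message bodies and field names are invented) that each emailed submission carries one 'field=value' pair per line in its body, and gathers them into comma-separated values:

```python
import csv
import io
from email import message_from_string

# Hypothetical emailed submissions: each message body holds one
# 'field=value' pair per line (the format your web form would define).
raw_messages = [
    "From: a@example.org\nSubject: survey\n\nage=30\nsatisfaction=4\n",
    "From: b@example.org\nSubject: survey\n\nage=25\nsatisfaction=5\n",
]

def body_to_record(raw):
    """Parse one emailed submission into a dict of field -> value."""
    body = message_from_string(raw).get_payload()
    return dict(line.split("=", 1) for line in body.strip().splitlines())

records = [body_to_record(m) for m in raw_messages]

# Write comma-separated values ready for import into a spreadsheet
# or statistical analysis package.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["age", "satisfaction"])
writer.writeheader()
writer.writerows(records)
```

In practice the raw messages would be read from the folder the email rule files them into, but the parsing and export steps are the same.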

Further information about gathering and exporting data can be found in the 'Technical guide' module.

Once imported into a suitable package, the data can then be sorted and analysed. A brief general overview of data analysis can be found on the following page taken from the Web Center for Social Research Methods' 'knowledge base' by William M. Trochim at Cornell University.
[External Link - opens in a new window] http://www.socialresearchmethods.net/kb/analysis.htm.

The P|E|A|S (Practical Exemplars and Survey Analysis) website, supported by the ESRC Research Methods Programme, is a resource which provides exemplars in survey analysis, outlines the underlying theory, and offers information about analysis software.
[External Link - opens in a new window] http://www.napier.ac.uk/depts/fhls/peas/index.htm.

The quantitative resources area of the Social Science Information Gateway (SOSIG) website also offers links to key websites and training materials on data analysis and statistics.
[External Link - opens in a new window] http://www.intute.ac.uk/socialsciences/researchtools/

 


 

 

 

  © 2004-2010  All rights reserved    |    Maintained by ReStore    |    About this website    |    Disclaimer    |    Copyright    |    Citation policy    |    Contact us