Administering an SRT

From SurveyWiki

Revision as of 04:03, 17 May 2011


Pre-Testing

Before you do anything, you need to make sure you have everything in place for training the test administrators. To train them, you will need to carry out the following steps:

  1. Prepare the training materials ⇒ You will have already prepared the final test in both elaborated transcription and recorded form while working through Developing an SRT. In addition, you will have the responses of at least 20 of the participants who helped you develop the SRT. You can use 10 of these responses to train test administrators and the other 10 for the third step below, ensuring administrator reliability. What you'll create are, in effect, 10 test recordings. Each one will contain the 15 final test sentences and the responses to them from one of the 10 test development participants. You'll also need to prepare score sheets for your administrator training; these can be edited down to the final 15 sentences from the scoresheets you used in test development. We recommend that during training you encourage your administrators to note down the specific type of error they hear occurring. This is unnecessary in the field, where a simple mark can be used, but in training you need to know whether they understand the differences between the types of errors so that they can score accurately.
  2. Train the administrators ⇒ Your administrators will need to be familiar with several things before they can administer a test: each line of the transcription, including the IPA; the scoring system; the practicalities of carrying out a test (see below for more); and the ability to repeat all the test sentences while following the transcript along with the recording. At this point, they are ready to listen to the 10 practice recordings you have prepared and score them one by one, comparing their answers to the master scoresheet. They should repeat this as many times as they need to become confident scorers.
  3. Ensure administrator reliability ⇒ Using the other 10 of the 20 recordings made during test development, you can make another set of recordings to help with administrator reliability. By using these, the administrators can test themselves to ensure that they are maintaining the standards required for scoring tests. Unlike step 2 above, the administrators should not pause the recording at any time or refer to the master scoresheet until they have scored the entire test. At that point, they can compare their scores with the trainer's to see how close they are. If an administrator scores 5 or more points above or below the trainer's total, they will need further training until their assessments are more accurate. You should also use these recordings to periodically re-calibrate administrators' skills if they are to administer this SRT over a longer period of time.
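The calibration check in step 3 can be sketched as a short script. This is only an illustration under assumptions: the function name and the representation of a scoresheet as a list of 15 per-sentence scores are hypothetical, while the 5-point threshold comes from the step above.

```python
# Hypothetical sketch of the administrator calibration check described above.
# A scoresheet is assumed to be a list of 15 per-sentence scores; an
# administrator whose total differs from the trainer's master total by
# 5 or more points needs further training.

CALIBRATION_THRESHOLD = 5  # difference in total score that triggers retraining

def needs_retraining(admin_scores, master_scores):
    """Compare an administrator's scoresheet total with the trainer's master total."""
    if len(admin_scores) != len(master_scores):
        raise ValueError("scoresheets must cover the same sentences")
    difference = abs(sum(admin_scores) - sum(master_scores))
    return difference >= CALIBRATION_THRESHOLD
```

For example, an administrator whose totals track the trainer's within a point or two passes, while one who is consistently a point off on every sentence of a 15-sentence test would be flagged for retraining.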

How do you select participants?

In tests of inherent intelligibility, it is assumed that among any group of L1 speakers, the language they speak will be relatively uniform, so testing only a small number of them will give the same results as testing all of them. Thus, when testing whether people can understand another language because of linguistic relatedness between their L1 and an L2, testing only a few speakers of the L1 will give you the information you need.

But SRTs are not tests of inherent intelligibility. They are used to test language proficiency... and proficiency varies not only from person to person but also from day to day for each language user.

This means we need to sample well and also bear in mind that our results may be influenced by variables as idiosyncratic as the time of day or whether the participant's baby kept them up for hours the night before!

We should also ensure that participants are people who have no impediment to speaking clearly, e.g. they have all their teeth, aren't chewing anything, etc.

Screening Questionnaire

Once participants have been selected through a sampling method, we need to administer a questionnaire to gather basic demographic data. This will help to confirm that they are suitable for our research. It may also be helpful to include variables that could influence language learning. The following is a list of some of the things such a questionnaire might include:

  • name
  • age
  • level of education
  • place of residence
  • profession
  • language spoken at home
  • clan
  • places travelled to
  • frequency of travel
  • purpose of travel
  • language/s spoken while travelling
  • relatives who speak the test language
  • patterns of exposure to the test language
  • patterns of use of the test language
  • preferences for language use
  • language attitudes
  • etc.

This list does not cover everything, nor would everything need to be covered in a screening questionnaire. The goals of the survey will have determined the need for and the purpose of the test. Use these factors to guide you as you construct the screening questionnaire and select the items most relevant to your test.

Once a participant has taken the test and you have a score for them, you can analyse their results and compare them to the information they provided in the questionnaire. Good sampling means that any variables which are going to affect your data should be revealed through the questionnaire.
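One simple way to compare scores against questionnaire data is to group participants by a questionnaire variable and look at the mean SRT score per group. The sketch below assumes a per-participant record structure and field names (such as `srt_score` and `frequency_of_travel`) that are purely illustrative, not prescribed by this article.

```python
# Hypothetical sketch: mean SRT score grouped by one screening-questionnaire
# variable. Records and field names are illustrative assumptions.

from collections import defaultdict

def mean_score_by(records, variable):
    """Group participants' SRT scores by one questionnaire variable and
    return the mean score for each value of that variable."""
    groups = defaultdict(list)
    for record in records:
        groups[record[variable]].append(record["srt_score"])
    return {value: sum(scores) / len(scores) for value, scores in groups.items()}

participants = [
    {"age": "15-25", "frequency_of_travel": "often", "srt_score": 38},
    {"age": "15-25", "frequency_of_travel": "rarely", "srt_score": 22},
    {"age": "26-40", "frequency_of_travel": "often", "srt_score": 41},
]
print(mean_score_by(participants, "frequency_of_travel"))
# → {'often': 39.5, 'rarely': 22.0}
```

A large gap between groups, as in this made-up example, would suggest that the variable (here, travel frequency) is influencing proficiency in the test language and deserves closer analysis.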

Testing

(notes from Radloff section 2.5 and 3.6)

Scoring

(notes from Radloff section 2.6)