Literacy training as an alternative to RTTs
A survey plan drafted in Cameroon involved carrying out literacy classes. Collaboration with literacy workers was proposed: they would train local trainers to teach the classes. A strength of carrying out experimental literacy classes is that certain members of the local speech community would actually learn, to some extent, to read another language over a two- or three-week period. The transfer of knowledge could then be measured, serving as a metric of how true the hypothesis "X people can read Y literature" is. It was hoped that extensibility testing and literacy training would occur simultaneously.
It would be instructive to explore the extent to which extensibility testing leads to a decision to promote ongoing use of materials by the tested speech community or its partners. There may also be the issue of designing appropriate bridge materials.
Problems with RTT for Extensibility
In the late 1980s, there was an attempt to measure extensibility by asking speakers of different varieties to do manual adaptations of RTT-like stories/texts. It was abandoned because the team discovered that there were just too many variables to account for.
Subjects were asked to take a text in a variety other-than-their-own and to adapt it to their own variety. These varieties were all part of a very long dialect chain and were variously identified as being different languages or dialects of a single language or something in between those extremes according to speakers’ perceptions of identity. Existing scripture texts were used for this, so the team was working with translated materials from three different translations (two of the variables: less-than-fully-natural text, and multiple translation styles and registers).
Extensibility was supposed to be assessed by trying to quantify and evaluate the number of adaptations that were made. However, this proved problematic as the team realised they hadn’t adequately analysed ahead of time what sorts of changes might take place. While they could count changes, it was very hard to place them into any reliable set of categories, let alone try to assign some evaluative weight to those categories.
In addition, the fact that different people have different skills introduced further uncontrolled variables: some subjects were much better at the task than others. Several other factors were not taken into account, including subjects' L1 proficiency, their levels of bilingualism in the LWC (or other vernacular varieties), their level of education, their previous exposure to the scriptures, and their contact with the varieties they were asked to adapt from.
|Here’s an idea that is in the "pilot test phase" right now:
Specialists are doing translation with a mother-tongue speaker of a certain dialect and have published some stories. We are trying to find out how far the materials they produce can reach (we have linguistic data about how close the dialects are, but scarce knowledge about acceptance and actual intelligibility). So we have a translator a) tell an incident from his life, which we use for an "ordinary" retell-RTT, and b) read a passage from a text that we considered not that well known. We segment these and have test subjects retell segment by segment, as with any retell test. We call both tests "retell-RTTs", but treat them as two different genres (even though in our case the written language is very much like the spoken language).
- You stated that for the text passage, you are recording a reading done by a speaker of the "reference" dialect of the language (I'll call it dialect A), and hope to test it in various other dialects (dialects B, C, D, ...). Have you considered recording the text read by a speaker of the dialect you are actually testing? In other words, you would go to one of the dialect areas where you want to test the text (dialect B, for instance), record a speaker of dialect B reading the translation, and then test that recording among other dialect B speakers. If the pronunciation of the words in dialect A is a hindrance to dialect B's comprehension and acceptability, then this could be overcome by recording the reading from a speaker of dialect B.
- Of course, this may mean you would need to train the dialect B person who reads the passage, encouraging him/her to pronounce the words in his own dialect and not try to mimic the pronunciation of a dialect A speaker. And perhaps this may not even be possible if there are other significant linguistic differences between dialects A and B, such that the person from dialect B would not even be able to read the translation in dialect A. But if that were the case, it might already be evident that the translation is not extensible to dialect B.
- Since we specifically aim to check whether the "reference" dialect (dialect A) could work for other dialect groups as well, we found it natural for a speaker of dialect A to read the passage we were going to use for testing. We figured that we would first check whether dialect A is "good enough" for speakers of other dialects (i.e. both well understood and accepted), and only if dialect A would not do would we carry out more survey work.
- It would be very hard to find people to read out the dialect A translation in their own dialect. Still, it might be possible. Other differences are more significant to speakers than pronunciation; the most striking of these we have found to be in vocabulary. The pronunciation differences are usually no obstacle for most adults. Our experience so far has shown that it is easier for people to deal with dialect differences in oral form than in written form, and, along the same line, that acceptance of literature in a dialect which is not one's own is higher if the text is presented orally. That is why, for now, we test orally.
- Thanks for sharing your insight. It appears that the differences in pronunciation minimally affected comprehension, and as such, it was not really beneficial to record the text from a speaker of the test area.
- We have tried a few kinds of literature RTT. If we don’t have many varieties to test, then we have recently tried some combination RTTs, where we play an oral RTT story in the variety used for the literature, then an oral recording of someone reading (as naturally as possible) a lesser-known text from the literature, and then give a written test. If there are people skilled at writing, we try to have someone write a short personal-experience narrative, rather than using a scripture portion that may be more foreign, awkward, or already known.
- If the pilot test subjects have some knowledge of the content of the text you choose, that might enable them to answer correctly about things that are not otherwise clear in the story.
- Showing a written text and asking people to read it aloud has been very effective. Whereas in the past, we would get vague answers about how many people could read or how well they could read, the situation becomes much more clear when someone actually has to do it.
- We are just starting to experiment with ways to quantify, or at least categorize, these responses. Below are some of our ideas. Asking questions about a written text is obviously different from a spoken text, especially if the text is still in front of the subject. One option would be to encourage the subject to set the paper aside and try to answer. A potential confounding factor, though, is that beginning readers tend to invest so much effort in just sounding everything out that they cannot devote much attention to the content.
|a. Observation: Speed (fast, medium, slow)
|b. Observation: Mistakes (every sentence, 1-3, none)
|c. Observation: Confidence (yes, no)
|How much did you understand: (1) everything, (2) most, (3) half, (4) some, (5) none?
|What is this story about? Please, include all the details you can. (if they don’t give details, then ask the following questions)
|Where did they go?
|When did they rest?
|What did they see?
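When comparing subjects across test sites, the observation categories above could be combined into a rough overall score. Here is a minimal sketch of one way to do that; the point weights and the `score_reading` function are illustrative assumptions, not part of any established survey procedure:

```python
# Hypothetical scoring sketch for the read-aloud observations above.
# The weights are arbitrary illustrations; a real survey team would
# calibrate them (or avoid collapsing categories into one number at all).

def score_reading(speed, mistakes, confident, self_rating):
    """Combine the four observations into a rough 1-11 score.

    speed:       "fast" | "medium" | "slow"
    mistakes:    "none" | "1-3" | "every sentence"
    confident:   True | False
    self_rating: 1 (understood everything) .. 5 (understood none)
    """
    points = {"fast": 3, "medium": 2, "slow": 1}[speed]
    points += {"none": 3, "1-3": 2, "every sentence": 0}[mistakes]
    points += 1 if confident else 0
    points += 5 - self_rating  # 0-4: more self-reported understanding
    return points

print(score_reading("medium", "1-3", True, 2))  # prints 8
```

Content questions (what the story is about, where they went, and so on) would still be recorded separately, since a fluent-sounding reader may have understood little.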
We used RTT methodology to test three recorded stories to assess text comprehension. In order to ensure our subjects would not already be familiar with the text, we chose stories from the Bible and tested only non-Christians.
- A first-person narrative told by a local man. This was the hometown/practice test, used to screen out subjects who could not perform the retelling task at all.
- The "Prodigal Son" from Luke as translated by a man who now lives elsewhere. This story was read by a man from the test location.
- "The King who forgives a man with a debt" in Matthew 18, as translated by a team living elsewhere and read by one of the translators.
Our tests were validated somewhat when we found that people who had learned the dialect (through intermarriage or travel) did very well on the tests, while people who hadn't learned the dialect did very poorly. There did not appear to be any difference in comprehension between the story read by a speaker of the language from elsewhere and the story read by a man from the test location.
Interpreting Results of Extensibility Testing: An example from the Philippines
|We wanted to know whether literature produced for one language community (A) can be used by another language community (B). B has 7 sub-tribes who speak with different intonation and some variation in vocabulary, but they understand each other and speak to each other in their own varieties. B used to be part of A, and people from B speak with A in their own variety. We did an RTT of A in each B sub-tribe, each time recording the RTT questions in the local variety. Basically, comprehension of the oral text was high (especially for adults).
Then, in one portion of our SLQ (reading comprehension), we asked them to read a short text (with familiar/common words) from the A literature, plus a short paragraph each in the national language and the LWC. The result? Very few were able to understand the A text, even though they understood the national language and the LWC perfectly. The main difference between A and B is the phones [l] and [n]: A uses /l/ in place of B's /n/.
I guess it is worthwhile to consider both the media format used in comprehension tests and the format in which literature will actually be made available. In our case, I think written literature would not be extensible, based on the result of our comprehension test. I have no idea whether they would accept it orally. Nevertheless, a separate translation would be better. In any case, these are just my initial observations and tentative conclusions.
- This example shows an extra complicating factor when testing written materials: the test subjects must be trained to read the orthography they will be tested on. If "the main difference" between the varieties is that a symbol is pronounced one way in one and another way in the other, then the problem might be solved by teaching people to pronounce the symbols differently than they do in the LWCs. If people practiced reading these letters with a different pronunciation, they might understand the text well enough to use it. For example, as an English speaker, when I read Spanish, French, or German words, I know I must pronounce J differently in each language.
- Just because an initial "test of reading" indicates some problems with comprehension, we cannot conclude that separate materials are needed.
- There may not be a need for separate literacy materials. If "[l] in A corresponds to [n] in B" is a very consistent correspondence, then the literacy teachers in B might be able to teach the existing lessons differently, saying, "in our municipality we pronounce the letter 'l' as the sound [n]". The same lessons should work (if the correspondence is consistent), with the teachers simply teaching and reading everything in good B pronunciation.
- It sounds as if literacy rates are high in B (based on finding people who could read the passages tested). In that case, a widely distributed pamphlet may be all that is needed, explaining how B people can read A materials with their own correct pronunciation for their dialect, just as British English teachers teach one way of pronouncing "short o" while American English teachers teach another.
- Some vocabulary differences were also mentioned; those would need to be considered as well. Are these dialectal synonyms, words like creek, brook, and stream that mean essentially the same thing but are used in different dialect areas? Such words have no negative connotations and are already well known, so people say, "oh, they say brook but we say creek". Or are the words different and not widely known, so that someone from the US reading a British car manual might say, "Why does this car manual talk about a bonnet?" Or has there been a semantic shift, as when people from the US say cookie for what British people call a biscuit, but say biscuit for what British people call a scone? These last two cases are a bit more complicated: some reading materials may need to be changed for them. It may be even worse if a word used in the printed materials is actually an offensive, rude, or vulgar word in the other variety.
- The first type of vocabulary difference may need only an acknowledgement of the differences (and overt permission for B speakers to write with their own vocabulary items); the middle cases may need a glossary, or additional contextual clues to help readers learn the new items/meanings. But the last case (words that have offensive meanings in other dialects) may point to the need for either a revision (so that no offensive words are included) or separate materials.
- Even if apparently simple solutions are found to address the phonological, orthographical, and lexical differences mentioned above, the overall solution may not be simple. Attitudes must also be considered. A long and unresolved history of antagonistic or paternalistic relations between the B and the A may mean that the B would never use materials known to have A origins. How long, how antagonistic, or how unresolved is "enough"? Tough question.
- Decisions about these matters should not be made by outsiders (and especially not by surveyors or others who have only a temporary connection to the language community). Surveyors can and should share what they have learned about the situation, but any decisions should be made by B stakeholders, perhaps using a participatory approach. Outside stakeholders from the government and NGOs would probably also join those from B in considering the current situation (existing materials in several languages), the desires of the B people (what they perceive to be their "own language" and how important it is to them to have access to materials in it), and the many options, each with its own costs and benefits, before reaching a decision about next steps. Often only initial steps can be decided on, e.g. "teach a group to read using the 'pronounce l as n' rule" and then re-evaluate, or make a list of words that differ between B and A, identify which are offensive, easily understood, etc., and then meet again to decide on further action. The activities decided on are best carried out by insiders, because people trust the results they gather themselves more.
- Another factor to consider: Distributing oral materials from A among the B may be a way to give them access to comprehensible literature almost immediately. That may also pave the way for the acceptance of A written materials.
- There's also a lesson here for countries that are not as far along in beginning their remaining translation tasks as the country in this example. If the B question had been addressed when the A work was started, and if it had been discovered then that a common literacy might be possible, several of the problems outlined in this example could have been taken into account. B representatives could have been involved in the translation from the beginning, which would have been a blessing. Alternatives could have been found for any words offensive to or unknown by the B. Attitude difficulties might have been resolved (I say might, because attitude issues change, and there might have been an attitude issue then that has disappeared by now).
The following University of South Africa doctorate thesis by Sue Hasselbring discusses principles involved in making extensibility decisions:
Cross-Dialectal Acceptance of Written Standards: Two Ghanaian Case Studies