Table of Contents

International Journal of Language Testing
Volume 4, Issue 1, March 2014

  • Publication date: 1393/01/30 (Solar Hijri)
  • Number of articles: 8
  • Zohreh R. Eslami* Page 1
  • Andrew D. Cohen* Page 5
    The article provides a rationale for assessing second language (L2) pragmatics in the classroom and then looks at tasks to assess comprehension of speech acts. Next we look at various ways to collect students’ pragmatic production, such as through oral role-play, written discourse as if spoken, multiple-choice, or short-answer responses. Then six strategies for more effective assessment of pragmatics are presented, such as how to make the speech act situations realistic and how to rate for key aspects. The assumption made in the article is that classroom teachers may be avoiding the assessment of pragmatics, especially nonnative teachers who feel that they themselves are incapable of judging what constitutes correct behavior.
    Keywords: Pragmatic assessment, classroom-based teacher assessment, teacher-based assessment, L2 speech act performance assessment
  • Kathleen Bardovi-Harlig*, Sun-Young Shin Page 26
    This article argues that testing in pragmatics has for too long relied on the same six measures of pragmatics assessment introduced by Hudson, Detmer, and Brown (1992, 1995). We demonstrate that there is a wealth of potential test formats in the L2 pragmatics acquisition literature that are as yet untapped resources for pragmatics testing. The article first reviews definitions of pragmatics that are useful in guiding the design and development of pragmatic measures and subsequent scoring. It then discusses the principles of language assessment as they have been applied to tests of pragmatics. Next it assesses and reports on current interest in pragmatics testing in language programs through informal interviews conducted with researcher-teachers on current practices in pragmatics testing. We then introduce tasks that are used in pragmatics research which are innovative in the context of assessment, and address the potential of each task to enhance authenticity, its practicality for testing, and its potential for broadening our construct representation.
    Keywords: pragmatics assessment, reliability, validity, authenticity, practicality
  • Jianda Liu*, Lijun Xie Page 50
    The Written Discourse Completion Task (WDCT) has been used in pragmatics tests to measure EFL learners’ interlanguage pragmatic knowledge. In a WDCT, the students give their responses to situations designed to elicit certain pragmatic functions, so human raters are required to rate the students’ performance. When decisions are made based upon such ratings, it is essential that the assigned ratings are accurate and fair. As a result, efforts should be made to minimize the impact of rater inaccuracy or bias on ratings. This paper reports a study of rater effects in a WDCT pragmatics test. Based on the Myford & Wolfe (2003, 2004) model and corresponding retrospective interviews, four types of rater effects were investigated and discussed quantitatively and qualitatively: leniency/severity, central tendency, halo effect, and differential leniency/severity. Results revealed significant differences in rating severity, with a general tendency towards severity. Though the raters could effectively and consistently employ the rating scales in their ratings, some of them showed certain degrees of halo effect. Most raters were also found to exhibit certain bias across both traits and test takers. Possible reasons behind the rater effects were analyzed. Finally, suggestions were made for rater training.
    Keywords: WDCT, rater effects, Pragmatics test, rater training
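The rater effects named in the abstract above can be illustrated with simple descriptive statistics. The sketch below is a hypothetical illustration, not the study’s analysis: the study fits a many-facet Rasch model in the Myford & Wolfe framework (implemented in programs such as FACETS), and all data, column names, and scales here are invented.

```python
import pandas as pd

# Hypothetical WDCT ratings: each row is one rater scoring one test
# taker on one trait, on a 1-5 scale. (Invented data for illustration.)
ratings = pd.DataFrame({
    "rater":      ["R1"] * 6 + ["R2"] * 6,
    "test_taker": ["T1", "T1", "T2", "T2", "T3", "T3"] * 2,
    "trait":      ["appropriacy", "accuracy"] * 6,
    "score":      [3, 3, 2, 2, 4, 4,   # R1: trait scores move in lockstep
                   5, 4, 4, 2, 5, 3],  # R2: trait scores are distinguished
})

# Leniency/severity: distance of each rater's mean score from the group
# mean (negative = relatively severe, positive = relatively lenient).
severity = ratings.groupby("rater")["score"].mean() - ratings["score"].mean()

# Central tendency: a rater who hugs the scale midpoint shows an
# unusually small spread of scores across test takers and traits.
spread = ratings.groupby("rater")["score"].std()

# Halo effect (rough raw-score proxy): within one rater, near-perfect
# correlation between trait scores across test takers suggests the
# traits are not being rated independently.
halo = {
    rater: grp.pivot(index="test_taker", columns="trait", values="score")
              .corr().iloc[0, 1]
    for rater, grp in ratings.groupby("rater")
}

print(severity, spread, halo, sep="\n")
```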
  • Zia Tajeddin*, Minoo Alemi Page 66
    Pragmatics assessment literature provides little evidence of research on rater consistency and bias. To address this underexplored topic, this study aimed to investigate whether a training program focused on pragmatic rating would have a beneficial effect on the accuracy of nonnative English speaker (NNES) ratings of refusal production as measured against native English speaker (NES) ratings, and whether NNES rating bias diminishes after training. To this end, 50 NNES teachers rated EFL learners’ responses to a 6-item written discourse completion task (WDCT) for the speech act of refusal before and after attending a rating workshop. The same WDCT was rated by 50 NES teachers, whose ratings served as a benchmark. Pre-workshop non-native ratings, measured against the native benchmark in terms of mean, SD, mean difference, and native/non-native correlation, revealed that non-native raters tended to be more lenient and widely divergent in rating the total DCT and across items. Subsequent to training, however, non-native rating produced more accurate and consistent scores, indicating approximation toward the native benchmark. To measure rater bias, a FACETS analysis was run. FACETS results showed that both before and after training, many of the raters were outliers. Moreover, after training, a few raters became biased in rating certain items. From these findings, it can be concluded that pragmatic rater training can positively influence non-native ratings by bringing them closer to those of natives and making them more consistent, but not necessarily by making them less biased.
    Keywords: pragmatic rater training, refusal, native English speaker teacher, non-native English speaker teacher, bias, FACETS
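The benchmark comparison this abstract describes (mean, SD, mean difference, and native/non-native correlation, before and after training) can be sketched as follows. All numbers are simulated purely for illustration, and the leniency and training effects are assumptions built into the simulation; the study’s bias analysis additionally relies on many-facet Rasch measurement via FACETS, which this raw-score sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated NES benchmark score per WDCT response (30 responses, 1-5 scale).
nes = rng.uniform(1, 5, size=30)

# Simulated NNES ratings: lenient and noisy before training, closer to
# the benchmark after training (both effects assumed for illustration).
nnes_pre = np.clip(nes + 0.6 + rng.normal(0, 0.8, size=30), 1, 5)
nnes_post = np.clip(nes + 0.1 + rng.normal(0, 0.3, size=30), 1, 5)

# Accuracy indices against the benchmark: a smaller mean difference and
# a higher correlation indicate closer approximation to native ratings.
for label, nnes in [("pre-training", nnes_pre), ("post-training", nnes_post)]:
    print(f"{label:>13}:",
          "mean diff =", round(float(np.mean(nnes - nes)), 2),
          "| SD =", round(float(np.std(nnes, ddof=1)), 2),
          "| r(NNES, NES) =", round(float(np.corrcoef(nnes, nes)[0, 1]), 2))
```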
  • Noriko Ishihara*, Akiko Chiba Page 83
    Despite an upsurge of interest in teaching pragmatics in recent years, the assessment of L2 pragmatic competence appears to have attracted little attention. Assessment in this area seems to center on either formal or interactional assessment (see Ross & Kasper, 2013). Using qualitative analysis, this preliminary study explores the benefits and limitations of the teacher-based and interactional assessment of young learners’ pragmatic development facilitated through dialogic intervention into pragmatics using the visual presentation of narratives. The teacher-based assessment instruments included: a) formality judgment tasks (FJTs); b) discourse completion tasks (DCTs); c) student-generated visual DCTs (SVDCTs); d) pre-designed assessment rubrics; and e) the teacher’s written reflections. The outcome of these instruments was compared with the analysis of f) audio- and video-recorded classroom interactions. The data from five Japanese learners aged 7-12 studying in Hong Kong are reported. The analysis of the data demonstrated that multiple teacher-based assessments used at different points during the instruction revealed enhanced pragmatic awareness and production of the target requests on the learners’ part. However, the teacher-based assessment instruments sometimes resulted in an incomplete or inconsistent data set and occasionally yielded overly generous or inaccurate assessments. In contrast, the interactional assessment, though it tends to be impractical in everyday teaching contexts, revealed the teacher’s ongoing mediation and the dynamic process of joint knowledge construction, including teacher or peer scaffolding, the learners’ response to the mediation, collaborative meaning-making, stages of other-regulation, and emerging signs of self-regulation. Some of the teacher-based assessments offered an opportunity to explore a broader repertoire of pragmatic knowledge in the learners that may not surface in interactive oral discourse. Teacher-based and interactional assessment can thus be viewed as complementary in terms of credibility and practicality as they inform each other regarding the learning outcome and the process of knowledge co-construction.
    Keywords: teacher-based assessment, interactional assessment, pragmatic development, young learners, mediation, scaffolding, knowledge co-construction, requests, student-generated visual DCTs, multimodality
  • Esther Usó-Juan*, Alicia Martínez-Flor Page 113
    Assessing pragmatics in the classroom has been recognized as a difficult and complex task, since many contextual factors influence appropriate use of the language. Therefore, it is essential to carefully design the methods that elicit learners’ production of a particular pragmatic feature, given that the choice of elicitation instrument may influence research outcomes. Considering these aspects, the aim of the present paper is the elaboration of a discursive type of instrument, that of an interactive discourse completion task, to assess learners’ use of the strategies employed when complaining and apologizing in a second/foreign language context. Additionally, the potential of using verbal reports to obtain learners’ insightful information as regards their execution of speech act production is also highlighted by the creation of a retrospective verbal report that may be used in combination with the elicitation method being elaborated. The speech acts of complaining and apologizing were chosen because performing them in a second/foreign language may be difficult for learners, owing to their lack of familiarity with the norms and conventions of the target language, which may in consequence result in impolite or rude behaviour. Therefore, learners may require a certain level of pragmatic competence to perform these speech acts appropriately in order to avoid possible communication breakdowns. To do so, learners need to know that the appropriate choice of the conventional expressions of complaining and apologizing may depend on sociopragmatic issues such as the social status (low or high) and the social distance (close or distant) between the interlocutors, as well as the intensity of offense (less or more) involved in the communicative act.
    Keywords: Assessment, Interlanguage Pragmatics, Conventional Expressions, Complaints, Apologies, Test Instruments, Interactive Discourse Completion Tasks, Verbal Reports
  • Zohreh R. Eslami*, Azizullah Mirzaei Page 137
    The present study compares two measures most frequently used to assess pragmatic competence: Written Discourse Completion Tasks (WDCTs) and Oral Discourse Completion Tasks (ODCTs). The study focuses on these two speech act data collection methods and explores the validity of using different forms of Discourse Completion Tasks (DCTs) in non-Western contexts. Twenty Iranian university students responded to both measures eliciting requestive speech acts. The response length, range and content of the expressions, formality level, and spoken vs. written language forms were analyzed. The findings show that the two measures elicit different production samples from the students. ODCTs induced longer, more elaborate responses, and more linguistic forms representing the spoken variety of the language, than WDCTs did. These differences appear to be caused by the oral mode of ODCTs. In WDCTs, students mixed different styles (spoken and written) and used both formal and informal linguistic devices in one situation. Our findings indicate that WDCTs may be inappropriate for collecting data in Persian, a language with marked differences between its spoken and written varieties and highly complicated stylistic variations. Studies like this underscore the fact that more work is needed both to extend the range and scope of speech act studies to non-Western languages and to refine the methodologies used to measure pragmatic competence.
    Keywords: Pragmatic Assessment, Non-Western Language, Persian Language, Written DCT, Oral DCT, Speech Act