Table of Contents

International Journal of Language Testing
Volume: 6, Issue: 2, Oct 2016

  • Publication date: 1395/09/30
  • Number of articles: 5
  • Sepeedeh Hanifehzadeh, Farzaneh Farahzad Page 50
    The present study was designed to develop a psycho-motor mechanism scale based on the theory of translation competence proposed by PACTE (2003), and then to assess the validity and reliability of the constructed scale. In this quantitative study, after the scale was designed, two translation tasks were given to 90 M.A. students majoring in translation studies at four branches of Islamic Azad University. Based on the ratings of two experienced raters, the reliability and validity of the scale were determined. Two types of validity, concurrent and construct, were assessed. The correlation between the TOEFL PBT, as a test of linguistic ability, and the researcher-constructed psycho-motor scale was checked and confirmed. Next, the correlation between the holistic scale for translation quality assessment developed by Waddington (2001) and the researcher-constructed psycho-motor scale was examined and likewise confirmed. To establish the construct validity of the scale, factor analysis was run to probe the constructs underlying the eight components of the researcher-constructed psycho-motor mechanism scale. As for reliability, the correlation between the two raters' ratings on the constructed scale was calculated, and the scale was found to be reliable (an illustrative R sketch of these analyses follows the keywords below). These findings, supported by the validity and reliability of the researcher-constructed scale, can contribute to translation studies, a field in great need of objective and communicative scales for translation tasks grounded in an anchored and consolidated theory of translation quality assessment such as that of PACTE (2003).
    Keywords: PACTE, Psycho-motor mechanism, Reliability, Scale, Translation quality assessment, Validity
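    A minimal R sketch of the analyses this abstract reports (correlation-based reliability and concurrent validity, plus factor analysis of the eight components); the data objects and column names below are illustrative assumptions, not the authors' materials:

        # Assumed data frame `scores`: one row per examinee, with the two raters'
        # scale totals and the external measures; `components` holds the eight
        # psycho-motor component scores for each examinee.
        cor(scores$rater1, scores$rater2)                    # inter-rater reliability
        cor(scores$psychomotor, scores$toefl_pbt)            # concurrent validity vs. TOEFL PBT
        cor(scores$psychomotor, scores$waddington_holistic)  # vs. Waddington's (2001) holistic scale
        factanal(components, factors = 2)                    # construct validity: factor analysis of the 8 components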
  • Guangyan Chen Page 72
    This study develops a model of analytic rating scales to assess L2 Chinese oral performance. It uses Exploratory Factor Analysis (EFA) to identify a model and employs Confirmatory Factor Analysis (CFA) on a separate dataset to test the degree of model fit. The researcher videotaped ten speeches, and ACTFL professional raters assessed the oral performances in these samples. The researcher then selected three samples (Samples 1, 2, and 3) to represent the proficiency levels of Novice High, Intermediate High, and Advanced Low. Next, the researcher developed 20 rating items by interviewing ten experienced L2 Chinese teachers and running an EFA. The 20 items were descriptors that Chinese teachers used to assess oral performance in two studies, Study 1 and Study 2. In Study 1, the researcher recruited 45 teachers to assess Sample 1 using the 20 items, 62 teachers rated Sample 2, and 49 teachers rated Sample 3. In Study 2, 104 teachers assessed all three samples. The EFA indicated a four-factor model of analytic rating scales: “fluency,” “conceptual understanding,” “communication clarity,” and “communication appropriateness.” In this model, the correlations between the analytic rating scales were relatively high, and teachers weighted “fluency” as most important. Together the four scales explained 65.5% of teachers’ holistic judgments of oral performance. The CFA did not show a strong model fit to the data, but the fit was acceptable (an illustrative R sketch of this EFA/CFA workflow follows the keywords below). The model advances our understanding of the relationship between analytic rating scales and holistic ratings in the context of L2 Chinese, and gives Chinese teachers a reference with which to assess U.S. college students’ L2 Chinese oral performance.
    Keywords: Analytic rating scales, Factor analysis, Language assessment, L2 Chinese, Oral performance
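    A hedged R sketch of the EFA-then-CFA workflow the abstract describes; the four factors are those reported, but the item-to-factor assignments, object names, and the choice of the lavaan package are assumptions for illustration only:

        library(lavaan)  # CFA; assumes item ratings are in data frames `ratings_efa` and `ratings_cfa`

        # Exploratory step on the first dataset (base R)
        efa <- factanal(ratings_efa, factors = 4, rotation = "promax")
        print(efa, cutoff = 0.3)

        # Confirmatory step: test the four-factor model on the second dataset
        model <- '
          fluency         =~ item1  + item2  + item3  + item4  + item5
          understanding   =~ item6  + item7  + item8  + item9  + item10
          clarity         =~ item11 + item12 + item13 + item14 + item15
          appropriateness =~ item16 + item17 + item18 + item19 + item20
        '
        fit <- cfa(model, data = ratings_cfa)
        fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))  # model-fit indices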
  • S. Jamal Hemmati, Purya Baghaei *, Masoumeh Bemani Page 92
    Research has been conducted on the linguistic aspects of L2 reading comprehension ability, but studies concerning the cognitive aspects of this skill are rather scarce. Efforts should be made to gain a sound understanding of the attributes and subskills needed for successful performance on L2 reading comprehension tests. In this study, the reading attributes underlying the reading comprehension section of the Iranian National University Entrance Examination (Konkoor) held in 2011 are investigated. The G-DINA model, a general cognitive diagnostic model, is applied, and the CDM package in R is used to run the analyses (a minimal sketch of such a model fit follows the keywords below). The findings indicate problematic areas in reading comprehension at the national level. Students who wish to pursue their studies at national universities, the teachers who prepare candidates at high schools, and the Ministry of Education can benefit from such feedback.
    Keywords: Cognitive Diagnostic Modeling, Reading comprehension, UEE
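    A minimal sketch of fitting the G-DINA model with the CDM package in R, as the abstract reports; the response matrix `responses` and the Q-matrix `Q` are placeholders standing in for the Konkoor reading data:

        library(CDM)

        # `responses`: examinees x items matrix of dichotomous (0/1) scores
        # `Q`: items x attributes matrix mapping each item to the reading subskills it requires
        fit <- gdina(data = responses, q.matrix = Q)
        summary(fit)  # item parameters and estimated attribute (subskill) mastery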
  • Hamid Ashraf, Mona Tabatabaee Yazdi, Aynaz Samir Page 101
    The literature on the C-test and the cloze test in a second language provides few accounts of the actual mental processes in which test-takers engage, processes that indicate the real nature of what these tests measure, and even less attention has been paid to the cognitive processes involved in X-Tests. The purpose of this study was therefore to discover the extent to which the C-Test and the X-Test tap participants’ use of mental strategies. To this end, a C-Test and an X-Test were administered to eight EFL learners in Mashhad, Iran. All of them took part in introspective think-aloud sessions and retrospective interviews throughout the test administration, and the think-aloud protocol was used to collect the required data. The results showed that the two tests were similar with respect to the mental processes involved, except for two strategies that participants applied only when filling the gaps of the X-Test. Moreover, the X-Test appeared to be more difficult for the subjects.
    Keywords: C-test, X-test, Think-Aloud Protocol, Cognitive Correlates
  • Fahimeh Khoshdel, Purya Baghaei *, Masoumeh Bemani Page 113
    In this paper we attempt to demonstrate the validity of the C-Test using the construct identification approach. In this approach to construct validation, the factors that contribute to item difficulty are identified, on the assumption that the factors which make items difficult constitute the construct underlying the test. For the purposes of this study, 11 item-level and sentence-level factors deemed to affect item difficulty were entered into a regression analysis to predict classical item p-values (an illustrative sketch follows the keywords below). The 11 factors explained only 8% of the variance in item difficulties, showing that lexical and sentential factors account for only a very small portion of the variance in p-values; a great amount of the variation in item difficulty should apparently be attributed to above-sentence and text-level factors. The implications of the study for C-Test construct validity are discussed.
    Keywords: C-Test, validation, construct identification
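    An illustrative R sketch of the regression the abstract describes; `item_predictors` is an assumed data frame with one row per C-Test gap, its classical p-value, and the 11 item- and sentence-level predictors as columns:

        model <- lm(p_value ~ ., data = item_predictors)  # regress p-values on all 11 predictors
        summary(model)$r.squared                          # the study reports roughly .08 explained variance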