Corpus-based studies, collectively known as corpus linguistics, have grown dramatically in recent years. The present study examines improvements to frequency-based techniques for the Farsi language and aims to establish a scientific approach to automatic term extraction, focusing on basic medical terms. Combining statistical approaches with corpus-linguistic tools (hybrid extraction methods) for automatic term extraction has become quite common in languages such as English, French, Japanese, and Korean. These approaches have not yet been widely applied to Farsi, where most term-extraction efforts have been carried out in traditional ways. Moreover, such approaches are language-specific and cannot simply be transferred to another language; they must be adapted to the properties of the target language to yield an extraction method suited to it. To this end, a group of frequency models that count frequencies in a general corpus and a special corpus, together with their improved variants, were employed. The frequency method used in this study counts terms in a general corpus and in a special corpus compiled by the researcher. These corpora were built from science textbooks used in Iranian high schools (grades 9-12) and middle schools (grades 7-8), science texts taught at the Qazvin Imam Khomeini Farsi Language Center, and journals and articles on general science. The results show that automatic term extraction in Farsi is feasible. A major challenge for the simple methods is filtering out high-frequency function words such as conjunctions and prepositions; to increase the power of the model, we therefore improved the basic models by applying additional techniques to them.
The improved frequency method outperformed the other methods on the special corpus, predicting up to 60% of the special vocabulary within the first 50 high-frequency extracted words. The results also show, however, that low-frequency words in the general corpus whose frequencies resemble those of the special vocabulary led to weaker results than the simple method.
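The contrastive frequency idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it scores each word by its relative frequency in the special corpus versus the general corpus (a "weirdness"-style ratio, which is one common choice; the abstract does not specify the exact formula), with add-one smoothing for words absent from the general corpus. The toy corpora, the whitespace tokenizer, and the smoothing constant are all assumptions for illustration; a real Farsi pipeline would also need proper normalization and half-space handling.

```python
from collections import Counter

def tokenize(text):
    # Naive whitespace tokenizer (assumption); real Farsi text
    # needs normalization and zero-width non-joiner handling.
    return text.lower().split()

def term_scores(special_text, general_text, smoothing=1.0):
    """Rank candidate terms by relative frequency in a special corpus
    versus a general corpus (a 'weirdness'-style ratio; the paper's
    exact scoring formula is not given in the abstract)."""
    sp = Counter(tokenize(special_text))
    ge = Counter(tokenize(general_text))
    sp_total = sum(sp.values())
    ge_total = sum(ge.values())
    scores = {}
    for word, freq in sp.items():
        p_special = freq / sp_total
        # Smooth so words unseen in the general corpus get a finite score.
        p_general = (ge[word] + smoothing) / (ge_total + smoothing * len(sp))
        scores[word] = p_special / p_general
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: domain words rise to the top, function words sink.
special = "cell membrane cell nucleus enzyme enzyme the of cell"
general = "the of and the of to in the of a"
ranking = term_scores(special, general)
```

With these toy corpora, "cell" ranks first while "the" and "of" fall to the bottom, mirroring the challenge the abstract mentions: a plain frequency count would rank the function words highly, whereas the contrastive ratio suppresses them.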