A Set of Statistical Features for Evaluating Interactive Question Answering
Evaluation plays an important role in interactive question answering (IQA) systems, yet there is practically no general methodology for evaluating them. The main difficulty in designing an evaluation method for IQA systems is that the interactive component is rarely predictable; consequently, humans must be involved in the evaluation process. In this paper, a model is presented that introduces a set of built-in statistical features for evaluating IQA systems. To conduct the evaluation, four IQA systems were considered, and a database of conversations exchanged between users and the systems was collected. After preprocessing the conversations, their statistical characteristics were extracted and, based on those characteristics, a feature matrix was formed. Finally, using an SVM, human judgments were divided into two groups. The correlation coefficient between the human judgments and the proposed feature set indicated the high accuracy of the proposed features in evaluating IQA systems.
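The following is a minimal sketch of the pipeline outlined above, assuming scikit-learn and scipy; the feature matrix dimensions, the toy data, the RBF kernel choice, and the variable names are illustrative placeholders rather than the paper's actual dataset or configuration.

```python
# Hypothetical end-to-end sketch: feature matrix -> SVM grouping ->
# correlation against human judgments. Not the paper's actual data.
import numpy as np
from sklearn.svm import SVC
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per conversation, one column per
# statistical feature extracted after preprocessing.
X = rng.random((100, 8))                       # 100 conversations, 8 features
human_labels = rng.integers(0, 2, size=100)    # human judgments, two groups

# An SVM separates the conversations into the two human-judgment groups.
clf = SVC(kernel="rbf").fit(X, human_labels)
predicted = clf.predict(X)

# The correlation coefficient between human judgments and the
# feature-based predictions serves as the agreement indicator.
r, p_value = pearsonr(human_labels, predicted)
print(f"Pearson correlation: {r:.3f} (p = {p_value:.3f})")
```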