Classification of EEG Signals for Discrimination of Two Imagined Words
In this study, a Brain-Computer Interface (BCI) for a silent-talk application was implemented. The goal was an electroencephalography (EEG) classifier for three classes: two imagined words (Man and Red) and silence. During the experiment, subjects were asked to silently repeat one of the two words, or to do nothing, in a pre-selected random order. EEG signals were recorded with a 14-channel EMOTIV wireless headset. Two combinations of features and classifiers were evaluated: Discrete Wavelet Transform (DWT) features with a Support Vector Machine (SVM) classifier, and Principal Component Analysis (PCA) features with a minimum-distance classifier. Both combinations discriminated among the three classes well above the chance level (33.3%), although neither was reliable and accurate enough for a real application. The first method (DWT+SVM) showed better results. Its feature set consisted of the D2, D3, D4, and A4 coefficients of a 4-level DWT decomposition of the EEG signals, roughly corresponding to the major frequency bands (delta, theta, alpha, and beta) of these signals. Three binary SVMs were used, each trained to classify between two of the three classes: Man/Red, Man/Silence, or Red/Silence. A majority selection rule determined the final class: when at least two of these classifiers output the true class, a correct classification was counted; otherwise, a misclassification was counted. Finally, Monte Carlo cross-validation showed an overall performance of about 56.8% correct classification, which is comparable with the results reported for similar experiments.
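The pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it implements a 4-level DWT with a Haar wavelet (the abstract does not specify which mother wavelet was used), keeps the D2, D3, D4, and A4 coefficients as the feature vector, and combines three hypothetical pairwise decisions (Man/Red, Man/Silence, Red/Silence) with the majority selection rule. Training of the actual SVMs and the Monte Carlo cross-validation loop are omitted.

```python
import numpy as np

def haar_dwt_features(signal):
    """4-level Haar DWT; returns concatenated D2, D3, D4 and A4 coefficients.
    Haar is an assumed choice; the paper does not name the wavelet."""
    a = np.asarray(signal, dtype=float)
    details = {}
    for level in range(1, 5):
        if len(a) % 2:                      # pad to even length before each split
            a = np.append(a, a[-1])
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        details[level] = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = approx
    # Keep D2, D3, D4 and the final approximation A4, as in the abstract.
    return np.concatenate([details[2], details[3], details[4], a])

def majority_vote(pred_mr, pred_ms, pred_rs):
    """Combine three pairwise decisions (Man/Red, Man/Silence, Red/Silence).
    Each argument is one of the labels 'man', 'red', 'silence'.
    Returns the label with at least two votes, or None (counted as a loss)."""
    votes = [pred_mr, pred_ms, pred_rs]
    counts = {c: votes.count(c) for c in set(votes)}
    winner, n = max(counts.items(), key=lambda kv: kv[1])
    return winner if n >= 2 else None

# Toy usage on a synthetic 128-sample epoch:
feat = haar_dwt_features(np.random.randn(128))
print(len(feat))                            # 32 + 16 + 8 + 8 = 64 features
print(majority_vote('man', 'man', 'red'))   # 'man'
```

In a full reproduction, each of the three pairwise predictions would come from a binary SVM trained on these DWT features, and the correct-classification rate would be estimated by repeating random train/test splits (Monte Carlo cross-validation).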