Articles matching the keyword "convolutional neural network" in medical journals
  • Khosro Rezaee *, Asmar Nazerian, Hossein Ghayoumi Zadeh, Hani Attar, Mohamadreza Khosravi, Mohammad Kanan
    Introduction

    Drowsy driving is a significant contributor to accidents, accounting for 35 to 45% of all crashes. Implementation of an internet of things (IoT) system capable of alerting fatigued drivers has the potential to substantially reduce road fatalities and associated issues. Often referred to as the internet of medical things (IoMT), this system leverages a combination of biosensors, actuators, detectors, cloud-based and edge computing, machine intelligence, and communication networks to deliver reliable performance and enhance quality of life in smart societies.

    Methods

    Electroencephalogram (EEG) signals offer potential insights into fatigue detection. However, accurately identifying fatigue from brain signals is challenging due to inter-individual EEG variability and the difficulty of collecting sufficient data during periods of exhaustion. To address these challenges, a novel evolutionary optimization method combining convolutional neural networks (CNNs) and XGBoost, termed CNN-XGBoost Evolutionary Learning, was proposed to improve fatigue identification accuracy. The research explored various subbands of decomposed EEG data and introduced an approach that transforms EEG recordings into RGB scalograms. These scalogram images were processed by a 2D convolutional neural network (2DCNN) to extract essential features, which were then fed into a dense layer for training; a rough sketch of this CNN-to-XGBoost pipeline follows this abstract.

    Results

    The resulting model achieved a noteworthy accuracy of 99.80% on a substantial driver fatigue dataset, surpassing existing methods.

    Conclusion

    By integrating this approach into an IoT framework, researchers effectively addressed previous challenges and established an artificial intelligence of things (AIoT) infrastructure for critical driving conditions. This IoT-based system optimizes data processing, reduces computational complexity, and enhances overall system performance, enabling accurate and timely detection of fatigue in extreme driving environments.

    Keywords: Driving Fatigue, Extreme Environment, EEG Signals, Optimized XGBoost, Convolutional Neural Network, IoMT
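
    The pipeline above feeds CNN-derived features into an XGBoost classifier. The following minimal Python sketch illustrates only that generic CNN-feature-to-XGBoost pattern on made-up random scalogram arrays; the network shape and hyperparameters are arbitrary, and the paper's evolutionary optimization and subband decomposition steps are omitted.

      # Minimal sketch of the CNN-feature -> XGBoost pattern described in the abstract.
      # The scalogram data here are random placeholders, not the authors' dataset.
      import numpy as np
      import tensorflow as tf
      from xgboost import XGBClassifier

      # Hypothetical RGB scalograms (n_samples, height, width, 3) and binary fatigue labels.
      X = np.random.rand(64, 64, 64, 3).astype("float32")
      y = np.random.randint(0, 2, size=64)

      # Small 2D CNN used only as a feature extractor.
      inputs = tf.keras.Input(shape=(64, 64, 3))
      h = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
      h = tf.keras.layers.MaxPooling2D()(h)
      h = tf.keras.layers.Conv2D(32, 3, activation="relu")(h)
      h = tf.keras.layers.GlobalAveragePooling2D()(h)
      features = tf.keras.layers.Dense(32, activation="relu", name="feat")(h)
      outputs = tf.keras.layers.Dense(1, activation="sigmoid")(features)

      cnn = tf.keras.Model(inputs, outputs)
      cnn.compile(optimizer="adam", loss="binary_crossentropy")
      cnn.fit(X, y, epochs=2, batch_size=16, verbose=0)

      # Feed the learned dense-layer features to an XGBoost classifier.
      extractor = tf.keras.Model(inputs, cnn.get_layer("feat").output)
      feats = extractor.predict(X, verbose=0)
      clf = XGBClassifier(n_estimators=50).fit(feats, y)
      print("train accuracy:", clf.score(feats, y))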
  • Abhishek Das, Mihir Narayan Mohanty*
    Background

    Invasive ductal carcinoma (IDC) is a prevalent type of breast cancer with significant mortality rates. Early detection is crucial for effective treatment options. Deep learning techniques have shown promise in medical image analysis, but further improvements are needed.

    Methods

    A Wavelet-Convolutional Neural Network (WCNN) is proposed, incorporating wavelet filters and convolutional filters in each layer to capture both frequency- and spatial-domain features. The processed images from both types of filters were combined and passed through a max-pooling layer to extract salient features, and four such hybrid layers were used to extract effective features (a minimal sketch of one hybrid layer follows this abstract). This approach allowed the model to learn multi-scale representations, leading to improved performance in breast cancer classification. The model was trained and evaluated on a publicly available breast histopathology image dataset.

    Results

    The proposed WCNN achieved a classification accuracy of 98.4% for breast cancer detection, outperforming existing state-of-the-art models.

    Conclusion

    The WCNN framework demonstrated the potential of combining wavelet and convolutional filters for improved breast cancer detection, offering a promising approach for early diagnosis and better patient outcomes.

    Keywords: Breast Cancer, Medical Image Analysis, Convolutional Neural Network, Deep Learning
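
    The hybrid wavelet-plus-convolution layer is the core idea of the WCNN. The sketch below shows one possible reading of such a layer, assuming fixed 2x2 Haar analysis filters alongside learnable 3x3 convolutions on made-up grayscale patches; the paper's actual wavelet family, filter counts, and input sizes are not specified here and are assumptions.

      import numpy as np
      import tensorflow as tf

      # Hypothetical grayscale histopathology patches (batch, 64, 64, 1).
      x = np.random.rand(8, 64, 64, 1).astype("float32")

      # One hybrid "wavelet + convolution" layer, as a rough illustration:
      # a fixed 2x2 Haar filter bank alongside learnable 3x3 convolutions,
      # concatenated and downsampled by max pooling.
      inputs = tf.keras.Input(shape=(64, 64, 1))
      haar = tf.keras.layers.Conv2D(4, 2, padding="same", use_bias=False,
                                    trainable=False, name="haar")
      conv = tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu")

      merged = tf.keras.layers.Concatenate()([haar(inputs), conv(inputs)])
      pooled = tf.keras.layers.MaxPooling2D()(merged)
      model = tf.keras.Model(inputs, pooled)

      # Install the Haar kernels: weight shape (kh, kw, in_channels, filters) = (2, 2, 1, 4).
      ll = np.array([[0.5, 0.5], [0.5, 0.5]])
      lh = np.array([[0.5, 0.5], [-0.5, -0.5]])
      hl = np.array([[0.5, -0.5], [0.5, -0.5]])
      hh = np.array([[0.5, -0.5], [-0.5, 0.5]])
      kernels = np.stack([ll, lh, hl, hh], axis=-1)[:, :, np.newaxis, :].astype("float32")
      model.get_layer("haar").set_weights([kernels])

      print(model(x).shape)  # (8, 32, 32, 12): 4 wavelet + 8 learned channels, pooled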
  • Abolfazl Pordeli Shahreki, Fatemeh Sadat Hosseini-Baharanchi, Masoud Roudbari *
    Background
    One of the best ways to reduce the spread of tuberculosis (TB) is to diagnose the disease from chest X-ray (CXR) images, a low-cost and accessible method. However, there are two problems: a shortage of radiologists and the possibility of misdiagnosis, which is why an accessible and accurate diagnostic system is needed. This research aimed to design an accurate and accessible automatic diagnosis system that can solve these problems using deep learning.
    Methods
    Six convolutional neural networks (CNNs), namely InceptionV3, ResNet50, DenseNet201, MnasNet, MobileNetV3, and EfficientNet-B4, were trained by transfer learning with the Adam optimizer for 20 training epochs on the new, large, and accurately annotated TBX11K dataset (a minimal transfer-learning sketch follows this abstract). Each network was trained to categorize images into three groups: patients diagnosed with TB, patients exhibiting lung abnormalities unrelated to TB, and healthy individuals with no evidence of TB or other pulmonary anomalies.
    Results
    In the testing step, the networks achieved very high performance. The EfficientNet-B4 network outperformed the other networks with a sensitivity of 97.1%, specificity of 99.9%, and accuracy of 99.5%. It also performed better than previous studies in TB diagnosis using CXR images by CNNs.
    Conclusion
    This research showed that with access to large high-quality datasets and standard training, it is possible to entrust the diagnosis of TB using medical images to computers and artificial neural networks with high confidence as they achieved accuracies higher than 99%.
    Keywords: Convolutional Neural Network, Medical Images, Deep Learning, Chest X-Ray, Tuberculosis
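
    As a rough illustration of the transfer-learning setup described in the Methods (pretrained backbone, new three-class head, Adam optimizer, 20 epochs), here is a minimal Keras sketch. The input size, learning rate, and the commented-out directory layout for a local TBX11K copy are assumptions, and loading the ImageNet weights requires an internet connection.

      import tensorflow as tf

      # Three classes as in the study: TB, non-TB abnormality, healthy.
      NUM_CLASSES = 3

      # Pretrained EfficientNet-B4 backbone with ImageNet weights and a new classifier head.
      base = tf.keras.applications.EfficientNetB4(include_top=False, weights="imagenet",
                                                  input_shape=(380, 380, 3), pooling="avg")
      outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
      model = tf.keras.Model(base.input, outputs)

      model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                    loss="sparse_categorical_crossentropy", metrics=["accuracy"])

      # Hypothetical tf.data pipeline over a local copy of the TBX11K images;
      # the directory layout below is an assumption, not part of the paper.
      # train_ds = tf.keras.utils.image_dataset_from_directory(
      #     "tbx11k/train", image_size=(380, 380), batch_size=16)
      # model.fit(train_ds, epochs=20)
      model.summary(line_length=80)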
  • Christopher D. Yang, Jasmine Wang, Ludovico Verniani, Melika Ghalehei, Lauren E. Chen, En Y. Lin
    Purpose

    To validate a convolutional neural network (CNN)-based smartphone application for the identification of glaucoma eye drop medications in patients with normal and impaired vision.

    Methods

    Sixty-eight patients with visual acuity (VA) of 20/70 or worse in at least one eye who presented to an academic glaucoma clinic from January 2021 through August 2022 were included. Non-English-speaking patients were excluded. Enrolled subjects participated in an activity in which they identified a predetermined and preordered set of six topical glaucoma medications, first without the CNN and then with the CNN for a total of six sequential measurements per subject. Responses to a standardized survey were collected during and after the activity. Primary quantitative outcomes were medication identification accuracy and time. Primary qualitative outcomes were subjective ratings of ease of smartphone application use.

    Results

    Topical glaucoma medication identification accuracy (OR = 12.005, P < 0.001) and time (OR = 0.007, P < 0.001) both independently improved with CNN use. CNN use significantly improved medication identification accuracy in patients with glaucoma (OR = 4.771, P = 0.036) or VA ≤ 20/70 in at least one eye (OR = 4.463, P = 0.013) and medication identification time in patients with glaucoma (OR = 0.065, P = 0.017). CNN use had a significant positive association with subject-reported ease of medication identification (χ²(1) = 66.117, P < 0.001).

    Conclusion

    Our CNN-based smartphone application is efficacious at improving glaucoma eye drop identification accuracy and time. This tool can be used in the outpatient setting to avert preventable vision loss by improving medication adherence in patients with glaucoma.

    Keywords: Convolutional Neural Network, Deep Learning, Glaucoma, Medication Tools
  • Sara Motamed, Elham Askari*
    Introduction

    The human heart is a physiologically vital organ whose signals can be continuously recorded using an electrocardiogram (ECG) device. Cardiovascular diseases are one of the leading causes of mortality worldwide. Timely and accurate identification of this condition, along with preventive measures, can help reduce the incidence of cardiovascular diseases.

    Method

    This study aimed to predict a group of heart failure conditions based on patterns in features extracted from patients with cardiac arrhythmia, distinguishing patient samples from normal ones. In the proposed model, the dataset is first preprocessed by discretization and replacement of missing values with column-wise means. Feature selection is then performed on the normalized data to reduce the number of features and the complexity and to increase speed and accuracy, and the data are fed into decision tree, k-nearest neighbors, naive Bayes, and convolutional neural network classifiers (a minimal preprocessing-and-classification sketch follows this abstract).

    Results

    A comparison of the accuracy obtained from different algorithms before and after applying the proposed method reveals improved performance across all methods after implementation. Particularly, the convolutional neural network demonstrates superior performance.

    Conclusion

    Based on the results, it can be concluded that the proposed model achieves an accuracy of 92.34%, surpassing other methods.

    Keywords: Cardiac Arrhythmia, Convolutional Neural Network, Particle Swarm Optimization, Multiclass Classification
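
    The preprocessing chain in the Methods (mean imputation of missing values, discretization, normalization, feature selection, then several classifiers) maps naturally onto a scikit-learn pipeline. The sketch below uses random placeholder data and arbitrary bin and feature counts, and it omits the CNN branch; it only illustrates the described steps and is not the authors' code.

      import numpy as np
      from sklearn.impute import SimpleImputer
      from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.pipeline import make_pipeline
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      # Hypothetical arrhythmia feature matrix with missing values and multiclass labels.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 40))
      X[rng.random(X.shape) < 0.05] = np.nan   # simulate missing entries
      y = rng.integers(0, 4, size=300)

      classifiers = {
          "decision tree": DecisionTreeClassifier(),
          "k-NN": KNeighborsClassifier(),
          "naive Bayes": GaussianNB(),
      }
      for name, clf in classifiers.items():
          pipe = make_pipeline(
              SimpleImputer(strategy="mean"),          # column-mean replacement of missing values
              KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform"),  # discretization
              MinMaxScaler(),                          # normalization
              SelectKBest(f_classif, k=20),            # feature selection
              clf,
          )
          score = cross_val_score(pipe, X, y, cv=5).mean()
          print(f"{name}: {score:.3f}")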
  • Parisa Gifani, Ahmad Shalbaf *
    Background

    The Gleason grading system has been the most effective predictor of outcomes for prostate cancer patients. It makes it possible to assess the aggressiveness of prostate cancer and is therefore an important factor in risk stratification and therapeutic decisions. However, determining the Gleason grade requires highly trained pathologists, is time-consuming and tedious, and suffers from inter-pathologist variability. To remedy these limitations, this paper introduces an automatic methodology based on transfer learning with pretrained convolutional neural networks (CNNs) for Gleason grading of prostate cancer tissue microarray (TMA) images.

    Methods

    Fifteen pretrained CNNs (EfficientNets B0-B5, NASNetLarge, NASNetMobile, InceptionV3, ResNet-50, SE-ResNet-50, Xception, DenseNet121, ResNeXt50, and Inception-ResNet-v2) were fine-tuned on a dataset of prostate carcinoma TMA images. Six pathologists separately identified benign and cancerous areas in each prostate TMA image by assigning a label of benign or Gleason grade 3, 4, or 5 for 244 patients. The dataset was labeled by these pathologists, and a majority vote was applied to the pixel-wise annotations to obtain a unified label.

    Results

    Results showed that the NASNetLarge architecture was the best of these models for classifying the prostate TMA images of the 244 patients, with an accuracy of 0.93 and an area under the curve of 0.98.

    Conclusion

    The proposed system can act like a highly trained pathologist, categorizing prostate cancer grades with more objective and reproducible results.

    Keywords: Convolutional neural network, Gleason grading, prostate cancer, transfer learning
  • Mahsa Taghavizadeh, Mehdi Nooshyar, Adel Akbarimajd, Sahand Shahalinezhad*
    Introduction

    Identifying and diagnosing Alzheimer's disease in brain tissue is one of the serious challenges in medical image processing. MRI is currently the most common imaging method for diagnosing Alzheimer's disease, and failure to correctly identify the involved tissue can lead to it being misdiagnosed as healthy brain tissue. A deep learning algorithm detects the relevant features and extracts useful information; this research therefore uses a convolutional neural network for medical image processing to perform the diagnosis with better accuracy than previous works.

    Methods

    Features of T1-weighted MRI images were extracted using the designed convolutional neural network. The Alzheimer's images were analyzed using MATLAB 2023a, and the intended outputs were obtained.

    Results

    The T1 brain Alzheimer's images were examined after preprocessing and being fed into the designed deep neural network. With tuned hyperparameters, the proposed algorithm achieved higher identification accuracy and speed than other common methods, reaching 96% accuracy and 100% sensitivity in identification.

    Conclusion

    The purpose of deep learning here is to render large, high-dimensional image data in a form that machines can understand. It is expected that in the future, feature extraction will be performed more accurately and more details will be made available to machine vision systems for recognizing objects in images.

    Keywords: Alzheimer's disease, convolutional neural network, deep learning, MRI images
  • Hanieh Rezazadeh Tamrin, Elham Saniei*, Mehdi Salehi Barough
    Introduction

    Breast cancer is the most common malignant cancer in women and causes more deaths than any other cancer. Thermography is one of the methods for diagnosing breast cancer. The most important challenges in early detection from these images are human error and lack of access to a skilled specialist. The use of artificial intelligence methods in image processing can be effective for early detection and reduction of human error. The main aim of this research was to introduce deep hybrid networks for intelligent diagnosis of breast cancer from thermographic images.

    Method

    The thermographic images used in this study were collected from the DMR-IR database. First, the main features of the images were extracted by a deep convolutional neural network (CNN). Then, FCNN and SVM algorithms were used to classify breast cancer from the thermographic images.

    Results

    The accuracy rates for the CNN_FC and CNN-SVM algorithms were 94.2% and 95.0%, respectively. In addition, the reliability parameters for these classifiers were calculated as 92.1% and 97.5%, and the sensitivities as 95.5% and 94.1%, respectively.

    Conclusion

    The proposed model based on the deep hybrid network has good accuracy compared to similar algorithms; therefore, it can help doctors in the early diagnosis of breast cancer through thermographic images and minimize human error.

    Keywords: Diagnosis, Breast Cancer, Deep Learning, Convolutional Neural Network, Thermography
  • L. Peng, Q. Wu, R. Shi, H. Kong, W. Li, W. Duan, L. Zhu*
    Background

    Traditional diagnostic methods are limited in accuracy when detecting maxillary sinus fungal balls, leading to a higher risk of misdiagnosis or missed diagnosis. This study focuses on a deep learning-based algorithm for assisting in the localization and diagnosis of maxillary sinus fungal balls, addressing the limitations of conventional diagnostic procedures.

    Materials and Methods

    Axial CT imaging data of the maxillary sinus were collected from 107 patients, including 47 cases of maxillary sinus fungal balls, 30 cases of other maxillary sinus lesions, and 30 cases of healthy maxillary sinuses. From these data, a dataset was constructed and a two-stage assisted-diagnosis algorithm consisting of a classification model and a detection model was established. In the first stage, slices containing the maxillary sinus were classified and selected. In the second stage, detection was run on the selected slices to diagnose and localize fungal ball lesions in the maxillary sinus.

    Results

    The accuracy of the classification model was 92.71%, the mAP and AP50 of the detection model were 0.73 and 0.76, respectively, and the accuracy of the algorithm for the diagnosis of maxillary sinus fungal balls was 84.4%.

    Conclusion

    It is feasible to develop a two-stage auxiliary diagnosis method for maxillary sinus fungal ball based on deep learning.

    Keywords: Maxillary sinus, fungal ball, computed tomography, deep learning, convolutional neural network
  • Ali Nouri, Zahra Tabanfar
    Purpose

    Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder that begins in early childhood and often persists into adulthood, causing personality issues and social behavior problems. Thus, detecting ADHD in its early stages and developing an effective therapy is of tremendous interest. This study presents a deep learning-based model for ADHD diagnosis in children.

    Materials and Methods

    The 'First-National-EEG-Data-Analysis-Competition-with-Clinical-Application' dataset is used for this purpose. Following preprocessing, the data are segmented into 3-second epochs, and frequency features are extracted from these epochs. The Fourier transform is applied to each channel separately, and the resulting two-dimensional matrix (channel × frequency) for each epoch is used as the convolutional neural network's (CNN) input (a rough sketch of this arrangement follows this abstract). The CNN is made up of two convolutional layers, two max-pooling layers, and two fully connected layers as well as the output layer (a total of 9 layers) for classification. To improve the method's performance, the contribution of each input variable to the classification output is analyzed; in other words, the role of each channel/frequency in the final classification is investigated using the Layer-wise Relevance Propagation (LRP) algorithm.

    Results

    According to the results of the LRP algorithm, only efficient channels are employed as Convolutional Neural Network (CNN) inputs in the following stage. This method yields a final accuracy of 94.52% for validation data. In this study, the feature space is visualized, useful channels are selected, and deep structure capabilities are exploited to diagnose ADHD disorder.

    Conclusion

    The findings suggest that the proposed technique can be used to effectively diagnose ADHD in children.

    Keywords: Attention Deficit Hyperactivity Disorder, Convolutional Neural Network, Layer-Wise Relevance Propagation Algorithm, Electroencephalogram Signal Processing
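
    A minimal sketch of the described arrangement follows: per-channel FFT magnitudes stacked into a channel-by-frequency image and passed to a small CNN with two convolutional, two max-pooling, and two dense layers. The channel count, sampling rate, filter sizes, and random data are assumptions, and the LRP-based channel selection step is not shown.

      import numpy as np
      import tensorflow as tf

      # Hypothetical epoch shape: 19 EEG channels, 3-second epochs at 128 Hz.
      N_CHANNELS, N_SAMPLES, N_EPOCHS = 19, 384, 32
      epochs = np.random.randn(N_EPOCHS, N_CHANNELS, N_SAMPLES).astype("float32")
      labels = np.random.randint(0, 2, size=N_EPOCHS)          # ADHD vs. healthy control

      # Per-channel magnitude spectrum -> one (channel x frequency) image per epoch.
      spectra = np.abs(np.fft.rfft(epochs, axis=-1)).astype("float32")
      spectra = spectra[..., np.newaxis]                        # add a singleton image channel

      # CNN roughly following the abstract: two conv, two max-pool, two dense + output.
      model = tf.keras.Sequential([
          tf.keras.Input(shape=spectra.shape[1:]),
          tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Flatten(),
          tf.keras.layers.Dense(64, activation="relu"),
          tf.keras.layers.Dense(32, activation="relu"),
          tf.keras.layers.Dense(1, activation="sigmoid"),
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
      model.fit(spectra, labels, epochs=2, batch_size=8, verbose=0)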
  • Zohre Fasihfar, Hamidreza Rokhsati, Hamidreza Sadeghsalehi, Mustafa Ghaderzadeh*, Mehdi Gheisari
    Background

    Malaria remains a significant global health problem, with a high incidence of cases and a substantial number of deaths yearly. Early identification and accurate diagnosis play a crucial role in effective malaria treatment. However, underdiagnosis presents a significant challenge in reducing mortality rates, and traditional laboratory diagnosis methods have limitations in terms of time consumption and error susceptibility. To overcome these challenges, researchers have increasingly utilized Machine Learning techniques, specifically neural networks, which provide faster, cost-effective, and highly accurate diagnostic capabilities.

    Methods

    This study aimed to compare the performance of a traditional neural network (NN) with a convolutional neural network (CNN) in the diagnosis and classification of different types of malaria using blood smear images. We curated a comprehensive malaria dataset comprising 1,920 images obtained from 84 patients suspected of having various malaria strains. The dataset consisted of 624 images of Falciparum, 548 images of Vivax, 588 images of Ovale, and 160 images from suspected healthy individuals, obtained from local hospitals in Iran. To ensure precise analysis, we developed a unique segmentation model that effectively eliminated therapeutically beneficial cells from the image context, enabling accurate analysis using artificial intelligence algorithms.

    Results

    The evaluation of the traditional NN and the proposed 6-layer CNN model for image classification yielded average accuracies of 95.11% and 99.59%, respectively. These results demonstrate that the CNN, as a primary algorithm of deep neural networks (DNNs), outperforms the traditional NN in analyzing the different classes of malaria images. The CNN model demonstrated superior diagnostic performance, delivering enhanced accuracy and reliability in classifying malaria cases.

    Conclusion

    This research underscores the potential of ML technologies, specifically CNNs, in improving malaria diagnosis and classification. By leveraging advanced image analysis techniques, including the developed segmentation model, CNN showcased remarkable proficiency in accurately identifying and classifying various malaria parasites from blood smear images. The adoption of machine learning-based approaches holds promise for more effective management and treatment of malaria, addressing the challenges of underdiagnosis and improving patient outcomes.

    Keywords: Malaria Parasites, Image Processing, Artificial Neural Network, Deep Learning, Convolutional Neural Network
  • Mehdi Bazargani, Amir Tahmasebi, Mohammadreza Yazdchi, Zahra Baharlouei
    Background

    Diagnosing emotional states would make human-computer interaction (HCI) systems more effective in practice. Correlations between electroencephalography (EEG) signals and emotions have been shown in various studies; therefore, EEG-based methods are among the most accurate and informative.

    Methods

    In this study, three convolutional neural network (CNN) models that are appropriate for processing EEG signals, EEGNet, ShallowConvNet, and DeepConvNet, are applied to diagnose emotions. We use baseline-removal preprocessing to improve classification accuracy (a simple sketch of baseline removal and epoching follows this abstract). Each network is assessed in two settings: subject-dependent and subject-independent. We then adapt the selected CNN model to be lightweight and implementable on a Raspberry Pi processor. Emotional states are recognized for every three-second epoch of the received signals on the embedded system, enabling real-time use in practice.

    Results

    Average classification accuracies of 99.10% for valence and 99.20% for arousal in the subject-dependent setting, and 90.76% for valence and 90.94% for arousal in the subject-independent setting, were achieved on the well-known DEAP dataset.

    Conclusion

    Comparison of the results with the related works shows that a highly accurate and implementable model has been achieved for practice.

    Keywords: Convolutional neural network, electroencephalography, embedded system, emotion recognition
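
    Baseline removal and 3-second epoching are simple operations; the sketch below shows one common way to do them with NumPy, assuming a DEAP-like trial layout (32 channels, 128 Hz, a 3 s pre-trial baseline). The paper's exact baseline-removal variant may differ.

      import numpy as np

      def baseline_removal(signal, baseline):
          """Subtract the mean of a pre-trial baseline segment from each channel."""
          return signal - baseline.mean(axis=-1, keepdims=True)

      def epoch(signal, fs, epoch_sec=3):
          """Split a (channels, samples) recording into non-overlapping epochs."""
          step = int(fs * epoch_sec)
          n = signal.shape[-1] // step
          return np.stack([signal[:, i * step:(i + 1) * step] for i in range(n)])

      # Hypothetical trial: 32 channels at 128 Hz, 3 s baseline + 60 s stimulus.
      fs = 128
      baseline = np.random.randn(32, 3 * fs)
      trial = np.random.randn(32, 60 * fs)

      cleaned = baseline_removal(trial, baseline)
      epochs = epoch(cleaned, fs)       # shape (20, 32, 384): 3-second windows fed to the CNN
      print(epochs.shape)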
  • Elham Askari*, Sara Motamed, Safoura Ashori Ghale Koli
    Introduction

    Accurate diagnosis of Alzheimer’s disease in the early stages plays an important role in patient care, and preventive measures should be taken before irreversible brain damage occurs. With increasing age, there are changes in memory, which is normal, but the symptoms of Alzheimer’s disease are more than temporary forgetfulness. Early and intelligent diagnosis of Alzheimer’s disease in different situations can greatly help patients and physicians.

    Method

    In the proposed method, a convolutional neural network will be used to improve the recognition of people with Alzheimer’s disease from healthy people in emotional states. First, the required pre-processing is done on the electroencephalography signal, and then, it will be applied as an input to the network. Next, the genetic algorithm is used to optimize the weights of the convolutional neural network.

    Results

    The research shows that the frontal lobe of the brain is related to emotions and that the F3 and F4 channels reflect more information than the other channels; with this information, recognition of Alzheimer's patients in emotional states is performed better.

    Conclusion

    The proposed method was compared with other classifiers in the valence and arousal states. It was observed that this method performs better than the other methods, with an accuracy of 92.3% in valence and 94.3% in arousal for recognizing people with Alzheimer's disease.

    Keywords: Alzheimer, Electroencephalography, Convolutional Neural Network, Genetic Algorithm
  • Sara Bagherzadeh, Keivan Maghooli*, Ahmad Shalbaf, Arash Maghsoudi
    Introduction

    Nowadays, deep learning and convolutional neural networks (CNNs) have become widespread tools in many biomedical engineering studies. A CNN is an end-to-end tool that makes the processing procedure integrated, but in some situations it needs to be fused with classical machine learning methods to be more accurate.

    Methods

    In this paper, a hybrid approach based on deep features extracted from the weighted layers of wavelet CNNs (WCNNs) and a multiclass support vector machine (MSVM) was proposed to improve the recognition of emotional states from electroencephalogram (EEG) signals. First, EEG signals were preprocessed and converted to a time-frequency (T-F) color representation, or scalogram, using the continuous wavelet transform (CWT). Then, the scalograms were fed into four popular pretrained CNNs, AlexNet, ResNet-18, VGG-19, and Inception-v3, to fine-tune them, and the best feature layer from each was used as input to the MSVM to classify the four quarters of the valence-arousal model (a minimal feature-extraction-plus-SVM sketch follows this abstract). Finally, the subject-independent leave-one-subject-out criterion was used to evaluate the proposed method on the DEAP and MAHNOB-HCI databases.

    Results

    Results showed that extracting deep features from an earlier convolutional layer of ResNet-18 (Res2a) and classifying them with the MSVM increased the average accuracy, precision, and recall by about 20% and 12% for the MAHNOB-HCI and DEAP databases, respectively. Also, combining scalograms from the four regions of pre-frontal, frontal, parietal, and parietal-occipital and from the two regions of frontal and parietal achieved higher average accuracies of 77.47% and 87.45% for the MAHNOB-HCI and DEAP databases, respectively.

    Conclusion

    Combining the CNN and MSVM increased the recognition of emotion from EEG signals, and the results were comparable to state-of-the-art studies.

    Keywords: Emotion recognition, Electroencephalogram, Continuous wavelet transform, Convolutional neural network, Feature extractor, Support vector machine
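
    The core pattern here is using a pretrained CNN as a fixed deep-feature extractor and classifying the features with a multiclass SVM. The sketch below taps an earlier block of VGG-19 (one of the four networks named above) and feeds the flattened maps to scikit-learn's SVC, on random placeholder scalograms; the specific layer, image size, and labels are assumptions, and loading the ImageNet weights requires a download.

      import numpy as np
      import tensorflow as tf
      from sklearn.svm import SVC

      # Hypothetical CWT scalogram images resized to VGG-19's input size,
      # with labels for the four valence-arousal quadrants.
      X = np.random.rand(40, 224, 224, 3).astype("float32") * 255.0
      y = np.random.randint(0, 4, size=40)

      # Pretrained CNN used purely as a deep-feature extractor; an earlier
      # convolutional block is tapped, mirroring the "earlier layer" finding.
      vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
      tap = tf.keras.Model(vgg.input, vgg.get_layer("block3_pool").output)

      feats = tap.predict(tf.keras.applications.vgg19.preprocess_input(X), verbose=0)
      feats = feats.reshape(len(feats), -1)                 # flatten the spatial maps

      # Multiclass SVM (one-vs-one by default in scikit-learn) on the deep features.
      svm = SVC(kernel="rbf").fit(feats, y)
      print("train accuracy:", svm.score(feats, y))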
  • Parisa Gifani, Majid Vafaeezadeh, Mahdi Ghorbani, Ghazal Mehri-Kakavand, Mohamad Pursamimi, Ahmad Shalbaf, Amirhossein Abbaskhani Davanloo
    Background

    Diagnosing the stage of COVID-19 patients using chest computed tomography (CT) can help the physician decide on the required length of hospitalization and select adequate patient care. This diagnosis requires very expert radiologists, who are not available everywhere, and is also tedious and subjective. The aim of this study is to propose an advanced machine learning system to diagnose the stages of COVID-19 patients (normal, early, progressive, peak, and absorption) from lung CT images, using an automatic deep transfer learning ensemble.

    Methods

    Different deep transfer learning strategies based on pretrained convolutional neural networks (CNNs) were used. The pretrained CNNs were fine-tuned on the chest CT images, and the extracted features were classified by a softmax layer. Finally, an ensemble based on majority voting over the best deep transfer learning outputs was built to further improve recognition performance (a minimal majority-voting sketch follows this abstract).

    Results

    The experimental results from 689 cases indicate that the ensemble of the three deep transfer learning outputs based on EfficientNetB4, InceptionResV3, and NasNetlarge achieved the highest performance in diagnosing the stage of COVID-19, with an accuracy of 91.66%.

    Conclusion

    The proposed method can be used for the classification of the stage of COVID‑19 disease with good accuracy to help the physician in making decisions on patient care.

    Keywords: Computed tomography, convolutional neural network, ensemble, stage of COVID-19, transfer learning
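
    Majority voting over per-model class predictions is straightforward; a small NumPy sketch with made-up stage predictions from three hypothetical models follows. Ties are resolved toward the lowest class label here, which is an arbitrary choice rather than anything stated in the paper.

      import numpy as np

      def majority_vote(*predictions):
          """Combine per-model class predictions by majority vote (ties -> lowest label)."""
          stacked = np.stack(predictions)                       # (n_models, n_samples)
          n_classes = stacked.max() + 1
          counts = np.apply_along_axis(np.bincount, 0, stacked, minlength=n_classes)
          return counts.argmax(axis=0)

      # Hypothetical predicted stages (0=normal ... 4=absorption) from three fine-tuned CNNs.
      p1 = np.array([0, 1, 2, 3, 4, 2])
      p2 = np.array([0, 1, 2, 2, 4, 2])
      p3 = np.array([0, 3, 1, 3, 4, 0])
      print(majority_vote(p1, p2, p3))   # -> [0 1 2 3 4 2]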
  • Seyyed Abed Hosseini, Yeganeh Modaresnia, Farhad Abedinzadeh Torghabeh
    Background

    Attention deficit hyperactivity disorder (ADHD) is prevalent worldwide, affecting approximately 8-12% of children. Early detection and effective treatment of ADHD are crucial for improving academic, social, and emotional outcomes. Despite numerous studies on ADHD detection, existing models still lack accuracy in distinguishing between ADHD and healthy control (HC) children.

    Methods

    This study introduces a methodology that utilizes Granger causality (GC), a well-established brain connectivity analysis technique, to reduce the number of required EEG electrodes. We computed GC indexes (GCIs) for the entire brain and for specific brain regions (regional GCIs) across different frequency bands. These GCIs were then transformed into color-coded images and fed into a custom-developed 11-layer convolutional neural network.

    Results

    The proposed model is evaluated through five-fold cross-validation, achieving the highest accuracy of 99.80% in the gamma frequency band for the entire brain, and an accuracy of 98.50% in distinguishing ADHD from HC children using the theta-band regional GCI of the right hemisphere with only eight electrodes.

    Conclusion

    The proposed framework provides a powerful automated tool for accurately classifying ADHD and HC children. The study’s outcome demonstrates that the innovative proposed methodology utilizing GCI and a custom-developed convolutional neural network can significantly improve ADHD detection accuracy, improving affected children’s overall quality of life.

    Keywords: ADHD, Electroencephalography, Effective connectivity, Granger Causality, Convolutional Neural Network
  • Seyed Mohammad Mousavi, Soodeh Hosseini*
    Introduction

    COVID-19 has had a devastating impact on public health around the world. Early diagnosis and timely treatment reduce mortality from COVID-19, and existing diagnostic methods such as the RT-PCR test are prone to error, so an alternative solution is to use artificial intelligence and image processing techniques. The overall goal is to introduce an intelligent model based on deep learning and a convolutional neural network to identify cases of COVID-19 and pneumonia from lung medical images so that subsequent treatment measures can be taken.

    Method

    The proposed model uses two datasets, radiography and CT-scan. The datasets are preprocessed and data augmentation is applied to the images. In the next step, three architectures, EfficientNetB4, InceptionV3, and InceptionResNetV2, are used with the transfer learning method.

    Results

    The best result for CT-scan images belongs to the InceptionResNetV2 architecture, with an accuracy of 99.366%, and for radiography images to the InceptionV3 architecture, with an accuracy of 96.943%. In addition, the results indicate that CT-scan images carry more features than radiographic images, so disease diagnosis is performed more accurately on this type of data.

    Conclusion

    The proposed model based on a convolutional neural network has higher accuracy than other similar models. Also, by providing instant results, it can help in the initial evaluation of patients referred to medical centers, especially during the peak of epidemics, when medical centers face challenges such as a shortage of specialists and medical staff.

    Keywords: Image Processing, Artificial Intelligence, COVID-19, Convolutional Neural Network
  • Alireza Meshkin, Farjam Azizi, Khosro Goudarzi
    Background

    Cataract is the most prevalent cause of blindness worldwide, responsible for more than 51% of blindness cases. As the treatment process becomes smarter and the burden on ophthalmologists is reduced, many existing systems have adopted machine-learning-based cataract classification methods with manual extraction of image features. However, manual extraction of retinal features is generally time-consuming and exhausting and requires skilled ophthalmologists.

    Material and Methods

    A convolutional neural network (CNN) is a highly common automatic feature extraction model which, compared to classical machine learning approaches, requires much larger datasets to avoid overfitting. This article designs a deep convolutional network for automatically distinguishing cataractous eyes from healthy ones. The algorithm consists of four convolution layers and a fully connected layer for hierarchical feature learning and training (a minimal sketch of such a network follows this abstract).

    Results

    The proposed approach was tested on the collected images and achieved 90.88% accuracy on the test data. The evaluation function of the Keras model reported a value of 84.14%. The model can be further developed and improved to be applied to the automatic recognition and treatment of ocular diseases.

    Conclusion

    This study presented a deep learning algorithm for the automatic recognition of healthy eyes from cataractous ones. The results suggest that the proposed scheme outperforms other conventional methods and can be regarded as a reference for other retinal disorders.

    Keywords: Cataract, Deep Learning, Convolutional Neural Network, Image Processing
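
    The architecture is described only as four convolution layers plus a fully connected layer; the Keras sketch below is one plausible instantiation, with max pooling after each convolution, arbitrary filter counts and input size, and random placeholder images. It also shows the Keras evaluate() call referred to in the Results.

      import numpy as np
      import tensorflow as tf

      # Hypothetical fundus images and binary labels (cataract vs. healthy).
      X = np.random.rand(32, 128, 128, 3).astype("float32")
      y = np.random.randint(0, 2, size=32)

      # Four convolution layers plus a fully connected layer, as described above;
      # the pooling after each convolution is an assumption, not stated in the abstract.
      model = tf.keras.Sequential([
          tf.keras.Input(shape=(128, 128, 3)),
          tf.keras.layers.Conv2D(16, 3, activation="relu"), tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Conv2D(32, 3, activation="relu"), tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Conv2D(64, 3, activation="relu"), tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Conv2D(64, 3, activation="relu"), tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Flatten(),
          tf.keras.layers.Dense(64, activation="relu"),
          tf.keras.layers.Dense(1, activation="sigmoid"),
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
      model.fit(X, y, epochs=2, batch_size=8, verbose=0)

      # Keras's evaluate() is the kind of model-evaluation function mentioned in the Results.
      loss, acc = model.evaluate(X, y, verbose=0)
      print(f"evaluate(): loss={loss:.3f}, accuracy={acc:.3f}")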
  • Sina Garazhian, Alireza Meshkin
    Background

    Hypertensive retinopathy (HR) is among the abnormalities that occur with high blood pressure, which narrows the retinal arteries and makes retinal hemorrhages and cotton-wool spots more harmful. Early detection of hypertensive retinopathy is therefore pivotal to prevent the disabilities that follow it and to support treatment with more accurate methods.

    Material and Methods

    The main objective of this study is to investigate an appropriate deep learning method for improving the automatic diagnosis of hypertensive retinopathy in its early stages. The data used in this study were obtained by integrating the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) datasets.

    Results

    We reached an accuracy of 87.5% after using a well-suited preprocessing method to integrate the different images for further analysis by our designed convolutional neural network (CNN).

    Conclusion

    The model performs well on the integration of the two mentioned datasets.

    Keywords: Hypertensive Retinopathy, Convolutional Neural Network, Deep Learning
  • Affaf Khaouane*, Samira Ferhat, Salah Hanini
    Background

    Plasma protein binding is a key component in drug therapy as it affects the pharmacokinetics and pharmacodynamics of drugs.

    Objectives

    This study aimed to predict the fraction of plasma protein binding.

    Methods

    A quantitative structure-activity relationship, convolutional neural network, and feed-forward neural network (QSAR-CNN-FFNN) methodology was used. The CNN was used for feature selection, which is known to be a difficult task in QSAR studies. The descriptor values, acquired without preprocessing procedures, were rearranged into matrices, and features were extracted from a deep fully connected layer of a pretrained CNN (AlexNet). The features learned from the CNN layers were then flattened and passed through an FFNN to make predictions (a minimal feature-extraction sketch follows this abstract).

    Results

    The external validation performance (Q² = 0.945, RMSE = 0.085) demonstrated the strength of this methodology. Another extremely favorable aspect of this method is that it takes only a few minutes, compared to the QSAR-Wrapper-FFNN method (days of hard work and concentration), and it automatically yields the features that best represent the input.

    Conclusion

    This model can be used to predict the fraction of human plasma protein binding for drugs that have not yet been tested, avoiding chemical synthesis and reducing expensive laboratory tests.

    Keywords: Quantitative structure-activity relationship, Convolutional neural network, Feed-forward neural network, Protein binding
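
    The sketch below illustrates the described pattern: descriptor values rearranged into image-like matrices, deep features taken from the last hidden fully connected layer of AlexNet, and a small feed-forward network regressing the binding fraction. Random tensors stand in for the real descriptors, the AlexNet weights are left random to keep the sketch offline (in practice the pretrained ImageNet weights would be loaded), and the FFNN size and training loop are arbitrary.

      import torch
      import torch.nn as nn
      from torchvision.models import alexnet

      # Hypothetical molecular-descriptor "images": descriptor values rearranged into
      # 3 x 224 x 224 matrices so they can be pushed through AlexNet, as in the abstract.
      x = torch.rand(16, 3, 224, 224)
      y = torch.rand(16, 1)                 # fraction of plasma protein binding in [0, 1]

      # Pretrained AlexNet would normally be loaded here, e.g. alexnet(weights="IMAGENET1K_V1");
      # random weights are used in this sketch to keep it self-contained and offline.
      backbone = alexnet()
      feature_layer = nn.Sequential(backbone.features, backbone.avgpool, nn.Flatten(),
                                    *list(backbone.classifier.children())[:-1])
      with torch.no_grad():
          deep_feats = feature_layer(x)     # (16, 4096) features from the last hidden FC layer

      # Small feed-forward network regressing the binding fraction from the deep features.
      ffnn = nn.Sequential(nn.Linear(4096, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
      opt = torch.optim.Adam(ffnn.parameters(), lr=1e-3)
      for _ in range(20):                   # a few illustrative training steps
          opt.zero_grad()
          loss = nn.functional.mse_loss(ffnn(deep_feats), y)
          loss.backward()
          opt.step()
      print("final MSE:", loss.item())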
Note
  • Results are sorted by publication date.
  • The keyword was searched only in the keywords field of the articles. To exclude unrelated results, the search was limited to journals in the same subject area as the source journal.