image processing
Background and Objectives
Non-alcoholic fatty liver disease (NAFLD) is one of the most common diseases worldwide. If not diagnosed and treated, the disease can progress and worsen. Currently, liver biopsy is the standard method for diagnosing NAFLD, though it has numerous limitations. Artificial intelligence (AI) algorithms can utilize clinical and imaging parameters to aid in the diagnosis of fatty liver disease. This study reviews AI-assisted diagnostic methods for fatty liver disease.
Subjects and Methods
The present study is a systematic review of AI applications in diagnosing fatty liver disease. In the first phase, articles from citation databases were selected based on predefined criteria and then examined according to the algorithms used, types of data studied, involvement of clinical specialists, and clinical evaluation. Various evaluation parameters were used to assess the performance of the AI algorithms.
Results
Researchers observed that, among the articles reviewed, 42% employed image processing, 58% relied on biomarkers, and in 84% of the studies a specialist physician was involved. Nearly none of the studies included clinical evaluation, and most used black-box algorithms. Additionally, a t-test indicated no significant difference in AI performance between the two diagnostic approaches.
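The between-approach comparison above can be sketched with a two-sample t-test. The following is a minimal, self-contained Welch's t-test in pure Python; the accuracy values are invented for illustration and are not taken from the review.

```python
# Hypothetical sketch: Welch's two-sample t-test comparing reported AI
# accuracies of image-processing studies vs biomarker studies.
from statistics import mean, variance

def welch_t(a, b):
    """Return Welch's t statistic and degrees of freedom for two samples."""
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb             # squared standard error of the difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

image_acc = [0.91, 0.88, 0.93, 0.90, 0.87]   # illustrative image-processing studies
marker_acc = [0.89, 0.92, 0.90, 0.86, 0.91]  # illustrative biomarker studies
t, df = welch_t(image_acc, marker_acc)       # small |t| => no clear difference
```

A small absolute t value (here well below any conventional critical value) is consistent with the review's finding of no significant performance gap between the two approaches.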
Conclusion
AI significantly aids in diagnosing fatty liver disease; however, the lack of clinical evaluation and reliance on black-box algorithms pose challenges. These limitations significantly impact the current application of AI in clinical settings.
Keywords: Non-Alcoholic Fatty Liver Disease, Artificial Intelligence, Biomarkers, Image Processing -
Introduction
Computational modeling plays a pivotal role in bridging the gap between cognitive neuroscience and clinical neurology, particularly in the context of neurodegenerative diseases like Alzheimer's disease (AD). This study explores the application of computational models to understand cognitive systems and the pathological processes leading to cognitive decline in AD.
Materials and Methods
We proposed a set of computational approaches, including neural networks and dynamical systems modeling, to simulate neural activity, synaptic plasticity, and interactions between genetic and environmental factors. Data integration from neuroimaging, genomics, and behavioral studies was crucial in enhancing the accuracy and predictive capabilities of these models.
Results
The computational models provided significant insights into the mechanisms of cognition, memory formation, and their deterioration in AD. Our models identified potential biomarkers and informed strategies for therapeutic intervention, demonstrating the importance of a multi-disciplinary approach to understanding and treating cognitive decline.
Conclusion
Computational modeling is essential for advancing our understanding of AD and other cognitive disorders. Future research should focus on refining these models and fostering greater interdisciplinary collaboration to develop more accurate and comprehensive simulations.
Keywords: Models, Biological, Brain Mapping, Cognitive Science, Dementia, Image Processing, Computer-Assisted -
Objectives
An alternative to both the time-consuming traditional and the expensive three-dimensional (3D) methods for splint design is to use two-dimensional (2D) images. The present study utilized image processing to achieve an automatic and practical method of anthropometry measurement to design and build a personalized and remote cock-up splint. This method is applicable for patients unable to personally attend clinic appointments.
Methods
The defined landmarks of the cock-up splint of 100 adult participants were measured manually. Each individual had a 2D image taken of their upper limb using a customized imaging device. The 2D image portions that corresponded to the manual measurements were then identified, and their sizes were retrieved in pixels using MATLAB software. To find equations between manual 3D measurements and 2D image-processing ones, multiple linear regression analysis was performed on landmark variables.
Results
We were able to determine equations to estimate manual dimensions based on 2D image data. In the men’s group, we could predict the third finger length, forearm circumference at three levels, and the largest forearm circumference. In the women’s group, in addition to the variables predicted for men, hand circumference at the distal palmar crease and first web levels, as well as arm circumference, could be predicted using the identified equations.
Conclusion
Based on the findings, 2D image processing could be an appropriate method for designing personalized cock-up splints.
Level of evidence: III
Keywords: Customized Splint, Forearm Landmarks, Image Processing, MATLAB
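The core of the regression step is fitting an equation that maps a landmark's size in image pixels to its manual measurement. The study used multiple linear regression in MATLAB over several landmark variables; the sketch below shows the single-predictor case in pure Python, with made-up values.

```python
# Illustrative sketch (not the study's code): ordinary least squares mapping
# a landmark's length in 2D-image pixels to its manually measured size in cm.

def fit_line(x, y):
    """Fit y = a*x + b by ordinary least squares; return (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

pixels = [180, 195, 210, 225, 240]      # hypothetical landmark lengths in pixels
manual_cm = [7.1, 7.7, 8.3, 8.9, 9.5]   # corresponding manual measurements (cm)
a, b = fit_line(pixels, manual_cm)
estimate = a * 200 + b                   # predicted size for a new 200-pixel landmark
```

With real data the fit is noisier and involves several predictors, but the principle — calibrating pixel measurements against manual anthropometry — is the same.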
-
International Journal of Medical Toxicology and Forensic Medicine, Volume:14 Issue: 2, Spring 2024, P 3
Background
Wound detection is important in forensic science. The development of technology for automatic wound detection in the medical field has not been fully realized. This study aimed to compare the level of knowledge between medical students who utilized an application versus those who studied from textbooks.
Methods
An image processing-based application was developed in Python using the YOLO version 4 (YOLOv4) algorithm. The research utilized a cross-sectional study design. The participants were medical students undergoing clinical clerkships at the Forensic and Medicolegal Departments of the Faculty of Medicine, Public Health, and Nursing at Gadjah Mada University. Each group (textbook and application group) comprised at least 35 individuals. Both groups had identical study periods and answered the same questions. Participants then filled out a questionnaire about the application.
Results
The mean post-test scores showed significant improvement in both groups compared to their pre-test scores (P=0.00). There was no significant difference in post-test scores between the textbook and application groups (P=0.207).
Conclusion
The application for wound type determination could serve as a preferable alternative to traditional textbooks for learning. It produced an increase in learning outcomes comparable to that achieved through textbook study.
Keywords: Evaluation, Application Wound Type Determination, Image Processing -
Introduction
Dermoscopes are key in dermatology for magnifying skin lesion images, which is crucial for diagnosing diseases such as melanoma. Adding software that processes and analyzes these images significantly aids dermatologists. This article presents Optoskin version 1, software that provides a graphical interface to enhance dermoscopes, developed at Shahid Beheshti University's Optical Imaging Laboratory and supported by the Parto Ava Atlas Company.
Method
This software not only captures images of the skin but also processes them to analyze key features of skin lesions. By applying the ABCD algorithm, it measures asymmetry, border irregularity, color, and diameter. After preprocessing the images and locating the lesion's edges, the software uses specialized algorithms to identify its visual characteristics.
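One of the ABCD features, asymmetry, can be scored on a binary lesion mask as the fraction of pixels that fail to overlap their mirror image. This is a minimal sketch under that simplified definition, not the Optoskin implementation:

```python
# Toy asymmetry score for a binary lesion mask (0 = background, 1 = lesion):
# flip the mask left-right and count pixels that disagree with their mirror.

def asymmetry(mask):
    """mask: 2D list of 0/1 values. Returns a score in [0, 1]."""
    flipped = [row[::-1] for row in mask]
    area = sum(sum(row) for row in mask)
    mismatch = sum(
        1
        for row, frow in zip(mask, flipped)
        for v, fv in zip(row, frow)
        if v != fv
    )
    # Each unmatched lesion pixel is counted twice (once per side).
    return mismatch / (2 * area)

symmetric = [[0, 1, 1, 0],
             [1, 1, 1, 1]]
skewed = [[1, 1, 0, 0],
          [1, 1, 1, 0]]
print(asymmetry(symmetric), asymmetry(skewed))  # 0.0 0.6
```

A real implementation would first align the mask to the lesion's principal axes and also check the vertical axis; this sketch only illustrates the idea of mirror-overlap scoring.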
Findings
We assessed the software's effectiveness using real-world data and images from the HAM10000 and PH2 datasets. The software demonstrated solid performance, with a detection accuracy of 67%, a sensitivity of 72%, and a specificity of 61%.
Conclusion
Optoskin is designed to process and analyze dermatoscopic images. It automatically measures critical indices such as asymmetry, border irregularity, color, and diameter of lesions using the ABCD algorithm. The software also automatically classifies lesions as benign or malignant. When tested with real data and standard datasets, Optoskin has shown high effectiveness, with substantial accuracy, sensitivity, and specificity.
Keywords: Dermatoscopy, Image Processing, Image Analysis, Optoskin V.0.1.Beta, ABCD Algorithm, Melanoma Diagnosis -
Introduction
Complexity metrics have been suggested to characterize treatment plans based on machine parameters such as multileaf collimator (MLC) positions. Several complexity metrics have been proposed and related to intensity-modulated radiation therapy (IMRT) quality-assurance results. This study aims to evaluate aperture-based complexity metrics on MLC openings used clinically and to establish a correlation between plan complexity and the gamma passing rate (GPR) for IMRT plans.
Material and Methods
We implemented the aperture-based complexity metrics on MLC openings of IMRT treatment plans for breast and central nervous system (CNS) cases. The modulation complexity score (MCS), the edge area metric (EAM), the converted area metric (CAM), the circumference/area (CPA), and the monitor-unit ratio (MU/Gy) were evaluated in this study. The complexity scores were calculated using MATLAB. The MatriXX Evolution was used for dose verification. The dose distribution was analyzed using the OmniPro-I'mRT program, and the gamma index was assessed using two criteria: 3%/3 mm and 3%/2 mm. The correlation between the calculated complexity scores and the GPR was analyzed using SPSS.
Results
The complexity scores calculated by MCS, EAM, CAM, CPA, and MU/Gy show that the breast plan is more complex than the CNS plan. The correlation tests between the complexity metrics and GPR show that only the EAM metric correlates well with GPR for both cases.
Conclusion
EAM strongly correlates with the gamma passing rate. The MCS, CAM, CPA, and MU/Gy have a weak correlation with the GPR.
Keywords: Image Processing, Magnetic Resonance, Medical Imaging
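The correlation step above amounts to computing Pearson's r between a per-plan complexity score and its gamma passing rate. The study did this in SPSS; the sketch below shows the calculation in pure Python on invented paired values.

```python
# Hedged sketch: Pearson correlation between an EAM-like complexity score
# and the gamma passing rate (GPR). The paired values are illustrative only.

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

complexity = [0.12, 0.18, 0.25, 0.31, 0.40]  # hypothetical score per plan
gpr = [98.5, 97.2, 95.8, 94.1, 92.0]         # hypothetical GPR (%)
r = pearson_r(complexity, gpr)               # strongly negative here
```

A strongly negative r, as in this toy example, would mean more complex plans tend to fail gamma analysis more often, which is the relationship the study probes for each metric.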
-
Introduction
Pulmonary emphysema is a lung disease that usually remains undetected until old age and has no definitive treatment. Rapid diagnosis greatly helps affected patients and prevents the growth of emphysematous lesions. This research attempts early diagnosis of the disease with the help of deep learning methods.
Method
This research attempts to diagnose the disease faster with the help of a U-Net neural network optimized with the GPC meta-heuristic algorithm. The data were collected from Imam Ali and Bu Ali Sina hospitals in Zahedan, Sistan and Baluchistan province. The data include 300 emphysema patches (65 CLE, 97 PSE, and 138 PLE cases) and 45 normal cases. These data were analyzed by the U-Net deep neural network with the GPC optimization algorithm, and the results were compared with those of other methods using accuracy, recall, specificity, and F-measure.
Results
The proposed optimized U-Net achieved an accuracy of 97.18, a precision of 98.40, a sensitivity of 48.23, and an F-score of 97.50, much better results than other U-Net-based emphysema diagnosis methods, indicating faster, more accurate, and more effective detection of the disease with the proposed method.
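The evaluation metrics named above all derive from a binary confusion matrix. A minimal sketch with illustrative counts (not the study's data):

```python
# Accuracy, recall (sensitivity), specificity, and F-measure from a binary
# confusion matrix. The counts below are made up for illustration.

def metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)            # sensitivity: detected emphysema patches
    specificity = tn / (tn + fp)       # correctly rejected normal patches
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, recall, specificity, f_measure

acc, rec, spec, f1 = metrics(tp=290, fp=5, tn=40, fn=10)
```

Note that with imbalanced classes (here 300 positive vs 45 negative patches), accuracy alone can be misleading, which is why the study reports all four metrics.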
Conclusion
Using the right deep learning methods in combination with strong optimization algorithms can enable faster and more accurate treatment of diseases.
Keywords: Artificial Intelligence, Neural Networks, Pulmonary Emphysema, Image Processing, Deep Learning -
Background
Several approaches have been proposed to optimize the construction of an artificial intelligence-based model for assessing ploidy status. These encompass the investigation of algorithms, refining image segmentation techniques, and discerning essential patterns throughout embryonic development. The purpose of the current study was to evaluate the effectiveness of using U-NET architecture for embryo segmentation and time-lapse embryo image sequence extraction, three and ten hr before biopsy to improve model accuracy for prediction of embryonic ploidy status.
Methods
A total of 1,020 time-lapse videos of blastocysts with known ploidy status were used to construct a convolutional neural network (CNN)-based model for ploidy detection. Sequential images of each blastocyst were extracted from the time-lapse videos over periods of three and ten hr prior to the biopsy, generating 31,642 and 99,324 blastocyst images, respectively. U-NET architecture was applied for blastocyst image segmentation before its implementation in CNN-based model development.
Results
The accuracy of the ploidy prediction model without the U-NET-segmented sequential embryo images was 0.59 and 0.63 over periods of three and ten hr before biopsy, respectively. Improved model accuracies of 0.61 and 0.66, respectively, were achieved with the implementation of U-NET architecture for embryo segmentation. Extracting blastocyst images over a ten-hr period yields higher accuracy than a three-hr extraction period prior to biopsy.
Conclusion
Combined implementation of U-NET architecture for blastocyst image segmentation and the sequential compilation of ten hr of time-lapse blastocyst images could yield a CNN-based model with improved accuracy in predicting ploidy status.
Keywords: Artificial Intelligence, Image Processing, Neural Networks, Ploidy Measurement -
Introduction
The presence of pigmented skin lesions is a significant global concern in the prevention of skin cancer. Detecting skin cancer at an early stage is essential for proper management and effective treatment. This study aimed to combine image processing and data mining to develop an intelligent model to screen skin cancer from skin lesions.
Material and Methods
The images were taken in a clinic by smartphone. Patients over 40 years of age participated in the study. During the segmentation phase, the lesions were separated from the original images through machine vision techniques. Various features such as symmetry, border irregularity, color variation, and diameter were extracted from the images, while some features were also obtained through face-to-face examination. Finally, a neural network was employed to classify whether the lesion was cancerous or non-cancerous. In addition, MATLAB version 2022 was used to design the model.
Results
The segmentation stage performed excellently. Using the neural network-based model, skin lesions were classified with 98.4% accuracy and 97% sensitivity, indicating that the designed model can screen for skin cancer with high accuracy.
Conclusion
This model can help patients manage self-care and become aware of their skin lesions before consulting a physician.
Keywords: Supervised Machine Learning, Image Processing, Skin Neoplasms, Early Detection of Cancer, Tele-Medicine -
Background
To investigate the success rate and quality of automatic airway segmentation using ultra-low dose CT (ULD-CT) images of different reconstruction algorithms.
Materials and Methods
Fifty-two children who underwent chest ULD-CT were divided into three groups based on age: group A (n=13; age 1-2 years), group B (n=19; age 3-6 years), and group C (n=20; age 7-13 years). CT images were reconstructed with filtered back-projection (FBP), 50% adaptive statistical iterative reconstruction-Veo (50%ASIR-V), 100%ASIR-V, and deep learning image reconstruction (DLIR) at low (DLIR-L), medium (DLIR-M), and high (DLIR-H) strengths. Subjective image quality was evaluated using a 5-point scale. The CT value, noise, and sharpness of the trachea were measured. The VCAR software was used to automatically segment airways and report the total volume. Segmentation success rates were recorded, and segmentation images were subjectively evaluated using a 6-point scale.
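The objective quality measurements above (CT value and noise) are typically the mean and standard deviation of Hounsfield units inside a region of interest. A minimal sketch with invented pixel values, not the study's measurement tool:

```python
# Toy ROI statistics for CT image quality: mean HU (CT value) and
# standard deviation (noise) inside a rectangular region of interest.
from statistics import mean, pstdev

def roi_stats(image, r0, r1, c0, c1):
    """image: 2D list of HU values; ROI spans rows r0:r1 and cols c0:c1."""
    pixels = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return mean(pixels), pstdev(pixels)  # (CT value, noise)

# Hypothetical 3x3 patch inside the tracheal lumen (air is about -1000 HU).
hu = [[-995, -1002, -990],
      [-1001, -998, -1004],
      [-1000, -996, -999]]
ct_value, noise = roi_stats(hu, 0, 3, 0, 3)
```

In practice, lower noise in DLIR reconstructions (versus FBP) is what makes threshold-based airway segmentation more likely to succeed in the smallest airways.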
Results
The average tracheal diameters were 8.53±1.88 mm, 10.69±1.65 mm, and 12.72±1.97 mm for groups A, B, and C, respectively. The segmentation success rate depended on the patient group: group C reached 100%, while group A decreased significantly. In group A, 100%ASIR-V had the lowest rate at 7.69%, while DLIR-M and DLIR-H significantly improved the rate to 38.64% (P=0.03). For the segmented images, DLIR-H provided the lowest noise and highest subjective score, while FBP images had the highest noise and 100%ASIR-V had the lowest overall score (P<0.05). There was no significant difference in total airway volume among the six reconstructions.
Conclusion
The airway segmentation success rate in ULD-CT for children depends on the tracheal size. DLIR improves the airway segmentation success rate and image quality.
Keywords: CT, pediatrics, deep learning, image processing, computer-assisted -
Background
The BEBIG Portio multi-channel applicator provides better target dose coverage and sparing of organs-at-risk compared to a single-channel cylinder. However, artifacts and distortions of the Portio in magnetic resonance images (MRI) have not yet been reported.
Objective
We aimed to quantify the artifacts and distortions in its 1.5-Tesla MR images before clinical use.
Material and Methods
In this experimental study, we employed a gelatin-filled phantom to conduct our measurements. T2-weighted (T2W) images were examined for artifacts and distortions. Computed tomography (CT) images were used as a reference to assess image distortions. Artifact severity was measured by recording the full-width-at-half-maximum (FWHM) image pixel values at various positions along the length of the applicator/channels. CT and MRI-based applicator reconstruction accuracy were then compared, and signal-to-noise ratio (SNR) and contrast were also determined for the applicator images.
Results
The applicator distortion level for the Portio applicator was less than the image spatial resolution (0.5±0.5 pixels). The average FWHM for the tandem applicator images was 5.23±0.39 mm, while it was 3.21±0.37 mm for all channels (compared to their actual diameters of 5.0 mm and 3.0 mm, respectively). The average applicator reconstruction difference between CT and MR images was 0.75±0.30 mm over all source dwell positions. The image SNR and contrast were both acceptable.
Conclusion
These findings indicate that the Portio applicator has a satisfactorily low level of artifacts and image distortions in 1.5-Tesla, T2W images. It may, therefore, be a promising option for MRI-guided multi-channel vaginal brachytherapy.
Keywords: Magnetic Resonance Imaging, Radiotherapy, Brachytherapy, Portio Applicator, Image Processing, Gynecological, Endometrial Cancer -
Background
Malaria remains a significant global health problem, with a high incidence of cases and a substantial number of deaths yearly. Early identification and accurate diagnosis play a crucial role in effective malaria treatment. However, underdiagnosis presents a significant challenge in reducing mortality rates, and traditional laboratory diagnosis methods have limitations in terms of time consumption and error susceptibility. To overcome these challenges, researchers have increasingly utilized Machine Learning techniques, specifically neural networks, which provide faster, cost-effective, and highly accurate diagnostic capabilities.
Methods
This study aimed to compare the performance of a traditional neural network (NN) with a convolutional neural network (CNN) in the diagnosis and classification of different types of malaria using blood smear images. We curated a comprehensive malaria dataset comprising 1,920 images obtained from 84 patients suspected of having various malaria strains. The dataset consisted of 624 images of Falciparum, 548 images of Vivax, 588 images of Ovale, and 160 images from suspected healthy individuals, obtained from local hospitals in Iran. To ensure precise analysis, we developed a unique segmentation model that effectively eliminated therapeutically beneficial cells from the image context, enabling accurate analysis using artificial intelligence algorithms.
Results
The evaluation of the traditional NN and the proposed 6-layer CNN model for image classification yielded average accuracies of 95.11% and 99.59%, respectively. These results demonstrate that the CNN, as a primary algorithm of deep neural networks (DNNs), outperforms the traditional NN in analyzing different classes of malaria images. The CNN model demonstrated superior diagnostic performance, delivering enhanced accuracy and reliability in the classification of malaria cases.
Conclusion
This research underscores the potential of ML technologies, specifically CNNs, in improving malaria diagnosis and classification. By leveraging advanced image analysis techniques, including the developed segmentation model, the CNN showcased remarkable proficiency in accurately identifying and classifying various malaria parasites from blood smear images. The adoption of machine learning-based approaches holds promise for more effective management and treatment of malaria, addressing the challenges of underdiagnosis and improving patient outcomes.
Keywords: Malaria Parasites, Image Processing, Artificial Neural Network, Deep Learning, Convolutional Neural Network -
Diagnostic accuracy of inverted grayscale mode in radiographs: a systematic review and meta-analysis
X-rays are routinely utilized for different diagnostic purposes, but there is always the risk of an inaccurate diagnosis. This systematic review was designed to investigate whether inverted grayscale mode increases diagnostic accuracy. From inception to February 2022, MEDLINE, Embase, Scopus, Web of Science, and CENTRAL were searched for studies comparing the diagnostic accuracy of grayscale inversion to the conventional method. Quality assessment was performed using the Quality Assessment of Diagnostic Accuracy Studies version 2 (QUADAS-2) tool. Eighteen studies were included, with an overall patient population of 1704. The numbers of studies investigating each lesion are as follows, lung masses: 13, pneumothoraces: 4, bony lesions: 3, interstitial lung diseases: 3, orthopedic studies: 2, bullous lung disease: 1, pleural effusion: 1, urinary calculus: 1, and large vascular occlusion: 1. Two studies had an overall moderate risk of bias, and the remainder had low risk. The combined mode, featuring the conventional mode with the addition of inverse grayscale, demonstrated better performance than, or no significant difference from, the conventional mode in all studies except one, which showed lower sensitivity in detecting pulmonary nodules. A meta-analysis of 250 patients in four pulmonary nodule studies showed a better area under the ROC curve (AUC) for the inverse mode (0.83, 95% CI: 0.75-0.90) than for the conventional mode (0.80, 95% CI: 0.72-0.88). Applying the inverse mode when using radiography to detect pulmonary nodules might improve diagnostic accuracy. The inverse/combined mode also showed better performance for lesions other than pulmonary nodules in some studies. However, there was insufficient evidence to draw a consistent conclusion.
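Grayscale inversion itself is a one-line pixel transform: each 8-bit value v becomes 255 - v, so dense (bright) structures such as nodules appear dark on a light background. A toy sketch:

```python
# Invert an 8-bit grayscale radiograph represented as a 2D list of values.

def invert(image):
    """Return the inverted image: each pixel v is mapped to 255 - v."""
    return [[255 - v for v in row] for row in image]

radiograph = [[0, 128, 255],
              [64, 200, 30]]
print(invert(radiograph))  # [[255, 127, 0], [191, 55, 225]]
```

The transform is its own inverse, which is why PACS viewers can offer it as a toggleable display mode without altering the stored image data.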
Keywords: Accuracy, Image Processing, Radiograph -
Background and Objectives
According to the Breast Imaging-Reporting and Data System (BI-RADS), category 4 breast lesions have a 2-95% probability of malignancy. Such estimation can cause challenges in planning the treatment of women with breast cancer. Contrast-enhanced spectral mammography (CESM) is one of the best imaging modalities for breast cancer detection. In this study, we aim to assess the efficiency of deep learning methods in determining the malignancy degree of BI-RADS 4 breast lesions using CESM images.
Methods
In this study, 1408 CESM images of BI-RADS 4 breast lesions were used. An image pre-processing step was first applied to remove noise and improve image quality. Then, segmentation was performed to extract the region of interest. Feature extraction was done using three different convolutional neural networks. Finally, the images were classified using machine learning methods.
Results
Among the applied methods, the Densenet-201 network for feature extraction combined with the K-nearest neighbor (KNN) classifier showed the best results, with accuracy, sensitivity, specificity, and area under the curve of 98.57%, 99.20%, 97.50%, and 0.987, respectively.
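The final classification stage is a nearest-neighbour vote over CNN-derived feature vectors. The sketch below is a bare-bones KNN on made-up 2-component features, not the study's pipeline (which used Densenet-201 features of much higher dimension):

```python
# Minimal k-nearest-neighbour classifier: Euclidean distance + majority vote.
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; returns the majority
    label among the k training points closest to the query vector."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.1, 0.2), "benign"), ((0.2, 0.1), "benign"),
         ((0.9, 0.8), "malignant"), ((0.8, 0.9), "malignant"),
         ((0.85, 0.85), "malignant")]
print(knn_predict(train, (0.88, 0.82)))  # malignant
```

KNN pairs well with deep features because the CNN has already mapped visually similar lesions close together in feature space, so simple distance-based voting suffices.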
Conclusion
The proposed method (Densenet-201 and KNN) using CESM images is effective in estimating the malignancy of BI-RADS 4 breast lesions and thus in the timely treatment of breast cancer.
Keywords: Mammography, Deep learning, Breast cancer, Image processing -
Background
Automatic segmentation of the choroid on optical coherence tomography (OCT) images helps ophthalmologists in diagnosing eye pathologies. Compared to manual segmentation, it is faster and is not affected by human error. However, the large speckle noise present in OCT images limits their automatic segmentation and interpretation. To solve this problem, a new curvelet transform-based K-SVD method is proposed in this study. Furthermore, the dataset was manually segmented by a retinal ophthalmologist to draw a comparison with the proposed automatic segmentation technique.
Methods
In this study, curvelet transform-based K-SVD dictionary learning and the Lucy-Richardson algorithm were used to remove speckle noise from OCT images. The outer/inner choroidal boundaries (O/ICB) were determined using graph theory, and the area between them was considered the choroidal region.
Results
The proposed method was evaluated on our dataset; the average Dice similarity coefficient (DSC) between automatically and manually segmented regions was 92.14% ± 3.30%. Moreover, applying the most recent open-source algorithm by Mazzaferri et al. to our dataset yielded a mean DSC of 55.75% ± 14.54%.
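The DSC reported above measures the overlap between automatic and manual segmentation masks: twice the intersection divided by the sum of the two mask areas. A minimal sketch on toy binary masks (the `dice` helper is illustrative, not the study's implementation):

```python
import numpy as np

def dice(a, b):
    # DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    # Two empty masks agree perfectly by convention
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy "automatic" and "manual" masks (3 foreground pixels each, 2 shared)
mask_auto = np.array([[1, 1, 0], [0, 1, 0]])
mask_manual = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(mask_auto, mask_manual)  # 2*2 / (3+3) = 2/3
```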
Conclusions
A significant similarity was observed between automatic and manual segmentations. Automatic segmentation of the choroidal layer could also be utilized in large-scale quantitative studies of the choroid.
Keywords: Choroidal segmentation, curvelet transform, graph theory, image processing, optical coherence tomography -
Introduction
COVID-19 has had a devastating impact on public health around the world. Since early diagnosis and timely treatment reduce mortality from COVID-19, and existing diagnostic methods such as the RT-PCR test are prone to error, an alternative is to use artificial intelligence and image processing techniques. The overall goal is to introduce an intelligent model based on deep learning and convolutional neural networks that identifies COVID-19 and pneumonia cases from medical lung images to guide subsequent treatment.
Method
The proposed model uses two datasets, radiography and CT-scan. The dataset images are pre-processed, and data augmentation is applied. Next, three architectures (EfficientNetB4, InceptionV3, and InceptionResNetV2) are trained using transfer learning.
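The augmentation step above typically generates geometrically transformed copies of each image to enlarge the training set. A minimal sketch assuming simple flips and rotations (the abstract does not specify which transforms were actually used):

```python
import numpy as np

def augment(img):
    # Return the original image plus a horizontal flip and two rotations.
    # (Illustrative transforms only; real pipelines may also shift,
    # zoom, or adjust brightness.)
    return [img, np.fliplr(img), np.rot90(img, 1), np.rot90(img, 2)]

# Toy 2x3 "image" so the transforms are easy to inspect
demo = np.arange(6).reshape(2, 3)
views = augment(demo)  # 4 views: one sample becomes four training samples
```

Each transformed copy keeps the original label, so the effective dataset size is multiplied by the number of views.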
Results
The best result for CT-scan images was obtained with the InceptionResNetV2 architecture, with an accuracy of 99.366%, and for radiography images with the InceptionV3 architecture, with an accuracy of 96.943%. In addition, the results indicate that CT-scan images carry more features than radiographic images, so disease diagnosis is performed more accurately on this type of data.
Conclusion
The proposed model based on a convolutional neural network achieves higher accuracy than other similar models. By producing instant results, it can also help in the initial evaluation of patients at medical centers, especially during epidemic peaks, when medical centers face challenges such as shortages of specialists and medical staff.
Keywords: Image Processing, Artificial Intelligence, COVID-19, Convolutional Neural Network -
Background
Cataract is the most prevalent cause of blindness worldwide, responsible for more than 51% of blindness cases. As treatment becomes smarter and the burden on ophthalmologists is reduced, many existing systems have adopted machine-learning-based cataract classification with manual extraction of image features. However, manual extraction of retinal features is generally time-consuming and exhausting and requires skilled ophthalmologists.
Material and Methods
A convolutional neural network (CNN) is a widely used automatic feature extraction model which, compared to classical machine learning approaches, requires much larger datasets to avoid overfitting. This article designs a deep convolutional network for automatically distinguishing cataractous from healthy eyes. The network consists of four convolution layers and a fully connected layer for hierarchical feature learning and training.
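Each of the four convolution layers is built on the same sliding-window operation. A minimal NumPy sketch of a single valid-mode 2-D convolution (illustrative only; the study's network would be implemented in a deep learning framework, not by hand):

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2-D cross-correlation: slide the kernel over the image
    # and take the element-wise product sum at each position.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 all-ones kernel over a 5x5 all-ones image yields a 3x3 map of 9s
feat_map = conv2d(np.ones((5, 5)), np.ones((3, 3)))
```

Stacking four such layers (each followed by a nonlinearity and pooling) produces the hierarchical features the fully connected layer then classifies.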
Results
The proposed approach was tested on the collected images and achieved 90.88% accuracy on the test data. Evaluation with the Keras model's built-in evaluation function yielded 84.14%. The model can be further developed and improved for application to the automatic recognition and treatment of ocular diseases.
Conclusion
This study presented a deep learning algorithm for automatically distinguishing healthy eyes from cataractous ones. The results suggest that the proposed scheme outperforms other conventional methods and can serve as a reference for other retinal disorders.
Keywords: Cataract, Deep Learning, Convolutional Neural Network, Image Processing -
Objectives
The primary aim of this study was to assess the reliability of the ten-segment classification (TSC) system proposed by Krause et al. for tibial plateau fractures (TPFs) and to compare it with the traditionally used Schatzker classification, the AO classification system, and Luo's "three-column" classification (ThCC) system. The second aim was to assess the inter-observer reliability of these classifications across levels of professional experience by comparing entry-level residents (1 year into postgraduation), senior residents (1 year after completing postgraduation), and faculty (>10 years after completing postgraduation).
Methods
Fifty TPFs were classified with the ten-segment classification system, and its intra-observer (at a 1-month interval) and inter-observer reproducibility was assessed using kappa (k) values by three groups with varying levels of experience (Groups I, II, and III comprised two junior residents, two senior residents, and two consultants, respectively). The same was done for three other common classification systems (Schatzker, AO, and three-column).
Results
The ten-segment classification showed the lowest k for both inter-observer (0.08) and intra-observer (0.03) reliability. The highest individual inter-observer (k=0.52) and intra-observer (k=0.31) reliability was found for the Schatzker classification in Group I. The lowest individual inter-observer and intra-observer reliability was seen for the ten-segment classification (k=0.07) and the AO classification system (k=-0.03), respectively.
Conclusion
The ten-segment classification showed the lowest k for both inter-observer and intra-observer reliability. The inter-observer reliability of the Schatzker, AO, and three-column classifications decreased with increasing observer experience (JR>SR>Consultant), possibly because fractures are evaluated more critically with increasing seniority.
Level of evidence: I
Keywords: Clinical competence, Computer-assisted, Image processing, Observer variation, Tomography, X-ray computed
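The k values in this study are Cohen's kappa: observed agreement between two raters corrected for the agreement expected by chance. A minimal sketch on toy ratings (the `cohens_kappa` helper and the example ratings are illustrative, not the study's data):

```python
def cohens_kappa(rater1, rater2):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    # and p_e is the chance agreement implied by each rater's label rates
    labels = set(rater1) | set(rater2)
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e) if p_e != 1 else 1.0

# Two raters agree on 3 of 4 toy cases; chance agreement is 0.5
kappa = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])  # (0.75 - 0.5) / 0.5 = 0.5
```

Values near 0 (as reported for the ten-segment system) mean agreement barely above chance; negative values mean agreement below chance.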
-
Gastroenterology and Hepatology From Bed to Bench Journal, Volume:15 Issue: 4, Autumn 2022, PP 366-376
Aim
In the current investigation, a TBS-derived image processing method based on the observer's diagnosis has been developed. Image parametrization is proposed both as a novel description and as convergent evidence.
Background
Condensed X-ray images from the esophageal timed barium swallow (TBS) carry substantial implications for elucidating the pathophysiological dimensions of esophageal motility disorders.
Methods
Through a simultaneous study of TBS and high-resolution manometry (HRM) findings, we performed a retrospective cohort study of 252 patients from March 2018 to October 2019. Interventions, irrelevant information, and insufficient patient data were excluded; only subjects with adequate data and acceptable test accuracy were included. We reviewed 117 DICOM (Digital Imaging and Communications in Medicine) X-ray images from patients with confirmed diagnoses of achalasia type II, esophagogastric junction outflow obstruction (EGJOO), or non-achalasia.
Results
The results suggested a cut-off level of 47% for the dilated diameter index (DDi) as a sign of a dilated esophageal body. In achalasia type II patients (n=66 images), the mean DDi was 55.6%, and the method achieved a sensitivity of 95% and a specificity of 93% relative to the non-achalasia images. In EGJOO patients (27 images), the mean DDi was 50.4%, with a sensitivity of 85% and a specificity of 87%.
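The reported sensitivity and specificity follow from thresholding DDi at the 47% cut-off and comparing the resulting labels with the confirmed diagnoses. A minimal sketch with made-up DDi values and ground-truth labels (not the study's data):

```python
def sens_spec(values, labels, cutoff):
    # Classify DDi >= cutoff as positive; labels: 1 = diseased, 0 = not
    tp = sum(v >= cutoff and y for v, y in zip(values, labels))
    fn = sum(v < cutoff and y for v, y in zip(values, labels))
    tn = sum(v < cutoff and not y for v, y in zip(values, labels))
    fp = sum(v >= cutoff and not y for v, y in zip(values, labels))
    # Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)
    return tp / (tp + fn), tn / (tn + fp)

# Toy DDi percentages for two diseased and two non-achalasia cases
sens, spec = sens_spec([55.6, 50.4, 40.0, 30.0], [1, 1, 0, 0], 47.0)
```

On this toy data the 47% cut-off separates the groups perfectly; on the real cohort the overlap between groups yields the 85-95% figures above.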
Conclusion
TBS is an efficacious method and a prominent component of the achalasia diagnosis process. Standard parametrization via DDi might improve radiological reports. Our method could assist in non-invasive diagnosis and help identify achalasia subtypes somewhat earlier. To the best of our knowledge, this interface is an innovative parametrization for TBS image review.
Keywords: High-resolution esophageal manometry, Esophageal barium time swallow, Image processing, Achalasia -
Background
Accurate and early diagnosis of internal root resorption is essential for determining the treatment outcome and prognosis. Several digital processing algorithms have been introduced for the diagnosis of internal root resorption. The aim of the present study was to evaluate the effect of gamma values on the detection of internal root resorption in cone-beam computed tomography (CBCT) images.
Methods
A total of 45 healthy extracted single-rooted teeth were selected. The teeth were sectioned mesiodistally at the central groove of the occlusal surface using a 0.1-mm diamond disc. Internal root resorption was simulated at the cervical, middle, and apical regions of the root canal. CBCT images were prepared at three gamma settings (low, medium, and high modes). Data were analyzed using McNemar tests, Cohen's kappa coefficient, and the receiver operating characteristic curve. A P value less than 0.05 was considered statistically significant.
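The gamma modes applied to the CBCT images correspond to the standard power-law intensity transform: pixel values are normalized, raised to a power, and rescaled. A minimal sketch for 8-bit images (the scanner's actual gamma values are not given in the abstract; 0.5 below is an arbitrary example):

```python
import numpy as np

def apply_gamma(image, gamma):
    # Normalize to [0, 1], apply the power-law transform, rescale to 8-bit.
    # gamma < 1 brightens mid-tones; gamma > 1 darkens them.
    x = image.astype(float) / 255.0
    return np.clip(255.0 * x ** gamma, 0, 255).astype(np.uint8)

# Black and white endpoints are unchanged; the mid-tone 64 is brightened
out = apply_gamma(np.array([[0, 64, 255]], dtype=np.uint8), 0.5)
```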
Results
Sensitivity and overall accuracy of CBCT images were high for all gamma modes (low, medium, and high), and slightly higher in the high gamma mode (sensitivity: 94%; overall accuracy: 100%) than in the other two modes. The sensitivity and specificity of the high and medium gamma modes in the three root-canal regions (cervical, middle, and apical) were at optimal thresholds compared to the low gamma mode.
Conclusions
Gamma modes, particularly the high gamma mode, can be adopted in CBCT imaging as a promising processing filter for the detection of internal root resorption.
Keywords: Gamma value, Internal root resorption, CBCT, Image processing