Search for articles related to the keyword "algorithms" in journals of the medical group
  • Zahra Karbasi, Michaeel Motaghi Niko, Maryam Zahmatkeshan*
    Background and Aim

    Cataract is recognized as the cause of 51% of blindness worldwide. Following the promising initial results of artificial intelligence systems in eye diseases, AI algorithms are applied in cataract diagnosis, grading of cataract severity, intraocular lens calculations, and even as an assistive tool in cataract surgery. This study, conducted as a systematic review, examined artificial intelligence techniques in the management of cataract disease.

    Methods

    This systematic review examined artificial intelligence techniques for managing cataract disease up to 20 Aban 1402 (November 11, 2023) and followed the PRISMA guidelines. All relevant articles published in English were retrieved through a systematic search of the PubMed, Scopus, and Web of Science online databases.

    Results

    The initial database search identified 192 records, and 23 articles were ultimately included in the review. The results showed that convolutional neural networks (6 articles), recurrent neural networks (1 article), deep convolutional networks (1 article), support vector machines (2 articles), transfer learning (1 article), decision trees (4 articles), random forests (4 articles), logistic regression (3 articles), Bayesian algorithms (3 articles), XGBoost (3 articles), and the K-nearest neighbors algorithm (2 articles) were the artificial neural network and machine learning techniques used, in combination, in the studies for the diagnosis (70%), management (17%), and prediction (13%) of cataract disease.

    Conclusion

    Various artificial intelligence and machine learning techniques and algorithms can operate with high accuracy and be effective in diagnosing, grading, managing, and predicting cataract. In this review, deep learning techniques and convolutional neural networks accounted for the largest share in cataract diagnosis. Deep learning techniques, decision trees, and Bayesian algorithms were involved in cataract management. Machine learning algorithms such as logistic regression, random forest, artificial neural networks, decision trees, K-nearest neighbors, XGBoost, and adaptive boosting also played a role in cataract prediction. Since early prediction, early diagnosis, and timely referral can reduce future complications of the disease, deploying systems based on AI models with acceptable accuracy can effectively support physicians' decision-making and the management of this disease.

    Keywords: Artificial Intelligence, Cataract, Algorithm
    Zahra Karbasi, Michaeel Motaghi Niko, Maryam Zahmatkeshan*
    Background and Aim

    Cataracts are recognized as the cause of 51% of blindness worldwide. Following the promising initial results of artificial intelligence systems in eye diseases, AI algorithms have been applied in the diagnosis of cataracts, grading the severity of cataracts, intraocular lens calculations, and even as an assistive tool in cataract surgery. This study presents a systematic review of AI techniques in the management of cataract disease.

    Materials and Methods

    This systematic review study was conducted to investigate artificial intelligence techniques to manage cataract disease until November 11, 2023, and based on PRISMA guidelines. We retrieved all relevant articles published in English through a systematic search of PubMed, Scopus, and Web of Science online databases.

    Results

    In our initial search, 192 records were identified in the databases, and eventually, 23 articles were selected for review. The results indicated that convolutional neural network algorithms (6 articles), recurrent neural networks (1 article), deep convolutional networks (1 article), support vector machines (2 articles), transfer learning (1 article), decision trees (4 articles), random forests (4 articles), logistic regression (3 articles), Bayesian algorithms (3 articles), XGBoost (3 articles), and the K-nearest neighbors algorithm (2 articles) were the artificial neural network and machine learning techniques and algorithms utilized. These techniques were employed in the studies for the diagnosis (70%), management (17%), and prediction (13%) of cataract disease.

    Conclusion

    Various artificial intelligence and machine learning techniques and algorithms can be effective and efficient in diagnosing, grading, managing, and predicting cataracts with high accuracy. In this study, deep learning techniques and convolutional neural networks made the greatest contribution to cataract diagnosis. Deep learning techniques, decision trees, and Bayesian algorithms were involved in cataract management. Machine learning algorithms such as logistic regression, random forest, artificial neural network, decision tree, K-nearest neighbor, XGBoost, and adaptive boosting also played a role in cataract prediction. Since early prediction, diagnosis, and timely referral can reduce future complications of the disease, the use of systems based on artificial intelligence models with acceptable accuracy can be effective in supporting physicians' decision-making and managing this disease.

    Keywords: Artificial Intelligence, Cataract, Algorithms
  • Abdurrahman Gülmez, Özgür Appak, Arzu Nazlı Zeka, Nuran Esen, Ayça Arzu Sayıner
    Background

    We aimed to evaluate the performance of the current HIV diagnosis algorithm, which has been in use for four years. Results of HIV Ag/Ab tests, anti-HIV 1/2 confirmatory tests, and HIV-1 RNA tests, as well as the time for official results to be approved, were evaluated.

    Methods

    The relationship among HIV Ag/Ab tests, anti-HIV 1/2 confirmation tests, and HIV-1 RNA PCR tests, their result times, and their conformity with the algorithm were examined at Izmir Dokuz Eylül University between May 2017 and June 2021.

    Results

    HIV Ag/Ab ELISA was repeatedly reactive in 165/54,628 (0.30%) serum samples. The anti-HIV 1/2 confirmation test was reactive in 54.42% (80/147) of samples. The most common pattern (18.2%) in the confirmation tests was positivity for antibodies against the gp160 and gp41 envelope glycoproteins together. The mean reporting time of the confirmation test result was 3 h 50 min after the ELISA test. The mean reporting time of the HIV-1 RNA PCR was 12.79 days (±10.22) after the ELISA test and 12.63 days (±10.12) after the confirmation test. In ROC analysis, the predictive value of the ELISA test for the confirmation test was highest when the S/CO was >13.16 (sensitivity: 97.59%, specificity: 97.59%); an illustration of such a cutoff search is sketched below.

    Conclusion

    The confirmation test in the current algorithm enabled rapid test results, early diagnosis of HIV, and early initiation of antiretroviral therapy. To use the new algorithm effectively, decentralization of the validation tests would be appropriate.

    Keywords: HIV Infection Diagnosis, Serodiagnosis, Algorithms, HIV Testing
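    The S/CO cutoff reported above comes from a ROC analysis. As a rough, hypothetical illustration of how such a cutoff can be chosen (not the authors' code or data), the sketch below applies scikit-learn's roc_curve to synthetic S/CO values and confirmation outcomes and picks the threshold that maximizes Youden's J.

        # Illustrative only: choosing an ELISA S/CO cutoff that best predicts the
        # confirmation test, via ROC analysis (synthetic data, not the study's).
        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(0)
        # Hypothetical signal-to-cutoff (S/CO) ratios and confirmation results (1 = reactive)
        s_co = np.concatenate([rng.gamma(2.0, 3.0, 70), rng.gamma(9.0, 5.0, 80)])
        confirmed = np.concatenate([np.zeros(70, dtype=int), np.ones(80, dtype=int)])

        fpr, tpr, thresholds = roc_curve(confirmed, s_co)
        youden = tpr - fpr                 # Youden's J at each candidate threshold
        best = np.argmax(youden)
        print(f"AUC = {roc_auc_score(confirmed, s_co):.3f}")
        print(f"optimal S/CO cutoff ~ {thresholds[best]:.2f}, "
              f"sensitivity = {tpr[best]:.2%}, specificity = {1 - fpr[best]:.2%}")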
  • Mohebali Rahdar *, Hamidreza Esmaeili
    Background

    Smart hospitals today use Artificial Intelligence to improve the quality of their services. In this sense, optimizing the patient's electronic medical record is one of the most significant issues that these hospitals face.

    Objectives

    This study aimed to determine the role of AI in patient electronic records in a smart hospital.

    Methods

    This study was a systematic review, with keywords searched in the PubMed, Scopus, Google Scholar, and SID databases. In Persian and English, the keywords were artificial intelligence algorithms, electronic medical records, service quality, and hospital. The inclusion criteria were publication in Persian or English, availability of the full text, recent publication, and a focus on the use of AI in electronic medical records. Finally, about 57 papers relevant to the investigation were selected.

    Results

    After reviewing previous related studies, it was discovered that AI can play a role in various aspects of electronic patient records, such as disease diagnosis, predicting relapse and recovery periods, improving treatment accuracy and reducing medical errors, digital care, and decision-support systems. This can result in a 20-30% improvement in resource planning, a 30% decrease in wait times, better resource use, and more accurate predictions.

    Conclusion

    Leveraging AI in electronic patient records is critical for maximizing benefits while minimizing hazards. Despite the limitations, AI has the potential to become a critical tool for smart hospitals in improving healthcare delivery and efficiency. Accordingly, healthcare leaders who incorporate AI algorithms into their systems can provide more effective and up-to-date care to their patients.

    Keywords: Electronic Record, Artificial Intelligence, Patient, Algorithms, Hospitals
  • Atul Mishra *, Ramji Pathak, Teerth Raj Verma, Anoop Kumar Srivastava, Surendra Prasad Mishra, Kailash Kumar Mittal, Sudesh Kumar Singh
    Background
    In radiotherapy, accurate dose computation is important, since in small fields with heterogeneity the computed dose often shows discrepancies.
    Objective
    The present study aimed to evaluate the dosimetry of treatment planning algorithms in a lung-equivalent heterogeneous medium for Volumetric Modulated Arc Therapy (VMAT), step-and-shoot Intensity-Modulated Radiation Therapy (ss-IMRT), and dynamic Intensity-Modulated Radiation Therapy (d-IMRT).
    Material and Methods
    In this experimental study, a Computerized Imaging Reference System (CIRS) phantom was used with an inhomogeneous Racemosa wood cylinder for two types of tumors, namely a Left Lung Central Tumor (LCT) and a Left Lung Peripheral Tumor (LPT), in the CIRS left lung cavity. The computed tomography (CT) datasets were used to generate VMAT, d-IMRT, and ss-IMRT plans for the LCT and LPT, irradiated with 6 MV photon beams. In this study, the accuracy and efficacy of two algorithms, Monte Carlo (MC) and Pencil Beam (PB), from the Monaco treatment planning system (TPS) were tested using Gafchromic EBT3 films and the CIRS thorax phantom.
    Results
    Regardless of treatment technique, both algorithms exhibited higher divergence in the LPT than in the LCT. In both the LCT and LPT, the highest deviation was near the tumor-lung junction. However, the deviation was higher with the PB algorithm than with the MC algorithm, with a minimally acceptable variation of -0.8%.
    Conclusion
    The MC algorithm shows more consistency for EBT3 measured dose in lung equivalent heterogeneous medium. However, accurate dose predictions are complicated due to electronic disequilibrium within and at the interface of inhomogeneity. These constraints may cause variations from the anticipated outcomes of the treatments.
    Keywords: Radiotherapy, Algorithms, Gafchromic Film, Computed Tomography, Monte Carlo, Lung
  • Hadi Akbari*, Majid Mazinani, Elias Mazrooei Rad
    Introduction

    The main aim of this research is to examine and analyze changes in the different frequency bands of brain signals between two groups, Quran memorizers and non-memorizers. The study focuses on performing visual memory tests with the Cantab software, with an emphasis on selecting optimal feature channels and applying different classifiers.

    Materials and Methods

    First, brain signals were recorded from 15 Quran memorizers and 15 non-memorizers during the delayed matching to sample, paired associates learning, and spatial recognition memory tests of the Cantab visual memory battery. After appropriate pre-processing, nonlinear features such as the Lyapunov exponent, correlation dimension, entropy, and detrended fluctuation analysis parameters were extracted. The relevant channels were selected using the t-test, genetic algorithm, and sequential forward selection methods, and the classifiers included the multilayer perceptron, support vector machine, and naive Bayes algorithms.

    Results

    The selected optimal channels were mainly in the frontal, parietal, and occipital regions associated with the attention network and visual memory of Quran memorizers. In most cases, the mean power of the low-frequency components of the brain signals was higher in memorizers than in non-memorizers. The multilayer perceptron neural network, using the optimal channels selected by the genetic algorithm, achieved the highest accuracy between the two groups, 94.79%.

    Conclusion

    Analysis of the brain data showed that the ratio of low- to high-frequency component power and the theta-to-beta band power ratio reflect increased calmness and patience between the two groups during visual memory retrieval. This increased concentration and attention led to a higher percentage of correct responses and longer reaction times in the memorizer group during the Cantab visual memory tests. With the multilayer perceptron neural network using features selected by the genetic algorithm, particularly sample and approximate entropy in the A5 and D sub-bands over the occipital, parietal, and central regions, a higher accuracy was achieved during the DMS test.

    Keywords: Memory, Electroencephalography, Algorithms
    Hadi Akbari*, Majid Mazinani, Elias Mazrooei Rad
    Introduction

    The main purpose of this research is to explore and analyze changes in various frequency bands of brain signals among two distinct groups: memorizers and non-memorizers of the Quran. This investigation focuses on the execution of visual memory tests using Cantab software, with an emphasis on selecting optimal feature channels and employing different classifiers.

    Materials and Methods

    First, brain signals were recorded from 15 Quran memorizers and 15 non-memorizers during the delayed matching to sample (DMS), Paired Associates Learning (PAL), and Spatial Recognition Memory visual memory tests using Cantab software. Following appropriate pre-processing, non-linear features such as the Lyapunov exponent, correlation dimension, entropy, and detrended fluctuation analysis parameters were extracted. The selection of relevant channels was performed using t-test, Sequential Forward Selection, and Genetic Algorithm (GA) methods. Classification involved the use of a multi-layer perceptron (MLP), Support Vector Machine, and naïve Bayes algorithms (a simplified sketch of such a feature-plus-classifier pipeline follows this abstract).

    Results

    The selected optimal channels were primarily associated with frontal, parietal, and occipital brain regions involved in the attention network and visual memory of Quran memorizers. In most instances, the average power of low-frequency components in brain signals was found to be higher in memorizers than in non-memorizers. The MLP neural network, utilizing optimal channels selected by the GA method, demonstrated the highest accuracy between memorizers and non-memorizers at 94.79%.

    Conclusion

    Analysis of the EEG data revealed that the power of low-frequency components, the low-to-high-frequency power ratio, and the theta-to-beta band power ratio indicated an increase in relaxation and patience in the memorizer group during the retrieval phase of visual memory. This enhanced concentration and attention led to a higher percentage of correct answers and increased reaction time in the memorizer group during the visual memory tests using Cantab software. The MLP neural network, employing features selected by the GA method, particularly sample and approximate entropy in the D and A5 sub-bands and in the occipital, parietal, and central brain regions, achieved a superior accuracy in the DMS test.

    Keywords: Memory, Electroencephalography, Algorithms
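    The Methods above describe nonlinear EEG features fed to classifiers such as an MLP. The sketch below is a heavily simplified, hypothetical version of that pipeline: one sample-entropy feature per channel on synthetic signals, classified with scikit-learn's MLPClassifier. The data, channel count, and feature set are placeholders, not the study's.

        # Simplified illustration: sample entropy per channel + MLP classifier on toy signals.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        def sample_entropy(x, m=2, r_factor=0.2):
            """Simplified sample entropy of a 1-D signal (self-matches excluded)."""
            x = np.asarray(x, float)
            r = r_factor * x.std()
            def matches(length):
                t = np.array([x[i:i + length] for i in range(len(x) - length)])
                c = 0
                for i in range(len(t)):
                    d = np.max(np.abs(t - t[i]), axis=1)
                    c += np.sum(d <= r) - 1        # exclude the self-match
                return c
            B, A = matches(m), matches(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        rng = np.random.default_rng(1)
        n_subjects, n_channels, n_samples = 30, 4, 512
        signals = rng.standard_normal((n_subjects, n_channels, n_samples))
        labels = np.repeat([0, 1], n_subjects // 2)        # 0 = non-memorizer, 1 = memorizer (toy)
        # Make the toy "memorizer" signals smoother so the entropy feature actually separates
        signals[labels == 1] = np.cumsum(signals[labels == 1], axis=-1) / 5

        features = np.array([[sample_entropy(ch) for ch in subj] for subj in signals])
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        print("CV accuracy:", cross_val_score(clf, features, labels, cv=5).mean())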
  • Somayeh Hekmatfar, Fatemeh Nouri, Parisa Negahdar*
    Introduction

    Early diagnosis and interceptive treatment of maxillary canine impaction are crucial, as they reduce treatment complexity and decrease complications and adverse outcomes. The aim of the study was to predict maxillary canine impaction and support early diagnosis using panoramic radiographs.

    Materials and Methods

    This investigation was a cross-sectional study performed on 385 panoramic radiographs, which were evaluated to assess the position of the canines. Two methods, the Ericson and Kurol (EK/L) and the Power and Short (PS) geometric measurement analyses, were applied to each radiograph, and the prevalence was calculated for each method. The normality of the data was then analyzed with a one-sample non-parametric Kolmogorov-Smirnov test. Thereafter, Fisher's exact and Mann-Whitney U tests were conducted to determine differences in permanent tooth impaction among the subjects (these tests are sketched below).

    Results

    Five permanent canines were classified as high risk by the EK/L method, whereas 20 canines at high risk of impaction were found with the PS method. A statistical difference was detected between the right and left sides, and a statistically significant difference was found between the EK/L and PS methods (p = 0.004).

    Conclusion

    The EK/L method determined a canine impaction prevalence of 1.3% on panoramic radiographs, while with the PS method the prevalence was 5.2%. In addition, no significant gender predilection for canine impaction was found.

    Keywords: Animals, Humans, Radiography-Panoramic, Algorithms
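    The statistical workflow above (a one-sample Kolmogorov-Smirnov normality check, then Fisher's exact and Mann-Whitney U tests) maps directly onto SciPy calls. The sketch below runs the same three tests on invented measurements and counts, purely as an illustration of the tooling, not of the study's data.

        # Minimal illustration of the statistical tests named in the Methods (toy data).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        angles_right = rng.normal(15, 5, 60)    # hypothetical canine angulation measurements
        angles_left = rng.normal(17, 5, 60)

        # One-sample Kolmogorov-Smirnov test against a fitted normal distribution
        print(stats.kstest(angles_right, "norm", args=(angles_right.mean(), angles_right.std())))

        # Fisher's exact test on a 2x2 table: high-risk vs. not, by method (hypothetical counts)
        print(stats.fisher_exact([[5, 380], [20, 365]]))

        # Mann-Whitney U test comparing the two sides
        print(stats.mannwhitneyu(angles_right, angles_left))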
  • Mohsen Masoumian Hosseini, Seyedeh Toktam Masoumian Hosseini, Karim Qayumi, Soleiman Ahmady, HamidReza Koohestani
    Introduction

    Artificial Intelligence (AI) application in emergency medicine is subject to ethical and legal inconsistencies. The purposes of this study were to map the extent of AI applications in emergency medicine, to identify ethical issues related to the use of AI, and to propose an ethical framework for its use.

    Methods

    A comprehensive literature collection was compiled through electronic databases/internet search engines (PubMed, Web of Science Platform, MEDLINE, Scopus, Google Scholar/Academia, and ERIC) and reference lists. We considered studies published between 1 January 2014 and 6 October 2022. Articles that did not self-classify as studies of an AI intervention, those that were not relevant to Emergency Departments (EDs), and articles that did not report outcomes or evaluations were excluded. Descriptive and thematic analyses of data extracted from the included articles were conducted.

    Results

    A total of 137 out of the 2,175 citations in the original database were eligible for full-text evaluation. Of these articles, 47 were included in the scoping review and considered for theme extraction. This review covers seven main areas of AI techniques in emergency medicine: Machine Learning (ML) algorithms (10.64%), prehospital emergency management (12.76%), triage, patient acuity and disposition of patients (19.15%), disease and condition prediction (23.40%), emergency department management (17.03%), the future impact of AI on Emergency Medical Services (EMS) (8.51%), and ethical issues (8.51%).

    Conclusion

    There has been a rapid increase in AI research in emergency medicine in recent years. Several studies have demonstrated the potential of AI in diverse contexts, particularly when improving patient outcomes through predictive modelling. According to the synthesis of studies in our review, however, AI-based decision-making still lacks transparency and remains opaque.

    Keywords: Algorithms, Artificial Intelligence, Emergency Service, Hospital, Emergency Medicine, Machine Learning, Neural Networks, Computer, Ethics
  • Saeed Saeedbakhsh, Mohammad Sattari, Maryam Mohammadi, Jamshid Najafian, Farzaneh Mohammadi
    Background

    Coronary artery disease (CAD) is known as the most common cardiovascular disease, and its development is influenced by several risk factors. Diagnostic and therapeutic methods for this disease have numerous and costly side effects. Therefore, researchers are looking for cost-effective and accurate methods to diagnose this disease. Machine learning algorithms can help specialists diagnose the disease early. The aim of this study is to detect CAD using machine learning algorithms.

    Materials and Methods

    In this study, three data mining algorithms, support vector machine (SVM), artificial neural network (ANN), and random forest, were used to predict CAD using the Isfahan Cohort Study dataset of the Isfahan Cardiovascular Research Center. Nineteen features and 11,495 records from this dataset were used for this research (a generic comparison of these classifiers is sketched below).

    Results

    All three algorithms achieved relatively close results; however, the SVM had the highest accuracy, calculated as 89.73%. The ANN algorithm also obtained a high area under the curve, sensitivity, and accuracy, providing acceptable performance. Age, sex, sleep satisfaction, history of stroke, history of palpitations, and history of heart disease were most correlated with the target class. Eleven rules were also extracted from this dataset with high confidence and support.

    Conclusion

    In this study, it was shown that machine learning algorithms can be used with high accuracy to detect CAD. Thus, it allows physicians to perform timely preventive treatment in patients with CAD.

    Keywords: Algorithms, artificial intelligence, coronary artery disease, data mining, diagnosis, machine learning
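    Since the Isfahan Cohort data are not reproduced here, the sketch below compares the same three classifier families (SVM, ANN, random forest) on a synthetic 19-feature dataset with scikit-learn; it illustrates the evaluation pattern only, not the study's results.

        # Illustrative comparison of the three classifiers named above, on synthetic data.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier
        from sklearn.ensemble import RandomForestClassifier

        # 19 features, mirroring the number of features used in the study
        X, y = make_classification(n_samples=2000, n_features=19, n_informative=8, random_state=0)

        models = {
            "SVM": make_pipeline(StandardScaler(), SVC()),
            "ANN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
            "Random forest": RandomForestClassifier(random_state=0),
        }
        for name, model in models.items():
            acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
            print(f"{name}: cross-validated accuracy = {acc:.3f}")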
  • Marefat Siahkohian, Leila Fasihi, Bahman Ebrahimi Torkamani
    Background and Objectives

    Coronary artery disease (CAD) can be regarded as one of the leading causes of death. Its prevalence is rising and leads to high health-system costs in many countries. The medical and research community is increasingly interested in computer-aided diagnosis of coronary heart disease through machine learning methods. The aim of this study was to predict coronary heart disease using a discriminant analysis algorithm in active elderly men.

    Methods

    This analytical study was conducted on 351 elderly individuals referred to Ayatollah Kashani Hospital in Tehran. A discriminant analysis algorithm was used to diagnose coronary artery disease, and Python software was used for data analysis.

    Results

    The results showed that, using 14 features as risk factors drawn from the subjects' laboratory, personal, and lifestyle information, the discriminant analysis algorithm could predict coronary artery disease with 94.4% accuracy and 88.9% precision.

    Conclusion

    The results of the present study indicate that this system could likely be used by cardiologists as an effective, intelligent method alongside other diagnostic approaches for predicting coronary artery disease. New data mining methods may also be effective in reducing the risks of invasive procedures.

    Keywords: Coronary Disease, Algorithms, Active Elderly
    Marefat Siahkohian, Leila Fasihi, Bahman Ebrahimi Torkamani
    Background & objectives

    Coronary heart disease (CHD) is an important medical disorder and one of the most common heart diseases worldwide, which causes disability and economic burden. The medical and research community is increasingly interested in computer-aided coronary heart disease diagnosis through the use of machine learning methods. This study aimed to diagnose coronary heart disease using a discriminant analysis algorithm in active elderly men.

    Methods

    This analytical study was conducted on 351 elderly patients of Ayatollah Kashani Hospital in Tehran. A discriminant analysis algorithm was used to diagnose coronary artery disease, and Python software was used for data analysis.

    Results

    The results showed that, using 14 characteristics as risk factors related to the subjects' laboratory, personal, and lifestyle information, the discriminant analysis algorithm could distinguish healthy from sick individuals with 94.4% accuracy and 88.9% precision (a minimal sketch of such a classifier follows this abstract).

    Conclusion

    The results of the present study showed that this system can probably be used as an effective and intelligent method along with other diagnostic methods by cardiologists to predict coronary artery disease. Also, new data mining methods can be effective in reducing invasive risks.

    Keywords: Coronary Disease, Algorithms, Active Elderly
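    As a minimal illustration of the discriminant-analysis classifier mentioned above (the study's 14 risk factors and hospital data are not available here, so synthetic features stand in), scikit-learn's LinearDiscriminantAnalysis can be evaluated with accuracy and precision as follows.

        # Sketch: discriminant analysis on synthetic stand-in data, scored by accuracy and precision.
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score, precision_score

        X, y = make_classification(n_samples=351, n_features=14, n_informative=6, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
        pred = lda.predict(X_test)
        print("accuracy:", accuracy_score(y_test, pred))
        print("precision:", precision_score(y_test, pred))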
  • Shraddha Srivastava, Navin Singh *, Varun Kumar Kashyap
    Background
    Conventional optimization techniques are based on the planning approach in which positions and weights are varied to generate the desired dose distribution. Inverse planning simulated annealing (IPSA) is an advanced optimization method developed to automatically determine a suitable combination of positions to design an acceptable plan. 
    Objective
    In this study, three optimization techniques, namely IPSA, graphical optimization (GrOPT), and geometrical optimization (GOPT), are compared in high-dose-rate interstitial brachytherapy of cervical carcinoma.
    Material and Methods
    In this retrospective study, twenty computed tomography (CT) data sets of 10 cervical cancer patients treated with Martinez Universal Perineal Interstitial Template-based interstitial brachytherapy were studied. The treatment plans generated were optimized using the IPSA, GrOPT, and GOPT methods. The prescribed dose was 24 Gy in 4 fractions. Plans produced using the IPSA, GrOPT, and GOPT techniques were analyzed for comparison of dosimetric parameters, including target coverage, homogeneity, conformity, and organs at risk (OAR) doses.
    Results
    V100 values for the IPSA, GrOPT, and GOPT plans were 95.81±2.33%, 93.12±2.76%, and 88.90±4.95%, respectively (V100 and D90 are sketched below). The mean D90 values for the IPSA, GrOPT, and GOPT plans were 6.45±0.15 Gy, 6.12±0.21 Gy, and 5.85±0.57 Gy, respectively. OAR doses were significantly lower in the IPSA plans, which were also more homogeneous (HI = 0.66). Conformity was comparatively higher in the IPSA-based plans (CI = 0.75).
    Conclusion
    IPSA plans were superior and resulted in better target coverage, homogeneity, conformity, and minimal OAR doses.
    Keywords: Brachytherapy, Cervical Cancer, Conformity, Algorithms, Tomography Uterine Cervical Neoplasms
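    V100 and D90 are standard dose-volume-histogram quantities: the percentage of the target receiving at least the prescription dose, and the dose covering the hottest 90% of the target volume. The sketch below computes both from a synthetic per-voxel dose array; the numbers are invented, not the study's plans.

        # Illustrative DVH metrics from a per-voxel target dose array (synthetic values).
        import numpy as np

        rng = np.random.default_rng(0)
        prescription = 6.0                           # Gy per fraction (as in the study)
        target_dose = rng.normal(6.4, 0.4, 50_000)   # hypothetical voxel doses in the target

        v100 = np.mean(target_dose >= prescription) * 100   # % of volume receiving >= 100% of the dose
        d90 = np.percentile(target_dose, 10)                # dose received by 90% of the volume
        print(f"V100 = {v100:.1f}%, D90 = {d90:.2f} Gy")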
  • Vinod Kumar Gangwar *, Om Prakash Gurjar, Lalit Kumar, Avinash Agarwal, Vineet Kumar Mishra, Surendra Prasad Mishra, Saket Pandey
    Background
    Modern radiotherapy techniques use advanced algorithms; however, the phantoms used for quality assurance have homogeneous density. Accordingly, developing heterogeneous phantoms that mimic human body sites is essential for examining the variation between planned and delivered doses.
    Objective
    This study aimed to analyze the accuracy of the dose planned by different algorithms using an indigenously developed heterogeneous thoracic phantom (HT).
    Material and Methods
    In this experimental study, computed tomography (CT) of the HT was performed, and the density of its different parts was measured. Plans were generated on the CT images of the HT phantom with 6 and 15 Megavoltage (MV) photon beams using different treatment techniques, including three-dimensional conformal radiotherapy (3D-CRT), intensity-modulated radiation therapy (IMRT), and volumetric modulated arc therapy (VMAT). The plans were delivered by the linear accelerator, and the dose was measured using an ion chamber (IC) placed in the HT; planned and measured doses were compared.
    Results
    Density values for the different parts of the fabricated phantom, including rib, spine, scapula, lung, chest wall, and heart, were 1.849, 1.976, 1.983, 0.173, 0.855, and 0.833 g/cc, respectively. The variation between planned and IC-measured doses was within the tolerance (±5%) for all photon energies and treatment techniques. Acuros-XB (AXB) showed a slightly higher variation between computed and IC-measured doses with the HT phantom compared with the analytical anisotropic algorithm (AAA).
    Conclusion
    The indigenous heterogeneous phantom can accurately simulate the dosimetric scenario for different algorithms (AXB or AAA) and be also utilized for routine patient-specific QA.
    Keywords: Algorithms, Computed Tomography, Human body, Lung Phantoms, Volumetric-Modulated Arc Therapy, Ribs
  • Bijan Elmi, Naghi Shoja, Abbas Toloie Ashlaghi, Soleiman Iranzadeh
    Background

    The ability of ambulance centers to respond to emergency calls is an important factor in the recovery of patients' health. This study aimed to provide a model for locating emergency relief stations in the road network of East Azerbaijan province in 2020.

    Methods

    This applied-descriptive and experimental research, with an explanatory modelling approach, used the opinions of 70 experts to run a model based on a metaheuristic (genetic) algorithm. The number of ambulances and the composition of the monitoring list were simulated simultaneously, objective and subjective data were combined, and the agent and environmental variables were determined and modelled through a hybrid approach combining agent-based simulation with the metaheuristic algorithm (a toy genetic-algorithm formulation is sketched below).

    Results

    For the initial structure with 40 dangerous points and five stations, the total travel time was 7,860 minutes; after genetic optimization, generation of a new list, and mutation of ambulances from one station to another, it decreased to between 2,700 and 4,000 minutes.

    Conclusion

    This type of optimization can be used to accelerate activities and reduce costs. Due to the dissimilar traffic of the areas, the ambulance does not arrive at dangerous points at equal times. The travel time of all dangerous points can be reduced by changing the location of points, moving forward or backwards depending on the conditions, customizing the features of ambulances and dangerous points, and combining the list of areas to find the best location for emergencies according to the interaction between agents, environmental constraints, and different behavioral features.

    Keywords: Algorithms, Computer Simulation, Emergency Service, Hospital, Workplace
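    As a toy illustration of the genetic-algorithm idea described above (choose station sites so that total travel to dangerous points is minimized), the sketch below evolves subsets of candidate sites on invented coordinates. It is a deliberately simplified stand-in for the paper's agent-based, simulation-coupled model.

        # Toy genetic algorithm: pick 5 station sites minimizing total distance to 40 points.
        import numpy as np

        rng = np.random.default_rng(0)
        points = rng.uniform(0, 100, (40, 2))        # 40 "dangerous points" (invented coordinates)
        candidates = rng.uniform(0, 100, (15, 2))    # 15 candidate station sites
        n_stations, pop_size, generations = 5, 40, 200

        def cost(station_idx):
            """Total distance from every point to its nearest selected station."""
            d = np.linalg.norm(points[:, None, :] - candidates[station_idx][None, :, :], axis=2)
            return d.min(axis=1).sum()

        def random_solution():
            return rng.choice(len(candidates), n_stations, replace=False)

        population = [random_solution() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=cost)                      # elitist selection
            parents = population[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.choice(len(parents), 2, replace=False)
                genes = np.union1d(parents[a], parents[b])
                child = rng.choice(genes, n_stations, replace=False)   # crossover
                if rng.random() < 0.2:                                 # mutation: swap one site
                    child[rng.integers(n_stations)] = rng.integers(len(candidates))
                unique = np.unique(child)
                children.append(unique if len(unique) == n_stations else random_solution())
            population = parents + children

        best = min(population, key=cost)
        print("best stations:", sorted(int(i) for i in best), "total distance:", round(cost(best), 1))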
  • Mohamad Hadi Mazidi *, Mohammad Eshghi, Mohammad Reza Raoufy
    Background
    The electrocardiogram (ECG) is an important measure for diagnosing the presence or absence of heart arrhythmias. Premature ventricular contraction (PVC) is a relatively common arrhythmia that originates outside the normal conduction pathway and is triggered outside the sinoatrial (SA) node of the heart.
    Objective
    This study has focused on tunable Q-factor wavelet transform (TQWT) algorithm and statistical methods to detect PVC.
    Material and Methods
    In this analytical and statistical study, 22 ECG records were selected from the MIT-BIH arrhythmia database. In the first stage, the signal noise was removed and five sub-bands were created by TQWT. In the second stage, nine features (minimum, maximum, root mean square, mean, interquartile range, standard deviation (SD), skewness, and variance) were extracted from the ECG, and the best features were selected using an analysis of variance (ANOVA) test. Finally, the system was evaluated using support vector machine (SVM), K-nearest neighbor (KNN), and artificial neural network (ANN) classifiers (the feature-plus-KNN stage is sketched below).
    Results
    The best results were obtained with the KNN classifier: sensitivity Se = 98.23% and accuracy Ac = 97.81%.
    Conclusion
    A comparative analysis with related existing methods shows that the method proposed in this study outperforms the other methods for PVC classification and can help physicians classify normal and PVC heart signals when screening patients with coronary artery disease (CAD).
    Keywords: Tunable Q-factor wavelet transform (TQWT), Electrocardiogram (ECG), Algorithms, Wavelet Analysis, Support Vector Machine
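    No widely standard library call for the TQWT is assumed here, so the sketch below illustrates only the later stage described above: computing the listed statistical features on sub-band arrays (synthetic stand-ins for TQWT output) and classifying beats with a KNN classifier. It is not the authors' implementation.

        # Sketch of the statistical-feature + KNN stage; "sub-bands" are synthetic placeholders.
        import numpy as np
        from scipy import stats
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        def beat_features(subbands):
            """Per sub-band: min, max, RMS, mean, IQR, SD, skewness, variance."""
            feats = []
            for b in subbands:
                feats += [b.min(), b.max(), np.sqrt(np.mean(b ** 2)), b.mean(),
                          stats.iqr(b), b.std(), stats.skew(b), b.var()]
            return feats

        rng = np.random.default_rng(0)
        n_beats = 200
        labels = rng.integers(0, 2, n_beats)             # 0 = normal, 1 = PVC (toy labels)
        X = np.array([beat_features([rng.standard_normal(64) * (1 + lab * s)
                                     for s in range(5)]) for lab in labels])

        knn = KNeighborsClassifier(n_neighbors=3)
        print("CV accuracy:", cross_val_score(knn, X, labels, cv=5).mean())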
  • Leila Mohammadian, Mohsen Bakhshandeh *, Elham Saeedzadeh, Ali Jabbari Arfaee
    Introduction
    Bolus-type materials are needed in radiotherapy of superficial lesions. This work determined the dosimetric accuracy of two commercial treatment planning systems (TPS) for calculating photon dose distribution in the presence of an eXaSkin bolus.
    Material and Methods
    Dose calculations were performed with the collapsed cone convolution/superposition (CCC) and anisotropic analytical algorithm (AAA) using computed tomography (CT) images of a heterogeneous CIRS phantom. EBT3 film was used to obtain percentage depth dose (PDD) curves, and the gamma index was used to compare the accuracy of the two algorithms (a simple 1-D gamma computation is sketched below). The passing rate of the global gamma index with the standard 3%/3 mm criterion was set at 95% in this study.
    Results
    The surface dose in the PDD curves increased in the presence of a 0.5 cm thick eXaSkin bolus. The gamma index passing rates with the standard criterion between the AAA algorithm and EBT3 film measurements without and with bolus were 95% and 95.5%, respectively, while they were 96% and 97.5% for the CCC algorithm.
    Conclusion
    There was good agreement in dose calculation between the AAA and CCC algorithms. Furthermore, the eXaSkin bolus increased the surface dose by 25%.
    Keywords: Radiotherapy, Algorithms, Dosimetry, Bolus
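    The gamma index combines a dose-difference criterion (here 3% of the maximum dose, since the study uses a global gamma) with a distance-to-agreement criterion (3 mm). The sketch below is a brute-force 1-D version on synthetic depth-dose profiles; clinical analyses use dedicated 2-D/3-D tools, and the profiles here are invented.

        # Brute-force 1-D global gamma index (3%/3 mm) on synthetic depth-dose profiles.
        import numpy as np

        def gamma_1d(pos_ref, dose_ref, pos_eval, dose_eval, dose_pct=3.0, dta_mm=3.0):
            dose_tol = dose_pct / 100.0 * dose_ref.max()      # global dose criterion
            gammas = np.empty(len(pos_ref))
            for i, (x, d) in enumerate(zip(pos_ref, dose_ref)):
                dose_term = (dose_eval - d) / dose_tol
                dist_term = (pos_eval - x) / dta_mm
                gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
            return gammas

        x = np.arange(0, 100, 1.0)                            # positions in mm
        measured = 100 * np.exp(-0.03 * x)                    # e.g. film-measured PDD (synthetic)
        calculated = 100 * np.exp(-0.031 * x) + 0.5           # e.g. TPS-calculated PDD (synthetic)

        g = gamma_1d(x, measured, x, calculated)
        print(f"gamma passing rate: {np.mean(g <= 1) * 100:.1f}%")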
  • Moslem Mohammadi Jenghara *, Majid Iranpour Mobarakeh, Hossein Ebrahimpour Komleh
    Background
    Dynamic protein-protein interaction networks (DPPIN) can capture the conditional and temporal features of proteins and protein complexes. In addition, the relations of protein complexes in dynamic networks can provide useful information for understanding the dynamic functionality of PPI networks.
    Objective
    In this paper, an algorithm is presented to discover the temporal association rule from the dynamic PPIN dataset.
    Material and Methods
    In this analytical study, the static protein-protein interaction network is transformed into a dynamic network using gene expression thresholding to extract protein complex relations. The number of proteins present in the dynamic network is large at each time point, and this number increases further when multidimensional rules are extracted at different times. By mapping the gold-standard protein complexes as reference protein complexes, the number of items in each transaction is reduced from active proteins to protein complexes. Sub-graphs extracted as protein complexes at each time point are weighted according to their degree of similarity to the reference protein complexes. Mega-transactions and extended items are created based on the occurrence bitmap matrix of the reference complexes, and rules are extracted from the mega-transactions of protein complexes (a generic association-rule extraction is sketched below).
    Results
    The proposed method was evaluated using gold-standard protein complex rules. The number of rules extracted from the BioGRID datasets and protein complexes is 281, with a support of 0.2.
    Conclusion
    The characteristic of the proposed algorithm is the simultaneous extraction of intra-transaction and inter-transaction rules. The results evaluation using EBI data shows the efficiency of the proposed algorithm.
    Keywords: Protein Interaction Network, Temporal Association Rule, Dynamic Network, Protein Complexes, Proteins, Algorithms, Gene expression
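    The paper's mega-transaction construction and complex weighting are not reproduced here. Purely as an illustration of mining association rules from transactions of protein complexes at the reported support threshold of 0.2, the sketch below uses the apriori implementation from the mlxtend package (an assumed dependency) on a toy one-hot transaction table.

        # Generic association-rule mining on a toy table of active protein complexes per time point.
        import pandas as pd
        from mlxtend.frequent_patterns import apriori, association_rules

        # Each row = one time point; columns = reference complexes active at that time (toy data)
        transactions = pd.DataFrame(
            [[1, 1, 0, 1], [1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0], [1, 1, 1, 1]],
            columns=["complex_A", "complex_B", "complex_C", "complex_D"],
        ).astype(bool)

        frequent = apriori(transactions, min_support=0.2, use_colnames=True)
        rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
        print(rules[["antecedents", "consequents", "support", "confidence"]])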
  • Ummu Zahro, Choirul Anam *, Wahyu Budi, Pandji Triadyaksa, Jhon Saragih, Dito Rukmana
    Introduction
    A simple noise reduction algorithm, the selective mean filter (SMF), was previously introduced. The aim of this study is to investigate the quality of images filtered by the SMF and to compare it with adaptive statistical iterative reconstruction (ASIR).
    Material and Methods
    To assess basic image quality, an American Association of Physicists in Medicine Computed Tomography (AAPM CT) performance phantom was used. The phantom was scanned with a 128-slice computed tomography scanner, with the tube current varied from 50 mA to 100, 150, and 200 mA. The phantom images were reconstructed by filtered back projection (FBP) followed by the SMF, and by ASIR (20, 40, 60, 80, and 100%). Image quality was assessed in terms of noise level, noise power spectrum (NPS), and modulation transfer function (MTF) (one plausible reading of the SMF is sketched below).
    Results
    The noise level and NPS of the SMF were similar to those of ASIR 100%. The MTF10 values of the ASIR filter at any level and of the SMF were comparable. The MTF10 values of ASIR 60% and the SMF at 50 mA (low current) were 0.76 ± 0.02 and 0.75 ± 0.02 cycles/mm, respectively, while at 200 mA (high current) they were 0.74 ± 0.00 and 0.73 ± 0.00 cycles/mm, respectively.
    Conclusion
    Our results indicated that the performance of the SMF in reducing noise is equivalent to the maximum level of ASIR strength, i.e., ASIR 100%.
    Keywords: Computed Tomography, Image Quality, Algorithms, Image Processing, selective mean filter, adaptive statistical iterative reconstruction
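    The abstract does not spell out the SMF's definition, so the sketch below encodes one plausible reading, and an assumption on our part: each pixel is replaced by the mean of only those neighbors whose values lie within a threshold of the center pixel. The phantom, window size, and threshold are invented.

        # A plausible "selective mean" filter: average only neighbors close in value to the center pixel.
        import numpy as np

        def selective_mean_filter(img, size=3, threshold=20.0):
            pad = size // 2
            padded = np.pad(img.astype(float), pad, mode="reflect")
            out = np.empty_like(img, dtype=float)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    window = padded[i:i + size, j:j + size]
                    mask = np.abs(window - img[i, j]) <= threshold   # keep only similar neighbors
                    out[i, j] = window[mask].mean()
            return out

        rng = np.random.default_rng(0)
        phantom = np.full((64, 64), 100.0)
        phantom[16:48, 16:48] = 140.0                       # a high-contrast insert
        noisy = phantom + rng.normal(0, 10, phantom.shape)
        filtered = selective_mean_filter(noisy, size=3, threshold=25.0)
        print("noise (SD) before:", round(noisy[:16, :16].std(), 2),
              "after:", round(filtered[:16, :16].std(), 2))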
  • Trio Pambudi Utomo, Nuryani Nuryani *, Anto Satriyo Nugroho
    QT-interval prolongation is an important parameter for heart arrhythmia diagnosis. The QT interval is the time from QRS onset to the T-wave end of the electrocardiogram (ECG). Manual measurement of the QT interval, especially for 12-lead ECG, is time-consuming, so an automatic QT-interval measurement is necessary. A new method for automatic QT-interval measurement is presented in this paper, consisting of three main parts: QRS-complex detection, determination of QRS onset, and T-end determination. The QRS-complex detection is based on a modified Pan-Tompkins algorithm (a condensed Pan-Tompkins pipeline is sketched below), and the T-end is defined based on a Region of Interest (ROI) maximum limit. We compared and tested the proposed QT-interval measurement method against a reference measurement in terms of the correlation coefficient and the range of the 95% limits of agreement (LoA); these were 0.575 and 0.290, respectively. The proposed method was successfully implemented in a smartphone-based ECG monitoring system with high performance: the accuracy, positive predictivity, and sensitivity of QRS-complex detection in the system are 99.70%, 99.78%, and 99.92%, respectively, and the range of the 95% LoA between manual and system QT-interval measurements is 0.216. The results show that the proposed method is reliable for QT-interval measurement and outperforms the other methods in terms of the correlation coefficient and the range of the 95% LoA.
    Keywords: Electrocardiography, Smartphone, Algorithms, Monitoring, Physiologic
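    The Pan-Tompkins stages behind the QRS detector above (band-pass filtering, differentiation, squaring, moving-window integration, peak picking) can be condensed as in the sketch below, run on a synthetic ECG. The paper's modifications, the ROI-based T-end logic, and the smartphone implementation are not reproduced.

        # Condensed Pan-Tompkins-style QRS detection on a synthetic ECG (illustrative only).
        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        fs = 360                                             # sampling rate (Hz)
        t = np.arange(0, 10, 1 / fs)
        ecg = np.zeros_like(t)
        for beat in np.arange(0.5, 10, 0.8):                 # synthetic R peaks every 0.8 s
            ecg += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))
        ecg += 0.05 * np.random.default_rng(0).standard_normal(len(t))

        b, a = butter(2, [5, 15], btype="bandpass", fs=fs)   # 5-15 Hz band-pass
        filtered = filtfilt(b, a, ecg)
        squared = np.gradient(filtered) ** 2                 # differentiate, then square
        window = int(0.15 * fs)                              # 150 ms moving-window integration
        integrated = np.convolve(squared, np.ones(window) / window, mode="same")

        peaks, _ = find_peaks(integrated, height=0.5 * integrated.max(), distance=int(0.3 * fs))
        rr = np.diff(peaks) / fs
        print(f"detected {len(peaks)} QRS complexes, mean RR = {rr.mean():.2f} s")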
  • Choirul Anam *, Winda Kusuma Dewi, Masdi Masdi, Freddy Haryanto, Toshioh Fujibuchi, Geoff Dougherty
    Background
    Estimation of the eye lens dose is important in head computed tomography (CT) examinations, since the eye lens is an organ sensitive to ionizing radiation.
    Objective
    The purpose of this study is to compare estimations of eye lens dose in head CT examinations using local size-specific dose estimate (SSDE) based on size-conversion factors of the American Association of Physicists in Medicine (AAPM) Report No. 293 with those based on size-conversion factors of the AAPM Report No. 220.
    Material and Methods
    This experimental study was conducted on a group of patients who had undergone nasopharyngeal CT examination. Due to the longitudinal (z-axis) dose fluctuation, both the average global SSDE and the average local SSDE (i.e., over the particular slices where the eyes are located) were investigated. All estimates were compared with measurements using thermoluminescent dosimeters (TLDs). Estimated and measured doses were obtained for 14 patients undergoing nasopharyngeal CT examination.
    Results
    It was found that the percentage differences of the volume CT dose index (CTDIvol), average global SSDE based on AAPM No. 220 (SSDEo,g), average local SSDE based on AAPM No. 220 (SSDEo,l), average global SSDE based on AAPM No. 293 (SSDEn,g), and average local SSDE based on AAPM No. 293 (SSDEn,l) against the measured TLD doses were 22.5, 21.7, 15.0, 9.3, and 2.1%, respectively. All comparisons between dose estimates and TLD measurements gave p-values less than 0.001, except for SSDEn,l (p-value = 0.566).
    Conclusion
    SSDE based on AAPM Report No. 293 can be used to accurately estimate eye lens radiation doses by performing the calculations on a number of specific slices containing the eyes.
    Keywords: Radiation, Ionizing, X-rays, Computed Tomography, Algorithms, Eye Lens Dose, Organ Dose, Size-Specific Dose Estimates
  • Hamideh Rezaei Nezhad, Farshid Keynia *, Amir Sabagh Mola Hoseini
    Background

    This paper presents open-source computer simulation programs developed for simulating, tracking, and estimating the COVID-19 outbreak.

    Methods

    The programs consisted of two separate parts: one set of programs built in Simulink with a block-diagram display, and another coded in MATLAB as scripts. The mathematical models used in this package were the SIR, SEIR, and SEIRD models, represented by a set of differential-algebraic equations (a minimal SIR integration is sketched below); they can easily be modified to develop new models for the problem. A generalized method was adopted to simulate worldwide outbreaks in an efficient, fast, and simple way.

    Results

    To track the virus spread well, a sum of sigmoid functions was proposed to capture any dynamic changes in the data. The parameters used for the input (infection and recovery rate functions) were computed using the parameter estimation tool in MATLAB. Several statistical methods were applied to the rate functions, including linear, mean, root-mean-square, and standard deviation. In addition, an adaptive neuro-fuzzy inference system (ANFIS) was employed to train the model and predict its output.

    Conclusion

    This procedure is presented in such a way that it can be generalized and applied to other applications of estimating the scenarios of an event, drawing on the potential of several models, including SIR, which is sensitive to pollution, among others. The program can be used as an educational tool or for research studies, and this article offers some lasting contributions to the field of COVID-19 research.

    Keywords: Algorithms, COVID-19, Epidemiology, Pandemics, Viruses
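    As a minimal, generic illustration of the compartmental models named above (SIR/SEIR/SEIRD), the sketch below integrates a plain SIR model with SciPy; the rates and initial conditions are invented, and none of the package's sigmoid-rate fitting or ANFIS components are reproduced.

        # Minimal SIR model integration (illustrative parameters, not fitted to any data).
        import numpy as np
        from scipy.integrate import solve_ivp

        def sir(t, y, beta, gamma):
            s, i, r = y
            return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

        beta, gamma = 0.3, 0.1                   # infection and recovery rates (illustrative)
        y0 = [0.999, 0.001, 0.0]                 # initial susceptible, infected, recovered fractions
        sol = solve_ivp(sir, (0, 160), y0, args=(beta, gamma), dense_output=True)

        t = np.linspace(0, 160, 161)
        s, i, r = sol.sol(t)
        print(f"peak infected fraction: {i.max():.3f} on day {t[i.argmax()]:.0f}")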
Note
  • Results are sorted by publication date.
  • Your keyword was searched only in the keywords field of the articles. To remove unrelated results, the search was limited to articles from journals in the same subject area as the source journal.
  • To repeat the search across all subjects or with different criteria, use the advanced journal search page.