- Volume: 16, Issue: 4, 2020
- Special Issue
- Publication date: 1398/12/06
- Number of titles: 41
A Deep Learning-Based Approach for Breast BI-RADS Prediction on Shear Wave Elastography ImagesPage 1Background
Breast cancer is the most common type of cancer among women. About one of every eight women is diagnosed with breast cancer during her lifetime. Malignant tissue is stiffer than normal and benign tissues. This stiffness could be evaluated by elastography. The American College of Radiology (ACR) has published a quality assurance tool named Breast Imaging-Reporting and Data System (BI-RADS) to standardize breast cancer reporting. Although it was originally designed to use with mammography, it now contains several features for various imaging modalities. Among technologies, shear wave elastography (SWE) has shown promising results in breast lesion classification.Objectives
In this paper, we present the capability of convolutional neural networks in the prediction of BI-RADS categories using SWE images.Methods
A comprehensive dataset of SWE images of breast tissue was acquired using Esaote MyLab™ 9 and Supersonic Aixplorer systems. Two hundred images of breasts at different BI-RADS stages were gathered from the Cancer Institute, Imam Khomeini Medical Center. Data augmentation with a factor of 10 was applied to the prepared dataset. Some patients had multiple lesions, and for each lesion, one or two images were acquired and stored in DICOM format. The gold standard for the evaluation of the proposed algorithm was biopsy, which was performed on all the examined lesions. A novel convolutional neural network was applied to the dataset to extract the visual features of the images. The architecture was based on the DenseNet architecture, modified for our purpose. We used the network in both pre-training and end-to-end training strategies and compared the results. The network was pre-trained on the ImageNet dataset due to the lack of a sufficiently large dataset. On the other hand, with data augmentation, the network also underwent a full training strategy. Finally, the classification layer was a softmax layer, which was used to decide on the benignity or malignancy of the lump. The training and testing procedures for tumor classification were carried out with five-fold cross-validation. The entire dataset was randomly divided into five equal-sized subsets on the premise that multiple images acquired from the same patient were assigned to the same subset. Four subsets together were used for training and the remaining one for testing, and this process was repeated five times such that each subset was used once as the test set.Results
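The patient-grouped five-fold split described in the methods above can be sketched in plain Python (a minimal illustration, not the authors' code; `patient_of`, mapping each image ID to its patient, is an assumed helper structure):

```python
import random
from collections import defaultdict

def patient_grouped_folds(image_ids, patient_of, k=5, seed=0):
    """Split images into k folds so that all images from one patient
    land in the same fold, as the evaluation protocol requires."""
    by_patient = defaultdict(list)
    for img in image_ids:
        by_patient[patient_of[img]].append(img)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    folds = [[] for _ in range(k)]
    for i, p in enumerate(patients):   # round-robin over shuffled patients
        folds[i % k].extend(by_patient[p])
    return folds
```

Each fold then serves once as the test set while the other four are used for training.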
The processing hardware comprised 12 GB of RAM, a GPU with 2496 CUDA cores (Tesla K80), and an Intel Xeon CPU. The network was implemented in Python with TensorFlow r1.12 and Keras 2.2.4. The results of the proposed methods were satisfactory in both the pre-training and end-to-end training approaches. We used various evaluation metrics, including precision, recall, F1-score, ROC curve, and training time, for both strategies. The precision, recall, and F1-score were 0.93, 0.95, and 0.94 for the DenseNet architecture trained from scratch and 0.97, 0.94, and 0.95 for the transfer learning approach (see Table 1). The ROC curve was plotted for both approaches and the areas under the curves (AUCs) were calculated. The transfer learning approach yielded an AUC of 0.98, whereas this parameter was 0.94 for the fully trained approach (see Figure 1). Finally, the training time of the transfer learning approach was one-fifth that of training from scratch, as anticipated.Conclusion
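The reported F1-scores follow directly from the precision/recall definitions; a quick consistency check of the numbers above:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The reported metrics are internally consistent:
# from-scratch DenseNet:  P = 0.93, R = 0.95  ->  F1 ≈ 0.94
# transfer learning:      P = 0.97, R = 0.94  ->  F1 ≈ 0.95
```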
The results showed the superiority of the transfer learning approach in tumor classification. Higher statistical metrics with a lower training time make this approach more suitable for SWE images.
Deep Learning Approach for Left Atrium SegmentationPage 2Background
Cardiovascular disease is one of the most common causes of death in the world. Cardiovascular risk is often assessed by analyzing parameters extracted from image data taken from different parts of the heart. Transthoracic echocardiography is a non-invasive imaging modality that is usually the first step of the diagnostic procedure. Extracting more information from apical and parasternal views can lead the physician to an early diagnosis. Although examination analysis has a valuable role, it relies on operator experience. To have a more standard analysis, it is important to reduce the variability of the examination analysis. Automatic image analysis, which is very helpful in this context, can be implemented at the image segmentation, parameter measurement, and even diagnosis levels. Left atrial measurements (mostly the atrial volume) are strong predictors of cardiac events. Elevated atrial pressure or increased flow can lead to atrial enlargement. There is evidence that patients with a high left atrial volume are at risk of ischemic stroke, heart failure, and atrial fibrillation.Objectives
The purpose of this paper was to conduct automatic segmentation of the left atrium in four-chamber view images to make measurements reliable and independent of expert experience. Generally, most methods of automatic segmentation are based on image processing algorithms that are sometimes too complicated and less accurate. Deep learning models have been widely used in different computer vision areas. These models are mathematically simple and have been proven to achieve better accuracy in computer vision problems. Accordingly, we used one of the well-known neural network architectures, named Unet, for left atrium segmentation in 2D echocardiography. Comparing the trained network's results with the ground-truth segmentations made by experts, we showed that neural networks are a strong choice for automation.Methods
The dataset used in this research was the public echocardiography image dataset published in the CAMUS (Cardiac Acquisitions for Multi-structure Ultrasound Segmentation) Challenge. The data included four-chamber view end-systole and end-diastole frames from 450 patients. Unet is a kind of pyramidal network with encoding and decoding paths. The encoder or contraction path was used to capture the context and features in the image. The second path was the symmetric expanding or decoder path, used to enable precise localization. The task of the network was to classify each pixel in the image into the background or left atrium class. The loss function in the learning procedure was defined from the Dice coefficient, which measures the overlap between the predicted and true segmentations. To compare the method with experts, five-fold cross-validation was used. Each time, 10% of the dataset was used as the test set and 90% as the training set. In the training step, the network learned the training set annotations, and in the testing procedure, the performance of the trained network was evaluated based on its predictions on the test set. The Dice coefficient, Hausdorff distance, and mean absolute distance (MAD) were used to evaluate the accuracy of the method in echocardiography images.Results
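The Dice coefficient used above, both inside the loss and as an evaluation metric, can be written for binary masks as follows (a sketch, with masks given as flat lists of 0/1 pixel labels):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks
    given as flat lists of 0/1 pixel labels."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0
```

In practice the training loss is typically taken as 1 − Dice, so that perfect overlap gives zero loss.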
The calculated metrics were a Dice coefficient of 91.17%, Hausdorff distance of 3.31, and MAD of 0.71 for the training set and a Dice coefficient of 88.14%, Hausdorff distance of 4.75, and MAD of 1.67 for the test set.Conclusion
We showed that using the Unet architecture on four-chamber view echocardiography images for delineating the left atrium can be a powerful alternative that reduces the variability between different experts. The evaluation metrics showed that we can rely on the method's predictions and obtain a more accurate shape of the left atrium for further analyses and measurements, such as volume.
Automatic Myocardial Segmentation in Four-Chamber View Echocardiography ImagesPage 3Background
Most quantitative features in analyzing echocardiography images are elicited from the shape of different parts of the heart. One of the challenging tasks in this area is detecting the border between the left ventricle and its wall. Segmentation, the process of extracting the shape of objects in an image, is a way to better observe the epicardial and endocardial parts of the left ventricle. Today, manual segmentation is performed by expert radiologists in most cases, but there is some research in the field of automatic echocardiography image segmentation using image processing and computer vision methods. Automatic segmentation is desirable because it is more accurate and less operator-dependent. It leads to further quantifications such as the measurement of LV volumes, ejection fraction, myocardial volume, and thickness. It can also be used for evaluating myocardial perfusion by analyzing myocardial intensity changes over time. Due to the intrinsic limitations of echo imaging, such as low image intensity and contrast, traditional segmentation methods such as edge-based and region-based image processing algorithms are not accurate enough to overcome the segmentation complexities. Deep learning, a branch of the computer vision area, has been shown to outperform image processing methods in many tasks.Objectives
In this study, we used a novel image segmentation neural network (Unet), first introduced in 2015, to segment the myocardium of the left ventricle in 2D four-chamber view echocardiography images.Methods
The dataset used in this research was the public echocardiography image dataset published in CAMUS (Cardiac Acquisitions for Multi-structure Ultrasound Segmentation) Challenge. The data contained four-chamber view end-systole and end-diastole frames from 450 patients. We used Unet architecture for the segmentation task. Unet is a kind of pyramidal network with encoding and decoding paths. The encoder or contraction path was used to capture the context and features in the image. The second path was the symmetric expanding or decoder path used to enable precise localization. The whole task of the network was to classify each pixel in the image to the background or epicardium classes.Results
Five-fold cross-validation was used to report the accuracy metrics for the automatic segmentation task. The data were split into training and test sets several times to evaluate the performance of the neural network. The test and training sets contained 90 and 360 images, respectively. The Dice coefficient, Hausdorff distance, and mean absolute distance (MAD) were used to evaluate the accuracy of the method in echocardiography images. The calculated metrics included a Dice coefficient of 90%, Hausdorff distance of 5.01, and MAD of 0.91 for the training set and a Dice coefficient of 83.14%, Hausdorff distance of 6.55, and MAD of 1.47 for the test set.Conclusion
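The Hausdorff distance reported above measures the worst-case disagreement between two contours; for small point sets it can be computed directly with a brute-force version (illustrative only; production code would use an optimized library routine):

```python
import math

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two 2D point sets: the
    largest distance from any point in one set to its nearest
    neighbour in the other set."""
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    return max(max(nearest(a, B) for a in A),
               max(nearest(b, A) for b in B))
```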
We used a novel neural network architecture for the myocardial segmentation task in 2D four-chamber view echocardiography images. We showed that automated segmentation with a deep learning algorithm can be an accurate alternative for extracting geometric features from images. Using this method can help the operator perform better analyses for LV and myocardial measurements. An avenue for future work is extending the automation from the segmented region to the measurement level.
Automatic Fetal Biometry Evaluation in Ultrasound Images Using a Deep Learning-Based ApproachPage 4Background
The 2D fetal ultrasound biometrics have been extensively used to establish (or confirm) the gestational age of the fetus, estimate its size and weight, and identify growth patterns and abnormalities. Typically, an ultrasound examination is routinely performed between 18 and 22 weeks of pregnancy to evaluate the growth of the fetus by measuring its head, abdomen, and femur. Automatic methods for fetal biometric measurements have been investigated recently to reduce intra- and inter-observer variability and create more accurate and reproducible measurements.Objectives
In this paper, we proposed a deep learning-based approach to calculate fetal biometry parameters automatically.Methods
The fetal biometry parameters came from the evaluation of the fetal head, abdomen, and femur. Head circumference (HC) and biparietal diameter (BPD) were related to the fetal head, whereas abdominal circumference (AC) was related to the fetal abdomen and femur length (FL) was related to the fetal femur. Figure 1 shows these parameters in ultrasound images. Our prepared dataset included three parts, as follows: (1) 1334 2D ultrasound images of the fetal head in the standard plane. This dataset was publicly available from the automated measurement of fetal head circumference challenge. (2) 158 2D ultrasound images of the fetal abdomen in the standard plane. The dataset was gathered from Alvand Medical Imaging Center, Tehran, Iran, by expert radiologists. (3) 315 2D ultrasound images of the fetal femur in the standard plane. The dataset was gathered from two distinct centers: (i) Alvand Medical Imaging Center, Tehran, Iran, and (ii) Laleh Hospital, Tehran, Iran. We trained and evaluated a novel convolutional network for the segmentation of the fetal head and abdomen. The proposed network, called MFP-Unet, was a combination of Unet and the feature pyramid network (FPN). The network architecture is depicted in Figure 2. For the fetal femur, we had a pre-processing step. We used the superpixel algorithm to remove darker parts of the image, as the femur was typically the brightest part of the US image. Then, we applied an image saliency algorithm to represent the salient features of the image. Finally, MFP-Unet was trained on these pre-processed images to segment the femur. After the segmentation process, image analysis algorithms derived all of the required measurements. An ellipse detection algorithm on the segmented area of the fetal head was used to measure the HC and BPD, while an ellipse detection algorithm on the segmented area of the fetal abdomen was used to measure the AC. Finally, a skeletonization algorithm yielded the femur length.Results
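Turning a fitted head ellipse into an HC value requires an ellipse-perimeter formula; a common choice is Ramanujan's approximation, sketched below (illustrative only; the abstract does not state which perimeter formula the authors used):

```python
import math

def ellipse_perimeter(a, b):
    """Ramanujan's approximation to the perimeter of an ellipse with
    semi-axes a and b — one common way to convert a fitted head or
    abdomen ellipse into a circumference (HC or AC) estimate."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
```

For a circle (a = b = r) the formula reduces exactly to 2πr, which makes it easy to sanity-check.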
We used the mean absolute difference (MAD) and root mean square error (RMSE) for the measurement errors. The values of MAD and RMSE were 0.23 mm and 0.11 mm for BPD, 0.13 mm and 0.09 mm for HC, 0.17 mm and 0.08 mm for AC, and 0.18 mm and 0.12 mm for FL, respectively. Table 1 shows the results. The correlation between automatic and manual measurements was evaluated by correlation graphs. The R values were 0.97, 0.91, and 0.97 for HC, BPD, and FL, respectively.Conclusion
According to the results, we proposed a robust and useful algorithm for automatic fetal biometry evaluation that could be extended to nuchal translucency (NT) measurement, given a suitable dataset.
Prediction of General Intelligence Using DTI DataPage 5Background
Intelligence can be defined as a person's capability for purposeful performance, logical thinking, and effective interaction with the environment. What we study as intelligence is a collection of talents, such as memory, accuracy, learning, and perception, which differ among individuals.Objectives
Using diffusion tensor imaging (DTI) data, we studied relationships between mean fractional anisotropy (FA) in brain structural pathways and full-scale general intelligence (FSIQ) scores.Methods
The structural connectivity matrix for the whole brain was estimated using 116 regions defined by the AAL atlas. An advantage of this study over others is that the ROIs covered the whole brain and each node represented a small region of the brain; thus, the related connections are more specific. We identified connections and features that played key roles in FSIQ. In addition, FSIQ was predicted using four regression techniques and the results were compared. The dataset consisted of the data of 29 subjects (15 males and 14 females), with an age range of 18 - 28 years and a mean age of 25.10 years, recorded at the National Brain Mapping Laboratory (NBML), Tehran, Iran. All the subjects were healthy, without any brain damage. In addition, none of them had a history of mental illness, brain surgery, or brain-damaging disease. All DTI data were processed using ExploreDTI software (version 4.8.6) to perform eddy current and motion correction. Then, all images were registered to the AAL atlas template with 116 nodes using the same software. For each subject, DTI-based tractography was performed for the whole brain by a deterministic approach based on streamline algorithms. FSIQ was measured using the WAIS-III test. This score was calculated by assessing cognitive abilities such as verbal reasoning, attention to verbal principles, fluid reasoning, spatial processing, and visual stimulation. The dataset mean (standard deviation) of the FSIQ scores was 96.93 (± 10.24). To identify the most important connections for FSIQ prediction, we performed a linear regression between each variable and the FSIQ scores. The significant variables were selected as the connections related to the FSIQ scores. To remove collinearity among variables, a correlation analysis was performed, and variables that had a high correlation with another variable (r > 0.95) were deleted. Moreover, to reduce the number of features, PCA was applied to the remaining connections.
Then, four regression models, including linear regression, support vector regression, a convolutional neural network, and a multilayer perceptron, were applied to the data to predict the intelligence scores. In this case, 48 connections were selected after applying the correlation analysis and 14 features were the outcome of the PCA step. Highly connected nodes were in the prefrontal, cerebellar, parietal, and limbic regions. These regions may play a key role in human intelligence.Results
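The collinearity-removal step (deleting one of each pair of features with r > 0.95) can be sketched in plain Python; `features` maps a feature name to its values across subjects, and the greedy keep/drop order is an assumption for illustration:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def prune_collinear(features, threshold=0.95):
    """Greedily keep features whose |r| with every already-kept
    feature stays at or below the threshold."""
    kept = []
    for name in features:
        if all(abs(pearson(features[name], features[k])) <= threshold
               for k in kept):
            kept.append(name)
    return kept
```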
After applying PCA, the SVR model with a linear kernel and a constant parameter of 10 was optimal for our dataset. It was utilized to predict FSIQ using leave-one-out cross-validation. The correlation between the predicted and measured intelligence scores was r = 0.72 (P = 9.03 × 10⁻⁶). Notably, the proposed model for predicting intelligence had a higher coefficient of determination (R² = 0.52) than previous methods. Moreover, previous methods widely used the functional MRI technique to extract features for predicting intelligence. In contrast, we used DTI data and the mean FA of connections to predict intelligence, and the results indicated that these data convey more information about intelligence.
An Efficient Framework for Accurate Arterial Input Selection in DSC-MRI of Glioma Brain TumorsPage 6Background
Accurate extraction of the arterial input function (AIF) is an important step in the quantification of cerebral perfusion hemodynamic parameters using dynamic susceptibility contrast magnetic resonance imaging (DSC-MRI).Objectives
In this study, using machine learning methods, an optimal automatic algorithm was developed to accurately detect AIF in DSC-MRI of glioma brain tumors with a new pre-processing method.Methods
DSC-MR images of 43 patients with glioma brain tumors were retrieved retrospectively. Our proposed method consisted of a pre-processing step to remove non-arterial curves, such as tumorous, tissue, noisy, and partial-volume-affected curves, and a clustering step using the agglomerative hierarchical (AH) clustering method to cluster the remaining curves. The performance of automatic AIF clustering was compared with that of manual AIF selection by an experienced radiologist, based on curve shape parameters, i.e., maximum peak (MP), full-width-at-half-maximum (FWHM), M (= MP / (TTP × FWHM)), and root mean square error (RMSE).Results
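The curve-shape parameters used for this comparison can be computed from a sampled concentration curve as follows (a sketch; `dt` is the assumed sampling interval, and TTP is taken at the sample of maximum value):

```python
def aif_shape_parameters(curve, dt=1.0):
    """Maximum peak (MP), time-to-peak (TTP), full-width-at-half-
    maximum (FWHM), and M = MP / (TTP * FWHM) for a sampled curve."""
    mp = max(curve)
    ttp = curve.index(mp) * dt
    half = mp / 2.0
    above = [i for i, v in enumerate(curve) if v >= half]
    fwhm = (above[-1] - above[0]) * dt
    m = mp / (ttp * fwhm) if ttp and fwhm else float("inf")
    return mp, ttp, fwhm, m
```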
The mean values of the AIF shape parameters were compared with those derived from manually selected AIFs by a two-tailed paired t-test. The results showed statistically insignificant differences in the MP, FWHM, and M parameters and a lower RMSE, confirming the resemblance of the selected AIF to the gold standard. The intraclass correlation coefficient and percentage coefficient of variation showed better agreement between manual AIF selection and our proposed method than previously proposed methods.Conclusion
The results of the current work suggest that by using efficient preprocessing steps, the accuracy of automatic AIF selection could be improved and this method appears to be promising for efficient and accurate clinical applications.
Diagnostic Accuracy of Multi-Parametric Magnetic Resonance Imaging for Differentiation of Benign and Malignant Lesions of Prostate Using Radiomics AnalysisPage 7Background
Prostate cancer is the second most common cancer-related cause of death in men. Accurate diagnosis of prostate cancer plays an important role in decreasing mortality rates. The European Association of Urology (EAU) suggests multiparametric MRI (mp-MRI) of the prostate as a noninvasive method to evaluate prostate lesions. To mitigate the interreader variability in the interpretation of mp-MRI, computer-aided diagnostic (CAD) systems can be used for automatic detection and characterization of prostate lesions.Objectives
The goal of this article was to design a quantification method based on mp-MRI for the discrimination of benign and malignant prostatic lesions with MR imaging/transrectal ultrasonography fusion-guided biopsy as a reference for pathology validation.Methods
Mp-MR images, including T1- and T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) MRI, were acquired at 1.5 T from 27 patients. Then, 106 radiomic features (first-order histogram (FOH), gray-level co-occurrence matrix (GLCM), run-length matrix (RLM), and Gabor filters) were calculated from the mp-MRI data. Statistical analysis was performed using receiver operating characteristic curve analysis for feature filtering, linear discriminant analysis (LDA) for feature extraction, and leave-one-out cross-validation for the evaluation of the method in the differentiation of benign and malignant lesions.Results
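One of the texture families listed above, the grey-level co-occurrence matrix (GLCM), can be sketched for a single pixel offset as follows (illustrative only; real radiomics pipelines aggregate several offsets and many derived features):

```python
def glcm_contrast(img, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one offset
    (dx, dy), plus the GLCM 'contrast' texture feature.
    `img` is a 2D list of integer grey levels in [0, levels)."""
    h, w = len(img), len(img[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                counts[img[y][x]][img[y2][x2]] += 1
                total += 1
    p = [[c / total for c in row] for row in counts]
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    return p, contrast
```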
An accuracy of 96.6% was achieved for discriminating benign and malignant prostate lesions from a subset of texture features derived from ADC and DCE maps (radiomics-based method) with sensitivity and specificity of 100% and 85.7%, respectively.Conclusion
A radiomic quantification method based on T2-weighted images, ADC maps, and quantitative and semiquantitative DCE maps can discriminate benign from malignant prostate lesions with promising accuracy. This method can help avoid unnecessary biopsies and may provide information for CAD systems for the classification of prostate lesions as an auto-detection technique.
Epilepsy Presurgical Evaluation of Patients with Complex Source Localization by a Novel Component-Based EEG-fMRI ApproachPage 8Background
The precise localization of epileptic foci is an essential prerequisite for epilepsy surgery. Simultaneous EEG-fMRI recording has recently created new horizons for locating foci in patients with epilepsy and, in comparison with single-modality methods, has yielded promising results, although it is still subject to a few limitations, such as the lack of access to information between interictal events. This study assessed its potential added value in the presurgical evaluation of patients with complex source localization. Adult candidates considered ineligible for surgery on account of an unclear focus and/or presumed multifocality based on EEG underwent EEG-fMRI.Objectives
Adopting a component-based approach, this study attempted to identify the neural behavior of the epileptic generators and detect the components of interest to be later used as inputs in the GLM model, substituting the classical linear regressor.Methods
Nine IED sets from five patients were analyzed. These patients were rejected for surgery because of an unclear focus in two, presumed multifocality in one, and a combination of both in two of them.Results
Component-based EEG-fMRI improved localization in three out of four patients with unclear foci. In patients with presumed multifocality, component-based EEG-fMRI advocated one of the foci in five patients and confirmed multifocality in one out of five patients. In two patients, component-based EEG-fMRI opened new prospects for surgery. In these complex cases, component-based EEG-fMRI either improved source localization or corroborated a negative decision regarding surgical candidacy.Conclusion
As supported by the statistical findings, the developed EEG-fMRI method led to a more realistic estimation of localization than the conventional EEG-fMRI approach, making it a tool of high value in the presurgical evaluation of patients with refractory epilepsy.
A Diagnostic Machine Classifier Using Multi-Parametric MRI to Differentiate Benign from Malignant Myometrial TumorsPage 9Background
In many situations in medicine, decision making is crucially important for differentiating between binary diagnoses, such as the preoperative discrimination of benign from malignant tumors, e.g., uterine neoplasms. Physicians are not usually able to pool the multiple parameters affecting a diagnosis, while "machine learning" techniques, especially "decision trees" with human-readable results, can process such amounts of data. Previous studies have shown that MRI can be helpful in the differentiation of uterine leiomyosarcoma from leiomyoma.Objectives
Hereby, for preoperative diagnosis of these tumors and to reduce unnecessary costs and surgeries, we applied a machine classifier using multi-parametric MRI to construct practical diagnostic algorithms.Methods
A total of 105 myometrial lesions were included in two groups of benign and malignant, according to postoperative tissue diagnosis. Multi-parametric MRI, including T1-, T2-, and diffusion-weighted images (DWI) with the apparent diffusion coefficient (ADC) map, contrast-enhanced images, as well as MR spectroscopy, was performed for each lesion. Thirteen singular MRI features were extracted from the mentioned sequences. Various combination sets of selected features were fed into a machine classifier (coarse decision tree) to predict malignant or benign tumors. The accuracy metrics of both singular and combinational models were assessed (Figures 1 and 2). Eventually, two diagnostic algorithms, a simple decision tree and a complex one, were proposed using the most accurate models in MATLAB 2017.Results
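The abstract does not fully specify the split order or cut-offs of the simple three-feature tree, but the general shape of such a human-readable classifier can be illustrated (the rule order and outcomes below are hypothetical, not the paper's actual tree):

```python
def classify_lesion(adc_restriction, central_necrosis, t2_high_signal):
    """Hypothetical three-feature decision tree in the spirit of the
    simple model (DWI-ADC restriction, central necrosis, T2 signal).
    The actual splits are not given in the abstract."""
    if not adc_restriction:          # no diffusion restriction
        return "benign"
    if central_necrosis:             # restriction + necrosis
        return "malignant"
    return "malignant" if t2_high_signal else "benign"
```

The appeal of a coarse tree like this is exactly its readability: each path corresponds to a clinical rule a radiologist can audit.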
Among all singular features, the visual assessment of DWI-ADC restriction, followed by the T2 map, achieved the best accuracies of 96.2% and 92.0%, respectively. Our final simple decision tree comprised three features, including T2, central necrosis (CN), and DWI-ADC restriction, with an accuracy of 96.2%, sensitivity of 100%, and specificity of 95%. However, the complex tree, made up of four features including the T2 map, CN, DWI-ADC restriction, and the tumor-to-myometrium contrast-enhanced ratio, yielded accuracy, sensitivity, and specificity values of 100%.Conclusion
The complex diagnostic algorithm, compared to the simple model, could differentiate tumors with equal sensitivity but higher specificity. However, it needs more advanced calculations and a high level of patient cooperation; moreover, it might be time-consuming. Therefore, physicians should weigh these trade-offs in different clinical situations and request imaging modalities in a way that reduces additional costs and, most importantly, prevents unnecessary surgeries by enabling an early, accurate diagnosis.
Using Kalman Filter to Improve the Accuracy of Diffusion Coefficients in MR Imaging: A Simulation StudyPage 10Background
The diffusion coefficient of water molecules in different tissues is a biomarker used to diagnose and differentiate diseases, tumors, injuries, etc. Using a motion-sensitizing gradient, it is possible to map the diffusion coefficient in diffusion-weighted MRI (DWI). Although DWI is a good diagnostic tool, its accuracy is questionable in the low-SNR regime, especially in highly perfused organs.Objectives
We aimed to improve the accuracy of the diffusion coefficient, especially in the presence of noise.Methods
Diffusion of water molecules at each voxel causes a signal intensity decay that can be measured using a motion-sensitizing gradient. Blood perfusion in the capillary network and measurement noise can distort the true value of D. By modeling both process noise and measurement noise, a Kalman filter is used to cancel perfusion artifacts and measurement noise. Based on the diffusion model (S = S0 exp(-b D)), the signal intensity was generated several times and complex Gaussian noise was added to it. Using a Kalman filter, a noise cancellation process was designed to improve the quality of the results. The Kalman filter solves a linear problem in state-space form. Therefore, the diffusion model was rewritten as log(S/S0)/(-b) = D in state space. The Kalman filter predicted the value of D and then corrected it iteratively based on the measurements (Figure 1). Finally, the results of the proposed method and the conventional method were compared with the true values (Figure 2).Results
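The predict/correct cycle for the scalar state D can be sketched as follows (a minimal illustration of the approach, not the authors' implementation; `q` and `r` are assumed process- and measurement-noise variances, and each measurement is z = log(S/S0)/(-b)):

```python
def kalman_diffusion(measurements, q=1e-8, r=1e-6, x0=0.0, p0=1.0):
    """Scalar Kalman filter estimating a constant diffusion
    coefficient D from noisy measurements z = log(S/S0) / (-b)."""
    x, p = x0, p0
    for z in measurements:
        p = p + q            # predict: D assumed constant, add process noise
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # correct the estimate with the new measurement
        p = (1 - k) * p      # update the estimate variance
    return x
```

With a small process-noise variance the filter behaves like an adaptive average, so repeated noisy measurements of the same voxel pull the estimate toward the true D.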
Statistical tests showed that the proposed method was significantly better than the conventional method (P < 0.01). The conventional method biased the DWI results by neglecting micro-vessel perfusion in the capillary network. In contrast, the Kalman filter could treat the effects of microvessel perfusion as process noise and reduce its impact on the results. The Kalman filter results (Table 1) were compared with the true values, and the t-test showed no significant difference (P = 0.25). The conventional method's results were significantly different from both the true values and the Kalman filter results (P < 0.0001).Conclusion
Diffusion coefficient estimates suffer from bias in the presence of noise and capillary perfusion; however, the proposed method can be used in these situations to improve the quality of DWI images.
Designing a Structured System for Mammography ReportingPage 11Background
Breast cancer is among the top causes of cancer-related mortality among females in both developed and developing countries. Screening for breast cancer plays a crucial role in the prevention of disease burden. Among various imaging modalities, mammography is currently considered the first step for the detection of breast lesions. The mammography data of breast cancer patients encompass a wide array of texts related to specific visual findings. Accordingly, unorganized mammography documents usually impede physicians’ efforts to fully review the previous information in follow-up visits; hence, this not only affects clinical judgment and treatment planning adversely, but also adds to the financial burdens and workload imposed on the healthcare system. This further highlights the great potential of structured mammography reporting in the management of breast cancer patients.Methods
In the first step, an expert panel consisting of two attending radiologists and a health informatician (all affiliated with Shiraz University of Medical Sciences, Shiraz, Iran) reviewed related guidelines to determine the appropriate items, ontologies, and standard formats for data entry. These included radiology textbooks and the Systematized Nomenclature of Medicine (SNOMED). In addition, 100 mammography reports were completely examined for data extraction and the creation of a template report. In the next step, the structure of the data registry was discussed among a group of radiologists and breast cancer surgeons during several sessions using the Delphi technique.Results
Overall, 119 fields were selected for data entry. Our survey showed that all of the contributing physicians believed that a structured reporting system for mammography can help standardize reporting and reduce reporting time and errors. Following the completion of the template, a user interface was developed by the expert panel for integration into the cloud software workflow, which will be deployed and assessed in the next phase of the project.Conclusion
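The core idea of structured reporting is that each report becomes a typed record rather than free text; a toy record is sketched below (the field names and values are purely illustrative and are not the registry's actual 119 fields):

```python
# A sketch of one structured mammography report record. All field
# names and values here are illustrative examples, not the actual
# registry schema described in the study.
report = {
    "patient_id": "P-0001",
    "breast_density": "B",        # ACR density category A-D
    "mass_present": True,
    "mass_shape": "oval",
    "calcifications": "none",
    "bi_rads_category": 2,        # final BI-RADS assessment 0-6
}

# Structured fields can be validated mechanically, unlike free text:
assert report["breast_density"] in {"A", "B", "C", "D"}
assert 0 <= report["bi_rads_category"] <= 6
```

Records like this are what make the downstream goals possible: queryable follow-up histories and clean, labeled databases for decision-support systems.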
Structured mammography reporting helps radiologists and surgeons to efficiently and confidently track the management course of their patients. In addition, with the advancement of artificial intelligence, especially deep learning for image classification, clean and labeled image databases can be used for designing computer-assisted decision support systems without significant data preprocessing.
A Systematic Review for Evaluating the Effects of Teleradiology (Page 12)
Background
The use of telemedicine has received considerable attention in recent decades, and it has found diverse applications such as teleradiology, telepathology, and remote surgery. Telemedicine has generated a broad literature and has been adopted by a wide range of healthcare users. The use of new technologies at each stage of teleradiology creates new challenges. As with any system, each stage of the development lifecycle, from production to maintenance, requires appropriate actions.
Methods
Based on the PRISMA checklist, a robust search strategy was developed to identify all available studies meeting the inclusion and exclusion criteria. A one-week literature search was conducted in March 2018 in PubMed/MEDLINE, Scopus, and Web of Science. We excluded articles published before 2000, non-English articles, all kinds of reviews, conference papers without full texts, notes and news, books, letters to editors, gray literature, authors’ opinions, educational materials, and ad hoc implementations in the technical scope. All full-text English literature concerning the assessment, evaluation, patient outcomes, economics, and managerial aspects of local or wide implementations, in the form of original articles and case studies, was included in this review. The data were then imported into an Excel file.
Results
The articles were published between January 2000 and October 2018. A full, in-depth review revealed that most articles were related to the hospital environment. The financial evaluations comprised cost-effectiveness, cost prediction, initial cost reduction, and reductions in transportation and hospital referrals worldwide. The comparative evaluation studies usually compared and reviewed the types of pictures sent, the image formats, and the various tools for viewing the images in hospitals. Therefore, the researchers decided to focus on the following themes: economic evaluation, technical evaluation of the system, evaluation of the PACS/RIS system, comparison of teleradiology and non-teleradiology systems, comparison of different instruments used in radiology systems, and the impact of these systems. The themes extracted in this article are as follows: comparison of methods or instruments, economic evaluation, technical evaluation, radiology information system (RIS) or picture archiving and communication system (PACS), and teleradiology impacts on hospital settings. With regard to the comparison of methods or instruments, articles compared teleradiology and non-teleradiology methods. In some papers, the authors examined various data entry tools and compression methods. The reviewed articles indicated the impact of RIS or PACS on different aspects, such as workflow, efficiency, and effectiveness. Some articles aimed to evaluate the economics, cost-effectiveness, cost-benefit, or other cost estimates of the application of RIS or PACS.
Conclusion
In this paper, based on a review of the existing literature, only one or two of these dimensions were discussed, owing to the small number of articles published on the broader aspects of teleradiology. In order to form a comprehensive view of the effect of radiological systems, it seems that we have to wait until more papers discuss the different aspects of radiological systems and their impacts on economic or performance improvement.
Cobb Angle Measurement Decision Support System of Radiography Images in Patients with Idiopathic Scoliosis (Page 13)
Background
Scoliosis is a three-dimensional deformity of the spine that is commonly assessed through measuring the Cobb angle.
Objectives
In this study, a Cobb angle measurement decision support system (CaMDSS) was presented to provide a repeatable and reproducible procedure for Cobb angle measurement in idiopathic scoliosis patients.
Methods
We used the OpenCV and NumPy libraries for image processing and system design. A series of 98 anterior-posterior radiographs from patients diagnosed with idiopathic scoliosis was used to assess the repeatability and reproducibility of CaMDSS. Five independent observers performed the measurements, and each image was analyzed by each observer three times with a minimum interval of two weeks between measurements. Both the intra- and inter-observer reliability were obtained using the single-measure intraclass correlation coefficient (ICC). The mean absolute difference (MAD) and the standard error of measurement (SEM) were calculated for all corresponding intra- and inter-observer reliability estimates.
Results
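The reliability statistics used above (ICC, MAD, SEM) can be sketched as follows. This is a minimal NumPy illustration assuming the one-way random-effects ICC(1,1) form; the angle values below are hypothetical and are not the study's measurements.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_raters) matrix."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-subject and within-subject mean squares from a one-way ANOVA
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def mean_absolute_difference(a, b):
    """MAD between two sets of repeated angle measurements."""
    return np.abs(np.asarray(a, float) - np.asarray(b, float)).mean()

def sem_from_icc(measurements, icc):
    """Standard error of measurement: SD * sqrt(1 - ICC)."""
    return np.std(measurements, ddof=1) * np.sqrt(1.0 - icc)

# Hypothetical example: two observers measuring three radiographs (degrees)
ratings = np.array([[1.0, 1.1], [2.0, 2.1], [3.0, 2.9]])
icc = icc_oneway(ratings)
mad = mean_absolute_difference(ratings[:, 0], ratings[:, 1])
sem = sem_from_icc(ratings.ravel(), icc)
```

In practice, an intra-observer analysis would stack each observer's three repeated measurements as the "raters" dimension, and an inter-observer analysis would stack the five observers.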
Statistical results for the inter-observer analysis showed that the MAD between manual measurement and CaMDSS was less than 3°, and the ICCs ranged from 0.94 to 0.99. The combined SEM across all five observers for intra-observer measurements was 1.79° for the manual method and 1.27° for CaMDSS. The inter-observer reliability of CaMDSS was excellent, with an ICC value of 0.97 (95% CI). The CaMDSS mean absolute difference was 2.18 ± 2.01 degrees.
Conclusion
Our study showed that CaMDSS is an efficient and reliable method for assessing scoliotic curvature in thoracolumbar standing radiographs, with the potential to expedite clinic visits, ensure the reliability of calculation, and decrease the patient’s exposure to radiation.
Keywords: Cobb Angle, Decision Support System, Scoliosis, Spine Deformity, Radiography, Image Processing
Fully Automated Computer-Assisted Diagnostic Method for Mitosis Detection on Histology Slide Images of Breast Cancer (Page 14)
Background
Nowadays, advances in medical science, especially in histology, have made it possible to detect cancer and determine its growth rate, type, and degree of malignancy. According to GLOBOCAN 2012, breast cancer ranks second in terms of prevalence and mortality. The number of mitoses in histology slide images is considered one of the three significant factors in grading breast cancer. The mitosis count is done manually by pathologists, but automating the process can decrease its time and cost. Different automatic techniques have been proposed in the literature for breast cancer mitotic counting [2-5].
Objectives
In this paper, we propose an automated method for accurate mitotic cell detection in breast cancer histology slide images.
Methods
To evaluate our experiments, we used the MITOS ICPR 2012 dataset consisting of 50 HPFs with a train-to-test ratio of 70% to 30%, accounting for 35 images for training and 15 images for testing. These 50 HPFs were obtained from the tissue of five different patients. Each slide was stained with standard Hematoxylin and Eosin (H&E) stains, and two expert pathologists marked the mitotic cells with labels in 10 selected distinct microscopic HPFs at 40x magnification. The slides were scanned by two slide scanners, the Aperio ScanScope XT and the Hamamatsu NanoZoomer. In total, there were 326 mitotic cells in this dataset. As shown in Figure 1, the proposed fully automated system consisted of the following stages: segmentation and extraction of mitosis candidates using a statistical Gaussian mixture model (GMM), feature extraction, and classification using an SVM classifier (with different kernels) and a decision tree classifier. After candidate extraction using the GMM, to identify candidates as mitoses or non-mitoses, it was necessary to extract discriminant and reliable features. In the proposed feature extraction stage, we focused on the extraction of textural features including GLCM, CLBP, and statistical moments of images filtered by the Gabor filter in the RGB color space. Moreover, several shape features were proposed to achieve a better distinction of mitosis from non-mitosis candidates. The proposed shape features were based on calculating the Euclidean distance of boundary pixels from the center of each cell. The two types of textural and shape features were combined to provide the final feature vector with a length of 214.
Results
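The candidate-extraction-then-classification pipeline can be sketched in simplified form. Synthetic one-dimensional pixel intensities and toy two-dimensional features stand in for real H&E image data here, and scikit-learn's GaussianMixture and SVC are assumed rather than the authors' exact implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stage 1: a two-component GMM over pixel intensities separates dark
# (candidate-nucleus) pixels from bright background, mimicking candidate extraction.
intensities = np.concatenate([rng.normal(60, 10, 500),     # dark, nucleus-like
                              rng.normal(200, 15, 1500)])  # bright background
gmm = GaussianMixture(n_components=2, random_state=0)
labels = gmm.fit_predict(intensities.reshape(-1, 1))
dark_component = int(np.argmin(gmm.means_.ravel()))
candidate_mask = labels == dark_component

# Stage 2: an SVM labels each candidate as mitosis / non-mitosis from a
# feature vector (two toy stand-ins for the texture/shape features).
X = np.vstack([rng.normal([2, 2], 0.5, (100, 2)),     # mitosis-like
               rng.normal([-2, -2], 0.5, (100, 2))])  # non-mitosis-like
y = np.array([1] * 100 + [0] * 100)
clf = SVC(kernel="rbf").fit(X, y)
train_acc = clf.score(X, y)
```

In the real system, each candidate region would contribute a 214-dimensional vector of GLCM, CLBP, Gabor-moment, and boundary-distance features instead of the toy coordinates above.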
The visual results of the proposed automated mitosis detection system for a sample histopathology slide image are shown in Figure 2.
Conclusion
The detailed experimental results demonstrated the promising performance of the proposed fully automated method for mitosis detection, with an F-measure of 91.49%.
Keywords: Breast Cancer Grading, Mitosis Detection, Histopathology Image, Gaussian Mixture Model, Feature Extraction
Segmentation of Diabetic Retinopathy Lesions in Retinal Fundus Images Using Multi-View Convolutional Neural Networks (Page 15)
Background
Diabetic retinopathy is one of the leading causes of blindness worldwide. It is considered the most important complication of diabetes mellitus and creates various lesions in the retina at its different stages. These lesions appear as hemorrhages, exudates, and microaneurysms. The count and type of these lesions can determine the severity and progression of the disease. Early detection of these lesions can lead to better treatment and blindness prevention. Accurate segmentation of these lesions is required to detect them and specify their counts and types. Since the manual segmentation of retinal lesions is tedious and time-consuming, automated segmentation is preferred. In screening programs where a large population needs to be considered, automated segmentation is inevitable. Therefore, automatic segmentation of retinal lesions is the first stage of any typical computer-aided diagnosis system for early diagnosis of the disease. Automated segmentation of retinal lesions is a challenging task due to the shape diversity and inhomogeneity of these lesions. Hence, more advanced segmentation techniques capable of modeling lesion complexities are required to tackle the difficulties of automated segmentation of diabetic retinopathy lesions in retinal fundus images.
Objectives
In this study, we proposed an automated pixel-based method for the segmentation of different types of lesions on the retinal fundus images.
Methods
This method utilized a convolutional neural network with a particular architecture to describe and label the pixels of fundus images as either normal or lesion. The proposed method had four phases: pre-processing, view generation, segmentation, and post-processing. The pre-processing stage attempted to enhance input images for better segmentation. In the view generation phase, multiple views that described a pixel from different perspectives were extracted for all pixels of the images. The segmentation phase, which was in fact a convolutional neural network capable of handling multi-view data, received the multiple views corresponding to each pixel and decided whether it belonged to a normal or a lesion area. The segmentation network, with its unique architecture, could handle the diversities and complexities of retinal lesions, leading to accurate segmentation. Finally, the post-processing phase refined the segmentation results by reducing false positives. In addition to segmentation, the proposed method detected lesion types during the segmentation process.
Results
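The view generation phase, extracting several patches that describe one pixel from different perspectives, can be sketched as follows. The multi-scale center-crop scheme and the patch sizes are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def extract_views(image, row, col, sizes=(9, 17, 33)):
    """Return multi-scale square patches centered on pixel (row, col).

    Each view is a (size, size) crop of the image around the pixel; the
    image is zero-padded so that border pixels still yield full patches.
    """
    views = []
    for size in sizes:
        half = size // 2
        padded = np.pad(image, half, mode="constant")
        r, c = row + half, col + half  # coordinates in the padded image
        views.append(padded[r - half:r + half + 1, c - half:c + half + 1])
    return views

# Example: three views of the top-left pixel of a toy 10x10 "fundus" image
views = extract_views(np.arange(100, dtype=float).reshape(10, 10), 0, 0)
```

Each view set would then be fed to the corresponding input branch of the multi-view network, which labels the center pixel as normal or lesion.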
The proposed method was implemented and its performance was evaluated using standard performance measures including accuracy, sensitivity, specificity, dice similarity coefficient, and Jaccard coefficient. The segmentation network was trained with 54 images and tested with 27 images. The experimental results were very promising and comparable to the state-of-the-art methods of fundus lesion segmentation.
Conclusion
A method based on convolutional neural networks for the segmentation of retinal lesions on fundus images was proposed. Alongside the promising experimental results, the method could jointly produce separate lesion masks for different lesion types, which is a significant added functionality.
Multi-Institutional Medical Imaging Research Data Collection: Challenges of Standardization of Protocols and Header Information to Make an Imaging Biobank (Page 16)
Background
The Iranian brain mapping biobank (IBMB) has three major sources of data to be processed, categorized, and tagged for recurrent use by researchers: research image acquisition, large-scale cohort studies, and routine clinical samples from collaborating institutions. A major limitation of samples coming from routine clinical centers is the potential diversity of data parameters, which may prevent the merging of databases to create biobanks.
Objectives
The study was performed to assess the reliability of a multi-institutional case collection.
Methods
Voluntary case collection was performed from four institutions that signed an agreement with the National Brain Mapping Laboratory. The centers operated machines from different vendors, including Siemens and GE, and the scanning protocols varied according to the operating technologists. An in-house application developed in MATLAB 2018b was used to extract DICOM header information from the donated studies. All DICOM headers were imported into a unified database to be analyzed according to modality, vendor type, and operator protocols.
Results
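The tag-uniformity analysis can be sketched with plain Python dictionaries standing in for the unified tag database. The tag names and sample records below are hypothetical, and the authors used an in-house MATLAB tool rather than this Python sketch:

```python
from collections import defaultdict

def tag_uniformity(series_headers):
    """Group DICOM header values by tag across all series and report
    which tags are uniform and which diverge between machines/protocols."""
    values = defaultdict(set)
    for header in series_headers:
        for tag, value in header.items():
            values[tag].add(value)
    uniform = {tag for tag, seen in values.items() if len(seen) == 1}
    divergent = {tag for tag, seen in values.items() if len(seen) > 1}
    return uniform, divergent

# Hypothetical records from two vendors with different MR protocols
headers = [
    {"Modality": "MR", "Manufacturer": "SIEMENS", "ProtocolName": "t1_mprage"},
    {"Modality": "MR", "Manufacturer": "GE", "ProtocolName": "BRAVO"},
]
uniform, divergent = tag_uniformity(headers)
```

Applied to the full collection, this kind of grouping surfaces exactly the protocol-dependent tags ("Protocol Name", "Scanning Sequence", etc.) that need harmonization before biobank import.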
A total of 1581 cases were entered into the project, with 2414 procedures performed over a six-month period. This collection included 199509 series of images, for which all tags were extracted. Except for the modality-specific tags (Table 1), all other tags were found to be uniform regardless of the machine and protocol. The most important tag diversity was seen in the MRI scanning parameters: “Protocol Name”, “Scan Options”, “Scanning Sequence”, “Sequence Name”, and “Sequence Variant”.
Conclusion
The DICOM 3.0 standard plays an invaluable role in the standardization of the information incorporated into image files as headers, which makes multi-institutional data collection feasible to a large extent. Practice-based data elements need to be unified at the time of acquisition or at the time of importing samples into a biobank to make the dataset homogeneous.
Design of Multivariate Hotelling’s T2 Control Chart Based on Medical Images Processing (Page 17)
Background
In the care of cancer patients, the diagnosis of cancerous tumors and metastases is a valuable and popular research subject in magnetic resonance imaging. A highly accurate diagnostic procedure can support doctors in interpreting and diagnosing medical data.
Methods
To address this subject, we used a two-dimensional discrete wavelet transform. First, some features of the image texture were extracted by statistical and transform methods. Then, a genetic algorithm was used for data reduction and feature selection. Afterward, to diagnose bone marrow metastatic patients, we used two methods: a fuzzy c-means clustering algorithm and a multivariate Hotelling’s T2 control chart. In this paper, we employed ADC and T1-weighted images of the pelvic region. From 204 bone marrow samples, 76 features were extracted, six of which were selected, and a 204 × 6 feature vector matrix was generated. Finally, the performance of the two proposed methods was compared in terms of diagnosis and accuracy measures.
Results
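A minimal sketch of the Hotelling's T2 statistic with a phase-I control limit follows. The beta-distribution limit for individual observations is assumed, and the feature data are synthetic (three toy features instead of the six selected wavelet/texture features):

```python
import numpy as np
from scipy.stats import beta

def hotelling_t2(X):
    """T2 of each row of X against the sample mean and covariance."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diffs = X - mean
    # T2_i = (x_i - mean)' S^{-1} (x_i - mean)
    return np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs)

def phase1_ucl(m, p, alpha=0.01):
    """Phase-I upper control limit for individual observations:
    UCL = ((m-1)^2 / m) * B(1 - alpha; p/2, (m - p - 1)/2)."""
    return (m - 1) ** 2 / m * beta.ppf(1 - alpha, p / 2, (m - p - 1) / 2)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 3))  # 100 samples, 3 synthetic features
X[-1] = [6.0, 6.0, 6.0]                  # one clearly atypical sample
t2 = hotelling_t2(X)
ucl = phase1_ucl(m=100, p=3)
out_of_control = np.where(t2 > ucl)[0]   # flagged as suspected metastasis
```

Samples whose T2 exceeds the control limit are flagged, which is the chart's analogue of classifying a case as suspected metastatic.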
The results showed that the diagnosis (100%) and accuracy (100%) of the multivariate Hotelling’s T2 control chart were better than those of the other method, which had a diagnosis of 99.49% and an accuracy of 99.51%.
Conclusion
In this paper, instead of classification and clustering methods, for the first time, we used a multivariate control chart with the Hotelling’s T2 statistic for the diagnosis of patients suspected of bone marrow metastasis. Then, using some patient samples, the performance of this phase I control chart was evaluated, and the results showed the validity of the proposed method. The validation results revealed that the accuracy and specificity metrics were better for the multivariate Hotelling’s T2 control chart than for the fuzzy clustering method.
Automatic Bone Age Determination Using Wrist MRI Based on FIFA Grading System for Athletes: A Deep Learning Approach (Page 18)
Background
Young athletes need to comply with fair play principles, including age-specific rules for each category of matches (1). Although birth certificates are considered the main document indicating the age of players, in some regions of the world the registration of birth is subject to variation, which makes the certificates unreliable. Therefore, FIFA has tried to use imaging methods without ionizing radiation to determine bone age as the basis for fair play confirmation. FIFA has developed a grading system consisting of levels I - VI, which can be used in teenage athletes (2). The grading system is currently the standard bone age determination method in football players (3,4). All national and club matches are obliged to follow strict screening procedures, similar to anti-doping procedures.
Objectives
The purpose of this study was to evaluate the performance of a deep learning-based automatic system that provides FIFA grades upon receiving DICOM images of the MRI study to facilitate and speed up the bone age determination.
Methods
The FIFA bone age determination system consists of six grades, starting from a totally unfused epiphyseal plate (Grade I) to a completely fused plate (Grade VI), where variable progressive degrees of fusion are considered the basis for Grades II to V. The protocol includes nine slices in the coronal plane with 3 mm gaps between the slices. The recommended MR sequence is T1. Since the middle image in the nine-picture dataset is considered the most informative slice, containing the largest image of the distal radius, the study was done using this single slice as the basic source of grading. Then, a volumetric set of slices 4, 5, and 6 was used as the second group. A convolutional neural network was designed with four convolutional layers including pooling, ReLU, and fully connected layers (Figure 1). Next, 55 teenage football players of the national U17 team were examined using a 1.5 T Siemens Avanto machine. The studies were interpreted, as the ground truth, by an MSK radiologist member of the AFC panel of radiologists who was familiar with the FIFA scoring and grading system. Thirty-six cases were used for training and 19 cases for testing of the CNN. To increase the number of training images, augmentation was performed by rotating and translating the original images. Therefore, a total of 613 images were obtained for training and 267 images for testing.
Results
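The rotate-and-translate augmentation can be sketched with scipy.ndimage. The rotation angles and shift offsets below are illustrative choices; the abstract does not specify the exact augmentation parameters:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def augment(image,
            angles=(-10, -5, 5, 10),
            offsets=((0, 4), (0, -4), (4, 0), (-4, 0))):
    """Generate rotated and translated copies of a 2-D MR slice."""
    out = [image]  # keep the original slice
    for angle in angles:
        # reshape=False keeps the output the same size as the input
        out.append(rotate(image, angle, reshape=False, order=1))
    for offset in offsets:
        out.append(shift(image, offset, order=1))
    return out

# Toy 64x64 slice with a bright square standing in for the distal radius
slice_ = np.zeros((64, 64))
slice_[24:40, 24:40] = 1.0
augmented = augment(slice_)
```

Applying a handful of such transforms per slice is consistent with the roughly 17-fold growth from 36 training cases to 613 training images reported above.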
Images introduced to the neural network resulted in sequential layers of meaningful output (Figure 2). The final output of the network, the FIFA grade of the case, was compared with the interpretation of the radiologist (Table 1). The findings indicated high accuracy for the single-slice dataset, while the accuracy approached 100% when the volumetric three-slice sets were used.
Conclusion
The findings of this research indicated that a CNN could be used for automatic bone age determination and FIFA grading of wrist MRI with reasonably high accuracy.
Abnormality Detection in Musculoskeletal Radiographs by DenseNet and Inception-v3 (Page 19)
Background
One of the most remarkable applications of deep learning emerges in medical diagnosis. New improvements in this field have shown that with large enough datasets and the right methods, one can achieve results as reliable as diagnoses made by experienced doctors. One such development is MURA, a dataset of musculoskeletal radiographs consisting of 14863 studies from 12173 patients, resulting in 40561 multi-view radiograph images. Each of these studies concerns one of the seven standard upper extremity radiographic study types, namely finger, forearm, elbow, hand, shoulder, humerus, and wrist. Each study was categorized as normal or abnormal by board-certified radiologists in the diagnostic radiology environment between 2001 and 2012. Abnormality detection in musculoskeletal radiographs is of great clinical value, and it gains more importance in cases where abnormality detection is difficult for physicians. If the proposed model can help in detection, the process of treatment will be expedited. One such model is Inception-v3.
Methods
In this study, we evaluated the MURA dataset using the DenseNet and Inception-v3 methods.
Results
The results indicated that the former had better performance, and we added a pre-processing module to it to improve the accuracy of the DenseNet method in detecting abnormality. In this context, we trained the model to be sensitive to the presence of external objects so that they could be distinguished from actual abnormalities such as bone fractures. We achieved this by feeding many varied radiographs to the model as inputs. With this strategy, both techniques (DenseNet and Inception-v3) showed improvements in accuracy. Thus, we sub-grouped abnormalities by the presence or absence of external objects.
Conclusion
Although the average opinion of radiologists still shows better results, in images requiring delicate fracture detection, such as finger fractures, the proposed model worked more accurately, and it could be a decision support assistant for physicians in the final detection of fractures. The precision of the proposed model will improve if the image is separated from normal images using Platinum, a new class is made, and pre-processing is done. Therefore, the model can automatically detect abnormality by identifying the part of the image that is detected to be abnormal. An efficient model can interpret images more efficiently, reduce errors, and enhance quality. More studies are needed to evaluate the integration of this model with other deep learning models in clinical settings.
Keywords: Musculoskeletal Radiographs, Deep Learning, Medical Image Processing, Abnormality Detection
Automatic Assessment of Ki-67 Proliferation Index in Lymphoma (Page 20)
Background
In haematopathology, Ki-67 is used principally to measure the proliferation rate in the assessment and grading of malignancies. Ki-67 is based on a powerful staining method for distinguishing benign from malignant proliferation. The index uses a nuclear protein expression and has been widely used to evaluate the proliferative activity of lymphoma. The clinical value of Ki-67 includes defining prognosis (among lymphomas), predicting drug response, and setting eligibility criteria for clinical trials. The Ki-67 score or index should be expressed as the percentage of positively stained cells among the total number of invasive cells in the area scored. With the Ki-67 marker, the proliferation fraction of low-grade follicular lymphomas (FLs) is usually less than 20%, and that of high-grade FLs is greater than 30%. Manual Ki-67 proliferation assessment is a very time-consuming and, at the same time, operator-dependent task. Therefore, several studies have examined the use of image analysis software to measure the nuclear staining index of Ki-67 in lymphomas more quickly. A few studies have focused on the measurement of the proliferation index in FLs and found that automated Ki-67 counts were similar to manual counts [2-3]. A major source of difference between automatic and manual Ki-67 scores is the scoring method, which depends on the counting strategy or the estimation and choice of the area to count.
Methods
In this research, an automatic unsupervised learning-based system was proposed for accurate and fast Ki-67 scoring in lymphoma. The proposed method was designed to use image processing tools to robustly detect the cells positive and negative for the Ki-67 antibody. The goal of the proposed method was to assess the proliferation index (the percentage of Ki-67-positive lymphoma cells) to provide better treatment options for lymphoma patients. The proposed system consisted of the following sections: pre-processing, feature extraction, segmentation, and post-processing (Figure 1). To highlight specific histological structures of Ki-67-stained images, such as positive cells (the brown-colored ones), we performed pre-processing such as a color transform from RGB space to brown-ratio space. For smoothing and filling the region of each cell in the image, morphological filling was used. After the pre-processing section, color features, such as the mean of the brown-ratio color space and the blue channel of the RGB image in a 3 × 3 block, were extracted from the image. In the next section of the proposed system, using the extracted color features, the image was segmented into three clusters by k-means clustering. After image segmentation, the positive and negative cells were post-processed. In the post-processing section, morphological opening was used to split merged cells, and finally, falsely segmented regions with small areas were removed. To evaluate our experiments, we used five Ki-67-stained whole slide images of lymphoma from the Pathology Department of the Medical University of Vienna (AKH).
Results
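The color-feature clustering and index computation can be sketched as follows. A simple R - B difference is used here as a stand-in for the brown-ratio transform, the "blueness" feature is an illustrative proxy for the blue channel, and the pixel values are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

def ki67_index(rgb_pixels):
    """Estimate the Ki-67 index from an (N, 3) array of RGB pixels.

    Pixels are clustered into three groups (positive/brown, negative/blue,
    background) on two color features; the index is the fraction of positive
    pixels among all stained-cell pixels.
    """
    rgb = np.asarray(rgb_pixels, dtype=float)
    brown_ratio = rgb[:, 0] - rgb[:, 2]                 # simple R - B proxy
    blueness = rgb[:, 2] - (rgb[:, 0] + rgb[:, 1]) / 2  # proxy for blue stain
    feats = np.column_stack([brown_ratio, blueness])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
    means = np.array([feats[labels == k].mean(axis=0) for k in range(3)])
    pos = int(np.argmax(means[:, 0]))  # brownest cluster = Ki-67 positive
    neg = int(np.argmax(means[:, 1]))  # bluest cluster = Ki-67 negative
    n_pos = (labels == pos).sum()
    n_neg = (labels == neg).sum()
    return n_pos / (n_pos + n_neg)

# Synthetic image: 100 brown, 300 blue, and 600 white-background pixels
pixels = np.vstack([np.tile([150, 80, 40], (100, 1)),
                    np.tile([60, 70, 180], (300, 1)),
                    np.tile([240, 240, 240], (600, 1))])
index = ki67_index(pixels)
```

In the real system, clustering operates on per-block means over cell regions rather than raw pixels, and the morphological post-processing steps then refine the positive/negative cell counts.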
Figure 2 shows the results of the proposed system for a sample region of a Ki-67-stained image. The manually labeled positive and negative cells were considered the ground truth. The ground truth was compared with the automatically segmented cells obtained from the proposed system. The computational complexity of the proposed system was very low: the average time needed to assess a high-power field image with a resolution of 1239 × 1239 was 5.7 seconds on a workstation with a 2.50 GHz Intel i7-2450 CPU and 16 GB of memory. The efficiency of the proposed system was evaluated for estimating the Ki-67 index from Ki-67-stained whole slide images. A dataset containing five whole slide images was used. Figure 3 shows one of the whole slide images. To compare the results, each slide was analyzed by an expert pathologist and the Ki-67 index was estimated manually. Table 1 shows the performance of the proposed system for Ki-67 index estimation. The results confirmed the efficacy of the proposed system for Ki-67 index estimation from whole slide images.
Conclusion
The detailed experimental analysis reflected the promising results of Ki-67 scoring based on the proposed system.
Keywords: Lymphoma Cancer, Ki-67 Proliferation Index, Image Clustering, Immunohistochemistry
Skin Lesion Diagnosis Using Ensemble Deep Learning Models (Page 21)
Background
Skin cancer is a serious public health concern (1). With over 5 million newly diagnosed cases every year, it is the most common form of cancer worldwide (3). Among the different categories of skin cancer, melanoma is the deadliest and most dangerous form. It is estimated to be responsible for 7230 deaths in 2019 globally (4). Although the melanoma mortality rate is significant, its survival rate exceeds 95% if detected early (5,6).
Objectives
We propose a deep learning framework that could perform skin lesion diagnosis with precision and accuracy. Such a framework could work by classifying dermoscopic images based on skin lesion categories. These categories include melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, and vascular lesion.
Methods
We used convolutional neural network models to classify dermoscopic images. The selected models had high Top-1 accuracy on the ImageNet dataset. These models included InceptionResNetV2, Xception, and EfficientNetB3. All of these models were initialized with their pre-trained weights on the ImageNet dataset. They all used the categorical cross-entropy loss function and the Adam optimizer with standard parameters. Two different ensemble methods were employed in our study. The first, called Softmax-only, used Xception and EfficientNetB3 with Softmax activation in both models’ prediction layers. In this method, during the training process, each model created a checkpoint of itself and recorded its balanced accuracy at each epoch. After training completion, the checkpoints corresponding to the highest balanced accuracies for each model were selected. The average balanced accuracy of the selected model checkpoints was reported as the method’s performance. The second method, called Sigmoid-only, used InceptionResNetV2, Xception, and EfficientNetB3. This method was largely the same as the first, except that all the models had Sigmoid activation in their prediction layers and the evaluation criterion was the F1-score. The dataset used was the 2018 ISIC archive, which included 25331 dermoscopic images for the training set and 1516 images for the test (evaluation) set (1). For the Softmax-only method, the training set was split 80% - 20% for cross-validation, and for the Sigmoid-only method, it was split 85% - 15% for fixed validation. The training set was also augmented using random crop, random rotation, and random flipping.
Results
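The evaluation hinges on balanced accuracy (the mean of per-class recalls) and on combining several checkpoints; both can be sketched briefly. The checkpoint probabilities below are toy values, not model outputs:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

def ensemble_predict(prob_list):
    """Average class probabilities from several checkpoints, then argmax."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Two toy checkpoints voting over four samples and two classes
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
p2 = np.array([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9], [0.6, 0.4]])
preds = ensemble_predict([p1, p2])
```

Balanced accuracy is the natural criterion here because the seven ISIC lesion classes are heavily imbalanced; plain accuracy would reward a model that over-predicts the majority class.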
The Softmax-only method had a balanced accuracy of 0.901 (± 0.12) and the Sigmoid-only method had an average F1-score of 0.932 on the seven classes. Finally, our framework achieved a balanced accuracy of 0.866 on the test set. Furthermore, our framework participated in Task 3 of the “ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection” challenge, which brought us first rank on the challenge’s live leaderboards (as of 18 September 2019; link: https://challenge2018.isic-archive.com/live-leaderboards/).
Conclusion
We proposed a deep learning framework to classify the ISIC archive 2018 dataset of dermoscopic images based on skin lesion categories. Our methods achieved validation results of 0.901 (± 0.12) for balanced accuracy and 0.932 for average F1-score, as well as a balanced accuracy of 0.866 on the test set. This framework also achieved first rank in the “ISIC 2018 Task 3” challenge.
Domain-Specific AI Application in Medical Imaging: Use Cases (Page 22)
Background
In the context of the ongoing digitization of healthcare, driven by the rise of machine learning and deep learning, new tools are being developed for implementation in radiology practice. These AI-based applications can be used not only for image analysis in different domains, but also for other parts of the radiological workflow. This will be illustrated with several use cases.
Objectives
By listening to this lecture, the audience is expected to:
1. Understand the basic principles of machine learning and deep learning.
2. Understand the possibilities by which these techniques can intervene in different parts of the radiological workflow.
3. Understand the pathways that need to be followed for developing and implementing AI-based solutions for clinical use.
Outline: AI-based applications can be used for many different purposes in radiology. In each clinical practice, however, it is essential to define the right use cases for implementing such tools. Furthermore, it is crucial to evaluate the accuracy and value of these tools, since real-world data can differ from the data on which the algorithms are trained. At the Netherlands Cancer Institute, AI tools are being developed and tested, both for improving patient care and for optimizing the radiological workflow. A concise overview is given of the potential of these new tools and the different challenges that this project is confronted with.
Medical Image Sharing with Portable Devices (Page 23)
Background
In this presentation, we look at available solutions for medical services on portable devices and the benefits and features that are available on these devices today.
Objectives
1. Electronic devices for medical image viewing
2. Portability advantages for image sharing
3. Image viewing features available on portable devices
Outline: Medical imaging has evolved with the advent of electronic and digital technologies over the past decade. While films and printers were used to transmit images in the past, the use of PACS systems has spread over the past decade, and medical images are now viewed and browsed on digital devices. These digital devices range from handheld mobile devices to large wide-screen displays, which users and professionals demand daily for viewing and browsing medical images. Digital access to the most up-to-date versions of patient medical images, medical reports, and disease history can accelerate treatment, reduce the likelihood of medical and prescription errors, and prevent additional testing. Furthermore, digital access can save time and money for both patients and healthcare centers. On the other hand, the rapid expansion of imaging and medical image sharing services is one of the challenges of ensuring the accuracy and integration of information and work processes, such that professionals always have access to accurate online patient data. This data integration, in particular, should be done between PACS, RIS/HIS systems, and imaging systems so that data circulate automatically, as much as possible, without the need for manual data entry. To achieve this critical capability, other systems have been developed in the DICOM standard to serve as interfaces between the above elements, enabling automated data transfer and integration. In the remainder of this presentation, the existing solution for realizing a digital hospital and its capability for diagnosis and work with portable devices is outlined.
AI Startups: The Need for Collaborative Research and DevelopmentPage 24Background
In recent years, the rapid development of Artificial Intelligence (AI) has had a remarkable impact on the medical imaging domain. However, there remain challenges in utilizing state-of-the-art models in clinical practice. This talk focuses on challenges faced by AI startups in using machine learning in clinical practice.Objectives
1. How do machine learning methods solve a real clinical problem, and what are the opportunities and challenges? 2. What are the future directions of AI in medical imaging? Outline: The first part of the talk provides an overall review of some machine learning models developed for solving medical imaging problems. The second part presents some of the main challenges in utilizing state-of-the-art machine learning in medical imaging applications, including interpreting complex models, incorporating causality in our models, working with longitudinal data, model generalization, and robustness. The last part focuses on opportunities and future directions of AI in the medical domain, such as automatically identifying potential responders to treatment, opening the way to personalized medicine.
AI in Radiology: From Theory into PracticePage 25Background
Radiology is at the forefront of the revolution in medical imaging, which is mainly based on the progress made in machine learning and deep learning. New tools are being developed and made commercially available for implementation in radiology practice. AI solutions can intervene in different parts of the entire radiological workflow, and thus are likely to have a significant impact on the way that radiology services are being offered.Objectives
By listening to this lecture, the audience is expected to:1. Understand the basic principles of machine learning and deep learning. 2. Understand the different ways and possibilities by which these techniques can be applied in radiology. 3. Understand the advantages, disadvantages, and risks of implementing AI-based tools in radiology practice.
Outline: In this presentation, a brief historical overview is provided of the progress made in the past few years in the field of artificial intelligence. The basic principles of machine learning and deep learning are explained. Radiology is at the forefront of these developments, with the ability to provide a huge resource of data. The ways these new AI-based applications can be applied are explained, together with their advantages, disadvantages, and risks. Advice is provided on how to use these tools in clinical practice.
Templates, Modules, and Common Data Elements: Building Blocks of Structured ReportingPage 26Background
The report is widely considered the main product of a radiology department, and its quality affects the success and impact of the institution and the radiologist. The current trend of using electronic tools to enhance report quality shows that structured reporting has undeniable advantages over free-text reporting. But why has this method not been widely adopted?Objectives
By listening to this lecture, the audience is expected to:1. List the barriers to using structured reporting in clinical practice.2. Describe the advantages of modular vs. template-based structured reporting.3. Explain the importance of common data elements in standardized reporting.Outline: There are multiple technical, conceptual, professional, and cultural reasons preventing radiologists from making use of structured reporting in their day-to-day practice. From the professional or clinical point of view, the system's flexibility in providing the most relevant items while remaining reasonably short is the main reason radiologists cannot report in a structured manner even if they would like to. Templates are the most popular containers of the predefined elements every radiologist plans to include in his/her report, but the templates available for structured reporting are not comprehensive enough to cover all potential pathologies. In addition, there should be a basic standard defining how everybody describes a particular situation. Such standards can be defined through the common data elements concept.
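To make the common-data-element idea concrete, a CDE can be thought of as a machine-readable field specification that structured-report instances are validated against. The sketch below is illustrative only; the element name, fields, and allowed values are invented and are not taken from any published CDE collection.

```python
# Hypothetical common data element (CDE): field names and allowed
# values are invented for illustration, not from a published CDE set.
cde_liver_lesion = {
    "id": "CDE.example.liver_lesion",
    "fields": {
        "size_mm": {"type": "number", "unit": "mm"},
        "enhancement": {"type": "choice",
                        "values": ["arterial", "portal venous", "none"]},
    },
}

def validate_report(report, cde):
    """Accept a report only if every field is defined by the CDE and
    every choice field uses one of the allowed values."""
    for name, value in report.items():
        spec = cde["fields"].get(name)
        if spec is None:
            return False  # field not defined by the CDE
        if spec["type"] == "choice" and value not in spec["values"]:
            return False  # value outside the controlled vocabulary
    return True
```

Because every practice validates against the same definition, the resulting reports use terminology consistently and remain machine-queryable.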
Experience of Enterprise Imaging in IranPage 27Background
The use of digital imaging has substantially grown in recent decades in traditional services as well as in new specialties and departments. The need to share these data among departments and caregivers necessitates central archiving systems that communicate with various viewing applications and electronic medical records. This has promoted the development of modern vendor-neutral archive (VNA) systems. The need to aggregate and share imaging data from various departments has likewise promoted the development of enterprise imaging (EI) solutions that replace departmental silos of data with central healthcare enterprise databases. The term “enterprise imaging” is relatively new and continues to evolve; currently, it usually means collecting all patients’ data, including images and reports, in one place. Ample research demonstrates that image exchange can reduce unnecessary redundancy and provide other compelling values, including cost reduction, improved patient care, patient satisfaction, and research and educational benefits. Enterprise imaging and VNAs are on the horizon for every institution as a mainstream expectation of patients, hospitals, and health care providers. Such a program comes with a significant capital cost, and in these times of economic constraint in the health care industry, investments must be chosen wisely. We review the background of VNA and EI solution development and describe the characteristics and advantages of such systems. We then describe our experience in implementing the enterprise PACS of Shiraz University of Medical Sciences since 2015.
Referral Guidelines in Radiology: Point of Care AdvicePage 28Background
Referral guidelines are becoming more and more appreciated in health management primarily to optimize resource utilization. But in the case of medical imaging, the wise selection of the most appropriate study is also important from the patient’s safety perspective. However, the problem is not limited to the availability of guidelines. Even when they are already developed and published, their practical use should be feasible to improve health care. IT tools are important to bring the guidelines to the point of care and tailor the decision to the patient's problem.Objectives
By listening to this lecture, the audience is expected to:1. Explain available imaging referral guidelines.2. Describe the features of IT-based decision support tools.3. List the process of using the BARTAR application as a national imaging referral guideline platform.Outline: The most comprehensive repository of imaging referral guidelines is the appropriateness criteria developed by the American College of Radiology. There are other national or multinational collections of guidelines, but none is as practical. There are IT tools, including ACR Select and iGuide, that help practitioners select the best imaging procedure according to the guidelines and considering the patient's problem. Every national health system requires an adapted version of the guidelines that takes into account the available modalities and the health system itself. Building on an Iranian collection of imaging referral guidelines, a Persian mobile application has been designed to interactively advise medical practitioners: by answering questions about the clinical problem of the case and considering the imaging services available in the institution or region of practice, the user receives a list of potential choices in order of appropriateness.
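The point-of-care advice loop described above (clinical indication in, ranked and locally filtered study list out) can be sketched in a few lines. The appropriateness scores and the indication below are toy values we invented; real scores come from guideline collections such as the ACR appropriateness criteria.

```python
# Toy appropriateness table: indication, studies, and scores are
# invented for illustration, not taken from any real guideline.
APPROPRIATENESS = {
    "acute headache, thunderclap": [
        ("CT head without contrast", 9),
        ("MRI head without contrast", 7),
        ("Skull radiography", 1),
    ],
}

def advise(indication, available_modalities):
    """Rank studies for an indication by appropriateness score and
    keep only those the local institution actually offers."""
    ranked = sorted(APPROPRIATENESS.get(indication, []),
                    key=lambda item: item[1], reverse=True)
    return [name for name, _score in ranked
            if name.split()[0] in available_modalities]
```

A real decision-support tool would, of course, drive this lookup from interactive questions about the clinical problem rather than a single indication string.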
Added Value of Structured Reporting for Medical Practice and ManagementPage 29Background
Structured reports are gradually becoming popular. To increase the adoption of the technology, we will briefly go over the benefits that structured reports can provide to almost all medical staff and the medical community in general.Objectives
Learning objectives include:1. What are the benefits of SR for medical doctors?2. What are the benefits of SR for patients?3. How can SR boost high quality research?
Outline: First, I will briefly go over some of the known benefits of SR, as follows:• Disease- and domain-specific report templates can increase the clarity and quality of the report.• The use of common data elements ensures the consistent use of terminology across practices.• The checklists inherent in structured reports reduce diagnostic errors.• Fewer grammatical and non-grammatical errors may be introduced into SR, even when digital speech recognition is used.• Preserving the completeness of report documentation improves insurance and other reimbursements.• It improves overall report quality.• It may promote evidence-based medicine by integrating clinical decision support tools with radiology reports.However, the most important factor is improving research. Each population, based on its genetic background and ethnicity, may require different or specific medical protocols or practices for certain diseases. High-quality medical research is needed to address these differences and to build the foundation for more appropriate medical procedures and knowledge generation. The importance of high-impact, high-quality research in medicine and medical practice is felt in Iranian universities, but despite large government investments in different aspects of the medical field, its output is not clearly observable. The universities have abundant numbers of erudite and competent researchers, but not enough tagged or labeled data are available for high-impact publications. Medical doctors in Iran are mainly practitioners. Although research has gained momentum within the last few years, mainstream respected researchers in medicine do not put research first among their priorities. Structured reporting, if performed properly, can provide the main feed for quality research: while medical practitioners perform their regular medical practice, their diagnoses and observations can be used directly as input to data mining and machine learning algorithms and, at the same time, for population studies.
Creating an AI Ecosystem for Multimodal Data Analysis in the MedGIFT GroupPage 30Background
The MedGIFT research group is on the border between medical sciences and computer science, namely medical image analysis and machine learning. The group was created in 2002 and has always had the aim to combine medical image data with other sources of information for medical decision support.Objectives
Learning objectives include:1. What is required to position a research group in medical image analysis?2. How to gain credibility in a multi-disciplinary domain?3. How do research topics evolve over time, and how can a group make sure it stays relevant?Outline: The talk will start with an overview of my personal profile and the history of how the MedGIFT research group was started, because this has had a strong influence on how the group evolved. Even though the initial name was kept, the topics in the group evolved much over time, mainly around medical topics. The development of systematic evaluation in scientific challenges has had a strong influence on the impact of our research group, and thus I will highlight the history of the ImageCLEF benchmark and how it has made datasets available to a large community and helped in further data-sharing efforts. The multidisciplinary nature of the research group also helped in creating an ecosystem where researchers could flourish in several related disciplines. The close collaboration with several hospitals also ensured that relevant topics are treated and, in this respect, increased the final impact of the work.
Collaborative AI Research in Medical Imaging: Trends and ChallengesPage 31Background
Artificial intelligence research and development are speeding up as sound evidence accumulates on the reliability of automated tools for increasing the speed, accuracy, and reproducibility of imaging services. Who should lead the process?Objectives
By listening to this lecture, the audience is expected to:1. List potential members of an AI research team.2. Define the major roles of teams with clinical and engineering backgrounds.3. Explain the requirements of a collaborative AI research infrastructure.Outline: AI research and development are framed by a multi-step lifecycle before a tool is accepted for practical clinical use. Finding the most needed solution for a real clinical problem is probably the most important ring in the chain. But the next steps are also critically important: collecting the appropriate dataset, annotating the data, selecting the best AI architecture to address the question in mind, stepwise training, testing and improvement of the model, integrating the tool with the IT tools currently used in the clinical environment, adjusting the user interface, and regulatory approval and marketing are the other rings of the chain. However, one important issue is to find out who should take care of each step. Regardless of roles and names, AI research needs hardware, software, service, and leadership infrastructure to pave the road for research. High-performance computing tools, software applications to run the models, data repositories, clinical tagging services, and validation/licensing services are needed to make AI research a continuous, productive, and improving process.
Overview of VISCERAL Benchmarks in Intelligent Medical Data AnalysisPage 32Background
Large datasets of annotated medical images are important to train machine learning algorithms. Segmentation is the first step in many decision support applications.Objectives
Learning objectives include:1. How to organize a scientific challenge on organ segmentation on a large scale?2. How to develop research infrastructures where data no longer need to be moved?3. How can research infrastructures help in reproducibility and the semi-automatic generation of annotations?Outline: Organ segmentation is the first step in many decision support tools in radiology. To obtain good segmentation results, large annotated datasets most often need to be available. Also, most segmentation algorithms are very organ- and modality-specific. The VISCERAL benchmark makes data from 20 organs available, both with and without contrast agent, for CT and MR. An architecture for manual segmentation, including quality control, was developed for this purpose. VISCERAL also introduces a novel Evaluation-as-a-Service (EaaS) architecture to compare algorithm performance. The data remained in a fixed and inaccessible place in the cloud. Participants obtained a virtual machine and access to the training data; they could then install all necessary tools and test on the training data. When the VMs were finished, the organizers took control of them and ran the algorithms on the test data. In this way, no manual optimization is possible on the test data, the actual test data are never released to the researchers, and, with the availability of executables and data, full reproducibility is ensured. The results of the benchmark compared many algorithms on all four modalities. CT quality was often much better than MRI quality. Results on some organs, such as the lungs and liver, were sometimes better than the agreement between human raters. The availability of code also allowed the segmentation to be run on new images and fusions of algorithmic segmentations, which we call a silver standard, to be created to generate large-scale training data.
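The silver-standard fusion mentioned above amounts to a per-voxel vote across algorithmic segmentations. This is a generic majority-vote sketch, not the actual VISCERAL fusion code:

```python
import numpy as np

def silver_standard(segmentations, min_agreement=0.5):
    """Fuse binary masks from several algorithms: a voxel enters the
    fused 'silver standard' mask when at least `min_agreement` of the
    algorithms label it as foreground."""
    stack = np.stack([np.asarray(m, dtype=float) for m in segmentations])
    return (stack.mean(axis=0) >= min_agreement).astype(np.uint8)
```

With many reasonably good algorithms, such fused masks approach manual-annotation quality at a fraction of the cost, which is what makes them usable as large-scale training data.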
Making EEG Experiments Retrievable for Research Purpose: The Preliminary Experience of Standardization of EEG Data in Iranian Brain Mapping BiobankPage 33Background
Increasing technological advances in biological signal recording, along with the diverse data storage and sharing facilities now available, have made it much easier for researchers to access extensive biological data for use in their studies. Today, data recorded once in a study can be repeatedly reused by other researchers through access to shared databases. Access to biosignal pools can, on the one hand, save considerable energy and reduce costs by preventing duplicate studies. On the other hand, it improves opportunities for meta-analyses and in-depth studies using diverse datasets with greater statistical power, which provides more reliable results as well as new insights into biological questions. However, the lack of agreed-upon data standardization and consistency across the research community creates barriers to reusing data. Data from different studies often have different formats and structures, which may require extensive data reformatting for meta-analyses and comparative studies. Moreover, there is no standardized structure for organizing biosignal-associated information (e.g. subject demographics or technical recording information) throughout the research community, which may impair subsequent data reporting and analysis due to missing necessary information.Objectives
In this article, we briefly report on the efforts made by the Iranian Brain Mapping Biobank (IBMB) to develop a standardized format and structure for recording and archiving electroencephalography (EEG) signals.Methods
In developing the new EEG data structure in IBMB, we focused on three main issues: (1) What information should be combined with EEG signals as metadata? There is still no agreement on the content of EEG metadata; thus, in many cases, the recording of information needed for subsequent EEG signal analysis is neglected. By reviewing international guidelines on EEG performance and reporting (e.g. [1, 2]) and by consulting experts from various fields, we proposed a structured template for recording EEG metadata. (2) Which file format is best suited for storing EEG data? To date, many different formats for storing EEG have been introduced (e.g. EDF, GDF, and TXT). These formats differ in data type, combined metadata content, storage structure, and storage requirements. Although some of these formats are widely accepted, no comprehensive format meets all requirements. The format considered by IBMB addresses the need for a basic format that is compact while saving numerical data with high precision, can be easily used in popular processing applications, and can accommodate the suggested EEG metadata. (3) How should EEG datasets be organized structurally? The brain imaging data structure (BIDS) project and the EEG study schema (ESS) are among the few recent important efforts to create an infrastructure for structured EEG storage. In line with such efforts, we developed a new hierarchical data structure to store EEG data, which can facilitate EEG data retrieval and sharing.Findings
IBMB has developed a customized EEG header consisting of 60 elements, including subject demographics and technical recording information, covering the syntactic, semantic, and pragmatic aspects of the data. This metadata, combined with the EEG, is organized into three main hierarchical levels, Study, Session, and Task, corresponding to the same levels of physical EEG storage.Conclusion
We introduced a new EEG data structure hierarchy and file content with embedded header information. This data structure encompasses all the information needed for reporting and analyzing EEG; thus, it can facilitate EEG data reuse, as well as large-scale analysis. We propose this approach for archiving EEG datasets in research-oriented EEG repositories.
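As an illustration of such a hierarchy, the Study/Session/Task levels can be mapped directly onto a storage path, with a metadata template attached to each recording. The field names below are examples we chose; they are not the actual 60-element IBMB header.

```python
from pathlib import PurePosixPath

# Illustrative metadata template: these field names are examples,
# not the real IBMB header specification.
EEG_HEADER_TEMPLATE = {
    "subject": {"age": None, "sex": None, "handedness": None},
    "recording": {"sampling_rate_hz": None, "n_channels": None,
                  "reference": None, "amplifier": None},
}

def eeg_record_path(study, session, task, subject_id, ext="edf"):
    """Build the Study/Session/Task storage hierarchy described in
    the abstract for one subject's recording."""
    return PurePosixPath(study) / session / task / f"sub-{subject_id}.{ext}"
```

Encoding the hierarchy in the path itself is what makes recordings retrievable by simple queries (all tasks of a session, all sessions of a study) without a separate index.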
Calibration of Probabilistic Model Output: Introduction and Online ToolPage 34
Many machine learning algorithms provide probabilistic predictions as their outputs. Analysis techniques familiar to physicians (e.g., calculation of sensitivity and specificity and construction of receiver operating characteristic curves) do not allow for the assessment of model calibration and therefore prevent proper evaluation of these models. We reviewed statistical and graphical methods for calibration analysis (shown in Figure 1) and presented a framework for implementing these techniques using open-source code and an online tool.
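As one example of a statistical calibration measure, the expected calibration error (ECE) bins predictions by confidence and compares each bin's mean confidence with its observed event rate. A minimal NumPy sketch (ours, not the code of the presented tool):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Estimate ECE: partition predictions into confidence bins and
    average the |accuracy - confidence| gap, weighted by bin size."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        if i == 0:
            in_bin = (probs >= lo) & (probs <= hi)
        else:
            in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            gap = abs(labels[in_bin].mean() - probs[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```

An ECE near zero means the predicted probabilities can be read at face value, which is exactly the property that sensitivity, specificity, and ROC analysis do not test.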
Mobile Devices for Viewing Medical Images: A Review of the LiteraturePage 35Background
The use of portable computing devices, in particular smartphones, is growing rapidly in healthcare. Several studies have reported that physicians can use tablet computers and smartphones for viewing medical images, but it is not clear to what extent and under which circumstances this approach is acceptable.Objectives
The purpose of this study was to summarize the current evidence on the use of mobile computing devices (tablet computers, smartphones, and personal digital assistants) in viewing medical images.Methods
We systematically searched PubMed, Scopus, and Web of Science for original studies that reported the use of any kind of portable computing device, including tablet computers, smartphones, and personal digital assistants, for viewing radiologic examinations and other medical images. The keywords included mobile phone, m-health, radiology, tele-radiology, radiography, smartphone PACS, and PACS viewer. The electronic search was limited to papers in the English language published from 2008 onward. After removing duplicates and screening 327 unique records at the title/abstract level, the full texts of 137 potentially relevant papers were retrieved and checked against the inclusion criteria. Finally, 37 papers were included in this study and reviewed.Results
Both smartphones and tablet computers have been used by radiologists and by physicians of other specialties, including surgeons, orthopedists, emergency physicians, cardiologists, and neurologists. This usage was more dominant where image viewing played an important role in clinical decision making. A range of medical imaging modalities, from plain radiography to angiography, computed tomography (CT), and magnetic resonance imaging (MRI), was reported in the reviewed studies. Although the level of evidence was not high, it was indicated that the size of the smartphone’s screen did not affect clinical performance. More than half of the studies compared the outcomes of image viewing on PACS workstations and on smartphones and concluded that there was no significant difference between them. A number of studies reported that the use of smartphones was associated with faster interpretation of medical images.Conclusion
Current literature indicates that smartphones can be used for viewing medical images by clinicians and the outcome is comparable with that of desktop workstations, but further research is needed to confirm these findings.
Brain Tumor Classification Using Deep Learning MethodsPage 36Background
A February 2018 report of the World Health Organization (WHO) showed that the death rate from brain and central nervous system (CNS) cancers is the highest on the Asian continent. Timely and accurate diagnosis of brain tumors is crucial, as small errors pose many risks to treatment. Classifying the type of tumor is an important factor in targeted treatment. Since histological tumor diagnosis is highly invasive, time-consuming, and costly, there is an urgent need for a precise, non-invasive, cost-effective, and efficient tool for brain tumor description and grade estimation. Brain scans using magnetic resonance imaging (MRI), computed tomography (CT), and other imaging techniques are fast and safe ways to detect tumors.Methods
In this paper, we used a standard dataset containing 3064 MRI images of different skull views. The size and position of tumors at different angles make it difficult to detect the tumor in the specimens. The dataset consisted of 1047 coronal images (recorded from behind), 990 axial images (taken from above), and 1027 sagittal images (extracted from the side of the skull). The images belonged to 233 patients, and the dataset comprised 708 Meningioma, 1426 Glioma, and 930 Pituitary tumors. We therefore separated the sagittal, coronal, and axial images and trained them in different categories using Inception-V3 and ResNet, two deep learning classification methods, to make this process more accurate and faster.Results
Finally, by adjusting the hyper-parameters of each of these methods and applying pre-processing and weighting combinations, we obtained acceptable performance compared with previous methods.
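One practical detail such experiments must handle is that the 3064 slices come from only 233 patients, so images of the same patient should not be split across training and test sets. A minimal sketch of patient-grouped fold assignment (our illustration; the abstract does not state how the split was done):

```python
import random

def patient_grouped_folds(image_patient_ids, n_folds=5, seed=0):
    """Assign each image to a fold such that all images from one
    patient land in the same fold, preventing patient-level leakage
    between training and test sets."""
    patients = sorted(set(image_patient_ids))
    rng = random.Random(seed)          # deterministic shuffle
    rng.shuffle(patients)
    fold_of = {p: i % n_folds for i, p in enumerate(patients)}
    return [fold_of[p] for p in image_patient_ids]
```

Without this grouping, near-identical adjacent slices of one tumor can appear on both sides of the split and inflate the reported accuracy.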
Automatic Detection and Classification of White Blood Cells in Blood Smear Images Using Convolutional Neural NetworkPage 37Background
Blood cell identification and counting are very important in the diagnosis and treatment of diseases. Among blood cells, the identification of white blood cells (WBCs) and their changes is of particular importance due to their role in the immune system. Manual cell counting is time-consuming and depends on expert experience. The accuracy of blood cell counting can also be influenced by human limitations such as fatigue and mental strain. Automatic systems can be a convenient and cost-effective choice for routine clinical services and can be used for fast and accurate blood disease diagnosis. In automated systems, blood samples are analyzed using microscopic images of stained blood cells. There are various studies on automatic blood cell segmentation based on blood smear images [1-4], and some studies have focused on WBC image classification [5-6].Objectives
In this paper, the main objective is to provide the implementation of a deep learning-based automatic system to identify five main groups of WBCs in human peripheral blood smear, including Eosinophils, Basophils, Monocytes, Lymphocytes, and Neutrophils.Method
The block diagram of the proposed method is shown in Figure 1. As can be seen, the proposed method consists of three stages: pre-processing, segmentation, and deep learning-based classification. In the pre-processing stage, color normalization was used to reduce the variability in color appearance, which can differ significantly between labs due to differences in slide digitization conditions and staining protocols and can significantly affect automatic image analysis methods. In this stage, the images to be examined were normalized to match the color appearance of a target image with standard calibrated staining. The second stage was background removal and segmentation of the WBC regions. In this stage, by subtracting the B color channel of the RGB blood smear image from the G color channel and then applying morphological erosion and morphological reconstruction, the WBC probability map was obtained. The value of the map indicates how likely each pixel is to belong to a WBC: pixels belonging to WBCs have larger values than non-WBC pixels. Finally, WBCs were segmented by applying optimal Otsu's thresholding on the probability map image. Detected WBCs were cropped from the entire image by taking a 131 × 131 patch around each detected cell. The patches of segmented WBCs were then passed through a convolutional neural network (CNN) called CellDiff-Net, which returned the class of each WBC. The structure of the CNN architecture is shown in Figure 2.Results
The blood smear image database used to train the proposed method was composed of 216 images of size 1536 × 2048, collected and labeled by experts at Avicenna Infertility Clinic (ACECR), Tehran, Iran. Stepwise processing of a sample blood smear image for WBC segmentation is shown in Figure 3. By this approach, all WBC patches were extracted from the training images. Image augmentation (flips, rotations, and shears) was used to increase the size of the training set and balance the classes. We tested our model on a test set of 10 blood smear samples; 100 images were captured from each blood sample, and all images were analyzed by the proposed method. The feature space in the convolution layers for a test WBC image passed through the learned CellDiff-Net is visualized in Figure 4. To evaluate cell differential counts, the performance of the proposed method was compared with the results of manual counting and of a Sysmex KX-21 analyzer. Figure 5 compares the automated, Sysmex, and manual differential cell counts for a test sample. For objective evaluation of the proposed system, three criteria were used: sensitivity, specificity, and accuracy. The manually labeled WBCs, determined by an expert for all images, were considered the ground truth and used to validate the proposed method. Table 1 shows the performance of the proposed automated WBC detection and classification method.Conclusion
In this paper, a novel automated system was proposed for WBC detection and classification in blood smear images. The experimental results proved the performance of the proposed system in WBC detection and classification.
Keywords: White Blood Cell Classification, Image Classification
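The Otsu thresholding applied to the WBC probability map selects the gray level that maximizes the between-class variance of the image histogram. A minimal NumPy implementation (ours, not the authors' code):

```python
import numpy as np

def otsu_threshold(img):
    """Pick the gray level that maximizes between-class variance
    (Otsu's criterion) over a 256-bin histogram of `img`."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / img.size
    global_mean = float(np.dot(np.arange(256), p))
    best_t, best_var, w, mu = 0, 0.0, 0.0, 0.0
    for t in range(256):
        w += p[t]        # cumulative weight of the low class
        mu += t * p[t]   # cumulative mass of the low class
        if w < 1e-12 or 1.0 - w < 1e-12:
            continue     # all pixels on one side: variance undefined
        var_between = (global_mean * w - mu) ** 2 / (w * (1.0 - w))
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels of the probability map above the returned threshold form the binary WBC mask from which the 131 × 131 patches are cropped.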
A Proposal to Approach Cloud-Based Enterprise Imaging for Medical Universities in Iran Using Existing DICOM InfrastructurePage 38Background
The exchange and sharing of medical images, a critical part of patients’ medical records, between different departments of the same hospital or even between different hospitals has been in increasing demand in recent years. Medical universities in Iran comprise affiliated hospitals and medical centers, most of which have been equipped in past years with picture archiving and communication systems (PACS) from different vendors. The main challenges of the current PACS-based workflow with these vendors are the lack of functionality to capture, store, and view DICOM and non-DICOM images from departments other than radiology, as well as the lack of communication and exchange of medical imaging among hospitals, since each vendor has its own communication protocol.Objective
Meeting this demand offers several advantages: healthcare providers get authenticated, on-demand access to medical imaging acquired at other affiliated hospitals; unnecessary repeated imaging exams and unnecessary radiation exposure are reduced; information technology resources are better managed in the cloud environment; patients and healthcare providers are more satisfied; and timely access to the patient’s medical imaging history is provided.Methods
In this study, we proposed a private cloud-based enterprise imaging solution for each medical university, based on the existing DICOM infrastructure, which is supported by the commonly accepted standards of all vendors. In each hospital, non-DICOM medical images are converted to DICOM via a “Dicomizer” and, together with native DICOM images, sent via a “Router” module to the central archiving solution located in the private cloud of the medical university. The Router module makes it possible to transfer images even over narrow-bandwidth lines using lossless/lossy compression under the TLS/SSL protocol, to apply necessary DICOM coercions and add DICOM tags, and to provide several other useful features. A zero-footprint viewer was considered in the cloud environment for anytime/anywhere viewing of patient studies.Results
The primary test of the Router module, even over the internet, was encouraging. It seems this module could work even more effectively over the intranet of the medical universities and their affiliated hospitals.Discussion
Besides transferring images, the Router module can add or coerce necessary DICOM tags, facilitating the categorization of studies and the automatic pseudonymization of patient data, and it supports all DICOM storage classes, including non-image classes such as structured reports.
The proposed method of archiving enterprise imaging in a private Cloud using the existing DICOM infrastructure appears feasible, cost-effective, and convenient, since it does not affect the current archiving workflow of the affiliated hospitals of medical universities in Iran and raises no concerns about differing communication protocols. With full implementation of this proposal, each healthcare provider would have on-demand, authenticated access to patient studies without the need to maintain departmental data silos.
Keywords: DICOM, PACS, Enterprise Imaging, Cloud Computing
Classification of Brain MRI for Alzheimer’s Disease Detection Based on Ensemble Machine Learning
Page 39
Background
Machine learning has become a powerful tool for improving the diagnosis of medical disorders. One of its key applications is the classification or clustering of neurodegenerative diseases by pattern-recognition methods applied to biomedical signals and medical images. Early detection of these diseases is vital. In this study, we focused on Alzheimer’s disease (AD), a type of dementia leading to problems with memory, thinking, and behavior. The disease is named after Dr. Alois Alzheimer, who in 1906 examined a female patient who died of an unusual mental illness. According to recent studies, AD has four stages: pre-dementia, early AD, moderate AD, and advanced AD. Diagnostic methods include mental-status evaluation, physical and neurological exams, and imaging techniques such as magnetic resonance imaging (MRI). Several machine learning approaches to AD classification and detection have been introduced, such as classification with discrete wavelet transform (DWT) features and linear discriminant analysis (LDA) classifiers, and differentiation of AD from normal subjects on T2-weighted MRI using the shearlet transform (ST) with K-nearest neighbors (KNN) classifiers.
Objectives
In this work, we proposed a methodology based on DWT with three-level decomposition for feature extraction (Figure 1).
Methods
Based on statistics (mean, variance, skewness) of the features and principal component analysis (PCA) for dimension reduction, we used five classifiers, including a multi-layer perceptron neural network (MLPNN) (Figure 2), KNN, support vector machine (SVM) (Figure 3), and naïve Bayes (NB), fused into one ensemble classifier by majority vote.
Results
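The three-level DWT feature extraction with per-sub-band statistics could be sketched as follows, assuming the PyWavelets library; the `db4` wavelet is an illustrative choice, not necessarily the one used in the paper:

```python
import numpy as np
import pywt  # PyWavelets

def dwt_stats_features(img: np.ndarray, wavelet: str = "db4", levels: int = 3) -> np.ndarray:
    """Three-level 2-D DWT; mean, variance, and skewness of every sub-band
    concatenated into one feature vector."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # coeffs = [approximation, (H3, V3, D3), (H2, V2, D2), (H1, V1, D1)]
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for band in bands:
        x = band.ravel()
        mu, var = x.mean(), x.var()
        std = np.sqrt(var) if var > 0 else 1.0
        skew = np.mean(((x - mu) / std) ** 3)
        feats.extend([mu, var, skew])
    return np.array(feats)

img = np.random.default_rng(0).normal(size=(128, 128))  # stand-in for an MRI slice
fv = dwt_stats_features(img)  # 10 sub-bands x 3 statistics = 30 features
```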
The proposed methodology was evaluated on 100 T2-weighted MRI scans of AD and cognitively normal (CN) subjects obtained from the Harvard Medical School website. Using a 10-fold cross-validation strategy, the methodology achieved an accuracy, specificity, and sensitivity of 95%, 90%, and 100%, respectively.
Conclusion
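The PCA step, majority-vote ensemble, and 10-fold evaluation could be wired together as in this sketch using scikit-learn; the synthetic feature matrix stands in for the DWT features, and the hyperparameters are defaults rather than the paper's settings:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 100-subject feature matrix (labels: AD vs. CN)
X, y = make_classification(n_samples=100, n_features=30, random_state=0)

ensemble = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),            # dimension reduction before classification
    VotingClassifier(
        estimators=[
            ("mlp", MLPClassifier(max_iter=2000, random_state=0)),
            ("knn", KNeighborsClassifier()),
            ("svm", SVC()),
            ("nb", GaussianNB()),
        ],
        voting="hard",               # hard voting = majority vote of base classifiers
    ),
)

scores = cross_val_score(ensemble, X, y, cv=10)  # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```

`voting="hard"` implements exactly the majority-vote fusion described above; `voting="soft"` would instead average predicted class probabilities.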
Our study showed that the stacking method for the classification of AD and CN performed better than any single classifier and was comparable with state-of-the-art methods.
Keywords: Ensemble Machine Learning, Alzheimer’s Disease, Discrete Wavelet Transform, Principal Component Analysis, Statistics Features, Magnetic Resonance Imaging
Medical Image Fusion Based on Deep Convolutional Neural Network
Page 40
Background
Medical image fusion plays an important role in helping doctors reach effective diagnosis and treatment decisions.
Objectives
The purpose of image fusion is to combine information from different medical modalities into a single image while preserving the salient features and details of the source images.
Methods
In this article, we present an approach for fusing MRI and CT images based on a four-layer deep convolutional neural network trained on medical images. First, the images were decomposed into high- and low-frequency sub-bands by applying the nonsubsampled shearlet transform (NSST). For the high-frequency sub-bands, the deep convolutional neural network extracted feature maps to guide the fusion. The low-frequency sub-bands were fused using a local-energy rule, and finally the fused image was reconstructed by the inverse NSST.
Results
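Since no NSST implementation exists in standard Python libraries, the sketch below substitutes a wavelet decomposition for the NSST and a max-absolute rule for the CNN-guided high-frequency fusion; only the local-energy rule for the low-frequency band matches the described method, so treat this as a structural illustration rather than the paper's pipeline:

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse(img_a: np.ndarray, img_b: np.ndarray,
         wavelet: str = "haar", level: int = 2) -> np.ndarray:
    """Multiscale fusion sketch: local-energy rule for the low-frequency band,
    max-absolute selection standing in for the CNN feature-map rule."""
    A = pywt.wavedec2(img_a, wavelet, level=level)
    B = pywt.wavedec2(img_b, wavelet, level=level)
    # Low-frequency sub-band: keep the coefficient with larger 3x3 local energy
    ea = uniform_filter(A[0] ** 2, size=3)
    eb = uniform_filter(B[0] ** 2, size=3)
    fused = [np.where(ea >= eb, A[0], B[0])]
    # High-frequency sub-bands: max-absolute coefficient selection
    for da, db in zip(A[1:], B[1:]):
        fused.append(tuple(np.where(abs(ha) >= abs(hb), ha, hb)
                           for ha, hb in zip(da, db)))
    # Inverse transform reconstructs the fused image
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
mri, ct = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))  # stand-in images
fused = fuse(mri, ct)
```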
Experimental results indicated that the proposed scheme performed better in terms of information preservation, visual quality, and both subjective and objective assessment.
Conclusion
In this work, a medical image fusion method based on deep convolutional neural networks was proposed. Its main novelty was a four-layer deep convolutional neural network trained to extract source-image features. To achieve good results, we used the nonsubsampled shearlet transform for multi-scale decomposition. Based on the experimental results, the proposed method achieved the best fusion performance among the compared approaches.
Evaluation of IoT Capability in Detecting Kidney Malformations on Ultrasound Imaging System
Page 41
Background
Remote radiology is used today in remote areas, where trained radiologists are scarce, to interpret ultrasound scans. The availability of online radiology experts and of portable ultrasound communication facilities are among the key issues for the use of ultrasound scanning in telehealth.
Objectives
The purpose of the present study was to investigate the capability of IoT for computer-aided detection of kidney abnormalities on ultrasound imaging.
Methods
The study was conducted systematically by searching the Scopus, Science Direct, and PubMed databases and the Google Scholar search engine, using the PRISMA flow diagram to select articles. The search was limited to English-language articles published between 2013 and 2018. Of about 123 retrieved articles, 42 were included in the study. These articles were then assessed with the 12-question CASP diagnostic test study checklist, and finally 15 articles relevant to the study were selected.
Results
The results of the reviewed studies showed that IoT-based approaches were more acceptable and satisfactory than other modalities and played a significant role in the diagnosis of kidney disease, in terms of both cost and time.
Conclusion
The results of the study showed that, in the absence of a radiologist in the care setting, or when the patient is unable to visit a hospital or clinic, using IoT is an effective way to address these problems.
Keywords: IoT, Kidney Disease, Ultrasound Imaging