Table of Contents
Journal of Medical Signals and Sensors
Volume:13 Issue: 4, Oct-Dec 2023
- Publication date: 1402/07/04
- Number of titles: 8
Pages 253-260
Background
Optical coherence tomography (OCT) imaging contributes significantly to ophthalmology in the diagnosis of retinal disorders such as age-related macular degeneration and diabetic macular edema. Both diseases involve abnormal fluid accumulation, whose location and volume are vitally informative in assessing disease severity. Automated and accurate fluid segmentation in OCT images could potentially improve current clinical diagnosis. This becomes even more important given the limitations of manual fluid segmentation, which is time-consuming and prone to error.
Methods
Deep learning techniques have been applied to various image processing tasks, and their performance has already been explored in the segmentation of fluids in OCTs. This article proposes a novel automated deep learning method built on the U-Net structure. The modifications consist of applying transformers in the encoder path of the U-Net for more concentrated feature extraction. Furthermore, a custom loss function is empirically tailored to combine suitable loss functions for dealing with class imbalance and noisy images: a weighted combination of Dice loss, focal Tversky loss, and weighted binary cross-entropy.
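The three loss terms named above can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the paper's implementation: the weights `w`, the Tversky parameters `alpha`/`beta`, the focal exponent `gamma`, and the BCE `pos_weight` are all assumed values, not the values tuned in the study.

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    # Dice loss: 1 - 2|A∩B| / (|A| + |B|), a region-overlap loss.
    inter = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    # Tversky index weights false negatives and false positives
    # asymmetrically; the focal exponent emphasizes hard examples.
    tp = np.sum(y_true * y_pred)
    fn = np.sum(y_true * (1 - y_pred))
    fp = np.sum((1 - y_true) * y_pred)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma

def weighted_bce(y_true, y_pred, pos_weight=5.0, eps=1e-7):
    # Up-weights the (rare) fluid pixels to counter class imbalance.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(pos_weight * y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

def combined_loss(y_true, y_pred, w=(1.0, 1.0, 1.0)):
    # Weighted sum of the three terms, as in the paper's custom loss.
    return (w[0] * dice_loss(y_true, y_pred)
            + w[1] * focal_tversky_loss(y_true, y_pred)
            + w[2] * weighted_bce(y_true, y_pred))
```

In a real training pipeline these would operate on per-batch tensors in an autodiff framework; the numpy form only shows how the terms combine.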
Results
Different metrics were calculated. The results show the high accuracy (Dice coefficient of 95.52) and robustness of the proposed method compared with other methods after adding extra noise to the images (Dice coefficient of 92.79).
Conclusions
The segmentation of fluid regions in retinal OCT images is critical because it helps clinicians diagnose macular edema and carry out therapeutic operations more quickly. This study proposes a deep learning framework and a novel loss function for automated fluid segmentation of retinal OCT images with excellent accuracy and rapid convergence.
Keywords: Customized loss function, deep learning, fluid accumulation, optical coherence tomography, semantic segmentation
Pages 261-273
Background
Medical images of cancer patients are usually evaluated qualitatively by clinical specialists, which makes the accuracy of the diagnosis subjective and dependent on the clinician's skill. Quantitative methods based on textural feature analysis may facilitate such evaluations. This study aimed to analyze gray level co-occurrence matrix (GLCM)-based texture features extracted from T1-axial magnetic resonance (MR) images of glioblastoma multiforme (GBM) patients to determine the distinctive features specific to treatment response or disease progression.
Methods
Twenty GLCM-based texture features, in addition to mean, standard deviation, entropy, RMS, kurtosis, and skewness, were extracted from step I MR images (obtained 72 h after surgery) and step II MR images (obtained three months later). Patients were classified manually as responsive or non-responsive to treatment based on the radiological evaluation of step II images. Texture features extracted from step I and step II images were analyzed to determine the distinctive features of the responsive and progressive disease groups. MATLAB 2020 was used for feature extraction, and SPSS version 26 for the statistical analysis. P < 0.05 was considered statistically significant.
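The study used MATLAB; as an illustration of what a GLCM and a few classic Haralick-style features look like, here is a minimal numpy sketch. The offset, number of gray levels, and the specific features shown are illustrative choices, not the paper's feature set of 20.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # Count co-occurrences of gray-level pairs at offset (dx, dy),
    # then normalize to a joint probability matrix.
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_features(p):
    # A few classic GLCM features computed from the probability matrix p.
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity, "entropy": entropy}
```

For a perfectly uniform image, contrast and entropy are 0 and energy is 1; texture in the image spreads probability mass off the diagonal and raises contrast and entropy.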
Results
Although there were no statistically significant differences between step I texture features for the two groups, almost all step II GLCM-based texture features, in addition to entropy M and skewness, differed significantly between the responsive and progressive disease groups.
Conclusions
GLCM-based texture features extracted from MR images of GBM patients can be used with automatic algorithms for the expeditious quantitative prediction or interpretation of treatment response, alongside qualitative evaluations.
Keywords: Glioblastoma multiforme, gray level co-occurrence matrix, texture feature, treatment response
Pages 272-279
Background
Recognizing emotional states would make human-computer interaction (HCI) systems more effective in practice. Correlations between electroencephalography (EEG) signals and emotions have been shown in various studies; therefore, EEG signal-based methods are among the most accurate and informative.
Methods
In this study, three convolutional neural network (CNN) models appropriate for processing EEG signals (EEGNet, ShallowConvNet, and DeepConvNet) are applied to recognize emotions. Baseline-removal preprocessing is used to improve classification accuracy. Each network is assessed in two settings: subject-dependent and subject-independent. The selected CNN model is then made lightweight enough to run on a Raspberry Pi processor. Emotional states are recognized for every three-second epoch of the received signals on the embedded system, enabling real-time use in practice.
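The epoching and baseline-removal step described above can be sketched as follows. This is an assumed, generic formulation (fixed-length epochs with the mean of a preceding baseline window subtracted per channel); the paper's exact baseline definition and window lengths are not specified here.

```python
import numpy as np

def epoch_with_baseline_removal(signal, fs, epoch_sec=3.0, baseline_sec=1.0):
    # Split a (channels, samples) recording into fixed-length epochs and
    # subtract the mean of the preceding baseline window from each epoch.
    ep = int(epoch_sec * fs)       # samples per epoch
    bl = int(baseline_sec * fs)    # samples in the baseline window
    epochs = []
    for start in range(bl, signal.shape[1] - ep + 1, ep):
        baseline = signal[:, start - bl:start].mean(axis=1, keepdims=True)
        epochs.append(signal[:, start:start + ep] - baseline)
    return np.stack(epochs)        # shape: (n_epochs, channels, ep)
```

Each returned epoch is what would be fed to one forward pass of the CNN on the embedded device.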
Results
Average classification accuracies of 99.10% for valence and 99.20% for arousal in the subject-dependent setting, and 90.76% for valence and 90.94% for arousal in the subject-independent setting, were achieved on the well-known DEAP dataset.
Conclusion
Comparison with related work shows that a highly accurate and practically implementable model has been achieved.
Keywords: Convolutional neural network, electroencephalography, embedded system, emotion recognition
Pages 280-289
Background
Simulation of tomographic imaging systems with fan-beam geometry, estimation of the scattered-beam profile using Monte Carlo techniques, and scatter correction using the estimated data remain challenges in medical imaging. The most important aspect is ensuring the validity of the simulation and the accuracy of the scatter correction. This study aims to simulate a 128-slice computed tomography (CT) scanner using the Geant4 Application for Tomographic Emission (GATE) program, to assess the validity of this simulation, and to estimate the scatter profile. Finally, the scatter-correction results are compared quantitatively.
Methods
In this study, a 128-slice CT scanner with fan-beam geometry, along with two phantoms, was simulated with the GATE program. Two validation methods were performed to validate the simulation results. The scatter estimates obtained from the simulation were used in a projection-based scatter correction technique, and the post-correction results were analyzed using four quantities: pixel intensity, CT number inaccuracy, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR).
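Two of the four quantities, CNR and SNR, have standard region-of-interest definitions that can be written in a few lines. These are the common textbook forms; the paper's exact ROI placement and noise estimate may differ.

```python
import numpy as np

def snr(roi):
    # Signal-to-noise ratio: mean pixel value of the ROI over its
    # standard deviation (noise estimate).
    return roi.mean() / roi.std()

def cnr(roi, background):
    # Contrast-to-noise ratio: absolute mean difference between a ROI
    # and a background region, normalized by the background noise.
    return abs(roi.mean() - background.mean()) / background.std()
```

After scatter correction, the cupping artifact flattens the background and restores contrast, which is what drives the reported CNR/SNR gains.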
Results
Both validation methods confirmed the accuracy of the simulation. In the quantitative analysis before and after scatter correction, the pixel intensity patterns were close to each other, and CT number inaccuracy fell below 10%. Moreover, CNR and SNR increased by 30%–65% in all studied areas.
Conclusion
The comparison of the results before and after scatter correction shows an improvement in CNR and SNR, a reduction in the cupping artifact according to the pixel intensity pattern, and enhanced CT number accuracy.
Keywords: Correction, computed tomography, Geant4 Application for Tomographic Emission, scatter
Pages 290-299
Background
Monitoring and evaluating weightlifting technique relies on the coach's subjective observation, which can miss important short-duration events. This study aimed to implement an embedded system to register the angular variation of the hip, knee, and ankle joints, as well as plantar pressure, during training.
Methods
Four professional and four amateur athletes performed five snatch lifts. To evaluate the angular measurements, the tests were simultaneously videotaped and the results were compared.
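Comparing sensor-derived angles against video-derived reference angles typically reduces to two numbers of the kind reported below: a Pearson correlation coefficient and a mean absolute percentage error (MAPE). A minimal sketch, assuming aligned, equally sampled angle series (the paper's exact comparison procedure is not detailed):

```python
import numpy as np

def pearson_r(a, b):
    # Pearson correlation between two aligned 1-D time series.
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def mape(reference, measured):
    # Mean absolute percentage error, skipping near-zero reference
    # samples to avoid division by zero.
    mask = np.abs(reference) > 1e-6
    return 100.0 * np.mean(
        np.abs((measured[mask] - reference[mask]) / reference[mask]))
```

In practice one would also estimate and remove the constant sensor delay (reported as 495 ± 200 ms) before computing these metrics, e.g. by cross-correlation.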
Results
The angular data presented a correlation coefficient of 0.92 and a delay of 495 ± 200 ms. The sensor characterization was implemented in a microcontroller with a mean absolute percentage error of 18.8% in the measurements. When comparing the average results of the elite and amateur groups, the amateur group performed a delayed descent in the first three phases of the lift and an accelerated descent in the fourth phase. Non-uniform plantar pressure was registered in the same group, reducing the final recovery speed with the barbell.
Conclusions
The proposed system was developed for biaxial angular registration of the hip, knee, and ankle, and for plantar pressure, during the weightlifting snatch. The system's ability to compare signals met the requirements of the coaching staff, and it is seen as a promising quantitative analysis tool to support the coach and the athlete.
Keywords: Inertial sensor, plantar pressure, snatch, weightlifting
Pages 300-306
Background
Metal screws are commonly used for fracture fixation. However, their high modulus of elasticity relative to bone and their release of metallic ions necessitate a second surgery to remove the implant after the healing period. Furthermore, removing metal screws after the bone has healed is a serious problem that can lead to refracture because of the holes left behind. Bioresorbable screws can overcome most of the problems associated with metallic screws, which has motivated research on manufacturing nonmetallic screws.
Methods
In this study, three-layer poly L-lactic acid/bioactive glass composite screws were manufactured according to functionally graded material theory by the forging process. All physical and chemical parameters in the manufacturing stages, from making the composite layers to the forging process, were optimized to obtain suitable mechanical properties and durability of the screw in load-bearing positions.
Results
The tri-layer composite screw, with unidirectional, ±20° angled, and random fiber orientations from core to shell, showed a flexural load of 661.5 ± 20.3 N, which decreased by about 31% after 4 weeks of degradation. Furthermore, its pull-out force was 1.8 ± 0.1 N, considerably higher than that of degradable polymeric screws. Moreover, the integrity of the composite screws was maintained during the degradation process.
Conclusions
By optimizing the manufacturing process, the composition of the composite, and its crystallinity, the mechanical properties (flexural, torsional, and pull-out) were improved, making the screw a strong candidate for load-bearing orthopedic implants. Improving the fiber/matrix interface through a coupling agent was also considered to preserve the initial mechanical properties. The manufactured screw is sufficiently robust to replace metal in orthopedic load-bearing applications.
Keywords: Composite screw, degradation, flexural strength, forging, mechanical property
Pages 307-318
In this article, a smart visual acuity measurement (VAM) system is designed and implemented. The hardware of the proposed VAM system consists of two parts: a wireless remote controller and a high-resolution LCD controlled by a Raspberry Pi mini-computer. In the remote controller, a 3.5" graphical LCD with a touch screen serves as the human-machine interface. When a point on the touch screen is pressed, the unique identifier (ID) code of that point, as well as its page number, is transmitted to the Raspberry Pi, where the data are received and processed by a smart application developed in Visual Studio; the commanded tasks are then executed by the Raspberry Pi's operating system. Numerous charts, characters, and pictures are stored in the proposed VAM system to provide various VAM options, while the size of the optotypes is adjusted automatically based on the patient's distance from the LCD. The performance of the proposed VAM system was examined practically under the supervision of an expert optometrist; the results indicate that visual acuity, astigmatism, and color blindness can be examined precisely through the proposed system in an easier and more comfortable manner.
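The distance-based optotype scaling mentioned above follows from the Snellen convention that a standard (20/20) optotype subtends 5 arcminutes at the eye, so its physical height must grow with viewing distance. A sketch of that geometry (the system's actual scaling algorithm is not given; the pixel pitch value below is an assumption for illustration):

```python
import math

def optotype_height_mm(distance_m, arcmin=5.0):
    # Height h subtending `arcmin` minutes of arc at distance d:
    # h = 2 * d * tan(theta / 2), with theta in radians.
    theta = math.radians(arcmin / 60.0)
    return 2.0 * distance_m * math.tan(theta / 2.0) * 1000.0

def pixels_for_optotype(distance_m, pixel_pitch_mm, arcmin=5.0):
    # Convert the physical height to display pixels for a given LCD
    # pixel pitch (mm per pixel).
    return round(optotype_height_mm(distance_m, arcmin) / pixel_pitch_mm)
```

At the common 6 m (20 ft) test distance this gives an optotype height of about 8.7 mm; measuring the patient's distance lets the system recompute the pixel size on the fly.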
Keywords: LCD E-chart, Raspberry-Pi, visual acuity, visual acuity measurement, wireless communication network