Table of Contents

Journal of Medical Signals and Sensors
Volume 3, Issue 1, Jan-Mar 2013

  • Publication date: 1392/01/20
  • Number of articles: 8
  • Shaghayegh Haghjooy Javanmar, Hossein Rabbani Page 1
  • Farzaneh Shayegh, Jean-Jacques Bellanger, Saeed Sadri, Rasoul Amirfattahi, Karim Ansari-Asl, Lotfi Senhadji Page 2
    Neural mass models are nonlinear computational models that simulate the activity of a population of neurons as an average neuron, in such a way that different inhibitory postsynaptic potential (IPSP) and excitatory postsynaptic potential (EPSP) signals can be reproduced. These models have been developed either to simulate recognized neural mechanisms or to predict physiological behavior that is not easy to observe directly. The role of variations in excitatory and inhibitory activity in seizure genesis has been established, but it is not evident how these activities influence the appearance of seizure-like signals. In this paper, a population model is considered in which the physiological interrelation of the pyramidal cells and interneurons of the hippocampus is appropriately modeled. The average neurons of this model are assumed to act as a linear filter followed by a nonlinear function. By changing the gains of the excitatory and inhibitory activities, modeled as the gains of the filters, seizure-like signals can be generated. Through an analysis of this nonlinear model by means of describing-function concepts, it is shown theoretically that not only the gains of the excitatory and inhibitory activities but also the time constants may play an effective role in seizure genesis.
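    A minimal sketch of the filter-plus-nonlinearity structure described above, using the standard Jansen-Rit neural mass model as a stand-in (the paper's hippocampal pyramidal/interneuron model differs; all parameter values below are generic literature defaults, not the authors'):

    import numpy as np

    # Generic Jansen-Rit parameters (literature defaults, not the paper's model).
    A, B = 3.25, 22.0        # excitatory / inhibitory synaptic gains (mV)
    a, b = 100.0, 50.0       # reciprocal synaptic time constants (1/s)
    C = 135.0
    C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C

    def S(v, e0=2.5, r=0.56, v0=6.0):
        # Static nonlinearity: average membrane potential -> average firing rate.
        return 2 * e0 / (1 + np.exp(r * (v0 - v)))

    dt, T = 1e-4, 5.0
    y = np.zeros(6)          # three 2nd-order synaptic "filters": states + derivatives
    out = np.empty(int(T / dt))
    rng = np.random.default_rng(0)

    for k in range(out.size):
        y0, y1, y2, y3, y4, y5 = y
        p = rng.uniform(120.0, 320.0)   # stochastic afferent input (pulses/s)
        dy = np.array([
            y3, y4, y5,
            A * a * S(y1 - y2) - 2 * a * y3 - a ** 2 * y0,
            A * a * (p + C2 * S(C1 * y0)) - 2 * a * y4 - a ** 2 * y1,
            B * b * C4 * S(C3 * y0) - 2 * b * y5 - b ** 2 * y2,
        ])
        y = y + dt * dy                 # forward Euler integration step
        out[k] = y1 - y2                # EEG-like output: net PSP on pyramidal cells

    Sweeping the gains A and B, or the time constants 1/a and 1/b, moves a model of this kind between background-like and rhythmic, seizure-like output, which is the kind of dependence the paper analyzes with describing functions.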
  • Mehdi Siahi, Javad Razjouyan, Omid Khayat, Ali Alizadeh Mansoori Page 15
    Digital foot scanners have been developed in recent years to provide anthropometrists with a digital image of the sole together with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining the Gray Level Spatial Correlation (GLSC) histogram and Shanbhag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC histogram and the transform function of the membership values. The resulting binary (thresholded) images then undergo anthropometric measurements, taking into account the scale factor from pixel size to metric units. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. Computation time and the effects of the GLSC parameters are investigated in the simulation results.
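    As a rough illustration of entropy-based image thresholding, the sketch below implements the simpler 1-D maximum-entropy (Kapur) criterion; it is a stand-in for, not a reimplementation of, the paper's GLSC-histogram/Shanbhag formulation, and the synthetic image stands in for a real plantar scan:

    import numpy as np

    def kapur_threshold(image, bins=256):
        # Pick the threshold maximizing the summed entropies of the
        # background and foreground gray-level distributions.
        hist, _ = np.histogram(image, bins=bins, range=(0, bins))
        p = hist / hist.sum()
        best_t, best_h = 0, -np.inf
        for t in range(1, bins - 1):
            pb, pf = p[:t].sum(), p[t:].sum()
            if pb == 0 or pf == 0:
                continue
            qb, qf = p[:t] / pb, p[t:] / pf
            hb = -np.sum(qb[qb > 0] * np.log(qb[qb > 0]))
            hf = -np.sum(qf[qf > 0] * np.log(qf[qf > 0]))
            if hb + hf > best_h:
                best_h, best_t = hb + hf, t
        return best_t

    # Usage on a synthetic bimodal image (dark background, bright sole region):
    rng = np.random.default_rng(1)
    img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
    img = np.clip(img, 0, 255).astype(np.uint8).reshape(100, 100)
    binary = img > kapur_threshold(img)   # True = candidate foot region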
  • Raheleh Kafieh, Alireza Mehridehnavi Page 22
    In this study, we considered some competitive learning methods, including hard competitive learning (HCL) and soft competitive learning (SCL) with/without fixed network dimensionality, for reliability analysis in microarrays. In order to have a more extensive view, and keeping in mind that competitive learning methods aim at error minimization or entropy maximization (different kinds of function optimization), we decided to investigate the abilities of mixture decomposition schemes as well. This study therefore covers algorithms based on function optimization, with particular emphasis on different competitive learning methods. The goal is to find the most powerful method according to a pre-specified criterion determined with numerical methods and matrix similarity measures. Furthermore, before applying a clustering algorithm, we should provide an indication of the intrinsic ability of the dataset to form clusters. We therefore proposed the Hopkins statistic as a method for measuring the intrinsic clusterability of a dataset. The results show the remarkable ability of the Rayleigh mixture model in comparison with the other methods in the reliability analysis task.
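    The Hopkins statistic mentioned above can be computed with a standard nearest-neighbor formulation; in this sketch the sampling fraction and function names are our own choices, not the paper's:

    import numpy as np
    from scipy.spatial import cKDTree

    def hopkins(X, m=None, seed=0):
        # H ~ 0.5 for spatially random data; H near 1 indicates clustering tendency.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        m = m or max(1, n // 10)               # common choice: ~10% of the points
        tree = cKDTree(X)
        # u: distances from uniform probes (in the data bounding box) to the data.
        U = rng.uniform(X.min(0), X.max(0), size=(m, d))
        u = tree.query(U, k=1)[0]
        # w: distances from sampled data points to their nearest *other* point.
        idx = rng.choice(n, m, replace=False)
        w = tree.query(X[idx], k=2)[0][:, 1]
        return u.sum() / (u.sum() + w.sum())

    # Two well-separated Gaussian blobs score higher than uniform noise:
    rng = np.random.default_rng(2)
    blobs = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(4, 0.3, (100, 2))])
    print(hopkins(blobs), hopkins(rng.uniform(0, 4, (200, 2))))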
  • Maryam Atarod, Parvaneh Shokrani Page 31
    Background
    Because of the adverse effects of ionizing radiation on the fetus, fetal dose should be estimated prior to radiotherapy of pregnant patients. Fetal dose has been studied by several authors at different depths in phantoms with various abdomen thicknesses (AT). In this study, the effect of maternal AT and depth on fetal dosimetry was investigated using peripheral dose (PD) distribution evaluations.
    Methods
    A BEAMnrc model of an Oncor linac including out-of-beam components was used for dose calculations beyond the field border. A 6 MV photon beam was used to irradiate a chest phantom. Measurements were made using EBT2 radiochromic film in an RW3 phantom serving as the abdomen. The following were measured for different ATs: depth PD profiles at two distances from the field's edge, and in-plane PD profiles at two depths.
    Results
    The results of this study show that PD is depth dependent near the field's edge. An increase in AT changes neither the depth of maximum PD nor its distribution as a function of distance from the field's edge.
    Conclusions
    It is concluded that estimating the maximum fetal dose using a flat phantom, i.e., without taking the abdomen thickness into account, is possible. Also, an in-plane profile measured at any depth can represent the dose variation as a function of distance. However, in order to estimate the maximum PD, the in-plane profile should be measured at the out-of-field depth of maximum dose (Dmax).
  • Mohammad Reza Daliri, Mitra Taghizadeh, Kavous Salehzadeh Niksirat Page 37
    The human visual system recognizes objects rapidly, and the neural activity of the human brain generates signals that provide information about the object categories seen by the subject. These brain signals can be recorded using different systems, such as the electroencephalogram (EEG). EEG signals carry significant information about the stimuli that stimulate the brain. In order to extract this information from the EEG for the object recognition mechanism, twelve different categories were selected as visual stimuli in this study and presented to the subjects in a controlled task, and the signals were recorded through a 19-channel EEG recording system. Analysis of the signals was performed using two different event-related potential (ERP) computations, namely the "target/rest" and "target/non-target" tasks. Comparing the ERP of target with rest time indicated that the electrodes most involved in our task were F3, F4, C3, C4, Fz, and Cz, among others. The "target/non-target" ERP showed that for target stimuli two positive peaks occurred about 400 ms and 520 ms after stimulus onset, whereas for non-target stimuli only one positive peak appeared, about 400 ms after stimulus onset. Moreover, the subjects' reaction times were computed, and the results showed that the flower category had the lowest reaction time, whereas the stationery category had the highest. The results provide useful information about the channels and the parts of the signals that are affected by different object categories in terms of ERP brain signals. This study can be considered a first step in the context of human-computer interface applications.
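    The "target/non-target" ERP computation described above amounts to baseline-corrected averaging of stimulus-locked epochs; the sketch below shows this on synthetic data (the sampling rate, epoch window, and array layout are our assumptions, not details from the paper):

    import numpy as np

    def erp(eeg, onsets, fs, tmin=-0.1, tmax=0.6):
        # Average stimulus-locked epochs; tmin must be negative so that a
        # pre-stimulus baseline exists. eeg: (n_channels, n_samples).
        i0, i1 = int(tmin * fs), int(tmax * fs)
        epochs = []
        for e in onsets:
            if e + i0 >= 0 and e + i1 <= eeg.shape[1]:
                ep = eeg[:, e + i0:e + i1]
                ep = ep - ep[:, :-i0].mean(axis=1, keepdims=True)  # baseline correction
                epochs.append(ep)
        return np.mean(epochs, axis=0)         # (n_channels, n_times)

    # Usage with synthetic data: 19 channels, 250 Hz, an event every 2 s.
    fs = 250
    rng = np.random.default_rng(3)
    eeg = rng.normal(size=(19, 60 * fs))
    erp_target = erp(eeg, np.arange(fs, 59 * fs, 2 * fs), fs)
    # A peak ~400 ms post-stimulus would appear around column
    # int((0.4 - (-0.1)) * fs) of erp_target.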
  • Raheleh Kafieh, Hossein Rabbani, Saeed Kermani Page 45
    Optical coherence tomography (OCT) is a powerful imaging modality used to image various aspects of biological tissues, such as structural information, blood flow, elastic parameters, change of polarization states, and molecular content [1]. In contrast to OCT technology development, which has been a field of active research since 1991, OCT image segmentation has only been more fully explored during the last decade. Segmentation, however, remains one of the most difficult and at the same time most commonly required steps in OCT image analysis. No typical segmentation method exists that can be expected to work equally well for all tasks [2]. One of the most challenging problems in OCT image segmentation is designing a system that works properly in clinical applications. There is no doubt that algorithms and research projects work on a limited number of images with certain abnormalities (or even on normal subjects), and such limitations make them more appropriate for the bench than for the bedside. Moreover, OCT images are inherently noisy, thus often requiring the utilization of 3D contextual information. Furthermore, the structure of the retina can change drastically during disease. Nevertheless, OCT image segmentation is a rapidly growing and important area, and a great deal of effort has gone into designing algorithms for automatic segmentation of retinal OCTs. OCT image segmentation algorithms may be organized into four steps: 1) determining a particular range of OCT datasets (e.g., 2D, 3D, time domain, spectral domain, macular, ONH, etc.) for which the proposed algorithm will work properly; 2) allocating proper values for the parameters of the algorithm; 3) running the algorithm on the determined datasets and acquiring the outcomes (e.g., boundary information, thickness values, classification of normal and abnormal images, etc.); 4) validating the results by comparison with gold standards or by confirmation from an expert. Here, we review some of the important image segmentation methods for processing retinal OCT images. We classify OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm: methods applicable to A-scans, methods applicable to B-scans, active contour approaches (frequently in 2D), analysis methods utilizing artificial intelligence, and segmentation methods using 3D graphs constructed from 3D OCT volumetric images. Algorithms representing each class are discussed in detail, and concluding remarks compare the efficiency of the different methods on different datasets. It should be noted that the intrinsically noisy structure of retinal OCTs (particularly in older OCTs and in diseased eyes) makes simple edge detection algorithms unsuitable for this purpose, and researchers have tried to develop new algorithms to overcome such problems.
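    As a toy illustration of the graph-based family (the fifth group above), the following dynamic-programming sketch traces one layer boundary across a B-scan as a minimal-cost left-to-right path; it is a generic teaching example, not a reimplementation of any method reviewed in the paper:

    import numpy as np

    def trace_boundary(bscan):
        # Edge cost: small where the vertical intensity gradient is strong.
        grad = np.gradient(bscan.astype(float), axis=0)
        cost = 1.0 - (grad - grad.min()) / (grad.max() - grad.min() + 1e-9)
        rows, cols = cost.shape
        acc = cost.copy()
        # Accumulate minimal path cost column by column (row steps of -1/0/+1).
        for c in range(1, cols):
            for r in range(rows):
                acc[r, c] += acc[max(0, r - 1):min(rows, r + 2), c - 1].min()
        # Backtrack from the cheapest endpoint in the last column.
        path = [int(np.argmin(acc[:, -1]))]
        for c in range(cols - 2, -1, -1):
            r = path[-1]
            lo = max(0, r - 1)
            path.append(lo + int(np.argmin(acc[lo:min(rows, r + 2), c])))
        return np.array(path[::-1])            # boundary row for each column

    # Usage on a synthetic B-scan containing one bright layer:
    img = np.zeros((50, 80))
    img[25:28, :] = 1.0
    print(trace_boundary(img)[:5])             # rows near the layer's top edge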
  • Viroj Wiwanitkit Page 61