Table of Contents

Journal of Medical Signals and Sensors
Volume: 3, Issue: 4, Oct-Dec 2013

  • Publication date: 1392/09/25
  • Number of articles: 8
  • Saeid Rashidi, Ali Fallah, Farzad Towhidkhah Page 195
    With the growth of communication and financial transactions over the internet, on-line signature verification has become an accepted biometric technology for access control and plays a significant role in authentication and authorization in modern society. Fast and precise signature verification algorithms are therefore very attractive. The goal of this paper is to model the velocity signal, whose pattern and properties are stable for each person. Using pole-zero models based on the discrete cosine transform, a precise modeling method is proposed, and features are then extracted from strokes. Using linear, Parzen-window, and support vector machine classifiers, the signature verification technique was tested on a large number of authentic and forged signatures and demonstrated good potential. The signatures were collected from three different databases: a proprietary database, the SVC2004 benchmark database, and the Sabanci University signature database (SUSIG). Experimental results on the Persian, SVC2004, and SUSIG databases show that the method achieves equal error rates of 5.91%, 5.62%, and 3.91% on skilled forgeries, respectively.
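    A minimal Python sketch of the general flow described above, under loud assumptions: it derives DCT coefficients of the pen-velocity magnitude as features and trains an SVM to separate genuine from forged signatures. The function names, feature dimension, and the use of scipy/scikit-learn are illustrative placeholders; the paper itself fits pole-zero models to the DCT-domain signal per stroke, which is not reproduced here.

      import numpy as np
      from scipy.fft import dct
      from sklearn.svm import SVC

      def velocity_features(x_pos, y_pos, n_coeff=20):
          # Pen-tip speed profile -> its leading DCT coefficients (compact shape code).
          vx, vy = np.diff(x_pos), np.diff(y_pos)
          speed = np.hypot(vx, vy)
          c = dct(speed, norm='ortho')[:n_coeff]
          return np.pad(c, (0, n_coeff - len(c)))     # pad very short signatures

      def train_verifier(genuine, forged):
          # genuine / forged: lists of (x, y) trajectory arrays (assumed available).
          X = np.vstack([velocity_features(x, y) for x, y in genuine + forged])
          y = np.array([1] * len(genuine) + [0] * len(forged))
          return SVC(kernel='rbf', gamma='scale').fit(X, y)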
  • Ehsan Akafi, Mansour Vali, Negin Moradi, Kowsar Baghban Page 209
    Hypernasality is a frequently occurring resonance disorder in children with cleft palate. An operation is generally necessary to reduce hypernasality, so an assessment of hypernasality is imperative to quantify the effect of the surgery and to design the speech therapy sessions that are crucial afterwards. In this paper, a new quantitative method is proposed to estimate hypernasality. The method exploits the fact that an autoregressive (AR) model of the vocal tract of a patient with hypernasal speech is not accurate, because zeros appear in the frequency response of the vocal tract. Hypernasality is therefore estimated by a quantity obtained by comparing the distance between the sequences of cepstrum coefficients extracted from an AR model and an autoregressive moving average (ARMA) model. K-means and Bayes' theorem were used to classify the subjects' utterances by means of the proposed index. We achieved accuracies of up to 81.12% on utterances and 97.14% on subjects. Since the proposed method requires only computer processing of speech data, it provides a simpler evaluation of hypernasality than other clinical methods.
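    The core comparison can be sketched in Python as follows; this is an assumption-laden illustration, not the authors' implementation: an AR and an ARMA model are fitted to a speech frame with statsmodels, each model's real cepstrum is computed from its frequency response, and the distance between the two cepstral sequences serves as a crude hypernasality index. Model orders and names are placeholders.

      import numpy as np
      from scipy.signal import freqz
      from statsmodels.tsa.arima.model import ARIMA

      def model_cepstrum(b, a, n_cep=20, n_fft=512):
          # Real cepstrum of a rational (pole-zero) model's magnitude response.
          _, h = freqz(b, a, worN=n_fft, whole=True)
          log_mag = np.log(np.abs(h) + 1e-12)
          return np.real(np.fft.ifft(log_mag))[:n_cep]

      def hypernasality_index(frame, p=12, q=4):
          ar_fit = ARIMA(frame, order=(p, 0, 0)).fit()      # all-pole AR(p) model
          arma_fit = ARIMA(frame, order=(p, 0, q)).fit()    # pole-zero ARMA(p, q) model
          c_ar = model_cepstrum([1.0], np.r_[1.0, -ar_fit.arparams])
          c_arma = model_cepstrum(np.r_[1.0, arma_fit.maparams],
                                  np.r_[1.0, -arma_fit.arparams])
          return np.linalg.norm(c_ar - c_arma)              # larger gap -> stronger zeros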
  • Rasoul Amirfattahi Page 217
    Due to its simplicity, radix-2 is a popular algorithm for implementing the fast Fourier transform. Radix-2^p algorithms have the same order of computational complexity as higher-radix algorithms but retain the simplicity of radix-2. By defining a new concept, the twiddle factor template, this paper proposes a method for the exact calculation of the multiplicative complexity of radix-2^p algorithms. The methodology is described for the radix-2, radix-2^2, and radix-2^3 algorithms. Results show that radix-2^2 and radix-2^3 have significantly lower computational complexity than radix-2. Another interesting result is that, while the number of complex multiplications in the radix-2^3 algorithm is slightly higher than in radix-2^2, the number of real multiplications in radix-2^3 is lower than in radix-2^2.
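    As a point of reference for the kind of counting the paper makes exact, the short Python routine below tallies the non-trivial complex twiddle-factor multiplications of a plain radix-2 decimation-in-time FFT of length N = 2^m, treating multiplications by ±1 and ±j as free. It is only a rough sketch of the bookkeeping, not the twiddle factor template method or the radix-2^2/radix-2^3 counts.

      def radix2_nontrivial_mults(m):
          N = 2 ** m
          total = 0
          for s in range(1, m + 1):                    # FFT stages
              size = 2 ** s                            # butterfly group length at this stage
              half = size // 2                         # distinct twiddle exponents per group
              trivial = 1 + (1 if size >= 4 else 0)    # W^0 = 1 and W^(size/4) = -j are free
              total += (half - trivial) * (N // size)  # each twiddle is reused N/size times
          return total

      # Example: N = 1024, compared with the classical (N/2)*log2(N) upper bound.
      print(radix2_nontrivial_mults(10), (1024 // 2) * 10)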
  • Ahmad Keshtkar, Hadi Seyedarabi, Peyman Sheikhzadeh, Seyed Hossein Rasta Page 225
    There are a variety of electrocardiogram (ECG) based methods for detecting myocardial infarction (MI). This study used the signal-averaged electrocardiogram (SAECG) and its wavelet coefficients as an index to detect MI. Orthogonal-lead signals from 50 acute myocardial infarction (AMI) patients and 50 healthy subjects were selected from the PTB diagnostic database of the national metrology institute of Germany. The signals were filtered and the discrete wavelet transform was applied to them. Four conventional features and two new features introduced in this study were extracted from the SAECG and its wavelet decompositions. Finally, a probabilistic neural network was used for classification. The method was able to detect and discriminate AMI patients from healthy subjects with 93.0% sensitivity, 86.0% specificity, and 89.5% accuracy. This technique and the newly extracted features show good promise for the identification of MI patients: the sensitivity and specificity are comparable with other reported findings, and the accuracy is high even though only six features were extracted.
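    A rough Python sketch of the pipeline shape only (signal averaging and the paper's specific features are omitted): wavelet-decompose an averaged ECG beat, take a few sub-band energy features, and classify with a Parzen-window scheme, which is essentially what a probabilistic neural network amounts to. PyWavelets and per-class Gaussian kernel density estimates stand in for the authors' tools; all names and parameters are assumptions.

      import numpy as np
      import pywt
      from sklearn.neighbors import KernelDensity

      def saecg_features(beat, wavelet='db4', level=4):
          # Energy of each wavelet sub-band of the averaged beat.
          coeffs = pywt.wavedec(beat, wavelet, level=level)
          return np.array([np.sum(c ** 2) for c in coeffs])

      class ParzenPNN:
          # Parzen-window classifier: one kernel density estimate per class.
          def __init__(self, bandwidth=0.5):
              self.bandwidth = bandwidth
          def fit(self, X, y):                          # X: 2-D array, y: label array
              self.classes_ = np.unique(y)
              self.kdes_ = {c: KernelDensity(bandwidth=self.bandwidth).fit(X[y == c])
                            for c in self.classes_}
              return self
          def predict(self, X):
              scores = np.column_stack([self.kdes_[c].score_samples(X)
                                        for c in self.classes_])
              return self.classes_[np.argmax(scores, axis=1)]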
  • Keyvan Kasiri, Kamran Kazemi, Mohammad Javad Dehghani, Mohammad Sadegh Helfroush Page 232
    In this paper, we present a new brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information with a least-squares support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These steps are performed using the FAST toolbox (FMRIB's Automated Segmentation Tool) integrated in the FSL software (FSL-FAST) developed at the Oxford Centre for Functional MRI of the Brain (FMRIB). In the third step, an LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for the LS-SVM are selected from the registered brain atlas. Voxel intensities and spatial positions are used as the two feature groups for training and testing. The SVM is a powerful discriminator able to handle nonlinear classification problems; however, it cannot provide posterior probabilities. We therefore use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM, and WM from simulated MR images generated with the BrainWeb MR image simulator and from real data provided by the Internet Brain Segmentation Repository (IBSR). The automatically segmented tissues were evaluated by comparison with the corresponding ground truth, and the Dice and Jaccard similarity coefficients, sensitivity, and specificity were calculated for quantitative validation. The quantitative results show that the proposed method segments brain tissues accurately with respect to the ground truth.
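    The sigmoid mapping mentioned above can be illustrated with a Platt-style calibration sketch in Python; the LS-SVM itself, the atlas registration, and the FSL-FAST steps are not shown, and a standard scikit-learn SVC stands in for the authors' classifier. Variable names and the feature layout are assumptions.

      import numpy as np
      from scipy.optimize import minimize
      from sklearn.svm import SVC

      def fit_platt_sigmoid(f, y):
          # f: SVM decision values (array), y: 0/1 tissue labels (array).
          def nll(params):
              A, B = params
              p = np.clip(1.0 / (1.0 + np.exp(A * f + B)), 1e-10, 1 - 1e-10)
              return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
          A, B = minimize(nll, x0=[-1.0, 0.0]).x
          return lambda scores: 1.0 / (1.0 + np.exp(A * scores + B))

      # Hypothetical usage, assuming X holds [intensity, x, y, z] voxel features and
      # y01 distinguishes grey matter (1) from white matter (0):
      #   svm = SVC(kernel='rbf').fit(X_train, y01_train)
      #   to_prob = fit_platt_sigmoid(svm.decision_function(X_cal), y01_cal)
      #   gm_posterior = to_prob(svm.decision_function(X_test))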
  • Hamed Ghodrati, Mohammad Javad Dehghani, Habibolah Danyali Page 244
    In current iris recognition systems, the noise removal step is used only to detect noisy parts of the iris region, and the features extracted from those parts are excluded in the matching step. However, depending on the filter structure used for feature extraction, the noisy parts may also influence the relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous work. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor wavelet, used for feature extraction, on iris recognition performance. In addition, an effective noise removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, which decreases the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape-adaptive Gabor-wavelet technique improves the recognition rate.
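    A toy Python sketch of the mask-code idea described above: bits of the iris code that fall on eyelids, eyelashes, or reflections are flagged in a mask and skipped when the fractional Hamming distance between two codes is computed. Array names and sizes are invented; the shape-adaptive Gabor-wavelet feature extraction is not reproduced.

      import numpy as np

      def masked_hamming(code_a, code_b, mask_a, mask_b):
          # Fractional Hamming distance over bits that are valid in both codes.
          valid = mask_a & mask_b                  # True where neither code is noisy
          n_valid = np.count_nonzero(valid)
          if n_valid == 0:
              return 1.0                           # nothing comparable: worst score
          disagreements = np.logical_xor(code_a, code_b) & valid
          return np.count_nonzero(disagreements) / n_valid

      # Example with random 2048-bit codes and a shared mask marking ~80% of bits usable.
      rng = np.random.default_rng(0)
      a = rng.integers(0, 2, 2048).astype(bool)
      b = rng.integers(0, 2, 2048).astype(bool)
      m = rng.random(2048) > 0.2
      print(masked_hamming(a, b, m, m))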
  • Ehsan Habibi, Shiva Soury, Akbar Hasan Zadeh Page 256
    Various studies have been carried out on photo anthropometry, each with deficiencies that have gradually been resolved over the years. The objective of this paper is to test the efficiency of two-dimensional image processing software for photo anthropometry of the hand. In this applied research, 204 office and industrial workers were selected, and their hands were measured with both manual and photo anthropometric methods. By designing a “Hand Photo Anthropometry Set,” we fixed the angle and distance of the camera in all of the photos, so that some of the common mistakes of the photo anthropometric method were controlled. The photos were analyzed with Digimizer software, version 4.1.1.0, and a digital caliper (Mitutoyo Corp., Tokyo, Japan) was used for the manual method. A t-test on the data revealed no significant difference between the manual and photo anthropometric results (P > 0.05), and the correlation coefficients between the two methods for the hand dimensions were in the range of 0.71-0.95. The statistical analyses showed that photo anthropometry can replace manual methods. Furthermore, it can be of great help in developing an anthropometric database for work glove manufacturers. Since hand anthropometry is a necessary input for tool design, this survey can be used to determine the percentiles of workers’ hands.
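    A hedged Python sketch of the reported statistical comparison: a paired t-test and Pearson correlation between the manual (caliper) and photo-based values of one hand dimension. The CSV file and its column names are invented for the illustration.

      import pandas as pd
      from scipy import stats

      df = pd.read_csv('hand_dimensions.csv')     # assumed columns: manual_mm, photo_mm
      t_stat, p_value = stats.ttest_rel(df['manual_mm'], df['photo_mm'])
      r, _ = stats.pearsonr(df['manual_mm'], df['photo_mm'])

      # p_value > 0.05 would mean no significant difference between the methods;
      # r in the 0.71-0.95 range is the level of agreement the study reports.
      print(f"paired t-test p = {p_value:.3f}, Pearson r = {r:.2f}")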
  • Asghar Gholami, Molood Baradaran Ghahfarokhi, Marjan Ebrahimi, Milad Baradaran Ghahfarokhi Page 262
    In recent years, interest in medical applications of lasers, especially as a surgical alternative, has increased considerably due to their distinct advantages such as non-contact intervention, bacteriostasis, less traumatization, minimal invasiveness, decreased bleeding, and less heat damage. The present study aimed to evaluate the temperature changes and the consequent thermal stress produced in cortical bone by an erbium:yttrium aluminum garnet (Er:YAG) laser (Fideliss 320A, Fotona Inc., Deggingen, Germany) during osteotomy, using mathematical computation in Maple software, version 9.5 (Maplesoft, a division of Waterloo Maple Inc., Canada). The computed results were compared with experimental measurements using an Er:YAG laser in osteotomy clinics. A bone slab with a thickness of 1 mm was simulated in Maple, and an Er:YAG laser emitting 100 μs pulses at a wavelength of 2940 nm was modeled. Two clinical settings of the Er:YAG laser, with 200 mJ and 400 mJ pulse energies, both with 100 μs exposure and 500 μs silence, were studied. To investigate the temperature distribution in the cortical bone, the time-dependent heat conduction equations were defined and solved in Maple. Finally, by defining the heat distribution function in Maple, the thermal stress in the bone was investigated. The computations showed that at the irradiated area (the center of the bone surface) the maximum temperature rise was 0.8°C and 1.6°C for the 200 mJ and 400 mJ exposures, respectively. The temperature rise reached its minimum at a radial distance of 1.2 cm from the irradiated area for the 200 mJ laser and 1.5 cm for the 400 mJ laser. For the 200 mJ laser, the maximum radial (σrr), axial (σzz), and azimuthal (σθθ) stress components were 0.20, 0.16, and 0.08 MPa, respectively, while for the 400 mJ laser they were 0.39, 0.31, and 0.16 MPa. These results confirm that 100 μs Er:YAG laser pulses with 500 μs silence at 200 and 400 mJ minimize thermal tissue damage during laser osteotomy, even without continuous water cooling (irrigation) of the exposed area.
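    As a very simplified stand-in for the Maple computation, the Python sketch below solves the one-dimensional heat conduction equation through a 1 mm bone slab with an explicit finite-difference scheme, applying the laser as a pulsed surface flux with 100 μs on / 500 μs off timing. The thermal constants and the absorbed flux value are rough assumptions, so the numbers it prints are not the paper's results.

      import numpy as np

      k, rho, cp = 0.5, 1900.0, 1300.0        # W/(m.K), kg/m^3, J/(kg.K): rough bone values
      alpha = k / (rho * cp)                  # thermal diffusivity
      L, nx = 1e-3, 101                       # 1 mm slab, number of grid points
      dx = L / (nx - 1)
      dt = min(0.4 * dx**2 / alpha, 1e-5)     # stable step that still resolves the pulse
      flux = 1e5                              # assumed absorbed surface flux, W/m^2

      T = np.zeros(nx)                        # temperature rise above baseline, K
      t, period, on_time = 0.0, 600e-6, 100e-6
      while t < 10 * period:                  # ten pulse cycles
          q = flux if (t % period) < on_time else 0.0
          Tn = T.copy()
          Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
          Tn[0] = Tn[1] + q * dx / k          # prescribed-flux boundary at the irradiated face
          Tn[-1] = Tn[-2]                     # insulated back face
          T, t = Tn, t + dt
      print(f"surface temperature rise after 10 pulses: {T[0]:.2f} K")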