Table of Contents

Journal of Information Technology Management
Volume: 12, Issue: 4, Autumn 2020

  • The Importance of Human Computer Interaction: Challenges, Methods and Applications
  • Publication date: 1399/11/21
  • Number of titles: 8
  • Surendheran Kaliyaperumal*, Mallika Vijayakumar Pages 1-10

    Recently, Massive Open Online Courses (MOOCs) have become sensational in the field of distance learning. A plethora of advantages is listed for learning through MOOCs, but this pedagogy falls short in a few areas when compared with traditional classes. One such shortcoming of MOOCs is their limited support for preparing students for laboratory-based courses. The authors of this study chose a MOOC course that teaches digital photography and created an Augmented Reality (AR) experience for a module that explains the different parts of a digital camera. Second-year Multimedia students of Vellore Institute of Technology were asked to watch the MOOC video followed by the AR experience. Their feedback before and after the AR experience was statistically tested and reported (see the sketch after this entry's keywords). The results revealed that the students felt more confident and concentrated more when the instructional video was presented as an AR experience. This study suggests that AR-integrated MOOC modules might help in training students better for practice-based courses.

    Keywords: MOOC, Augmented reality in education, Distance learning, Practice-based courses
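
    The abstract does not name the statistical test that was applied to the pre/post feedback; purely as a hedged illustration, a paired comparison of Likert-style confidence ratings could be run as a paired t-test, as sketched below. The scores are placeholders, not the study's data.

```python
# Hypothetical sketch: paired t-test on pre/post AR-experience feedback scores.
# The ratings below are illustrative placeholders, not data from the study.
from scipy import stats

pre_ar  = [3, 4, 2, 3, 3, 4, 2, 3, 4, 3]   # confidence ratings before the AR experience
post_ar = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4]   # ratings after the AR experience (same students)

t_stat, p_value = stats.ttest_rel(post_ar, pre_ar)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Feedback improved significantly after the AR experience.")
```
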
  • Omar Ibrahim Obaid, Mazin Mohammed *, Salama A. Mostafa Pages 11-21

    Coronavirus disease (COVID-19) is a major problem that causes suffering worldwide. Yet, traditional prediction models are not efficient enough at capturing the underlying patterns, as they cannot represent the complexity of the health-related problem areas. This paper presents a prediction mechanism that uses a deep learning model called Long Short-Term Memory (LSTM). We applied this model to a coronavirus dataset obtained from records of infections, deaths, and recovery cases across the world (a hedged sketch follows this entry's keywords). Furthermore, by producing a dataset that includes features of geographic regions that have experienced severe virus outbreaks (temperature and humidity), risk factors, spatio-temporal analysis, and the social behavior of people, a predictive model can be developed for areas where the virus is likely to spread. The outcomes of this study can help alert the authorities and the public to take precautions.

    Keywords: Deep learning, LSTM, Prediction, Covid-19, Recurrent Neural Network (RNN)
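
    The abstract does not give the network configuration; the sketch below is a minimal, assumed single-layer LSTM that forecasts the next value of a daily case series from a sliding window of previous values, using a synthetic placeholder series rather than the paper's dataset.

```python
# Hedged sketch of an LSTM forecaster for a daily COVID-19 case series.
# Window length, layer sizes, and the synthetic series are assumptions for illustration.
import numpy as np
import tensorflow as tf

series = np.cumsum(np.random.poisson(50, size=200)).astype("float32")  # placeholder cumulative cases
series = series / series.max()                                         # simple scaling

window = 7
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                      # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=16, verbose=0)

next_day = model.predict(series[-window:].reshape(1, window, 1), verbose=0)
print("Predicted (scaled) next-day value:", float(next_day[0, 0]))
```
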
  • S. Akila Agnes, J. Anitha *, A. Arun Solomon Pages 22-35

    The continuing outbreak of COVID-19 pneumonia is globally concerning. Timely detection of infection ensures prompt quarantine of the patient, which is crucial for preventing the rapid spread of this contagious disease, and also supports the patient with the necessary medication. Due to the high infection rate of COVID-19, the health management system needs an automatic diagnosis tool that enables health workers to pay immediate attention to those in need. Chest CT is an essential imaging technique for the diagnosis and staging of the 2019 novel coronavirus disease (COVID-19). The identification of COVID-19 CT findings assists health workers in further clinical evaluation; in particular, when the findings on CT scans are subtle, the person may be recommended for reverse-transcription polymerase chain reaction (RT-PCR) tests. The literature reports that ground-glass opacity (GGO), with or without consolidation, is the dominant CT finding in COVID-19 patients. In this paper, a U-Net-based segmentation approach is proposed to automatically segment and analyze the GGO and consolidation findings in chest CT scans. The performance of this system is evaluated by comparing the auto-segmented infection regions with the manually outlined ones on 100 axial chest CT scans of around 40 COVID-19 patients from the SIRM dataset (a hedged evaluation sketch follows this entry's keywords). The proposed U-Net with a pre-processing step yields a specificity of 0.91 ± 0.09 and a sensitivity of 0.87 ± 0.07 for segmenting the GGO region, and a specificity of 0.81 ± 0.13 and a sensitivity of 0.44 ± 0.17 for segmenting the consolidation region. The experimental results also confirmed that the automatic detection method identifies the CT findings with a precise opacification percentage from the chest CT image.

    Keywords: Covid-19, CT imaging findings, Segmentation, Deep learning, Ground-glass opacities, U-Net
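
    The paper's segmentation pipeline is not reproduced here; as a minimal sketch, and assuming that both the U-Net prediction and the manual annotation are binary masks of the same slice, per-slice sensitivity and specificity can be computed as below.

```python
# Hedged sketch: per-slice sensitivity and specificity of a predicted binary
# infection mask against a manually outlined ground-truth mask.
import numpy as np

def sensitivity_specificity(pred: np.ndarray, truth: np.ndarray):
    """Both inputs are boolean masks of the same shape (e.g., one axial CT slice)."""
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Placeholder masks standing in for a U-Net prediction and a manual outline.
rng = np.random.default_rng(0)
truth = rng.random((512, 512)) > 0.9
pred = np.logical_or(truth, rng.random((512, 512)) > 0.98)   # noisy over-segmentation
print(sensitivity_specificity(pred, truth))
```
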
  • Diana Andrushia *, Mary Neebha, Thusnavis Bella Mary Pages 36-49

    The production of counterfeit paper currency has become cheaper because of advances in printing technologies. The circulation of counterfeit currency drags down the economy of a country. Hence, there is a mandate to develop an intelligent technique for the detection and classification of counterfeit currency. Intelligent techniques also play a major role in the field of Human Computer Interaction (HCI). This paper deals with the detection of counterfeit Indian currency. In the proposed method, feature extraction is based on the characteristics of Indian paper currency. First-order and second-order statistical features are extracted from the input, and the effective feature vectors are given to an SVM classifier for classification (a hedged feature-extraction sketch follows this entry's keywords). The proposed method produced a classification accuracy of 95.8%. The experimental results are compared with state-of-the-art methods and produce reliable results.

    Keywords: Counterfeit currency, Indian paper currency, SVM, Intelligent system, Currency detection
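
    The exact feature set is not specified in the abstract; the sketch below assumes first-order histogram statistics plus second-order GLCM (gray-level co-occurrence matrix) properties feeding a scikit-learn SVM, with placeholder images standing in for banknote scans.

```python
# Hedged sketch: first/second-order statistical features + SVM for genuine/counterfeit
# note classification. Feature choices and data are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def extract_features(gray: np.ndarray) -> np.ndarray:
    """gray: 8-bit grayscale image of a banknote (or a region of interest)."""
    first_order = [gray.mean(), gray.std(), float(np.median(gray))]
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    second_order = [float(graycoprops(glcm, p)[0, 0])
                    for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(first_order + second_order)

# Placeholder dataset: random "notes" with labels 1 = genuine, 0 = counterfeit.
rng = np.random.default_rng(42)
images = [rng.integers(0, 256, (128, 256), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 2, 20)

X = np.stack([extract_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print("Predicted class of first note:", clf.predict(X[:1]))
```
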
  • J. Andrew Onesimu *, J Karthikeyan Pages 50-67

    In recent years, privacy has emerged as one of the major concerns of deep learning, since it requires a huge amount of personal data. Medical image analysis is one of the prominent areas where sensitive data are shared with a third-party service provider. In this paper, a secure deep learning scheme called Metamorphosed Learning (MpLe) is proposed to protect the privacy of images in medical image analysis. An augmented convolutional layer and image morphing are the two main components of the MpLe scheme. Data providers morph the images to strip privacy information using the image-morphing component (a hedged illustration follows this entry's keywords). The human-unrecognizable image is then delivered to the service providers, who apply deep learning algorithms on the morphed data using the augmented convolution layer without any performance penalty. MpLe provides sturdy security and privacy with optimal computational overhead. The proposed scheme is evaluated using a VGG-16 network on the CIFAR dataset. The performance of MpLe is compared with similar works such as GAZELLE and MiniONN, and it is found that MpLe incurs much lower computational and data-transmission overhead. MpLe is also analyzed against various adversarial attacks, and the attack success rate is found to be very low. The efficiency of the proposed scheme is demonstrated through experimental and performance analysis.

    Keywords: Deep learning, Data privacy, Image privacy, Medical image analysis, Data morphing
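
    The abstract does not describe how MpLe morphs images or augments the convolutional layer. Purely as an assumed illustration of the general idea (transform pixels with a secret key on the data provider's side before sharing), a keyed pixel permutation could look like the sketch below; this is not the authors' algorithm.

```python
# Hypothetical illustration only: a keyed pixel permutation as a stand-in for
# "image morphing" before sending data to an untrusted service provider.
# MpLe's actual morphing and augmented convolution layer are not reproduced here.
import numpy as np

def morph(image: np.ndarray, key: int) -> np.ndarray:
    """Shuffle pixel positions with a permutation derived from a secret key."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(image.size)
    return image.reshape(-1)[perm].reshape(image.shape)

def unmorph(morphed: np.ndarray, key: int) -> np.ndarray:
    """Invert the permutation; only the key holder can recover the image."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(morphed.size)
    out = np.empty(morphed.size, dtype=morphed.dtype)
    out[perm] = morphed.reshape(-1)
    return out.reshape(morphed.shape)

img = np.arange(32 * 32, dtype=np.uint8).reshape(32, 32)   # placeholder 32x32 image
assert np.array_equal(unmorph(morph(img, key=1234), key=1234), img)
```
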
  • Hikmet Tosyalı *, Çiğdem Aytekin Pages 68-88
    Today, news texts can be created automatically and presented to readers without human participation through technologies and methods such as big data, deep learning, and natural language generation. With this research, we have developed an application that can contribute to the literature on robot journalism applications from a technology-reductionist perspective. The robot journalism application named Robottan Al Haberi (the English equivalent of the application name is “get the news from the robot”) produces news texts by placing weather, exchange-rate, and earthquake data in certain templates. The news texts, which are produced by placing the data in the appropriate slots of a template and have a maximum length of 280 characters, are automatically shared via the Twitter account @robottanalhaber (a hedged template sketch follows this entry's keywords). The weather information is shared once a day, the exchange-rate information three times a day, and the earthquake information instantly. Here, we aim to produce short, automatic news items by placing the available structured data in specific news templates that suggest different options, or a combination of them, for different situations.
    Keywords: Natural language generation, Artificial Intelligence, Robot journalism, Data journalism, Newspapers
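
    The application's actual templates and data feeds are not given in the abstract; the sketch below assumes a simple slot-filling template for an earthquake item and only enforces the 280-character limit, leaving the actual posting via the Twitter API out of scope.

```python
# Hedged sketch: template-based news generation with a 280-character limit.
# The template wording and the data record are assumptions, not the app's real ones.
TWEET_LIMIT = 280

EARTHQUAKE_TEMPLATE = (
    "A magnitude {magnitude} earthquake occurred near {place} at {time} "
    "at a depth of {depth} km."
)

def render_news(template: str, record: dict) -> str:
    text = template.format(**record)
    if len(text) > TWEET_LIMIT:                 # keep the item postable as a single tweet
        text = text[:TWEET_LIMIT - 1] + "…"
    return text

record = {"magnitude": 4.2, "place": "Izmir", "time": "14:35", "depth": 7.1}
print(render_news(EARTHQUAKE_TEMPLATE, record))
# Posting the rendered text to Twitter is intentionally omitted here.
```
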
  • Karanveer Singh*, Rahul Tiwari, Prashant Johri, Ahmed A. Elngar Pages 89-108

    Stock market prediction is the act of trying to determine the future value of a particular sector, or of the market as a whole. Every investor needs to foresee the future value of stocks, so an accurate forecast of a stock’s future price could return an enormous benefit. Increasing the accuracy of stock market prediction under daily changes in market value is a bottleneck task. Existing work on stock market prediction has focused on forecasting the regular stock market using various machine learning algorithms and in-depth methodologies. The proposed work implements a new neural network model with the help of different learning techniques, such as hyperparameter tuning, which includes batch normalization and fitting the model with the help of random-search-cv (a hedged sketch follows this entry's keywords). Stock market prediction is an active area of research and competition on Numerai, the most robust data science competition for stock market prediction. Numerai provides new datasets weekly to mold the most exceptional prediction model. The dataset has 310 features, and there are more than 100,000 entries per week. Our proposed neural network model gives an accuracy of close to 86%. Critically, it is not easy to compare our proposed model with existing models, because we train and test the proposed model on a new unlabeled dataset every week. Our ultimate aim in participating in the Numerai competition is to suggest a neural network methodology to forecast the stock market, independent of the dataset, with reasonable accuracy.

    Keywords: neural network, Stock market prediction, Numerai, NMR, Deep learning
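
    The abstract mentions batch normalization and random-search-cv but not the exact architecture or search space. As a hedged sketch, a small Keras network with a BatchNormalization layer can be tuned by a hand-rolled random search over a few hyperparameters, as below; only the 310-feature width comes from the abstract, everything else is assumed.

```python
# Hedged sketch: a small dense network with batch normalization, tuned by a
# manual random search over hyperparameters. Search space and data are assumptions;
# only the 310-feature width comes from the abstract.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 310)).astype("float32")   # placeholder Numerai-like features
y = rng.integers(0, 2, 2000)                          # placeholder binary targets

def build_model(units: int, dropout: float, lr: float) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(310,)),
        tf.keras.layers.Dense(units),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Activation("relu"),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

best = None
for _ in range(5):                                    # random search over 5 sampled configs
    cfg = {"units": int(rng.choice([64, 128, 256])),
           "dropout": float(rng.uniform(0.1, 0.5)),
           "lr": float(10 ** rng.uniform(-4, -2))}
    model = build_model(**cfg)
    hist = model.fit(X, y, validation_split=0.2, epochs=5, batch_size=64, verbose=0)
    score = max(hist.history["val_accuracy"])
    if best is None or score > best[0]:
        best = (score, cfg)

print("Best validation accuracy and config:", best)
```
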
  • Malin Bruntha *, Immanuel Alex Pandian, Siril Sam Abraham Pages 109-128
    Deep learning techniques have become very popular among Artificial Intelligence (AI) techniques in many areas of life. Among the many types of deep learning techniques, Convolutional Neural Networks (CNNs) are useful in image classification applications. In this work, a hybridized approach has been followed to classify a lung nodule as benign or malignant. This will help in the early detection of lung cancer and improve the life expectancy of lung cancer patients, thereby reducing the mortality rate of this deadly disease. The hybridization has been carried out between handcrafted features and deep features. Machine learning algorithms such as SVM and logistic regression have been used to classify the nodules based on these features. The dimensionality-reduction technique Principal Component Analysis (PCA) has been introduced to improve the performance of the hybridized features with SVM (a hedged pipeline sketch follows this entry's keywords). The experiments have been carried out with 14 different methods. It was found that GLCM + VGG19 + PCA + SVM outperformed all other models with an accuracy of 94.93%, a sensitivity of 90.9%, a specificity of 97.36%, and a precision of 95.44%. The F1 score was 0.93 and the AUC was 0.9843. The false positive rate was 2.637% and the false negative rate was 9.09%.
    Keywords: CNN, Transfer Learning, GLCM, SVM, PCA
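
    The preprocessing, nodule patches, and the exact layer used for deep features are not given in the abstract; the sketch below assumes GLCM texture features concatenated with average-pooled VGG19 features (ImageNet weights), reduced by PCA and classified with an SVM, using placeholder image patches.

```python
# Hedged sketch of the GLCM + VGG19 + PCA + SVM hybrid pipeline.
# The patch size, feature choices, and synthetic data are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.svm import SVC
import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet", pooling="avg")

def glcm_features(gray_u8: np.ndarray) -> np.ndarray:
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([float(graycoprops(glcm, p)[0, 0])
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

def deep_features(gray_u8: np.ndarray) -> np.ndarray:
    rgb = np.repeat(gray_u8[..., None], 3, axis=-1).astype("float32")   # replicate to 3 channels
    rgb = tf.image.resize(rgb[None], (224, 224))
    rgb = tf.keras.applications.vgg19.preprocess_input(rgb)
    return vgg.predict(rgb, verbose=0)[0]                               # 512-dim feature vector

# Placeholder nodule patches with labels 1 = malignant, 0 = benign.
rng = np.random.default_rng(1)
patches = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(12)]
labels = rng.integers(0, 2, 12)

X = np.stack([np.concatenate([glcm_features(p), deep_features(p)]) for p in patches])
X_red = PCA(n_components=8).fit_transform(X)           # reduce the hybrid feature vector
clf = SVC(kernel="rbf").fit(X_red, labels)
print("Training accuracy (toy data):", clf.score(X_red, labels))
```
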