Table of Contents


Journal of Information Systems and Telecommunication
Volume:11 Issue: 1, Jan-Mar 2023

  • Publication date: 1402/02/09
  • Number of articles: 8
  • Mohan Kumar, Sunil Kumar Khatri, Masoud Mohammadian Pages 1-11

    Breast cancer is a difficult disease to treat because it weakens the patient's immune system. Cells become cancerous for various reasons and spread quickly while damaging normal cells. In this context, the identification of specific immune signatures for a range of malignancies has recently attracted considerable interest, and several methods for predicting cancer from proteomic datasets and peptides have been reported in the literature in recent years. Accurately classifying and compartmentalizing breast cancer subtypes is a vital task, and computerized systems built on artificial intelligence and machine learning can substantially save time and reduce error. Using the Wisconsin Breast Cancer Diagnostic (WBCD) dataset, this study evaluates the performance of several classification methods, including the Support Vector Classifier (SVC), Extra Trees Classifier (ETC), k-Nearest Neighbors (KNN), Logistic Regression (LR), and Random Forest (RF), on the measurements described in detail in the article. The goal is to determine how well each algorithm performs in terms of precision, recall, and accuracy. Varying the classification threshold was tested on the different algorithms, and the SVM turned out to be very promising.

    Keywords: Artificial Intelligence, Machine Learning, Wisconsin Breast Cancer Diagnostic (WBCD) dataset, k-Nearest Neighbors (k-NN), Support Vector Classifier, Logistic Regression, Extra Trees Classifier, Random Forest
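
    A minimal sketch of the kind of comparison described in the abstract above, using the WBCD data bundled with scikit-learn; it is illustrative only, not the authors' code, and the 70/30 split and default hyperparameters are assumptions.

    ```python
    # Illustrative sketch: compare SVC, Extra Trees, k-NN, Logistic Regression, and
    # Random Forest on the Wisconsin Breast Cancer Diagnostic data bundled with scikit-learn.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

    models = {
        "SVC": make_pipeline(StandardScaler(), SVC()),
        "ETC": ExtraTreesClassifier(random_state=42),
        "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
        "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
        "RF": RandomForestClassifier(random_state=42),
    }

    for name, model in models.items():
        y_pred = model.fit(X_tr, y_tr).predict(X_te)
        print(f"{name}: acc={accuracy_score(y_te, y_pred):.3f} "
              f"prec={precision_score(y_te, y_pred):.3f} "
              f"rec={recall_score(y_te, y_pred):.3f}")
    ```
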
  • Fatemeh Ghanaati, Gholamhossein Ekbatanifard, Kamrad Khoshhal Roudposhti Pages 12-23

    Today, smartphones, due to their ubiquity, have become indispensable in human daily life. Progress in mobile phone technology has recently resulted in the emergence of several popular services such as location-based social networks (LBSNs), in which predicting the next Point of Interest (POI) is an important task. The trajectory data gathered in LBSNs include various contextual information, such as geographical and temporal contextual information (GTCI), that plays a crucial role in next POI recommendation. Various methods, including collaborative filtering (CF) and recurrent neural networks, incorporate the contextual information of the users' trajectory data to predict the next POIs. CF methods do not consider the effect of sequential data on modeling, while the next POI prediction problem is inherently a time sequence problem. Although recurrent models have been proposed for sequential data modeling, they have limitations such as treating different types of contextual information as if they had a similar effect, whereas each type in fact has a distinct impact. In the current study, a geographical temporal contextual information-extended attention gated recurrent unit (GTCI-EAGRU) architecture is proposed to separately consider the influence of geographical and temporal contextual information on next POI recommendation. In this research, the GRU model was extended with three separate attention gates, namely timestamp, geographical, and temporal contextual attention gates, to incorporate the contextual information of the user trajectory data in the recurrent layer of the GTCI-EAGRU architecture. Inspired by the assumption of the matrix factorization method in CF approaches, a ranked list of POI recommendations is provided for each user. Moreover, a comprehensive evaluation was conducted on large-scale real-world datasets from three LBSNs, namely Gowalla, Brightkite, and Foursquare. The results reveal that GTCI-EAGRU outperforms competitive baseline methods in terms of Acc@10 by an average of 42.11% across the three datasets.

    Keywords: LBSN, Trajectory data, Contextual Information, GRU
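
    To make the idea of separate attention gates per context type concrete, the following PyTorch sketch gates temporal and geographical context vectors independently before the recurrent update. The layer sizes and gating form are assumptions for illustration and do not reproduce the exact GTCI-EAGRU equations.

    ```python
    import torch
    import torch.nn as nn

    class ContextAttnGRUCell(nn.Module):
        """Toy GRU cell with separate attention gates for temporal and geographical
        context (illustrative; not the paper's exact GTCI-EAGRU formulation)."""
        def __init__(self, input_dim, ctx_dim, hidden_dim):
            super().__init__()
            self.attn_time = nn.Linear(hidden_dim + ctx_dim, ctx_dim)  # temporal gate
            self.attn_geo = nn.Linear(hidden_dim + ctx_dim, ctx_dim)   # geographical gate
            self.gru = nn.GRUCell(input_dim + 2 * ctx_dim, hidden_dim)

        def forward(self, x, t_ctx, g_ctx, h):
            # each context type gets its own sigmoid attention gate conditioned on h
            a_t = torch.sigmoid(self.attn_time(torch.cat([h, t_ctx], dim=-1)))
            a_g = torch.sigmoid(self.attn_geo(torch.cat([h, g_ctx], dim=-1)))
            return self.gru(torch.cat([x, a_t * t_ctx, a_g * g_ctx], dim=-1), h)

    # toy step: a batch of 4 POI embeddings with 8-dimensional context vectors
    cell = ContextAttnGRUCell(input_dim=32, ctx_dim=8, hidden_dim=64)
    h = torch.zeros(4, 64)
    h = cell(torch.randn(4, 32), torch.randn(4, 8), torch.randn(4, 8), h)
    ```
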
  • Jia Uddin Pages 24-30

    Identifying hazards caused by human error is critical for industrial safety, since dangerous and reckless actions by industrial workers, as well as a lack of protective measures, are directly responsible for human-caused problems. Lack of sleep, poor nutrition, physical deformities, and weariness are some of the key factors that contribute to these risky and reckless behaviors that might put a person in a perilous scenario, causing discomfort, worry, despair, cardiovascular disease, a rapid heart rate, and a slew of other undesirable outcomes. As a result, it would be advantageous to recognize people's mental states in order to provide better care for them. In recent years, researchers have been studying electroencephalogram (EEG) signals to determine a person's stress level at work. A full feature analysis across domains is necessary to develop a successful machine learning model from EEG inputs. In this research, a time-frequency based hybrid bag of features is designed from EEG data to determine human stress, separately for each sex. This collection of features combines two types of assessment: time-domain statistical analysis and frequency-domain wavelet-based feature assessment. The proposed two-layered autoencoder-based neural network (AENN) is then used to identify the stress level from the hybrid bag of features. The experiments use the freely available DEAP dataset. The proposed method achieves an accuracy of 77.09% for males and 80.93% for females.

    Keywords: EEG Signals, emotion analysis, stress analysis, autoencoder, machine learning, deep learning
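
    The hybrid bag of features mixes time-domain statistics with wavelet sub-band features. The sketch below, assuming NumPy, SciPy, and PyWavelets, shows one plausible construction for a single EEG channel; the db4 wavelet, decomposition level, and chosen statistics are assumptions, and the two-layer autoencoder classifier is omitted.

    ```python
    import numpy as np
    import pywt
    from scipy.stats import skew, kurtosis

    def hybrid_bag_of_features(eeg, wavelet="db4", level=4):
        """Illustrative time-domain + wavelet feature vector for one EEG channel."""
        # time-domain statistical features
        time_feats = [eeg.mean(), eeg.std(), skew(eeg), kurtosis(eeg),
                      np.ptp(eeg), np.sqrt(np.mean(eeg ** 2))]
        # frequency-domain wavelet features: energy of each sub-band
        coeffs = pywt.wavedec(eeg, wavelet, level=level)
        wavelet_feats = [np.sum(c ** 2) for c in coeffs]
        return np.array(time_feats + wavelet_feats)

    # toy example on a random 1-second signal sampled at 128 Hz (DEAP's sampling rate)
    features = hybrid_bag_of_features(np.random.randn(128))
    print(features.shape)  # 6 time-domain stats + (level + 1) sub-band energies
    ```
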
  • Meisam Sharifi Sani, Amid Khatibi Pages 31-40

    The wireless visual sensor network is a highly functional branch of next-generation networks for unpredictable and dynamic environments. It is deployed as a large number of uniform or non-uniform nodes within a desired area and enables large-scale monitoring applications ranging from the military and industrial domains to hospitals and the environment. Security is therefore one of the most important challenges in these networks. In this research, a new method for routing among smart cameras with the help of cloud computing technology is provided. The proposed framework in the cloud computing management layer improves security, routing, interaction, and other features required by wireless sensor networks. Systematic attacks are simulated with RapidMiner software using a standard dataset collected at CTU University in the Czech Republic. Finally, the attack detection accuracy and error rate of the suggested NN-SVM algorithm, a combination of support vector machines and neural networks, are evaluated for smart cameras in visual wireless sensor networks using MATLAB software. The results show that the different components of the proposed architecture meet the quality requirements of visual wireless sensor networks. Attack detection with this method ranges from 99.24% in the worst case to 99.35% in the best case.

    Keywords: intrusion detection, smart cameras, security, visual sensor network, cloud computing
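
    The abstract describes the NN-SVM hybrid only at a high level; one common realization is to feed a neural network's probability outputs to an SVM alongside the original features. The scikit-learn sketch below illustrates that pattern on synthetic data standing in for the CTU traffic records; the architecture and parameters are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # synthetic stand-in for labeled network-traffic records (normal vs. attack)
    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # stage 1: a neural network produces attack-probability scores
    nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    nn.fit(X_tr, y_tr)

    # stage 2: an SVM is trained on the original features augmented with the NN scores
    aug_tr = np.hstack([X_tr, nn.predict_proba(X_tr)])
    aug_te = np.hstack([X_te, nn.predict_proba(X_te)])
    svm = SVC().fit(aug_tr, y_tr)

    print("hybrid NN-SVM accuracy:", accuracy_score(y_te, svm.predict(aug_te)))
    ```
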
  • Manvi Breja Pages 41-47

    Considering the growth of research on improving the performance of non-factoid question answering systems, there is a need for an open-domain non-factoid dataset. Some datasets are available for non-factoid and even how-type questions, but no appropriate dataset exists that comprises only open-domain why-type questions covering the full range of question formats. Why-questions play a significant role and are asked in every domain. They are more complex and difficult for a system to answer automatically, because why-questions seek the reasoning behind the task involved. They are prevalent and asked out of curiosity by real users, and thus answering them depends on the users' needs, knowledge, context, and experience. This paper develops a customized web crawler for gathering a set of why-questions from five popular question answering sources on the Web, namely Answers.com, Yahoo! Answers, Suzan Verberne's open-source dataset, Quora, and Ask.com, irrespective of domain. Along with the questions, their category, document title, and appropriate answer candidates are also maintained in the dataset. In addition, the distribution of why-questions according to their type and category is illustrated. To the best of our knowledge, it is the first sufficiently large dataset of 2,000 open-domain why-questions with their relevant answers, and it will help stimulate research focused on improving the performance of non-factoid why-QAS.

    Keywords: Non-Factoid questions, web crawler, Latent Dirichlet Allocation, Topic Modeling, Natural Language Processing
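
    As a hedged illustration of the crawling step, the snippet below pulls candidate why-questions from an HTML listing page and stores them with a category and document title. The URL, HTML structure, and field names are hypothetical placeholders; real crawlers for the sites named above need site-specific parsing and must respect each site's terms of service.

    ```python
    import csv
    import re
    import requests
    from bs4 import BeautifulSoup

    WHY_PATTERN = re.compile(r"^\s*why\b", re.IGNORECASE)

    def collect_why_questions(url, category, out_path="why_questions.csv"):
        """Scrape a (hypothetical) listing page and keep only why-type questions."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        rows = []
        # placeholder selector: each real site needs its own parsing rules
        for link in soup.find_all("a"):
            text = link.get_text(strip=True)
            if WHY_PATTERN.match(text):
                rows.append({"question": text, "category": category,
                             "document_title": soup.title.string if soup.title else ""})
        with open(out_path, "a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=["question", "category", "document_title"])
            if f.tell() == 0:
                writer.writeheader()
            writer.writerows(rows)
        return len(rows)

    # hypothetical usage; replace with a real, crawl-permitted listing page
    # collect_why_questions("https://example.com/questions", category="science")
    ```
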
  • Fardad Rad, Marzieh Gerami Pages 48-56

    Network-on-chip (NoC) is an effective interconnection solution for multicore chips. In recent years, wireless interfaces (WIs) have been used in NoCs to reduce the delay and power consumption between long-distance cores. This new communication structure is called wireless network-on-chip (WiNoC). Compared to wired links, the demand to use the shared wireless links leads to congestion in WiNoCs. This problem increases the average packet latency as well as the overall network latency; hence, an efficient control mechanism has a great impact on the efficiency and performance of WiNoCs. In this paper, a flow control mechanism for WiNoCs based on mathematical modeling is investigated. First, the flow control problem is modeled as a utility-based optimization problem constrained by the wireless bandwidth capacity and the flow rates of the processing cores. Next, the initial problem is transformed into an unconstrained dual problem, and the optimal solution of the dual problem is obtained by the gradient projection method. Finally, an iterative algorithm is proposed for the WiNoC to control the flow rate of each core. Simulation results on synthetic traffic patterns show that the proposed algorithm can control and regulate the flow rate of each core with acceptable convergence; hence, the network throughput is significantly improved.

    Keywords: Wireless network-on-chip, Flow control mechanism, Optimization, Gradient projection method, Utility function
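
    The dual-based flow control follows the familiar network utility maximization pattern: link prices are updated by gradient projection on the dual, and each core sets its rate from the prices of the wireless links it traverses. The NumPy sketch below illustrates that pattern under assumed log utilities, capacities, routing matrix, rate bounds, and step size; it is not the paper's exact algorithm.

    ```python
    import numpy as np

    # routing matrix R[l, s] = 1 if flow s uses shared wireless link l (assumed topology)
    R = np.array([[1, 1, 0],
                  [0, 1, 1]], dtype=float)
    capacity = np.array([1.0, 1.5])        # wireless link capacities (assumed units)
    step = 0.05                            # gradient projection step size
    lam = np.zeros(2)                      # dual variables (link prices)

    for _ in range(500):
        # each core maximizes log(x_s) - x_s * (sum of prices on its links)
        price = R.T @ lam
        x = np.clip(1.0 / np.maximum(price, 1e-6), 0.01, 2.0)   # assumed rate bounds
        # gradient projection on the dual: raise the price of congested links
        lam = np.maximum(0.0, lam + step * (R @ x - capacity))

    print("converged rates:", np.round(x, 3), "link prices:", np.round(lam, 3))
    ```
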
  • Meriane Brahim, Salah Rahmouni, Issam Tifouti Pages 57-64

    This article presents the study and realization of a laser barrier alarm system. The laser is driven by an electronic card, and the wireless control unit is connected to a control room that reports events in real time. Lasers are used in many application fields, from industry to medicine; this article focuses on industrial applications such as the laser barrier, which uses an alarm system to detect and deter intruders. Basic security includes protecting the perimeter of a military base, maintaining a safety distance around unsafe locations, or securing the surroundings of a government site. The first stage secures surrounding access points such as doors and windows; the second stage consists of internal detection with motion detectors that monitor movement. In this article, we adopt a coded laser barrier transmitted between two units: the receiver processes the signal and compares it against the agreed conditions. To achieve high accuracy, we propose using the wavelet transform to process the received signal and identify the frequencies that trigger the alarm, considering that the transmitted signal consists of pulses. After analysis with the proposed algorithm, we can separate the unwanted frequencies generated by the differential vibrations in order to arrive at a practically efficient system.

    Keywords: wavelet transform, Laser Sources, Radio frequency, Laser Coded Barriers, Alarm System
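
    To illustrate the wavelet-processing step, the snippet below decomposes a simulated pulse train corrupted by slow vibration, suppresses the sub-band attributed to the disturbance, and applies a simple energy-based alarm rule. The simulated signal, db4 wavelet, decomposition level, and threshold are all assumptions for illustration.

    ```python
    import numpy as np
    import pywt

    fs = 1000                                   # assumed sampling rate (Hz)
    t = np.arange(0, 1, 1 / fs)
    pulses = (np.sin(2 * np.pi * 100 * t) > 0.95).astype(float)   # coded pulse train
    vibration = 0.5 * np.sin(2 * np.pi * 3 * t)                   # slow vibration disturbance
    received = pulses + vibration + 0.05 * np.random.randn(t.size)

    # multilevel wavelet decomposition of the received signal
    coeffs = pywt.wavedec(received, "db4", level=5)

    # drop the coarsest (approximation) band, where the slow vibration lives
    coeffs[0] = np.zeros_like(coeffs[0])
    cleaned = pywt.waverec(coeffs, "db4")[: received.size]

    # simple alarm rule: enough high-frequency pulse energy means the beam is present
    pulse_energy = np.sum(cleaned ** 2)
    ALARM_THRESHOLD = 5.0                       # assumed; would be calibrated in practice
    print("beam detected" if pulse_energy > ALARM_THRESHOLD else "alarm: beam interrupted")
    ```
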
  • Masoumeh Rezaei, Mansoureh Rezaei, Masoud Rezaei Pages 65-74

    Foreground-background image segmentation has long been an important research problem. It is one of the main tasks in the field of computer vision, whose purpose is detecting variations in image sequences, and it provides candidate objects for further attentional selection, e.g., in video surveillance. In this paper, we introduce an automatic and efficient foreground-background segmentation method. The proposed method starts with the detection of visually salient image regions using a saliency map built from the Fourier transform and a Gaussian filter. Each point in the map is then classified as salient or non-salient using a binary threshold. Next, a hole-filling operator is applied to fill holes in the resulting image, and the area-opening method is used to remove small objects. For better separation of the foreground and background, dilation and erosion operators are also applied to expand and shrink the obtained regions. Afterward, foreground and background samples are extracted. Because the number of samples is large, K-means clustering is used as a sampling technique to restrict the computational effort to the region of interest. The K cluster centers of each region are used to train a Support Vector Machine (SVM), which, as a powerful binary classifier, segments the region of interest from the background. The proposed method is applied to a benchmark dataset of 1000 images, and the experimental results demonstrate the superiority of the proposed method over several other foreground-background segmentation methods in terms of ER, VI, GCE, and PRI.

    Keywords: Foreground-Background Segmentation, Support vector machine, k-means clustering, saliency map
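
    The pipeline above (Fourier-based saliency, thresholding, morphology, K-means sampling, SVM classification) can be sketched compactly with NumPy, SciPy, scikit-image, and scikit-learn. The version below follows that sequence on a sample grayscale image; the spectral-residual-style saliency, thresholds, structuring elements, and K are assumptions and do not reproduce the authors' exact settings.

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage import data, img_as_float, morphology
    from sklearn.cluster import KMeans
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    img = img_as_float(data.camera())                 # sample grayscale image

    # 1) Fourier-based saliency map smoothed with a Gaussian filter
    spectrum = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(spectrum))
    residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * np.angle(spectrum)))) ** 2
    saliency = ndimage.gaussian_filter(saliency, sigma=3)

    # 2) binary threshold, hole filling, area opening, dilation/erosion
    mask = saliency > saliency.mean()
    mask = ndimage.binary_fill_holes(mask)
    mask = morphology.remove_small_objects(mask, min_size=64)
    fg = morphology.binary_dilation(mask)             # expanded foreground region
    bg = morphology.binary_erosion(~mask)             # shrunken background region

    # 3) K-means reduces each region's pixels to K representative feature vectors
    def centers(region, k=20):
        ys, xs = np.nonzero(region)
        feats = np.column_stack([xs, ys, img[ys, xs]])
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats).cluster_centers_

    X = np.vstack([centers(fg), centers(bg)])
    y = np.array([1] * 20 + [0] * 20)

    # 4) an SVM trained on the cluster centers labels every pixel as foreground/background
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    all_feats = np.column_stack([xs.ravel(), ys.ravel(), img.ravel()])
    svm = make_pipeline(StandardScaler(), SVC()).fit(X, y)
    segmentation = svm.predict(all_feats).reshape(img.shape)
    print("foreground fraction:", segmentation.mean())
    ```
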