Table of Contents

International Journal of Nonlinear Analysis and Applications
Volume: 12, Issue: 1, Winter-Spring 2021

  • Special Issue
  • Publication date: 1399/10/12
  • Number of titles: 193
  • Mahdi Sadidi *, Hossein Tavakolian, Mohammad Asim Pages 1-14

    Given the importance of price trends and bubble formation in the stock market, this article investigates the formation of price bubbles in the insurance industry index of the Tehran Stock Exchange and compares it with the total price index, in order to better understand the price trend of the insurance industry relative to the total index. The period studied is April 2017 to April 2020, and the method used is Markov switching. The identification of price bubbles is based on the two-regime state-space model of Wu (1995) and Campbell and Shiller (1988), which considers bubble formation in one state and bubble burst in the other. The results show that the pattern of price bubble formation in the insurance industry index and in the total price index differ, and that fluctuations in the insurance industry index have been larger than those in the total stock index. Based on the results, the insurance industry index experienced about 26 bubble formation episodes, compared with 19 for the total index. In addition, the bubbles are not well aligned with each other: in the period 2017 to 2018, the total index showed no bubble formation, whereas the insurance industry index experienced about 12 bubble formation episodes in the same period.

    Keywords: Price bubble, Insurance industry, Tehran Stock Exchange
  • Mohammad Bakhshi, Ahmad Yaghoobnezhad *, Hashem Nikoo Maram Pages 15-28

    For the purpose of developing capital markets, performance evaluation is one of the most important debates for shareholders, creditors, governments and managers. Investors are also inclined to know how successful managers are in utilizing their capital. Credit rating plays a crucial role in the money and capital markets and represents an independent opinion on a company’s ability to meet all of its obligations in a timely and comprehensive manner. Since most rating agencies do not disclose the method they use, and the methods provided for the credit rating of companies in previous research are mostly based on statistical methods and are relatively complex, in the present study companies are ranked based on ratios derived from the information contained in financial statements, which are called accounting variables. These ratios are classified into five groups: profitability, growth and development, activity, leverage, and liquidity, together with the ratios related to each group. Survey data were collected using a questionnaire to evaluate the effective weight of each attribute with the Analytic Network Process (ANP) and the DEMATEL technique, and the companies were then ranked using the COPRAS technique with Expert Choice software.

    Keywords: Credit Rating, Hybrid method, Network Analysis, Financial Ratios, COPRAS, DEMATEL decision making technique
  • Azadeh Mohammadi *, Anis Shaverizade Pages 29-38

    Sentiment analysis is a subfield of Natural Language Processing (NLP) which tries to process a text to extract opinions or attitudes towards topics or entities. Recently, the use of deep learning methods for sentiment analysis has received noticeable attention from researchers. Generally, different deep learning methods have shown superb performance in sentiment analysis problems. However, deep learning models differ in nature and have different strengths and limitations. For example, convolutional neural networks are useful for extracting local structures from data, while recurrent models are able to learn order dependence in sequential data. In order to combine the advantages of different deep models, in this paper we propose a novel approach for aspect-based sentiment analysis which utilizes deep ensemble learning. In the proposed method, we first build four deep learning models, namely CNN, LSTM, BiLSTM and GRU. The outputs of these models are then combined using a stacking ensemble approach in which logistic regression is used as the meta-learner. The results of applying the proposed method to real datasets show that our method increases the accuracy of aspect-based prediction by 5% to 20% compared to the basic deep learning methods.

    Keywords: Deep learning, Ensemble Learning, Natural Language Processing, Opinion Mining, Sentiment Analysis
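    The stacking step described in the preceding abstract can be illustrated with a minimal sketch. It assumes the four base networks (CNN, LSTM, BiLSTM, GRU) have already been trained and have produced class-probability outputs on a held-out fold; the arrays below are random placeholders, not the paper's data, and scikit-learn's LogisticRegression stands in for the meta-learner.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_samples, n_classes, n_base_models = 200, 3, 4   # CNN, LSTM, BiLSTM, GRU

        # Placeholder: class-probability predictions of each base model on a held-out fold.
        base_probs = [rng.dirichlet(np.ones(n_classes), size=n_samples) for _ in range(n_base_models)]
        y = rng.integers(0, n_classes, size=n_samples)    # placeholder true labels

        # Stacking: concatenate base-model outputs into meta-features ...
        meta_features = np.hstack(base_probs)             # shape (n_samples, n_base_models * n_classes)

        # ... and fit the logistic-regression meta-learner on them.
        meta_learner = LogisticRegression(max_iter=1000).fit(meta_features, y)
        final_pred = meta_learner.predict(meta_features)  # in practice, predict on unseen meta-features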
  • MohammadMahdi Shakouri, Aliasghar Taherabadi *, Mehrdad Ghanbari, Babak Jamshidinavid Pages 39-48

    In this study, the aim is to provide a comprehensive model for the prediction, prevention and detection of financial reporting fraud using the modified Beneish model. To achieve the research goal, the necessary data were collected for 161 companies listed on the Tehran Stock Exchange during a 10-year period (2009-2018). The research model was estimated by the binomial logit method. The results of testing the hypotheses indicate that the Beneish model succeeds in separating companies involved in fraudulent financial reporting from healthy companies, based on McFadden's coefficient of determination, with 73% confidence. Among the independent variables, the days' sales in receivables index (DSRI), gross margin index (GMI), asset quality index (AQI), sales growth index (SGI), depreciation index (DEPI) and total accruals to total assets index (TATAI) have a direct and significant effect on fraudulent financial reporting, while the sales, general, and administrative expenses index (SGAI) and leverage index (LEVI) have a significant inverse effect on fraudulent financial reporting (FFR).

    Keywords: Beneish Model, fraudulent financial reporting (FFR), Fraud, M-Score Model
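    For reference, a small sketch of the eight-variable Beneish M-Score named in the keywords, using the coefficients of the original Beneish (1999) model rather than the modified version estimated in the paper; the cut-off of about -2.22 is the commonly cited one, and the index values passed in are illustrative only.

        def beneish_m_score(dsri, gmi, aqi, sgi, depi, sgai, tata, levi):
            """Eight-variable Beneish M-Score with the original published coefficients."""
            return (-4.84 + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi + 0.892 * sgi
                    + 0.115 * depi - 0.172 * sgai + 4.679 * tata - 0.327 * levi)

        # Illustrative index values only; a score above roughly -2.22 is commonly read
        # as a signal of possible earnings manipulation.
        m = beneish_m_score(dsri=1.2, gmi=1.1, aqi=1.0, sgi=1.3, depi=1.0,
                            sgai=0.9, tata=0.03, levi=1.05)
        print(f"M-Score = {m:.2f}", "-> flagged" if m > -2.22 else "-> not flagged")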
  • Omid Rahmani Seryasat *, Isaac Kor, Hossein Ghayoumi Zadeh, Arash Shams Taleghani Pages 49-62

    The nature and importance of users' comments in various social media systems play an important role in creating or changing people's perceptions of certain topics or in popularizing them. Such comments now have an important place in various fields, including education, sales, prediction, and so on. In this paper, the Facebook social network is considered as a case study. The purpose of this study is to predict the volume of Facebook users' comments on published content known as posts; the problem is therefore cast as a regression problem. In the method presented in this paper, three regression models, namely the elastic net, the M5P model, and a radial basis function regression model, are combined into an ensemble model to predict the comment volume. To combine these base models, a strategy called stacked generalization is used, in which the outputs of the base models are provided to a linear regression model as new features. This linear regression model combines the outputs of the three base models and determines the final output of the system. To evaluate the performance of the proposed model, a database from the UCI repository, which has 5 training sets and 10 test sets, has been used; each test set has 100 records. The efficiency of the base models and the proposed ensemble model is evaluated on all these sets. Finally, it is concluded that the use of the ensemble model can reduce the average correlation coefficient (as one of the evaluation criteria of the model) to 74.4 ± 16.4, which is an acceptable result.

    Keywords: regression, Ensemble Model, Facebook, Comment Volume Prediction
  • Rafid Habib Buti *, Ameen Khlaif Sachit Pages 63-69

    In this paper, we study and introduce a new subclass of analytic univalent functions defined by a differ-integral operator. We obtain distortion bounds, extreme points, and some theorems for this subclass.

    Keywords: Univalent function, Fractional calculus, Differ-integral operator
  • Nihad Hameed Shehab *, Abdul Rahman S. Juma Pages 71-82

    In this article, the authors introduce two new subclasses of the class of m-fold symmetric bi-univalent functions in the open unit disk. Coefficient bounds for the Taylor-Maclaurin coefficients $|a_{m+1}|$ and $|a_{2m+1}|$ are obtained. Furthermore, we solve the Fekete-Szegő functional problem for functions in $F_{P,m}(\gamma, \mu, \vartheta)$ and $M_{P,m}(\kappa, \eta, \vartheta)$. Several special improved results for the associated classes are also presented.

    Keywords: Analytic functions, Bi-Univalent functions, Fekete-Szegő coefficient, Taylor-Maclaurin series, Univalent functions
  • Abdul Sattar Jaber Al Saif *, Afrah Abdul Imam Almaziny Pages 83-96

    The purpose of this work is to present a new application of a modified version of the Differential Quadrature Method (DQM). The new method uses Laguerre polynomials as the test functions from which the DQM weighting coefficients are established, and the DQM is then applied to numerically solve the nonlinear differential equations representing the steady incompressible flow of a fourth-grade non-Newtonian fluid in a magnetic field between two stationary parallel plates. A series of graphs demonstrates how a range of important physical parameters influences the velocity profile. The agreement between the new technique, using only a small number of grid points, and the analytical solutions is remarkably high.

    Keywords: MHD fluid, fourth-grade fluid, Laguerre polynomials, Differential quadrature method
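    As background on the weighting coefficients mentioned in the preceding abstract (a generic statement of the differential quadrature approximation, not the paper's Laguerre-based derivation), the first derivative at a grid point $x_i$ is approximated as
    $$\left.\frac{\partial u}{\partial x}\right|_{x=x_i} \approx \sum_{j=1}^{N} a_{ij}^{(1)}\, u(x_j), \qquad i=1,\dots,N,$$
    where the coefficients $a_{ij}^{(1)}$ are fixed by requiring the relation to hold exactly for the chosen set of test functions, here Laguerre polynomials.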
  • Amel Nashat Shakir * Pages 97-108

    In this paper, an efficient GV1-CG algorithm is developed to improve the modified conjugate gradient algorithm by using a new conjugacy property. The aim is to increase the speed of convergence while retaining global convergence through the conjugacy property. The proposed property applies to general functions, as the objective is not required to be quadratic or convex. The sufficient descent property and global convergence of the proposed improved algorithm are proved. Numerical results on some test functions indicate that the new CG method outperforms many similar methods in this field.

    Keywords: Conjugate gradient, Developed conjugation property, Improved algorithm, Optimization, Global convergence, Descent Direction
  • AL IBADI ZEYAD *, MUTHANA ALBOEDAM, ILYA KATANOV, AL ZUBAIDI SURA Pages 109-122

    This work studies the effect of different spectra-processing methods on aromatic compounds (benzene, toluene and xylene). Maintaining high spatial resolution is increasingly essential, and various methodologies are used to assess aromatic compounds, which makes it difficult to choose a specific technique ahead of laboratory determinations. One notable aspect of all types of signal systems is their flexibility of adaptation. In addition, high spatial accuracy is not essential for obtaining the spectra of a large number of aromatic compounds, for which better characterization and the statistical mean are more critical. Moreover, sufficiently low deviations of the expected values from the true values, and of the standard deviation, are achieved in determining the properties of the studied compounds. A continuous baseline correction is smoothed and applied, followed by normalizing the corrected spectra to their area. For the analysis of the pure spectrum, the autofluorescence background is subtracted using two analytical approaches: polynomial estimation (Poly Fit) and the improved Processors Gases method. The accuracy and reliability obtained are not complete and can be increased by developing the algorithms, selecting other parameters, and improving the quality of the training sample by eliminating unwanted data. This could be done by increasing the sample size and studying it in more detail to avoid inaccuracies during transitions between gas concentrations.

    Keywords: Poly Fit, Polynomial, Processors Gases, Ultraviolet spectroscopy, sensors, Fluorescence
  • Shrook A. S. Al-Sabbah, Zainab Hassan Radhy *, Hayder Al Ibraheemi Pages 123-132

    The linear multiple regression model is one of the prediction models whose parameters can be estimated in different ways. Ordinary Least Squares (OLS) is the most popular method in this field of application. Although it can accurately perform the estimation task, it fails when processing multiple-objective models. On the other hand, the demand for electrical energy continuously rises around the world, and governments tackle the increase in electrical load demand by establishing more power plants and more power distribution directorates. Predicting the number of electrical engineers needed to manage and provide technical support for these plants and directorates has therefore become urgent. This paper addresses the estimation drawback of OLS by employing Goal Programming (GP) for parameter estimation. The validity of the proposed method was demonstrated by applying it to estimating the number of electrical engineers required in the coming years as the electrical load increases considerably. Thereby, GP was used in this work to determine the best linear representation for a set of data. The obtained results show that the GP method is more flexible and efficient in dealing with this subject area, especially in the case of multiple-objective models.

    Keywords: Linear multiple model, Goal programming method, Least-square error, Electrical load demand, Number of electrical engineers
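    A minimal sketch of the goal-programming idea for fitting a line, assuming one fitting goal per observation and using SciPy's linprog; the data are synthetic and the formulation (minimizing the sum of under- and over-achievement deviations) is the textbook one, not necessarily the exact model of the paper.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 10, size=30)                # synthetic predictor (e.g. year index)
        y = 5.0 + 2.0 * x + rng.normal(0, 1, size=30)  # synthetic target (e.g. engineers needed)
        n = len(x)

        # Variables: [b0, b1, d_minus (n), d_plus (n)]; goal i:  b0 + b1*x_i + d_minus_i - d_plus_i = y_i
        c = np.concatenate([[0.0, 0.0], np.ones(2 * n)])          # minimize total deviation
        A_eq = np.hstack([np.ones((n, 1)), x.reshape(-1, 1), np.eye(n), -np.eye(n)])
        bounds = [(None, None), (None, None)] + [(0, None)] * (2 * n)

        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
        b0, b1 = res.x[:2]
        print(f"GP fit: y = {b0:.2f} + {b1:.2f} x")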
  • Reza Nasimi, Sassan Azadi *, Mehdi Farzinfar, Mostafa Jazaeri Pages 133-145
    Reservoir permeability plays an essential role in crude oil production in upstream petroleum engineering. Due to the high cost and difficulty of direct measurement of permeability, a robust model of this parameter based on openhole logs and data is preferred. The sonic, volumetric density, gamma ray, total porosity, and neutron porosity logs are available at the time of logging and have the highest correlation with reservoir permeability characteristics. To estimate the permeability of the reservoir based on these available data, a new intelligent method combining a Genetic Algorithm (GA) and a Wavelet Neural Network (WNN) is derived. In the developed model, a new objective function has been introduced. To avoid increasing the complexity of the objective function, the neural network weights are initialized by the GA. Then, the Levenberg-Marquardt training algorithm is used to update the optimal weights. In other words, the wavelet activation function of the neural network enhances the exploitation search abilities of the algorithm and leads to a robust model. A sample reservoir is then selected as a source of data to evaluate the effectiveness of the proposed algorithm in permeability estimation. For the sake of comparison, two algorithms, BP-ANN and GA-BP, which have already been presented in the literature, are applied to the same data sets, and the superiority of the developed model in estimation is illustrated.
    Keywords: Permeability Estimation, Wavelet Neural Network, Genetic algorithm, Logging Data, Reservoir Rock
  • Maliheh Khatami * Pages 146-160
    In retail operations management, the shelf space allocation problem is an important problem that affects profitability. Various studies have demonstrated that the shelf space allocated to a product affects that product's sales. The decision of how much of which product should be placed on shelves, where, and when is a critical issue in retail operations management. In this paper, a new hybrid meta-heuristic algorithm based on the forest optimization algorithm (FOA) and simulated annealing (SA) is presented to address the shelf space allocation problem. To apply FOA to this problem, the basic arithmetic operators of FOA have been modified with regard to the characteristics of the problem, and FOA is improved by SA. Results obtained from an extensive experimental phase show the better performance of the proposed algorithm in comparison with other algorithms from the literature. The results also show the suitability of the proposed algorithm in finding high-quality solutions and its robustness.
    Keywords: retailing, shelf space allocation problem, Optimization, forest optimization algorithm, simulated annealing
  • Seyed Masoud Alizadeh Masoumiana *, Alireza Alfi, Ahmad Rezaee Jordehi Pages 161-172

    The cost of electricity generation depends on different parameters and characteristics: regional characteristics, the fuel costs of fossil fuel power plants, government policies, and technological capabilities, which lead to fluctuations in the cost of electricity generation. The production cost has gradually decreased with the rapid progress of technology and the accumulation of experience in using wind and solar energy. Therefore, the cost of electricity generation per kilowatt-hour has declined significantly in recent years. On the other hand, the cost of fuel for gas and thermal power plants is increasing with the reduction of oil and gas reserves and the elimination of subsidies for petroleum products. Therefore, it will be necessary to replace gas and thermal power plants with wind and photovoltaic power plants in the future. Moreover, using charging stations for electric vehicles as a source of energy exchange in the form of V2G and G2V greatly helps manage the costs involved. Using electric vehicles reduces fuel costs, maintenance costs, and air pollution, which helps protect the environment and reduce respiratory, heart, and lung diseases. In this paper, the optimal planning of the locations of photovoltaic power plants, wind power plants, and charging stations in the presence of load uncertainty and electricity price uncertainty is examined using the crowding distance index in a modified Non-Dominated Sorting Backtracking Search Algorithm (NSBSA). Examining the results in MATLAB software on the IEEE 33-bus network indicated the high accuracy, speed, and control of the algorithm.

    Keywords: Photovoltaic power plant, Wind power plant, Charging station, NSBSA, Optimization
  • Saman Asadi, Mehdi Mahdavi Adeli * Pages 173-188
    Seismic demand estimation of structures, which is considered one of the main components of performance-based design, involves various uncertainties, one of the most important of which is fault-to-site distance. In fact, as the fault-to-site distance of an accelerogram varies, its effect on seismic demand estimation differs. However, it seems that the impact of this distance on seismic demand will differ between far and near fields, owing to the distinct nature of the near field. The main aim of this study is to verify this issue and determine the impact of fault-to-site distance on the seismic demand of steel moment-resisting frames in near and far fields using nonlinear dynamic analysis and Bayesian statistics. Nonlinear dynamic analysis is used to capture the actual nonlinear behavior of the structure at the near-collapse performance level, and the Bayesian approach is used to cover all uncertainties. Considering the research objectives, two generic steel moment-resisting frames, a 3-storey frame with rigid behavior and a 15-storey frame with flexible behavior, were selected and modeled nonlinearly in OpenSees. These frames were then analyzed through incremental nonlinear dynamic analysis under five groups of 40 accelerograms that were similar in all features except fault-to-site distance, and the results were used to determine their seismic demand. Insofar as the only variable in this analysis is fault-to-site distance, differences in the results can be attributed to this variable. According to the results, from a statistical standpoint, there is some difference between the impact of distance variation on seismic demand in near and far fields, which depends on variables such as the behavior of the frame and its performance level.
    Keywords: Nonlinear dynamic analysis, Bayesian statistics, Near- and far-field earthquakes, Seismic demand, Fault-to-site distance
  • Reza Mohamaddoust, Javad Mohammadzadeh * Pages 189-202
    Communities in social networks form for different purposes and play a significant role in interpersonal interactions. The analysis of virtual communities provides a more precise understanding of the behaviours and desires of individuals in social networks. In this paper, new measures have been proposed for analyzing implicit and explicit communities in Online Social Networks (OSNs). The measures of “potential value of the community members” and “value of the community messages”, which are used for calculating the measure of “community value”, are among the most important measures introduced in this paper. Another measure introduced is the “user influence rate” in a community, which represents the contribution of a person to creating value in a community. To provide a sound dataset, we collected information from several real implicit communities on Twitter based on different hashtags. Finally, the suggested measures have been analyzed and compared statistically and behaviourally across different communities. The results of this research clearly indicate the importance and practicality of the introduced measures in the community analysis of Twitter.
    Keywords: Community Value, data analysis, social computing, implicit community, Twitter
  • Meysam Vahedian *, Abu Al Ghasem Masihabadi, MohammadReza Shorvarzi, Alireza Mehrazin Pages 203-216

    When the behavioral characteristics of managers working at a healthy organization are examined, the most important issue clearly observed in their behavior is their reaction to market behaviors. Identifying and analyzing the factors affecting managers’ behavioral characteristics in forecasting companies’ earnings will improve the decision-making of users who rely on the information disclosed by managers. Since investor sentiment has been shown to be an effective and powerful component in the market, the aim of the present study is to investigate the role of investor sentiment in managers’ behavioral characteristics (accuracy, timely disclosure and revision frequency) in earnings forecasting. To achieve the research objectives, 76 companies listed on the Tehran Stock Exchange during the years 2009 to 2018 were investigated using multivariate regression models, panel data and R software. The results show that during this period, when investor sentiment was at a high level, managers’ accuracy in forecasting earnings increased, while the timeliness of earnings disclosure by managers and the number of revisions to projected earnings decreased.

    Keywords: Investors’ Sentiments, Accuracy, Timeliness, Revise, Earnings Forecast
  • Esa Narimani Ghoortlar, Nazanin Pilevari *, Nasser Feghhi Farahmand, Mohamadreza Motadel, Kamaleddin Rahmani Pages 217-230
    Environmental pollution has become a green motivation for countries to control increasing pollution and reduce its negative effects; hence, green supply chain management plays an important role in the environmental impact of organizations. Therefore, the purpose of this study is to evaluate the green supply chain of small and medium manufacturing companies based on green productivity indicators. This research is applied in purpose and quantitative in approach. The statistical sample consisted of 297 small and medium manufacturing companies in East Azerbaijan province. For data collection, a researcher-made questionnaire based on the research literature was used. The validity of the questionnaire was established through construct validity, and its reliability was assessed using Cronbach's alpha coefficient. To evaluate the green supply chain through green productivity indicators, a fuzzy inference system based on triangular membership functions, Mamdani inference and dependency rules was used. The results show that the designed inference system, based on green productivity indicators and 43 dependency rules, is able to evaluate the greenness of companies' supply chains in terms of both numerical values and linguistic terms.
    Keywords: Supply Chain Evaluation, Green Productivity, Fuzzy Inference System
  • Nasrin Akbarpour, Mehran Keshtkar Haranaki *, Nader Gharibnavaz, Mahmoud Ahmadi Sharif Pages 231-242
    Many industry leaders predict that the industrial Internet will create unprecedented levels of growth and productivity over the coming decades. Business leaders, governments, academic institutions, and technology vendors are working hard together to realize and harness this powerful potential. In this research, a mixed method has been used. In the qualitative part, grounded theory with the systematic approach attributed to Strauss and Corbin was used for data analysis, applying open, axial and selective coding techniques in MAXQDA software. A questionnaire was then designed in six dimensions (causal conditions, the axial phenomenon, prevailing context, intervening conditions and consequences) with 50 items to confirm the dimensions, components and indicators. The statistical population consists of 35 managers and senior experts in basic Internet of Things (IoT) technology, together with marketing and information technology experts, who were selected by the snowball (chain-referral) method up to the point of theoretical saturation. In the quantitative part, after identifying the relevant dimensions, components and indicators, the questionnaire was provided to managers and experts. The results were confirmed by an independent t-test, and the dimensions and components were prioritized using the AHP technique. The reliability of the interviews was confirmed with Cohen's kappa coefficient and the inter-rater reliability method; the content validity of the questionnaire was confirmed using the content validity ratio (CVR), the content validity index (CVI), and face validity with item impact scores; and reliability was confirmed with Cronbach's alpha and the split-half method.
    Keywords: Internet of Things, Fourth industrial revolution, Smart city, Real-time marketing
  • Mahdi Khojaste *, Maryam Ashrafi Pages 243-254
    With the growth of the knowledge economy, the role of intellectual property in the development of international businesses and in the boom of national economies has become increasingly pronounced. There are many items on the intellectual property list, but the patent may be the most important. Patent rights, which grant the patent owner immunity, are very valuable. These licenses are tradable in certain markets, like many other economic assets, and one of the concerns of technology researchers has always been how to calculate the value of patents and decide whether or not to buy them. The present study develops a system for patent valuation. In the first step, the decision to invest or not to invest in intellectual property (policy making) is examined using Ohlson's model. In the second step, the patent is valued using real option valuation methods. Implementation of Ohlson's model for companies listed on the Tehran Stock Exchange has highlighted the importance of advertising, as a customer-related intellectual asset, for the share price. In the second step, the patent is valued in terms of the option to assign, simulating future patent cash flows using the average return process.
    Keywords: patent, Data Mining, real option, valuation
  • Morteza Khani Dehnoi, Saeed Araban * Pages 255-263

    QoS-aware web service discovery and composition are considered two important, open and active issues in Service-Oriented Architecture (SOA). By definition, web service discovery is about selecting the best web service for a role in a workflow among the available web services, whereas web service composition is about developing an optimal coordination among a number of available web services to provide a new composed web service that satisfies user requirements for which a single web service is not good enough. The criteria upon which the selection, positioning, and composition processes are based may or may not cover Quality of Service (QoS) parameters; when they do, the task becomes QoS-aware web service discovery and composition. In this article, the QoS-WSC test data-set is introduced for QoS-aware web service discovery and composition with regard to the specific potentials and needs of the web services world. First, an interface has been designed to define QoS for the roles involved in a service-oriented architecture. Then a solution is proposed that allows comparison among web services through the calculation of the similarity between the request and the QoS parameter values of the available web services. The similarity is obtained using the inner product of two vectors of weighted numerical parameters, one representing the request parameters and the other the QoS parameters of the available web services. The weights are technically significant coefficients that can be obtained from the QoS-WSC data-set, which is assembled through a rule-based integration of two well-known prior data sets in the field of web services.

    Keywords: web-service, Service-Oriented Architecture (SOA), data-set, Quality of Service (QoS)
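    A small sketch of the weighted inner-product similarity described in the preceding abstract, with made-up attribute names and weights (the actual QoS attributes and weights come from the QoS-WSC data-set); the score is normalized here so it stays in [0, 1] for non-negative inputs.

        import numpy as np

        def weighted_similarity(request, service, weights):
            """Weighted inner product of a request vector and a service's QoS vector,
            normalized so that the score falls in [0, 1] for non-negative inputs."""
            r, s, w = map(np.asarray, (request, service, weights))
            num = np.sum(w * r * s)
            den = np.sqrt(np.sum(w * r * r)) * np.sqrt(np.sum(w * s * s))
            return num / den if den else 0.0

        # Hypothetical QoS attributes: [response time score, availability, throughput, reliability]
        request  = [0.9, 0.99, 0.7, 0.95]
        services = {"svcA": [0.8, 0.97, 0.6, 0.90], "svcB": [0.4, 0.99, 0.9, 0.80]}
        weights  = [0.4, 0.3, 0.1, 0.2]          # illustrative weights, not from the data-set

        best = max(services, key=lambda k: weighted_similarity(request, services[k], weights))
        print("best match:", best)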
  • Jaber Naseri *, Hamid Hassanpour, Ali Ghanbari Pages 265-278
    Using smart methods for the automatic generation of keywords in legislative documents has attracted the attention of many researchers over the past few decades. With the increasing development of legislative documents and the large volume of unstructured texts, the need for rapid access to these documents has become more significant. Extracting the keywords in legislative documents will accelerate the legislative process and reduce costs. Nowadays, many methods are presented for generating keywords dynamically. The present study attempts to extract more meaningful keywords from texts by using a thesaurus, which has a structured system, to improve the classification of legislative documents. In this method, the semantic relationships in the thesaurus and document clustering are used, and the statistical features of different words are calculated to extract some words as keywords. After pre-processing the texts, the keywords in the text are first selected using statistical methods. Then, the phrases derived from the keywords are extracted using semantic terms in the thesaurus. After that, a numerical weight is assigned to each word to determine its relative importance and indicate its effect in relation to the text and compared to other words. Finally, the final keywords are selected using the relationships in the thesaurus and clustering methods. To evaluate this method, the tested text was compared to training texts and the similarity between them was used. The results of testing various texts and subjects indicated the high accuracy of the proposed method. Data from the Parliament of Iran and the Deputy for Presidential Laws were used to evaluate the proposed model, which achieved very high accuracy and performance on these two corpora in comparison to other methods.
    Keywords: Text mining, keyword extraction, thesaurus, semantic relationships, Clustering
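    As a rough illustration of the statistical weighting step only (the thesaurus expansion and clustering stages of the paper are not shown), a TF-IDF-style ranking of candidate keywords might look like the following sketch on toy documents.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer

        # Toy stand-ins for pre-processed legislative texts.
        docs = [
            "budget law annual budget allocation ministries",
            "tax law income tax exemption small enterprises",
            "customs tariff import export duties law",
        ]

        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(docs)                 # rows: documents, columns: terms
        terms = np.array(vectorizer.get_feature_names_out())

        for i, doc in enumerate(docs):
            weights = X[i].toarray().ravel()
            top = terms[np.argsort(weights)[::-1][:3]]     # three highest-weighted terms
            print(f"doc {i}: {', '.join(top)}")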
  • Taher Taheri, Hossein Bodaghi Khajeh Noubar *, Houshang Taghizadeh Pages 279-295

    Export performance is one of the most extensively researched subjects in international marketing. However, there is still no comprehensive understanding of its context because of the large number of influential indicators. The objective of this study is to present a model for expanding the export performance of trading enterprises in Ardabil province based on 57 questionnaire items affecting export performance. Drawing on the researchers' study, the research literature, and interviews with 30 chamber of commerce experts, 57 questionnaire items were identified in total through an exploratory factor analysis questionnaire; after software analysis of these items, 12 factors were identified as effective. All 12 factors contribute to the export performance expansion of trading enterprises and were included in the model, and the LISREL modeling outputs indicated that executives' capability, competitive strategy, company communications, product strategy, human resources strategy and infrastructure, respectively, play the most important roles in expanding the export performance of trading enterprises.

    Keywords: companies’ performance, export performance extension, export, non-oil export
  • AliAsghar Khazaei Harivand *, Arash Nadrian, Majid Ashrafi, Ali Khozin Pages 297-309

    Despite the existence of financial reporting rules and requirements, including corporate governance, sustainability and social responsibility in companies, the complexity of financial reporting is still a controversial issue in the financial field, and no comprehensive indicator has been provided so far. Although several indicators have been used in previous studies to measure the readability of financial reporting, in this study a comprehensive indicator for measuring the readability of financial reporting is presented by combining different indicators using a mathematical method. To test and measure the efficiency of the designed model, data related to 152 companies during the years 2016 to 2019 were used with a mixed method. For this purpose, a questionnaire was prepared and distributed among experts to elicit their views on the weight and importance of the criteria for measuring the readability of companies' financial reporting, and an indigenous model of financial reporting readability based on companies listed on the Tehran Stock Exchange was presented. In addition, to measure the relationships of the latent variable, which in this study is the readability of financial reporting, confirmatory factor analysis was used with its measurement items; the results show that the proposed model is a good indicator of financial reporting readability and exhibits less skewness than the individual criteria.

    Keywords: Financial Reporting Readability, Comprehensive Financial Reporting Readability Index, Multi-Criteria Decision Making Model, Mixed Method
  • Naimeh Morsali, Morteza Mousakhani *, Seyed Mehdi Alvani, Ali Badi Zadeh Pages 311-323
    The objective of this study is to investigate the factors affecting the social responsibility of managers in education. The research is applied in terms of purpose and descriptive-analytical in terms of method. According to the conceptual model of the research hypotheses, the statistical population includes the staff of the Qazvin Education Organization, numbering about 16,000 people; accordingly, the sample size is 375. After a questionnaire was designed and its validity and reliability were established, the data were tested for normality and the hypotheses were tested through structural equations. According to the obtained results, the model of social responsibility of managers in the education system is confirmed.
    Keywords: Social responsibility, educational system, structural equations
  • Maryam Oraki *, Madjid Eshaghi Gordji, Halimeh Ardakani Pages 325-342
    In this article, we introduce the interval-valued intuitionistic fuzzy set (IVIFS), which is a generalized form of the intuitionistic fuzzy set (IFS) and the fuzzy set; because in intuitionistic fuzzy sets the non-membership function also enters the evaluation, these sets are useful for modelling the ambiguous concepts that abound in real problems. Here we look for new methods that give more practical solutions to optimization problems in various sciences such as computer science, mathematics, engineering, medicine, psychology, climate, and so on. First, using the Frank t-norm, we construct several Frank aggregation operators on interval-valued intuitionistic fuzzy numbers (IVIFNs), including the Frank weighted averaging operator, the Frank ordered weighted averaging operator, the Frank hybrid weighted averaging operator, the Frank geometric weighted averaging operator, the Frank geometric ordered weighted averaging operator, and the Frank geometric hybrid weighted averaging operator, and we examine some of the characteristics of these operators. We then introduce two multiple attribute group decision-making (MAGDM) methods based on such operators. Finally, we provide illustrative examples of these methods.
    Keywords: decision-making sciences, aggregation operators, nonlinear integrals, intuitionistic fuzzy set, t-conorm, t-norm Frank
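    For readers unfamiliar with the Frank family used in the preceding abstract, the standard Frank t-norm and t-conorm, for a parameter $\lambda>0$, $\lambda\neq 1$, are
    $$T_{\lambda}(x,y)=\log_{\lambda}\!\left(1+\frac{(\lambda^{x}-1)(\lambda^{y}-1)}{\lambda-1}\right), \qquad S_{\lambda}(x,y)=1-\log_{\lambda}\!\left(1+\frac{(\lambda^{1-x}-1)(\lambda^{1-y}-1)}{\lambda-1}\right),$$
    and the aggregation operators of the paper are built from operations of this type applied to the membership and non-membership intervals of the IVIFNs.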
  • Seyed Hasan Hosseini, Javad Vahidi *, Seyed Reza Kamel Tabbakh, AliAsghar Shojaei Pages 343-360

    Cloud computing is a massively distributed system in which existing resources interact with user-requested tasks to meet their requests. In such a system, the problem of optimizing Resource Allocation and Scheduling (RAS) is vital, because resource allocation and scheduling deals with the mapping between resources and user requests and is responsible for the optimal allocation of tasks to the available resources. In the cloud environment, a user may face hundreds of computational resources to carry out his work. Therefore, manual resource allocation and scheduling is impossible, and having a schedule between user requests and available resources seems logical. In this paper, we use the Whale Optimization Algorithm (WOA) to solve the resource allocation and task scheduling problem in cloud computing, in order to achieve optimal resource allocation and reduce the total runtime of the services requested by users. The proposed algorithm is compared with other existing algorithms, and the results indicate its better performance.

    Keywords: cloud computing, Makespan, Task, Resource, whale optimization algorithm
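    For context, the position-update rules of the standard Whale Optimization Algorithm referenced in the preceding abstract (the generic formulation of Mirjalili and Lewis; the paper's task-scheduling encoding is not reproduced here) are the shrinking-encircling move and the spiral move:
    $$\vec{X}(t+1)=\vec{X}^{*}(t)-\vec{A}\cdot\vec{D},\qquad \vec{D}=\big|\vec{C}\cdot\vec{X}^{*}(t)-\vec{X}(t)\big|,\qquad \vec{A}=2\vec{a}\cdot\vec{r}-\vec{a},\qquad \vec{C}=2\vec{r},$$
    $$\vec{X}(t+1)=\vec{D}'\, e^{bl}\cos(2\pi l)+\vec{X}^{*}(t),\qquad \vec{D}'=\big|\vec{X}^{*}(t)-\vec{X}(t)\big|,$$
    where $\vec{X}^{*}$ is the best solution found so far, $\vec{a}$ decreases linearly from 2 to 0 over the iterations, $\vec{r}$ is a random vector in $[0,1]$, $b$ defines the spiral shape, $l$ is random in $[-1,1]$, each whale chooses between the two moves with equal probability, and when $|\vec{A}|\geq 1$ the encircling move targets a randomly chosen whale instead, to maintain exploration.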
  • Pouria Pirasteh Fard, Mehdi Abtahi * Pages 361-380
    Social responsibility and risk are essential issues to consider in projects. Knowing the type of risk in each project and its impact is an important and significant issue when choosing a project or participating in tenders. Considering the increasing growth of urban communities, the need to carry out such tasks, and the rising standard of living, carrying out projects with advanced technologies appears to be a necessity that requires modern and continuous planning. These plans aim to increase quality and reduce costs in the face of a high level of demand. In the present study, a fuzzy multi-objective mathematical model is designed to prioritize projects, minimize the risks involved in implementing each project, maximize profit margins, minimize project costs, and maximize the jobs created by each project. Finally, an example appropriate to real-world problems is designed and examined to verify the correct operation of the proposed model.
    Keywords: Fuzzy multi-objective technique, Fuzzy Shannon entropy, Social responsibility, Risk
  • Ali Ghoddosian *, Hesam Abhar Pages 381-398
    This paper presents a study on the topology optimization of continuum structures with the aim of maximizing the lowest buckling load factor. In structural shape and topology optimization problems, stability and buckling issues are not usually considered; therefore, in some cases, long and thin members appear in the optimized configuration and can lead to instability of the structure. In this article, a level set method incorporating a fictitious interface energy is applied to find the optimal configuration. One of the main problems in traditional continuum structural optimization methods, which are based on the removal or alteration of element densities, is the creation of pseudo buckling modes during the optimization process; these pseudo buckling modes must be identified and removed. The level set optimization method, which is based on moving the structure's boundary, has a high ability to control topological complexity. Therefore, using the level set method eliminates pseudo buckling modes and resolves this problem. Deriving the required speed term in the level set method is complicated, and the derivation of this term for the buckling load factor is the innovation of this research. Numerical examples are presented to demonstrate the effectiveness of the method.
    Keywords: Level set, Buckling, Topology optimization, stability, pseudo buckling modes
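    For context on the "speed term" mentioned in the preceding abstract, the level set function $\phi$ whose zero level set is the structural boundary is typically evolved by the standard Hamilton-Jacobi equation (the buckling-specific derivation of the speed is the paper's contribution and is not reproduced here):
    $$\frac{\partial \phi}{\partial t} + V_{n}\,|\nabla \phi| = 0,$$
    where $V_{n}$ is the normal velocity of the boundary, chosen from sensitivity information so that each step improves the objective, here the lowest buckling load factor.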
  • Ali Gholami, Nasrin Jazani *, Abolfazl Kazemi Pages 399-414
    Human resource (HR) managers are the most decisive factor in the success or failure of organizations and realize the organization's goals through their competency. Since managers' competencies in a crisis are completely different from those under normal circumstances, and recognizing these competencies is essential for effective management, this research was conducted to propose a competency model for human resource management (HRM) under crisis. The study is a descriptive survey in terms of methodology and objective. The population comprised 35 experts and 2430 employees of the Red Crescent, the Municipality, and the Provincial Government of Tehran, Iran. The research data were analyzed using the Delphi technique with interpretive structural modelling and factor analysis. According to the analysis, as viewed by the experts, 5 factors, comprising Iranian-Islamic, personality, interpersonal, organizational, and extra-organizational components, along with 18 subcomponents, were determined as the components of competence for HR managers in crises. The results showed that the following were accepted as priority indices within each component: domination of religious values, Iranian-Islamic morality, and fairness from the Iranian-Islamic component; pragmatism, adherence to morality, and assiduousness from the personality component; strategic leadership, efficient resource allocation, organizational intelligence, citizen orientation, and crisis management from the organizational component; and political intelligence, business acumen, and cultural intelligence from the extra-organizational component.
    Keywords: Competence, Human resource managers, Crisis, Public organization
  • Ahmed Badrulddin * Pages 415-424
    An avalanche of threats from the malicious use of information and communication technologies (ICT) in political, military, economic and social affairs has led to a deep awareness that new technologies may pose additional risks to international peace and safety. This raises the problem of international information security, that is, a state of information exchange among the countries of the world that excludes the possibility of violating the rights of individuals and users in various state agencies and in all fields of knowledge, as well as destructive and illegal impacts on elements of national critical information infrastructure. International information security has thus become an integral part of international security, understood as a system of international relations based on the observance by all states of generally recognized principles and norms of international law and excluding the resolution of controversial questions and disagreements between them through force or the threat of force. Accordingly, the principles of international security, which provide for peaceful coexistence, equal security for all states, the creation of effective guarantees in the military, political, economic and humanitarian spheres, the prevention of the nuclear and space arms race, respect for the sovereign rights of every people, and the fair political settlement of international crises and regional conflicts, certainly include the creation of a system of international information security. By such a system, designed to counter threats to strategic stability and ensure equal partnership in the global digital environment, we understand the totality of international and national norms and institutions, chief among which is the UN, that regulate the activities of various actors worldwide.
    Keywords: Source Code, DNS Spoofing, DoS (Denial of Service)
  • Lateef Jolaoso, Ferdinard Ogbuisi, OLUWATOSIN MEWOMO * Pages 425-446
    The purpose of this paper is to study the approximation of solutions of the split equality variational inclusion problem in uniformly convex Banach spaces which are also uniformly smooth. We introduce an iterative algorithm in which the stepsize does not require prior knowledge of the operator norms. This is very important in practice because the norms of operators that are often involved in applications are rarely known explicitly. We prove a strong convergence theorem for the approximation of solutions of the split equality variational inclusion problem in $p$-uniformly convex Banach spaces which are also uniformly smooth. Further, we give some applications and a numerical example of our main theorem to show how the sequence values affect the number of iterations. Our results improve, complement and extend many recent results in the literature.
    Keywords: Split equality problem, variational inclusion, Bregman distance, fixed point problem, operator norms, Banach spaces
  • Merve KARA *, Yasin Yazlik Pages 447-471
    We indicate that the systems of difference equations $$ x_{n+1}=f^{-1}\big( af\left( p_{n-1}\right)+bf\left( q_{n-2}\right) \big),\qquad y_{n+1}=f^{-1}\big( af\left( r_{n-1}\right)+bf\left( s_{n-2}\right) \big),\qquad n\in \mathbb{N}_{0},$$ where the sequences $p_{n}$, $q_{n}$, $r_{n}$, $s_{n}$ are some of the sequences $x_{n}$ and $y_{n}$, $f : D_f \longrightarrow \mathbb{R}$ is a one-to-one continuous function on its domain $D_f \subseteq \mathbb{R}$, the initial values $x_{-j}$, $y_{-j}$, $j\in\{0,1,2\}$ are arbitrary real numbers in $D_f$, and the parameters $a,b$ are arbitrary complex numbers with $b\neq 0$, can be solved in closed form in terms of generalized Padovan sequences.
    Keywords: system of difference equations, solution of closed form, Padovan number
  • Mohammad Fereydouni, Asgar Pakmaram *, Nader Rezaei, Rasoul Abdi Pages 473-482
    Earnings management is defined as the manipulation of earnings by management to achieve a portion of predetermined expected earnings. This research evaluated the effect of environmental structures on the fraudulent reporting of companies by hypothesizing causal relationships between the constructs of business environment change, long-term executive perspective, accounting experience, organizational culture and corporate affairs, acceptance of prominent stakeholders, and fraudulent reporting. The population consisted of top and middle managers and financial managers of the companies, and a standard questionnaire was used to collect data. The analyses were based on a structural equation modeling approach, after performing reliability and validity tests on the sample data. The results showed that business environment changes, accounting experience, and the acceptance of prominent stakeholders negatively affected fraudulent reporting.
    Keywords: Environmental structures, Earnings management behavior, Fraudulent reporting
  • Ebrahim Nejadabdollah, Gholamreza Memarzadeh Tehran *, Asghar Moshbaki Esfahani, Seyed Abdullah Sajjadi Pages 483-494
    This research aims to identify the dimensions, components, and indicators of successor development for central bank managers. A successor development plan provides the best staff for the critical jobs of the organization, given the lack of necessary planning for the departure of managers and experienced people and the lack of identification of talented people to fill managerial positions. This applied research employed an exploratory method and used a mixed method to collect data in quantitative and qualitative parts. The successor development indicators in the central bank were explained in the qualitative part using the content analysis method, in two stages of document analysis and library studies as well as semi-structured interviews. In the quantitative section, the views of the sample were examined with a researcher-made questionnaire built on the indicators derived from the qualitative section. The statistical sample in the qualitative section consisted of 12 senior managers of the Central Bank, selected by purposive and snowball sampling until theoretical saturation was reached. In the quantitative section, 285 basic, middle, and senior managers of the Central Bank of the Islamic Republic of Iran were selected by stratified random sampling using the Morgan table. Content analysis and coding were used for data analysis in the qualitative section, and one-sample t-tests were performed to explain and prioritize the dimensions, components, and indicators in the quantitative part. According to the results, 68 indicators were identified, explained, and prioritized within 12 components and four dimensions: organizational readiness, competency development, human resource empowerment, and underlying factors. The highest frequency among the indicators in the qualitative section was related to the component of "professional qualifications." The dimension of "human resource empowerment" (mean = 4.03) and the component of "education and development" (mean = 4.036) were the most crucial successor development requirements for central bank managers.
    Keywords: Successor development, Human resources development, Central bank managers
  • Tahereh Afshari, Mehran Farajollahi *, Saeed Mazloumian, Nazanin Khatib Zanjani Pages 495-518
    This research aims to design a curriculum in the flipped classroom in elementary math courses based on the synthesis research method. The statistical population included all valid scientific papers regarding the flipped classroom. A total of 120 scientific papers were identified based on inclusion criteria, and finally, 60 research papers were selected based on exclusion criteria for final analysis. A researcher-made worksheet was used to collect, report, and record the information of the initial research. Findings were analyzed using Marsh’s seven-step model of synthesis research and open and axial coding methods. The results indicated that the flipped classroom in an elementary math course is a student-centered program that creates different roles and responsibilities for the learner, teacher, and knowledge.
    Keywords: Curriculum, Curriculum elements, Flipped classroom, Research synthesis
  • Alireza Hajipour, Arash Mirabdolah Lavasani *, Mohammad Eftekhari Yazdi Pages 519-540
    The airflow around a simplified train model is investigated using a three-dimensional $\gamma-\widetilde{Re}_{\theta t}$ transitional approach. Four different yaw angles ($\theta = 10^{\circ}, 20^{\circ}, 30^{\circ}$, and $40^{\circ}$) perpendicular to the body of the simplified train model are considered, in which the magnitude of the frontal airflow is constant and the magnitude of the crosswind is determined by the yaw angle. The main aim of the research is to investigate the influence of the yaw angle and surface roughness on the time-averaged flow structure, turbulent quantities such as the turbulent kinetic energy and dissipation rate, and aerodynamic quantities such as the skin friction and pressure coefficients. The findings show that the yaw angle has a pronounced influence on the three-dimensional flow structure around the high-speed train. As the yaw angle increases, the skin friction and pressure coefficients increase. Furthermore, the roughness has a negligible effect on the pressure coefficient, while the skin friction coefficient increases locally on the rough train body.
    Keywords: High-speed train, Crosswind effects, Yaw angle, Surface roughness, $\gamma-\widetilde{Re}_{\theta t}$ transitional model
  • Mohammad Ahmadzadeh, Javad Vahidi *, Behrouz Minaei Bidgoli, Alireza Pourebrahimi Pages 541-562
    Today, intrusion detection systems are extremely important for securing computers and computer networks. Alert correlation systems complement intrusion detection systems: by analyzing and combining the alarms received from them, they produce appropriate reports for review and support security measures. One of the problems faced by intrusion detection systems is the generation of a large volume of false alarms, so one of the most important issues in correlation systems is checking the alerts received from the intrusion detection system to distinguish true-positive alarms from false-positive alarms. The main focus of this research is the applied optimization of classification methods to reduce organizations' costs and the time security experts spend on alert checking. The proposed intrusion detection model using correlation (IIDMC) is tested on a valid test dataset, and the results show the efficiency of the proposed model and, consequently, its high accuracy.
    Keywords: Intrusion Detection, Fuzzy Correlator, Incremental Online Learning, Active Learning
  • Khosro Jalali, Javad Vahidi *, Seyed Saleh Mohseni, Hadi Dehbovid Pages 563-570
    One of the ways to enhance the security and concealment of data today is image watermarking. In image watermarking, we try to hide an image inside another image without letting others know about the hidden image. In this paper, the Contourlet transform and the SVD transform are used to embed the watermark in the host image. The PSO optimization algorithm is also used in the watermark extraction step to find the best scale factor. The results of the proposed algorithm show an improvement over the comparative methods.
    Keywords: Watermarking, Contourlet Transform, PSO Algorithm
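    A much-simplified sketch of SVD-based embedding only, related to the preceding abstract; the Contourlet transform and the PSO search for the scale factor used in the paper are omitted, and the scale factor alpha below is a fixed, hypothetical value.

        import numpy as np

        def embed(host, watermark, alpha=0.05):
            """Embed a watermark into the singular values of the host image."""
            U, S, Vt = np.linalg.svd(host, full_matrices=False)
            _, Sw, _ = np.linalg.svd(watermark, full_matrices=False)
            marked = U @ np.diag(S + alpha * Sw) @ Vt
            return marked, S                      # keep S to allow extraction later

        def extract(marked, S_host, alpha=0.05):
            """Recover the watermark's singular values from the marked image."""
            _, Sm, _ = np.linalg.svd(marked, full_matrices=False)
            return (Sm - S_host) / alpha

        rng = np.random.default_rng(2)
        host = rng.random((64, 64))               # stand-in for a grayscale host image
        mark = rng.random((64, 64))               # stand-in for the watermark image
        marked, S_host = embed(host, mark)
        print(np.allclose(extract(marked, S_host), np.linalg.svd(mark)[1], atol=1e-8))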
  • Gwang Hui Kim * Pages 571-582
    In this paper, we solve and investigate the superstability of the $p$-radical functional equations related to the following Wilson and Kim functional equations \begin{align*} f\left(\sqrt[p]{x^{p}+y^{p}}\right) &+f\left(\sqrt[p]{x^{p}-y^{p}}\right)=\lambda f(x) g(y),\\ f\left(\sqrt[p]{x^{p}+y^{p}}\right) &+f\left(\sqrt[p]{x^{p}-y^{p}}\right)=\lambda g(x) f(y), \end{align*} where $p$ is an odd positive integer and $f$ is a complex-valued function. Furthermore, the results are extended to Banach algebras.
    Keywords: stability, Superstability, radical functional equation, cosine functional equation, Wilson functional equation, Kim functional equation
  • Shabir Malik *, Bashir Zargar, F.A. Zargar, F.A. Sofi Pages 583-594
    In this paper, some sharp lower bound estimates for the maximal modulus of $p'(z)$ and of the polar derivative of $p(z)$ are established in terms of their degrees, coefficients and the maximal modulus of $p(z)$ over the unit disk, under the assumption that all the zeros of $p(z)$ lie in $|z|\leq k$, $k \geq 1$.
    Keywords: Lower bound, Maximum modulus, Polar derivative, Polynomial
  • Enas Abdullah, Suad Alasadi *, Alyaa Al Joda Pages 595-604

    Leveraging text mining for sentiment analysis and integrating text mining with deep learning are the main purposes of this paper. The presented study includes three main steps. In the first step, pre-processing such as tokenization, text cleaning, stop-word removal, stemming, and text normalization is applied. Secondly, features are extracted from reviews and tweets using the Bag of Words (BOW) method and Term Frequency-Inverse Document Frequency (TF-IDF). Finally, deep learning with dense neural networks is used for classification. This research throws light on the basic concepts of sentiment analysis and then showcases a model that performs deep learning classification on a movie review and an airline sentiment data set. Performance measures in terms of precision, recall, F1-measure and accuracy were calculated. Based on the results, the proposed method achieved an accuracy of 95.38% and 93.84% for the movie review and airline sentiment data sets, respectively.

    Keywords: Sentiment Analysis, Deep learning, DNN, Text mining
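    A compact sketch of the BOW-plus-dense-network pipeline from the preceding abstract, on toy sentences (placeholders, not the movie-review or airline data); TensorFlow/Keras is assumed as the deep learning backend, and TF-IDF features could be substituted for the bag-of-words counts.

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import Dense

        texts  = ["great movie loved it", "terrible flight very late",
                  "excellent service", "worst film ever", "loved the crew", "awful plot"]
        labels = np.array([1, 0, 1, 0, 1, 0])          # 1 = positive, 0 = negative (toy labels)

        # Bag-of-words features (TfidfVectorizer would give TF-IDF instead).
        X = CountVectorizer().fit_transform(texts).toarray().astype("float32")

        # Small dense (fully connected) classifier.
        model = Sequential([
            Dense(16, activation="relu", input_shape=(X.shape[1],)),
            Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(X, labels, epochs=20, verbose=0)
        print("training accuracy:", model.evaluate(X, labels, verbose=0)[1])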
  • Doria Affane, Mustapha Fateh Yarou * Pages 605-615
    In this paper, we deal with the state-dependent nonconvex sweeping process, motivated by quasi-variational inequalities arising in the evolution of sandpiles, quasistatic evolution problems with friction, and micromechanical damage models for iron materials. We prove the existence of an absolutely continuous solution for the problem in the presence of a perturbation, that is, an external force applied to the system. The perturbation considered here is general and takes the form of a sum of a single-valued Carathéodory mapping and a set-valued unbounded mapping.
    Keywords: State-dependent sweeping process, prox regular sets, proximal subdifferential, unbounded perturbation, ball-compactness
  • Miguel Vivas Cortez *, Muhammad Shoaib Saleema, Sana Sajid Pages 617-632
    In the present research, we deal with mean square integral inequalities for preinvex and η-convex stochastic processes in the setting of the beta function. Further, we present some novel results for an improved Hölder integral inequality. The results given in this paper are generalizations of results already existing in the literature.
    Keywords: Mean square integral inequalities, Convex stochastic process, η-convex stochastic process, Preinvex stochastic process, Beta function
  • Farbod Khanizadeh *, Farzan Khamesian, Alireza Bahiraie Pages 633-642
    Concerning life insurance, the penetration rate is one of the main goals of every developed insurance industry. In this sense, systematic marketing is a significant component of the strategic plans of insurance companies. To achieve this goal, insurers need to group their clients into segments that share common features and demonstrate similar patterns. This paper utilizes K-means clustering as an unsupervised learning algorithm in order to divide customers into a number of clusters. The clusters are constructed based on two independent variables, namely car and life insurance premiums. Then, descriptive statistics of other determining features are provided, from which the group most willing to purchase life insurance is identified.
    Keywords: Clustering, K-means, machine learning, Life Insurance
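    A short sketch of the clustering step described above, using scikit-learn's `KMeans` on two premium variables; the synthetic (car_premium, life_premium) values are a hypothetical stand-in for the study's policy data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical (car_premium, life_premium) pairs standing in for real policy data.
rng = np.random.default_rng(1)
premiums = np.vstack([
    rng.normal([300, 50], 30, size=(50, 2)),    # low life-insurance spenders
    rng.normal([500, 400], 40, size=(50, 2)),   # high life-insurance spenders
])

X = StandardScaler().fit_transform(premiums)    # put both premiums on one scale
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for c in range(2):
    members = premiums[kmeans.labels_ == c]
    print(f"cluster {c}: n={len(members)}, mean premiums={members.mean(axis=0).round(1)}")
```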
  • Ramu Dubeya, Rajnish Kumar, Khursheed Alam, Vishnu Narayan Mishra * Pages 643-651
    In this paper, we introduce the concept of $(G,\rho,\theta)$-invexity/pseudoinvexity. We formulate duality outcomes for $G$-Wolfe-type fractional symmetric dual programs over arbitrary cones. In the final section, we discuss the duality theorems under $(G,\rho,\theta)$-invexity/$(G,\rho,\theta)$-pseudoinvexity assumptions.
    Keywords: Fractional programming problem, Symmetric duality, $(G,\rho,\theta)$-invexity, $G$-Wolfe model, $(G,\rho,\theta)$-pseudoinvexity
  • Danh Quoc Nam * Pages 653-662
    In this paper, we consider an inverse problem for the time-fractional diffusion equation on the sphere where the final data on the sphere are given. The problem is ill-posed in the sense of Hadamard. Hence, a regularization method has to be used to obtain a stable approximate solution. The well-posedness of the proposed regularized problem and the convergence of the regularized solution to the exact one are then proved. Error estimates for this method are provided together with a selection rule for the regularization parameter.
    Keywords: Time fractional diffusion, inverse problem, Ill-posed problem, Convergence estimates
  • Houssem Kadem, Salim Mesbahi *, Saida Bendaas Pages 663-676
    In this paper, we prove the existence of weak global solutions for a class of generic reaction diffusion systems for which two main properties hold: the quasi-positivity and a triangular structure condition on the nonlinearities. The main result is a generalization of the work already done on models of a single reaction-diffusion equation. The model studied is applied in image recovery and contrast enhancement. It can also be applied to many models in biology and radiology.
    Keywords: reaction diffusion system, global existence, Schauder fixed point theorem
  • Nana Mbroh, Clovis Noutchie *, Rodrigue Massoukou Pages 677-687
    A second order finite difference scheme is constructed to solve a singularly perturbed degenerate parabolic convection diffusion problem via Rothe's method. The solution of the problem exhibits a boundary layer on the left side of the spatial domain. By means of the Crank-Nicolson finite difference scheme, the time derivative is discretised to obtain a set of semi-discrete boundary value problems. Using a fitted operator finite difference scheme based on the midpoint downwind scheme, the system of boundary value problems is discretised and analysed for convergence. Second order accuracy is established for each discretisation process. Numerical simulations are carried out to validate the theoretical error estimate.
    Keywords: Singular perturbations, parabolic degenerate problem, fitted operator finite difference methods, convergence
  • Nana Mbroh, Clovis Noutchie *, Rodrigue Massoukou Pages 689-698
    In this paper,  a  one-dimensional modified Burgers'  equation is considered for different  Reynolds numbers. For very high Reynolds numbers,  the solution possesses a multiscale character in some part of the independent domain and thus can be classified as a  singularly perturbed problem. A numerical scheme that uses a fitted operator finite difference scheme to solve the spatial derivatives and the implicit Euler scheme for the time derivative is proposed to solve the modified  Burgers'  equation via Rothe's method. It is important to note that the proposed fitted operator finite difference scheme is based on the midpoint upwind scheme. The stability of the scheme is established and the error associated with each discretisation is estimated. Numerical simulations are carried out to validate the theoretical findings.
    Keywords: Singularly perturbed problems, Modified Burgers' equation, Uniform convergence
  • Rahmat Darzi, Abdolhadi Dabbaghian * Pages 699-707
    Using a novel norm that is convenient for fractional and singular differential equations, the existence and uniqueness of solutions of an initial value problem (IVP) for a new type of nonlinear Langevin equation involving three fractional orders are discussed. This norm is a tool to measure how far a numerical solution is from the exact one. The new results are based on the contraction mapping principle. Lemma 2.2 has a prominent role in proving the main theorem. The fractional derivatives are described in the Caputo sense. Two examples are presented to illustrate the theory.
    Keywords: Fractional Langevin equation, Fixed point theorem, Existence results
  • T. Podile, B. Muatjetjeja, Abdullahi Adem * Pages 709-718
    This paper aims to study a generalized (2+1)-dimensional Bogoyavlensky-Konopelchenko equation. We perform symmetry reduction and derive exact solutions of a generalized (2+1)-dimensional Bogoyavlensky-Konopelchenko equation. In addition, conservation laws for the underlying equation are constructed.
    Keywords: Symmetry reduction, Exact solutions, Conservation laws
  • Maryam Rezaei *, Sayed Farhad Mousavi, Ali Moridi, Madjid Eshaghi, Hojat Karami Pages 719-731

    The increase in exploitation of aquifers in an unbalanced way to meet the growing demands of agriculture has led to a decrease in groundwater levels and, as a result, an increase in the cumulative groundwater-reservoir deficit. In the long run, this will also reduce profits from agriculture due to declining water table levels and rising water extraction costs. In this article, the application of a socialist cooperative game is proposed to encourage the propensity to cooperate, improve agriculture's cumulative net benefit, and stimulate the balanced use of groundwater. The purpose of this approach is to prevent groundwater level drawdown and compensate for part of the groundwater-reservoir deficit in the Dezful-Andimeshk plain, southwest of Iran. In this study, consumer behavior, as one of the main factors in groundwater resources management, has been investigated. This method has been derived from socialist cooperative game theory, treating the consumer as a factor affecting water table drawdown, and is envisioned in the form of an eco-socialism model. Results revealed that the maximum water table drawdown will be reduced by 21%, and as a result, 16 million cubic meters (MCM) of the groundwater reservoir deficit will be compensated and the net benefit from agricultural activities will also increase by 26%.

    Keywords: Water table drawdown, Groundwater reservoir deficit, Water resources management, Socialist cooperative game, game theory
  • Christina Majola, Ben Muatjetjeja, Abdullahi Adem * Pages 733-743
    A two-wave mode Korteweg-de Vries equation is investigated by using Lie symmetry analysis. The similarity reductions and new exact solutions are obtained via the simplest equation method. Exact solutions including solitons are shown. In addition, the conservation laws are derived using the multiplier approach.
    Keywords: Two-wave mode Korteweg-de Vries equation, Lie symmetry method, Simplest equation method
  • Aml Shloof, Aisha Gewily * Pages 745-754
    This study uses the fractional B-spline collocation technique to obtain numerical solutions of fractal-fractional differential equations (FFDEs) with $0 < \gamma_i < 1,\ i = 1, 2, \cdots, N$. In the suggested technique, B-splines of fractional order are utilised in the collocation scheme. The scheme is easy to implement, efficient, and relatively precise, with reduced computational work. Via the proposed technique, FFDEs are reduced to a system of linear algebraic equations that can be solved by an appropriate numerical approach. Numerical illustrative experiments are presented to show the effectiveness of the proposed technique in solving FFDEs for three cases of nonlocal integral and differential operators, namely the power-law kernel, the exponential kernel, and the generalized Mittag-Leffler kernel. The approximate solutions are in close agreement with the exact solutions.
    Keywords: Fractional B-spline, Linear fractional differential equations (FDEs), Caputo-Fabrizio (C-F) derivative
  • Ali Ansari, Hossein Jafari *, Shahriar Farahmand Rad Pages 755-765
    This study is devoted to introducing a computational technique based on Bernstein polynomials to solve variable order fractional optimal control problems (VO-FOCPs). This class of problems is generated by dynamical systems described by variable-order fractional derivatives in the Caputo sense. In the proposed method, the Bernstein operational matrix of the fractional variable-order derivatives is derived. Then, this matrix is used to obtain an approximate solution to the mentioned problems. With the use of the Gauss-Legendre quadrature rule and the mentioned operational matrix, the considered VO-FOCPs are reduced to a system of equations that are solved to get approximate solutions. The obtained results show the accuracy of the numerical technique.
    Keywords: Fractional Optimal Control Problems, Variable order, Bernstein polynomials, Operational matrix
  • BOUBEKEUR GASMI, Arezki Kessi, Zakia Hammouch * Pages 767-780
    This paper presents a new sub-equation method based on an auxiliary equation, implemented via the well-known generalized Kudryashov method, to construct new traveling waves for the Telegraph equation with time and space conformable derivatives. To illustrate its effectiveness, it was tested by seeking traveling wave solutions of the (1+1)-Telegraph equation with space-time conformable derivatives. With the help of Maple software we derive some new solitary wave solutions. It can be concluded that the proposed method is an accurate tool for solving several kinds of nonlinear evolution equations.
    Keywords: (1+1)-Telegraph equation, Generalized Kudryashov method, Conformable derivative, Auxiliary equation, Traveling wave, Optical solutions
  • S Sadegh Modarres Mosadegh *, Reza Dehghanizade Pages 781-792
    Traces of quotient spaces are usually seen wherever a linear structure is studied. In linear spaces, we use subspaces and their corresponding equivalence relations to define quotient spaces. With the same method, in this paper, we present two generalized structures of quotient space that are defined on quasilinear spaces. One of them is a quasilinear space and the other is a linear space. After that, we try to introduce norms on certain cases of these spaces and examine some of their properties. We also provide examples for better understanding throughout the process.
    Keywords: Non-linear space, Quasilinear space, Normed quasilinear space, Non-linear quotient space
  • Sabah Jasim Alsaedi Pages 793-798

    Logistic regression is a commonly used statistical method in empirical studies involving categorical dependent variables. This study reviews dental caries in association with variables such as sex, milk consumption, smoking, tooth extraction and diabetes. The aim of the study is to examine common practices in reporting and interpreting logistic regression results and to determine which of these variables affect dental caries. Using two methods, the enter method and forward stepwise selection, sex and tooth extraction were found to be the variables most strongly related to the dependent variable (yi), dental caries.

    Keywords: Logistic regression, Dental caries, odds ratio
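    A minimal sketch of fitting a logistic regression and reading its coefficients as odds ratios, the usual interpretation named in the keywords above; the small data frame is entirely hypothetical and only mirrors the predictor names from the abstract.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical records: columns mirror the predictors named in the abstract.
df = pd.DataFrame({
    "sex":             [0, 1, 1, 0, 1, 0, 1, 0, 1, 1],
    "drinks_milk":     [1, 0, 1, 1, 0, 1, 0, 1, 0, 0],
    "smoking":         [0, 1, 0, 0, 1, 0, 1, 0, 1, 1],
    "extracted_teeth": [0, 1, 1, 0, 1, 0, 1, 0, 1, 1],
    "diabetic":        [0, 0, 1, 0, 1, 0, 0, 1, 1, 0],
    "dental_caries":   [0, 1, 1, 0, 1, 0, 1, 0, 1, 1],   # dependent variable y_i
})

X, y = df.drop(columns="dental_caries"), df["dental_caries"]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Odds ratios: exponentiated coefficients, the usual way logistic results are read.
odds_ratios = pd.Series(np.exp(model.coef_[0]), index=X.columns)
print(odds_ratios.sort_values(ascending=False))
```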
  • Sunbul Mohammed *, Anwaar Abdulkareem Pages 799-807
    This article deals with the use of certain functions in the censored regression model, namely the Tobit model and the Log-BurrXIIEE model, and with estimating the parameters of the two models. Based on real data for patients with renal failure, the two models were compared. The results show that the LBXIIEE model is the best model compared to the Tobit model according to the values of the model selection criteria, as the values of these criteria for the LBXIIEE model were lower than the values of the same criteria for the Tobit model when urea is used as the dependent variable in the model. We also note that the values of the criteria (AIC, BIC, H-QIC) change and decrease according to the steps used for the estimation and analysis of the two models (Tobit, LBXIIEE) and according to the specified dependent variable.
    Keywords: censored regression model, Tobit model, Log-BurrXIIEE model, criteria
  • Omid Rahmani Seryasat *, Sina Ahmadi, Pouya Yousefi, Farzad Tat Shahdost, Sareh Sanei Pages 809-823
    Phishing is a social engineering technique used to deceive users by trying to obtain confidential information such as usernames, passwords or bank account information. One of the most important challenges on the Internet today is the risk of phishing attacks and Internet scams. These attacks cost the United States billions of dollars a year. Therefore, researchers have made great efforts to identify and combat such attacks. Accordingly, the present study aims to evaluate methods for identifying phishing websites. This research is applied in terms of its objectives and descriptive-analytical in nature. In this article, a classification approach is used to identify phishing websites. From a machine learning point of view, if a suitable strategy is used, the ensemble of votes of different classifiers can be used to increase the accuracy of classification. In the method proposed in this paper, three inherently different ensemble classifiers, called bagging, AdaBoost, and rotation forest, are employed. In this method, the stacked generalization strategy is used as the ensemble strategy. A relatively new dataset is employed to evaluate the performance of the proposed method. The dataset was added to the UCI repository in 2015 and uses 30 features that appear to be appropriate for distinguishing phishing and non-phishing websites. The present study uses 10-fold cross-validation as the evaluation strategy. The numerical results indicate that the proposed method can be used as a promising method for detecting phishing websites. It is worth mentioning that this method achieves an F-score of 96.3, which is a good result in detecting phishing.
    Keywords: Phishing, Classification, Ensembling, Stacked generalization
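    A sketch of the stacked-generalization idea described above using scikit-learn: bagging and AdaBoost base learners with a logistic-regression meta-learner under 10-fold cross-validation. Rotation forest is not shipped by scikit-learn, so a random forest stands in for it here, and the 30-feature data is synthetic rather than the UCI phishing dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 30-feature UCI phishing dataset.
X, y = make_classification(n_samples=600, n_features=30, random_state=0)

# Base learners: bagging, AdaBoost, and a random forest standing in for
# rotation forest (which scikit-learn does not provide).
base_learners = [
    ("bagging", BaggingClassifier(random_state=0)),
    ("adaboost", AdaBoostClassifier(random_state=0)),
    ("forest", RandomForestClassifier(random_state=0)),
]

# Stacked generalization: a logistic-regression meta-learner combines the votes.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))

# 10-fold cross-validation, matching the evaluation strategy described above.
scores = cross_val_score(stack, X, y, cv=10, scoring="f1")
print("mean F1:", scores.mean().round(3))
```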
  • Khudair Hussain *, Naseif Al Jawari, Abdul Khaleq Mazeel Pages 825-845

    In this article, we present a new fractional integral with a non-singular kernel and, using the Laplace transform, we derive the corresponding fractional derivative. By composing our fractional integral operator with the classical Caputo and Riemann-Liouville fractional operators, we establish a new fractional derivative which interpolates between the generalized fractional derivatives in the Riemann-Liouville and Caputo-Fabrizio senses with non-singular kernels. Additionally, we introduce the fundamental properties of these fractional operators with applications and simulations. Finally, a model of Coronavirus (COVID-19) transmission is presented as an application.

    Keywords: Fractional integral, Fractional derivative, non-singular kernels, Mittag-Leffler function, Coronavirus (COVID-19)
  • Wakas Khalaf *, Maitham Shakir, Nagham Abd Alreda Pages 847-860

    The Iraqi Ministry of Defense decided that there would be no escape from death for the terrorist gang members, who proceed with a false doctrine, until they are eradicated from Iraq, which can never be an incubator of terrorism. Accordingly, and with the capabilities available to it, the ministry has developed two strategies. The first is allocating ten regiments from the Iraqi Special Operations Forces (ISOF) to maximize the performance of the Iraqi armed forces (IAF) in defending the homeland and maintaining its security and stability against terrorist gangs in four border regions of Iraq. The second is to reduce the arrival time of the ISOF to the battlefield by determining the optimal paths using an efficient scientific approach, namely dynamic programming (DP). The results of this study, after solving a real-life problem, proved that the proposed approach is an effective mathematical approach for taking a series of related decisions by maximizing the performance level of the ISOF and finding the shortest time for them to reach the battlefield.

    Keywords: Allocating, Maximizing the performance, Optimal path, Reducing arrival time, Dynamic programming, DP
  • Almutasim Hamed *, Ekhlass Al Rawi, Ahmed Qasim Pages 861-871

    In this paper, the nonlinear Benny-Lin equation has been solved using the Adomian decomposition method (ADM) with different initial conditions, with the results shown in Figures 1-8, and also using a modified Adomian decomposition technique combined with a genetic algorithm to determine the optimal equation parameters. The proposed method (GA-ADM) guarantees that the optimal parameters will be achieved precisely regardless of the complexity and multiple values of the equation. The proposed method gives more accurate results than the ADM.

    Keywords: Benny-Lin equation, Adomian-decomposition method, Genetic algorithm
  • Ehsan Dehghanipour *, Ali Ghoddosian Pages 873-891
    A topological derivative of the Lagrangian is required for the optimization of structures with thermal and mechanical boundary conditions by the level-set method using the reaction diffusion equation. In this study, drawing on the relationship between the shape derivative and the topological derivative, the topological derivative of the Lagrangian was obtained by Reynolds' transport theorem. Given that introducing holes into the topology creates boundaries, the derivative was found by incorporating the boundary integral into Reynolds' transport theorem and analyzing the stress over the hole boundaries. Under thermal and mechanical boundary conditions, the temperature was assumed to depend on the topology. Placing a hole in the structure affects the temperature of the remaining elements. A penalty factor is applied to the thermal conductivity of removed elements, and the result is taken into account in Laplace's equation expressing steady-state conductive heat transfer.
    Keywords: Topological Derivative, Shape Derivative, Reynolds' Transport Theorem, Reaction Diffusion Equation, Thermal, Mechanical Boundary Conditions
  • Elaf Khaleel *, Eman Hamed, Huda Ahmed Pages 893-901
    In this paper, we investigate a new spectral Quasi-Newton (QN) algorithm. The new search directions of the proposed algorithm increase its stability and speed its arrival at the optimal solution with the lowest cost value, and our numerical comparisons between the standard Firefly Algorithm (FA) and the new proposed algorithm are relevant to the meta-heuristic field. The new proposed algorithm has quite common uses in several science and engineering problems. Finally, our numerical results show that the proposed technique performs best and that its accuracy is higher than that of the standard FA. These numerical results are compared using statistical analysis to evaluate the efficiency and robustness of the new proposed algorithm.
    Keywords: QN-method, self-scaling QN, Conjugate gradient, Unconstrained optimization, Firefly Algorithm
  • Ali Momeni Asl, Ali Broumandnia *, Seyed Javad Mirabedini Pages 903-921
    This study proposes a scale-invariant digital color image encryption method that includes three main steps: the pre-substitution, the 3D scale-invariant modular chaotic map, and the post-substitution. 1) The pre-substitution: at the first stage, the pixels of the plain sub-images are XORed with different key patterns. Starting from one of the plain sub-images, the pixels of the selected plain sub-image are XORed with the initial key, and the result is used both as the cipher sub-image and as the next key pattern for performing XOR operations on the next plain sub-image. In other words, the XOR result of each step is used as the key pattern of the next step. 2) The 3D permutation: at the second stage, the red, green, and blue components of an $M \times N$ color image are first divided into $m$ sub-images of size $n \times n$. Then the $m$ sub-images are partitioned into $k = \left\lceil \frac{m}{n} \right\rceil$ windows $W_1$ to $W_k$ of $n \times n$ sub-images. The last two windows $W_{k-1}$ and $W_k$ may overlap in several sub-images. Finally, the three-dimensional modular chaotic maps are performed on the windows $W_1$ to $W_k$ with MIPF keys selected by an LFSR. 3) The post-substitution: at the final stage, the $M \times N$ color image is initially divided into a set of color sub-images. Then, the 24-bit pixels of each sub-image are circularly shifted by several bits specified in the secret key. Modular arithmetic is used in the 3D scale-invariant chaotic maps to increase the keyspace and enhance the security parameters. By repeating at least one round of the main steps, the proposed encryption scheme reaches optimal parameter values, and it is highly sensitive to minor differences in both the secret key and the plain image. The proposed encryption method improves the standard evaluation parameters such as entropy, adjacent pixel correlations, histogram, and the expanded keyspace.
    Keywords: Image encryption, 3D chaotic map, Modular arithmetic, LFSR, Keyspace
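    A small sketch of the chained XOR pre-substitution described in step 1 above (only this step; the 3D chaotic permutation and post-substitution are not reproduced). The block size, number of sub-images, and initial key below are illustrative assumptions.

```python
import numpy as np

def pre_substitution(sub_images, initial_key):
    """Chained XOR pre-substitution over a list of equally sized sub-images.

    Each XOR result serves both as the cipher sub-image and as the key
    pattern for the next sub-image, as described in the abstract; the key
    and sub-image values here are illustrative uint8 arrays.
    """
    key = initial_key
    ciphered = []
    for block in sub_images:
        key = np.bitwise_xor(block, key)   # cipher block = next key pattern
        ciphered.append(key)
    return ciphered

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    blocks = [rng.integers(0, 256, (8, 8), dtype=np.uint8) for _ in range(4)]
    key0 = rng.integers(0, 256, (8, 8), dtype=np.uint8)

    enc = pre_substitution(blocks, key0)

    # Decryption reverses the chain: XOR each cipher block with the previous one.
    dec = [np.bitwise_xor(enc[0], key0)]
    dec += [np.bitwise_xor(enc[i], enc[i - 1]) for i in range(1, len(enc))]
    assert all(np.array_equal(a, b) for a, b in zip(dec, blocks))
    print("round-trip OK")
```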
  • Neda Zarinnegar *, Mehdi Mahmoodzadeh, Ghasem Eslami, Valimohammad Darini Pages 921-935
    One of the industries that has played an undisputed role in the economic cycle in recent years, both in terms of economics and profitability and in the health sector, is the organic products industry. This study aims to identify the dimensions and components of two categories of characteristics, demographic and environmental factors, affecting the development of the organic food market, in the form of a conceptual model. The method of the present study is descriptive-analytical and of a mixed type, including both qualitative and quantitative methods. The research method in the qualitative stage was content analysis, and in the quantitative part it was descriptive survey research. The statistical population in the qualitative stage consisted of experts, professors and specialists in the field of organic production. In the qualitative sampling, using interviews, we continued to collect information until we reached the saturation point. The tool used was a semi-structured interview. The statistical population in the quantitative part is unlimited, including all consumers of organic products in the city of Mashhad, and 384 people were selected as a sample by convenience sampling. The data collection tool in the quantitative part was a researcher-made questionnaire. To analyze the data of this research, MAXQDA software was used in the qualitative stage, and the structural equation modeling method was used in the quantitative part to determine the effects and the fit of the model. In the end, seven categories were obtained, which included the characteristics of households, urbanization, and political, cultural, economic, social and competitive factors.
    Keywords: Environmental factors, demographics, organic food products, market development
  • Hamid Tavakkolai, Gholamreza Ardeshir *, Yasser Baleghi Pages 937-950
    Scaling makes digital circuits highly vulnerable to faults, hence, fault detection and correction in digital systems is crucial. This problem becomes more serious when the complexity and frequency of the system on a chip increase in order to achieve higher performance. Addition is one of the fundamental mathematical operators, which is also the basis of many other operations such as subtraction, multiplication, and addressing. Therefore, designing a fault-tolerant adder is a hot topic in VLSI circuits. In such circuits, the important issue of detecting and correcting multiple faults while using few hardware resources is of high interest from the aspect of circuit design optimization. In this paper, a fast adder is used, and the ability to detect and correct multiple errors is supplemented. In the proposed architecture, a self-testing full adder is designed based on multiplexers, and then this full adder is incorporated to design a self-testing multi-bit fast adder. Furthermore, a self-correcting full adder is designed using redundancy, and then this full adder is applied to design a self-correcting multi-bit fast adder. The syntheses and simulations of the proposed adders are performed for 8, 16, 32, and 64 bits. The obtained results show that the proposed adders outperform recent works in terms of the used hardware resources and the multiple faults detection and correction capability.
    Keywords: Fast Adder, Fault-tolerant Adder, Self-testing Adder, Self-correcting Adder, Soft Fault, Multiple Faults
  • Haitham Hasan *, Mais Al Sharqi Pages 951-958

    Bodily motion or emotion, which can be observed for example in a hand or a face, gives rise to gestures. Every individual has a unique pattern of dorsal hand veins. The orientation of the vein pattern changes when one rotates the hand in a particular direction. This study focused on hand-gesture recognition using dorsal hand veins. The aim of this work is a novel technique for tracking and recognizing hand vein rotation using a fuzzy neural network, where the change in orientation is treated as a gesture and measured. The algorithms were tested over various rotations ranging from $-45^{\circ}$ to $+45^{\circ}$. We successfully detected various rotations in both clockwise and anti-clockwise directions, achieving 93% accuracy and a reasonable execution time. A potential application is steering a car merely by rotating the hand: an infrared camera captures the rotation of the hand veins, so a physical steering wheel becomes unnecessary.

    Keywords: Complex Walsh transform, Dorsal hand vein pattern, feature extraction, Fuzzy neural network, Sectorization
  • Lala Riza *, Muhammad Fazanadi, Judhistira Utama, Taufiq Hidayat, Khyrina Abu Samah Pages 959-970
    Motif discovery has emerged as one of the most useful techniques for processing time-series data. One implementation of motif discovery is a case study of the 1:1 mean motion resonance (MMR) in the astronomy field. This study aims to build a computational model, and its implementation, to process time-series data and predict 1:1 MMR from asteroid orbital elements in time-series form. The model uses the Symbolic Aggregate approXimation (SAX) and Random Projection (RP) algorithms, implemented in the Python programming language. Experiments involving the orbital elements of ten asteroids have been carried out to validate the program. From the results obtained, we conclude that our computational model can predict the location of the motif and with which planet the motif is found for 1:1 resonance to occur.
    Keywords: Motif discovery, Astrophysics, Random projection, time series
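    A generic sketch of the SAX discretisation named above (not the authors' exact implementation): z-normalise a series, reduce it with piecewise aggregate approximation, then map segment means to letters via Gaussian breakpoints. The sine-plus-noise "orbital_element" series is an invented stand-in.

```python
import numpy as np
from scipy.stats import norm

def sax(series, n_segments=8, alphabet_size=4):
    """Symbolic Aggregate approXimation of a 1-D series (generic sketch)."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)                 # z-normalisation
    paa = x[: len(x) // n_segments * n_segments]
    paa = paa.reshape(n_segments, -1).mean(axis=1)         # PAA segment means
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.searchsorted(breakpoints, paa)            # 0 .. alphabet_size-1
    return "".join(chr(ord("a") + s) for s in symbols)

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 160)
    orbital_element = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
    print(sax(orbital_element))   # one short SAX word for the whole window
```

    In a typical motif-discovery pipeline, SAX words from sliding windows would then be compared via random projection to reveal recurring patterns.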
  • Nazim Razali, Aida Mustapha *, Norwati Mustapha, Filipe Clemente Pages 971-980
    This paper presents a Bayesian approach for Major European Football League match prediction. In this study, four variants of Bayesian approaches are investigated to observe the impact of different structural learning algorithms within the family of Bayesian networks: Naive Bayes (NB), Tree Augmented Naive Bayes (TAN) and two General Bayesian Networks (GBN), namely the K2 algorithm with the BDeu scoring function (GBN-K2) and the Hill Climbing algorithm with the MDL scoring function (GBN-HC). The predictive performance of all Bayesian approaches is evaluated and compared based on football match results from five major European football leagues consisting of three complete seasons of 1,140 matches. The results showed that GBN-HC gained 92.01% accuracy, while GBN-K2 and TAN produced comparable results with 91.86% and 91.94% accuracy, respectively. The lowest result was produced by NB, with only 72.78% accuracy. The results suggest that TAN requires further exploration in football prediction, given its ability to cater for the minimal dependency among attributes in a small-sized dataset.
    Keywords: Football, Bayesian networks, Naive bayes, Tree augmented naive bayes, General bayesian networks
  • Khyrina Abu Samah *, Raseeda Hamzah, Nur Nabilah Abu Mangshor, Lala Riza, Norah Noor Pages 981-996
    Islamic jurisprudence relates to every element of Muslim life and is founded in Sharia. Many interpretations of Sharia exist, based on differing Islamic schools of thought, along with multiple laws concerning Sharia. In Malaysia, there are 135 sections of Sharia law related to family and marriage in Act 303. A survey conducted shows that Malaysians struggle when looking for Sharia law provisions and that the search is time-consuming. Therefore, we propose a web-based Sharia Law Finder (SLF) system that solves the information retrieval issues for Sharia law in Malaysia. The SLF adopts two keyword extraction (KE) algorithms, namely Term Frequency-Inverse Document Frequency (TF-IDF) and Rapid Automatic Keyword Extraction (RAKE). The effectiveness of both algorithms was assessed using functional and usability testing. The solution is enhanced using visualization tools, namely bubble charts and a word cloud: bubble charts visualize the related Sharia laws based on users' queries, and the word cloud visualizes the keywords used by the users. A total of 30 respondents tested the functionality and usability of SLF. As a result, the system works as specified, with a System Usability Scale score of 96.58%, indicating acceptance of the proposed solution.
    Keywords: Family, marriage act, RAKE algorithm, Sharia law, TF-IDF, Visualization
  • Suzi Sarnin, Wan Norsyafiza Mohamad *, Mohd Yusof, Ros Shilawani Abdul Kadir, Mohd Nor Mad Tan, Norsinnira Azlan, Zarina Zamani Pages 997-1009
    This study focuses on providing a solution for a mobile service provider with Multi Network Operators (MNOs) to deliver an excellent service to the thousands of Mobile Subscribers (MS) attending the Nasional Bukit Jalil Stadium in Kuala Lumpur, Malaysia, using a single multi-beam antenna via a hybrid circuit. The Hybrid Combiner (HC) is the solution used to combine multiple MNOs in order to minimize space and cost while maintaining the aesthetic value of the national stadium. During a major event, MS users may encounter difficulties accessing the service due to network congestion. In this case, MNOs will need to add capacity to meet the demand for data transmission and voice call transactions. In the current situation, MS users have received poor service because they are unable to connect to the internet or make phone calls while attending a major event at the stadium. Following the implementation of the proposed solution, MS users will be able to access the network and enjoy live feeds via Facebook (FB) and other software applications without delay, interruption, or voice call congestion. Using the planning methods, the results of the proposed solution are compared to the results of the Walk Test and the coverage simulation analysis. Data statistics obtained from the MNOs explain the solution's effectiveness in terms of signal quality level, with the Signal to Interference plus Noise Ratio (SINR) recorded at -95 dBm, below the threshold of -85 dBm, to prevent interference with MS users. The Resource Block (RB) Utilization shows that all sectors are utilized at less than 70% of total available capacity, indicating that the congestion level is manageable and MS users can access the network without interruption. Key factors of the proposed study, known as the Hybrid Combiner Circuit of Multi Network Operator for Capacity Enhancement Solution in Indoor Environment, are fast deployment, low maintenance, and a shared solution between MNOs.
    Keywords: Mobile network operator(MNOs), In-Building coverage (IBC), Multibeam antenna system (MAS), Multibeam RF antenna
  • Lutfiah Lut, Ismail Musirin *, Muhammad Othman, Nor Azwan Mohamed Kamari, Thuraiya Mohd, Shazlyn Shaharudin, Suraya Masrom Pages 1011-1024
    Power systems these days appear to operate under high-stress loads, which could trigger voltage security problems. This is due to the fact that the system will operate under low voltage conditions, possibly below the allowable voltage limit. The voltage collapse phenomenon can become one of the notable issues in power systems and can lead to severe consequences of voltage instability. This paper proposes a method for managing voltage stability risk using two methods, evolutionary programming (EP) and multiverse optimization (MVO). Consequently, EP and MVO were used to manage the risk in the power system due to load variations. The risk assessment is made in order to determine the risk of collapse for the system, utilizing a pre-developed voltage stability index termed the Fast Voltage Stability Index (FVSI), which is used as the indicator of voltage stability conditions. Results obtained from the study revealed that the MVO technique is much more effective compared to EP.
    Keywords: Voltage Stability, Fast voltage stability index, Multiverse optimization (MVO), Evolutionary programming (EP), Risk assessment
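    For context, the Fast Voltage Stability Index referred to above is commonly written for a line connecting a sending bus $i$ to a receiving bus $j$ as (this is the standard textbook form, not necessarily the exact variant used in the paper)

    $$\mathrm{FVSI}_{ij} = \frac{4\,Z^{2}\,Q_{j}}{V_{i}^{2}\,X},$$

    where $Z$ is the line impedance, $X$ the line reactance, $Q_{j}$ the reactive power at the receiving end and $V_{i}$ the sending-end voltage; values approaching 1 indicate proximity to voltage collapse.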
  • Rufaizal Che Mamat, Azuin Ramli, Mohd Badrul Hafiz Che Omar, Abd Samad, Saiful Aman Sulaiman * Pages 1025-1034
    Predicting the maximum ground surface settlement (MGS) beneath road embankments is crucial for safe operation, particularly on soft foundation soils. Despite having been explored to some extent, this problem has still not been solved due to its inherent complexity and the many factors involved. This study applied support vector machines (SVM) and artificial neural networks (ANN) to predict MGS. A total of four kernel functions are used to develop the SVM model, namely linear, polynomial, sigmoid, and radial basis function (RBF). MGS was analysed using the finite element method (FEM) with three dimensionless variables: embankment height, applied surcharge, and side slope. In comparison to the other kernel functions, the Gaussian (RBF) kernel produced the most accurate results (MARE = 0.048, RMSE = 0.007). The SVM-RBF testing results are compared to those of the ANN presented in this study. As a result, SVM-RBF proved to be better than ANN when predicting MGS.
    Keywords: Road embankment, Maximum ground surface settlement, Support vector machines, Kernel functions, artificial neural networks
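    A minimal sketch of an RBF-kernel support vector regressor mapping the three dimensionless inputs named above to a settlement value. The inputs and target here are synthetic placeholders; the study itself uses FEM results rather than this toy function.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical samples of (embankment height, applied surcharge, side slope)
# and a synthetic settlement target standing in for FEM outputs.
rng = np.random.default_rng(3)
X = rng.uniform([2, 10, 1.5], [10, 60, 3.0], size=(200, 3))
y = 0.002 * X[:, 0] ** 2 + 0.0005 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.002, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVM regressor with the RBF (Gaussian) kernel, the best-performing kernel above.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.001))
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print("test RMSE:", round(float(rmse), 4))
```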
  • Mohsen Shahi, Farzan Khatib * Pages 1035-1045
    In this paper, in a unique work, the structure of NOT/XOR multifunctional all-optical logic gates is provided using interference effects in 2D photonic crystal structures for use in next-generation photonic integrated circuits. The applicability of the structure has been analyzed by modifying the output waveguide to optimize it and, finally, by examining its performance when several structures are placed together for use in optical integrated circuits. In both the basic and optimized structures, the contrast ratio, response time, and data transfer rate were measured as 36 dB, 0.176 ps and 5.68 Tbit/s, respectively. With appropriate output results and a very small footprint of about 75.78 µm², these circuits make the proposed logic gates highly suitable for use in optical integrated circuits. For this purpose, in a more complete work, an all-optical 4×2 encoder has been designed using the optimized structure. In this structure, the contrast ratio is about 13.2 dB, the response time is 0.168 ps and the data transfer rate is 6 Tbit/s. The results for the basic and optimized NOT/XOR all-optical logic gate structures as well as the 4×2 encoder indicate the high flexibility and applicability of these designs for use in optical integrated circuits. In this paper, the plane wave expansion method is used to obtain and analyze the photonic band gap range, and the finite-difference time-domain method is used to analyze and simulate the designed structures.
    Keywords: photonic crystal, Optical logic gate, Photonic band gap, Optical integrated circuit
  • Nur Nabila Mohd Isham, Siti Mokri *, Ashrani Abd Rahni, Nurul Ali Pages 1047-1058
    Based on Global Cancer 2015 statistics, lung cancer of all types constitutes 27% of all cancers, while 19.5% of cancer deaths are due to lung cancer. In view of this, an effective lung cancer screening test using Computed Tomography (CT) scans is crucial to detect cancer at an early stage. The interpretation of CT images requires an advanced CAD system of high accuracy, for instance in classifying lung nodules. Recently, the deep learning method known as the Convolutional Neural Network (CNN) has shown outstanding success in lung nodule classification. However, training a CNN requires a great number of images, which is an issue in the case of medical images. The Generative Adversarial Network (GAN) has been introduced to generate new image datasets for CNN training. Thus, the main objective of this study is to compare the performance of CNN architectures with and without the implementation of GAN for lung nodule classification in CT images. Here, the study used a Conditional GAN (cGAN) to generate benign nodule images. The classification accuracy of the combined cGAN-CNN architecture was compared among CNN pretrained networks, namely GoogleNet, ShuffleNet, DenseNet, and MobileNet, based on classification accuracy, specificity, sensitivity, and AUC-ROC values. The experiment was tested on the LIDC-IDRI database. The results showed that the cGAN-CNN architecture improves the overall classification accuracy compared to CNN alone, with the cGAN-ShuffleNet architecture performing the best, achieving 98.38% accuracy, 98.13% specificity, 100% sensitivity and an AUC-ROC of 99.90%. Overall, the classification performance of CNN can be improved by integrating a GAN architecture to mitigate the constraint of having a large medical image dataset, in this case, CT lung nodule images.
    Keywords: Computed Tomography, Convolution neural network, Generative adversarial network, Lung nodules, Classification
  • Chong Yee, Syahrul Azmi *, Leong Hwai, Grain Adam, Siti Rafidah Abdul Rahim Pages 1059-1074
    This paper presents a three-phase current source inverter (CSI) topology with voltage boosting capability for a standalone system. The current source inverter (CSI) and voltage source inverter (VSI) are two competitive options to be used as an interfacing unit between a variable dc input resource and an ac output for on-grid, off-grid and industrial use. Between the two topologies, the VSI is widely used due to its controllable output voltage and its ability to operate steadily with open-loop V/Hz control. Yet, it suffers from circuit complexity due to the need for an extra converter stage to meet the required output. On the other hand, the CSI has the advantages of voltage boosting capability and better quality of the output waveshape, so no extra converter stage is needed. Most research works have focused on studying the suitability and practicality of the CSI, along with advancements using silicon-carbide-based power switches and improved modulation techniques to minimize the harmonics suffered by the CSI. There is a lack of research investigating the boosting capability, particularly how much the CSI is able to boost the fundamental output and its impact on overall performance in both open- and closed-loop standalone systems. Thus, this work is intended to highlight in detail the boosting capability of the CSI and compare it with the VSI based on several circuit and operational features. To support the work, three modulators are implemented, namely sinusoidal pulse width modulation (SPWM), third harmonic injection PWM (THIPWM) and space vector modulation (SVM). A dedicated synchronous-frame proportional-integral (PI) control is used in the closed-loop condition. Results show that the CSI topology is able to boost the fundamental output voltage by 52% to 58% using a smaller modulation index compared to the VSI. Interestingly, the CSI is able to achieve comparable quality and harmonic minimization of the output voltage and current as the VSI but with smaller PI control gains. All work is analyzed and verified using the MATLAB/Simulink platform.
    Keywords: Voltage source inverter (VSI), Current source inverter (CSI), SPWM, THIPWM, SVM, Voltage-controlled inverter, Comparative Analysis
  • Siti Rafidah Abdul Rahim *, Yeap Liang, Muhamad Hussain, Ismail Musirin, Syahrul Azmi, Azralmukmin Azmi Pages 1075-1090
    This paper discusses the incorporation of Hybrid Swarm Evolutionary Programming (SEP) for optimal distributed generation (DG) in the distribution system. High load demand will result in unstable power distribution control due to power transmission loss. A compensation process can be implemented by installing a compensation device to prevent this from happening. The optimal sizing and location of the devices are required to achieve the objective, and this can be done using an optimization technique. Thus, this project aims to develop a hybrid computational intelligence technique, called hybrid SEP, for loss minimization. The proposed method embeds elements of Particle Swarm Optimization (PSO) into traditional Evolutionary Programming (EP) to improve the precision of the traditional EP algorithm. The purpose of this study is to investigate the maximum benefits to be gained from DG integration. The proposed techniques are validated on the IEEE 69-bus radial system with multiple units of DG. The results showed that the most effective type of DG injected into the IEEE 69-bus radial system is DG Type III, with a 95% reduction in active power loss and the best voltage profile improvement. Hybrid SEP is superior to EP in terms of loss minimization.
    Keywords: Distributed generation, Evolutionary programming, Hybrid Swarm evolutionary programming
  • Winnie Chong Mei Yen, Mohd Mansor *, Sharifah Shaaya, Ismail Musirin Pages 1091-1102
    As power demand rises, the power system becomes more stressed, potentially leading to an increase in power losses. Higher power losses result in higher power system operating costs. Flexible AC Transmission System (FACTS) devices help to reduce power losses. This paper describes the use of a computational intelligence-based technique, in this case the Artificial Immune System (AIS), to solve the installation of the Thyristor Controlled Series Compensator (TCSC) and the Static VAR Compensator (SVC) in a power system while ensuring optimal sizing of both devices. The goal of determining the best locations and sizes for the multi-type FACTS devices is to minimize system power loss. Three case studies are presented to investigate the effectiveness of the proposed AIS optimization technique in solving the multi-type FACTS device installation problem under various power system conditions. The optimization results generated by the proposed AIS are beneficial in improving the power system, particularly in terms of system power loss minimization, which also contributes to power system operating cost minimization. As a result, the likelihood of this being sustainable and implementable for an extended period is greater.
    Keywords: FACTS devices, Computational intelligence, Artificial immune system, Loss minimization, multi-type
  • Nur Shahida Ab Fatah, Nur Ainna Ramli, Dian Darina Indah Daruis, Farhana Deris, Noorsidi Aizuddin Mat Noor, Aslina Baharum * Pages 1103-1115
    Working while studying is common, with at least half of all students taking a paid job at some point during their studies. Combining working and studying simultaneously impacts the academic performance of students. The main issue is the consequences of working while studying, which can result in declining academic performance. This study aimed to identify possible reasons that cause students to work and study simultaneously, assess the correlation between work and academic performance, and determine the main barriers students face as they work and study. Using a quantitative method, data were collected from 218 undergraduate and postgraduate students who have experienced studying while working. Results show that most students agreed that working while studying correlates with academic performance. Furthermore, it is necessary to learn about the positive and negative effects of the efforts that distract them. Knowing these factors helped this study develop a good strategy for maintaining a better life while working and studying simultaneously.
    Keywords: Impact, simultaneously, Obstacles, students
  • Fakariah Hani Mohd Ali, Muhammad Fadhli Mohd Salleh *, Nurul Huda Nik Zulkipli Pages 1117-1132
    Nowadays, companies have been moving their IT infrastructure from their own data centers to specialized public cloud providers. While there are cost benefits, security is one of the major concerns in cloud computing, as the number of companies that use cloud storage to save their data keeps increasing. Many honeypots have been used in the past, but they were difficult to use due to a lack of data visualization and attack analysis. To learn more about attackers, their motivations and their techniques, honeypots are used to investigate how attackers attempt to hack an information system and provide useful insight into potential security flaws. A honeypot allows attacks to be monitored by pretending to be an actual machine with valuable and sensitive data, so that attackers interact with it. For this research, a honeypot was set up on the DigitalOcean cloud, and the experiments and results of the implementation use real attacks, since the honeypot was deployed on the cloud and exposed to the Internet. The results show that the Cowrie honeypot is able to collect data that is valuable to security researchers or network administrators for future analysis. It is believed that implementing the Cowrie honeypot with the ELK stack on a cloud platform will assist in the detection and prevention of SSH attacks.
    Keywords: Honeypot, Cowrie, SSH attacks, ELK Stack, cloud computing
  • Sediqeh Soleimanifard * Pages 1133-1142
    One of the most important economic goals of countries is fostering more economic growth; this involves the increasing usage of energy sources. Because of the limitations in non-renewable energy sources and environmental pollution caused by burning these sources, renewable energy sources have become a priority. Biomass energy is one of the new and renewable varieties of energy. This energy is more compatible with nature and the environment, and its production and supply cause little environmental pollution; also, since such energies are renewable, there is no near end for their exhaustion. Therefore, biomass energy constitutes a remarkable part of the world energy supply. Because of the importance of energy in economic growth, this study analyzed the relationship between biomass energy consumption and gross domestic production (GDP) during the 1967-2019 time period, using the autoregressive distributed lag modeling approach (ARDL).
    Keywords: Biomass energy consumption, Gross domestic production (GDP), Auto-regressive distributed Lag modeling approach (ARDL), Causality
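    As a reminder of the model class named above (a generic ARDL($p,q$) specification, not the paper's exact estimated equation), GDP $y_t$ is regressed on its own lags and on current and lagged biomass energy consumption $x_t$:

    $$y_t = c + \sum_{i=1}^{p} \phi_i\, y_{t-i} + \sum_{j=0}^{q} \beta_j\, x_{t-j} + \varepsilon_t .$$

    The long-run relationship between the two series is then typically examined through the bounds-testing procedure applied to the error-correction form of this equation.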
  • Reza Mohamaddoust, Javad Mohammadzadeh *, Majid Khalilian, Alireza Nikravanshalmani Pages 1143-1158
    In this research, charisma has been measured at different levels of online social networks, including charisma at the level of messages, individuals, and communities. First, the charisma-associated features are extracted and then weighted by the proposed hybrid methods. Eventually, measuring and ranking charisma is carried out through the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), one of the leading multi-criteria decision-making methods. Through the proposed approach, the charisma of different messages, individuals, as well as implicit and explicit communities can be measured, ranked, and compared. In this research, eight datasets with different and diverse features were collected from Twitter. The results indicated that the charismatic messages and individuals of each dataset have been chosen properly and logically. Further, a method has been presented to measure the rate of charisma in every community, which can be employed for comparing communities and predicting behavior in online communities.
    Keywords: Charisma, Online social networks, Online community analysis, TOPSIS, Twitter
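    A generic TOPSIS implementation illustrating the ranking step named above; the criterion weights and the toy message scores below are placeholders, not the hybrid weights derived in the paper.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS (generic implementation).

    matrix  : alternatives x criteria scores
    weights : criterion weights summing to 1
    benefit : True for benefit criteria, False for cost criteria
    """
    M = np.asarray(matrix, dtype=float)
    norm = M / np.sqrt((M ** 2).sum(axis=0))          # vector normalisation
    V = norm * np.asarray(weights)                    # weighted normalised matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    return d_worst / (d_best + d_worst)               # closeness: higher = more charismatic

if __name__ == "__main__":
    # Three hypothetical messages scored on retweets, likes and reply sentiment.
    scores = [[120, 300, 0.8],
              [ 40, 900, 0.6],
              [500, 150, 0.4]]
    print(topsis(scores, weights=[0.5, 0.3, 0.2], benefit=[True, True, True]).round(3))
```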
  • Aqeel Al-Fatlawi, Musaddak Abdul Zahra *, Hussein Rassool Pages 1159-1174
    Optical fiber is an exceptionally attractive communication medium since it offers enormous bandwidth and low attenuation, and can therefore support demanding services such as high-quality video communication as well as options in computer networks. In this study, a simulation strategy based on mathematical equations is presented for optical channel communication. This paper also presents a nonlinear analysis of fiber dispersion, modulator and receiver response times, and the waveform coding type. The spectral width of the light source influences the performance of the fiber-optic link in terms of link length, data rate and BER. Additionally, the power and rise-time budgets are used to obtain a good estimate of the link length as well as the bit rate using an optical system test scheme.
    Keywords: nonlinear analysis, Optical Networks, Fiber Optical telecommunication, Power Budget, Rise Time Budget, Fiber Dispersion
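    For orientation, the two budgets mentioned above are commonly written as follows (standard link-design formulas stated only as a reminder, with the NRZ factor 0.7 being the usual rule of thumb, not necessarily the exact form used in the paper):

    $$P_{\text{margin}} = P_{\text{tx}} - S_{\text{rx}} - \alpha L - \sum L_{\text{connector/splice}}, \qquad t_{\text{sys}} = \sqrt{t_{\text{tx}}^{2} + t_{\text{disp}}^{2} + t_{\text{rx}}^{2}} \le \frac{0.7}{B},$$

    where $P_{\text{tx}}$ is the launched power, $S_{\text{rx}}$ the receiver sensitivity, $\alpha L$ the fiber attenuation over link length $L$, the $t$ terms are the transmitter, dispersion and receiver rise times, and $B$ is the NRZ bit rate.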
  • Abbas Bashiri, Seyed Mehdi Alizamini, MohammadMehdi Nasrabadi Pages 1175-1188

    Today, the tourism industry is one of the most dynamically developing sectors of the world economy and an important factor in the social and economic development of regions and countries. It is known as one of the three most profitable industries in the world, and its expansion and development require the cooperation and alignment of environmental, cultural, economic, political and social factors at the community level. In this regard, the phenomenon that has now negatively affected the tourism industry in Iran and the world is the Corona (Covid-19) crisis. This study suggests an infrastructure for determining the factors affecting health tourism. Therefore, the fuzzy best-worst method, one of the newest multi-criteria decision-making methods, was used to calculate the relative importance of the indices, and the fuzzy Evaluation based on Distance from Average Solution (EDAS) technique was applied as a multi-attribute decision-making method to rank the factors affecting health tourism. The statistical population of this research consists of directors and experts in the tourism industry of Mazandaran. Based on the results of this study, improving the level of tourist security in the province ranks first among the other options for the economy.

    Keywords: Tourism performance, Health tourism, Fuzzy best worst method, Fuzzy EDAS, Multi-criteria decision-making
  • Baydaa Hashim Mohammed, Ahmed Sameer Abdulmohsin, Awad Kadhim Hammoud, Amer Hasan Mahmood, Al-Sharify Noor Pages 1189-1203

    Clinical diagnosis that necessitates the use of centralised facilities and site visits can be difficult for patients in resource-constrained or remote settings. As a result, the development of a low-cost test that uses smartphone data collection and transmission to enable illness self-management and point-of-care (POC) diagnostics would be advantageous. The PCV of canine blood collections can be extrapolated from a smartphone photograph. Filter paper was utilized to hold the plasma samples, and photos were taken under various conditions using a smartphone. These findings were then compared with PCV values determined on the identical samples in accordance with WHO recommendations. Many samples were examined, and it was discovered that smartphone photos taken in a controlled setting could accurately predict the samples' PCV. This prediction was most accurate whether testing samples of a regular erythroid mass or a polycythemic sample. The findings of this study reveal that, using image enhancement techniques, smartphone-obtained photos can be used to estimate canine PCV. If this can be incorporated into a smart device, it might be used in low-resource environments.

    Keywords: Anemia, Diagnostics, Hematocrit estimation, PCV, Smartphone Camera, Canine blood samples
  • Hussain Zaydan Ali Pages 1205-1211

    Vegetation cover is one component of land cover. Changes in land cover occur due to man-made or natural causes over time. Vegetation indices have long been used in remote sensing to monitor changes in vegetation. Remotely sensed data are considered an important source of information. For a particular area, vegetation can be considered a source of information about the soil or the water table, and can be used to delineate potential groundwater zones. A Landsat 7 image is used to identify the land cover and to monitor vegetation indices in the area under investigation. Among the many indices used in remote sensing, I used NDVI and SAVI for the study region and produced maps for these two indices using ArcGIS 10.2.2 software.

    Keywords: Landsat7 images, Mapping, ArcGIS, Vegetation indices, Iraq
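    For reference, the two indices named above are commonly computed from the red and near-infrared surface reflectances (for Landsat 7 ETM+, these are usually bands 3 and 4 respectively; the soil adjustment factor $L$ is typically taken as 0.5):

    $$\mathrm{NDVI} = \frac{\rho_{\text{NIR}} - \rho_{\text{Red}}}{\rho_{\text{NIR}} + \rho_{\text{Red}}}, \qquad \mathrm{SAVI} = \frac{(\rho_{\text{NIR}} - \rho_{\text{Red}})\,(1 + L)}{\rho_{\text{NIR}} + \rho_{\text{Red}} + L}.$$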
  • Mohsen Torkzadeh, Bahram Barzegar, Fakhteh Mahini Pages 1213-1228

    Today, one of the most important challenges for universities is the teaching of academic disciplines, including accounting. One of the major challenges of the 21st century is how to educate students who are prepared to face a dynamic and changing society. The accounting learning process should be designed to develop students' ability to think critically. Therefore, providing a model for using critical thinking in accounting education in universities has been considered by researchers. In this study, after identifying and classifying the factors affecting accounting education, the obtained model was tested by Structural Equation Modeling. The statistical population of the research in the qualitative part included prominent teachers of accounting, and in the quantitative part it included teachers and doctoral students of accounting at universities. The sample size was 12 in the qualitative part and 286 in the quantitative part. The results of the extracted model showed that the dimensions of the causal conditions have a significant effect on the axial conditions, and that the axial phenomenon, contextual conditions, and intervening conditions have a significant impact on the strategies. The strategies, in turn, have a significant impact on the consequences. In the end, the main fit indices of the Grounded Theory model indicated a favorable fit.

    Keywords: Critical thinking, Accounting education, Grounded theory
  • Raed K. Ibrahim, Laith F. Jumma, Ibrahim A. Amory, Aqeel Al Hilali Pages 1229-1242

    Today's multi-core processors are built by all processor manufacturers for computers, cell phones, and other embedded systems. For all computer engineers, designing and researching the hardware architecture of multicore systems is critical. The type of cache coherence protocol employed on a multi-core computer has a direct impact on execution time, latency, and power consumption. A 32-bit MIPS processor was chosen because it is a good example of a CPU. Building on our prior work, an advanced special-purpose circuit was created using VHDL coding and Xilinx ISE software to implement it. One protocol was utilized in this design, the MOESI (Modified, Owned, Exclusive, Shared, and Invalid) protocol. The test results were obtained using a test bench and revealed that all of the protocol's states were operational.

    Keywords: MOESI protocol, Multicore processor, VHDL, MIPS, FPGA
  • Digvijay Pandey, Subodh Wairya, Raghda Salam Al Mahdawi, Saif Al-din M Najim, Haitham Abbas Khalaf, Shokhan M Al Barzinji, Ahmed J Obaid Pages 1243-1257

    Growing requirements for the preservation and transportation of multimedia data have been part of everyday routine over the last several decades. Multimedia data such as images and videos play a major role in creating an immersive experience. Data and information must be transmitted quickly and safely in today's technologically advanced society, yet valuable data must be protected from unauthorised people. In this work, a covert communication and textual data extraction approach relying on steganography and image compression is constructed using a deep neural network. Using spatial steganography, the input textual image and the cover image are first pre-processed, and the covert text-based images are then separated and embedded into the least significant bit of the cover image pixels. Thereafter, the stego-images are compressed to create a high-quality image and to save storage capacity at the sender's end. The receiver then receives this stego-image through a communication channel, and the steganography and compression are reversed at the receiver's end. This work has a multitude of challenges that make it a fascinating subject to embark on; selecting the correct steganography and image compression methods is by far the most important part. The suggested method, which integrates image steganography and compression, achieves better efficacy in terms of peak signal-to-noise ratio.

    Keywords: Image Compression, steganography, Data Transmission
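    As a rough illustration of the spatial-domain embedding step described above (the deep-network and compression stages of the paper are not reproduced), the sketch below hides the bits of a short text message in the least significant bits of a grayscale cover image held as a NumPy array. The array size and the message are illustrative assumptions.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message: str) -> np.ndarray:
    """Hide the UTF-8 bytes of `message` in the least significant bits of `cover`."""
    bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("Cover image too small for this message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite only the LSB of each pixel
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_chars: int) -> str:
    """Read back `n_chars` bytes from the least significant bits of `stego`."""
    bits = stego.flatten()[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

if __name__ == "__main__":
    cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # toy cover image
    stego = embed_lsb(cover, "secret")
    print(extract_lsb(stego, 6))  # -> "secret"
```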
  • Saurabh Adhikari, Maha A Hutaihit, Moumita Chakraborty, Sawsan dheyaa Mahmood, Benjamin Durakovic, Souvik Pal, D Akila, Ahmed J Obaid Pages 1259-1267

    In industry-academia studies, the cloud computing model has gained considerable ground. The cloud has emerged as an attractive business model for service users and, depending on consumer requirements, can be used on a pay-per-use basis. When the number of client requests for high-demand services is large and hardware or software resources are inadequate, requests have to wait in a server queue. This study therefore focuses on reducing the overall waiting time and on the server utilization factor. The average waiting time is compared and server utilization is analysed using the M/M/c queuing model.

    Keywords: waiting time, queueing model, server utilization, Cloud computing
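    The M/M/c quantities compared in this study can be computed directly from the Erlang C formula. A minimal sketch follows; the arrival rate, service rate, and number of servers are illustrative assumptions, not values from the paper.

```python
import math

def mmc_metrics(lam: float, mu: float, c: int):
    """Return (utilization, P_wait, mean wait in queue, mean time in system) for an M/M/c queue."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # server utilization factor
    if rho >= 1.0:
        raise ValueError("Unstable queue: lambda must be < c * mu")
    # Erlang C: probability that an arriving request has to wait.
    summation = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1.0 - rho))
    p_wait = top / (summation + top)
    wq = p_wait / (c * mu - lam)      # mean waiting time in the queue
    w = wq + 1.0 / mu                 # mean total time in the system
    return rho, p_wait, wq, w

if __name__ == "__main__":
    # Example: 8 requests/s arriving, each server handles 3 requests/s, 4 servers.
    rho, p_wait, wq, w = mmc_metrics(lam=8.0, mu=3.0, c=4)
    print(f"utilization={rho:.3f}, P(wait)={p_wait:.3f}, Wq={wq:.3f}s, W={w:.3f}s")
```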
  • Ahmad Kazemi, MohammadEbrahim Shiri, Amiri Sheikhahmadi, Mohamad Khodamoradi Pages 1269-1282

    Diagnostic methods based on machine learning are now able to assist physicians in this field. One of the most important of these methods is deep learning, which has produced good results on images containing cancer. Increasing the accuracy of deep neural network classifiers can improve the diagnosis of breast cancer. In this paper, we have tried to achieve higher accuracy than non-parallel models with the help of a parallel deep neural network model. The proposed method is a parallel hybrid method combining AlexNet and VGGNet networks applied in parallel to mammographic images. The database used in this article is INbreast. The results obtained with this method show a 4% increase compared to some other classification models; for density type 1, an accuracy of about 99.7% is achieved, and for the other densities an accuracy of nearly 99% is obtained.

    Keywords: Medical Image, Magnetic Resonance Imaging, parallel convolutional neural network
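    One plausible way to realize the kind of parallel AlexNet/VGG combination described above is to run both backbones side by side and concatenate their feature vectors before a shared classifier. The PyTorch sketch below is an assumption-laden illustration, not the authors' exact architecture: the two-class head, layer sizes, and 224x224 input are all assumed.

```python
import torch
import torch.nn as nn
from torchvision import models

class ParallelAlexVGG(nn.Module):
    """Two CNN branches run in parallel; their feature vectors are concatenated."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.alex = models.alexnet(weights=None)   # branch 1
        self.vgg = models.vgg16(weights=None)      # branch 2
        self.alex.classifier = nn.Identity()       # keep only the pooled conv features
        self.vgg.classifier = nn.Identity()
        # AlexNet pools to 256*6*6 features, VGG16 to 512*7*7, before their (removed) classifiers.
        self.head = nn.Sequential(
            nn.Linear(256 * 6 * 6 + 512 * 7 * 7, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.alex(x)              # (N, 256*6*6)
        f2 = self.vgg(x)               # (N, 512*7*7)
        return self.head(torch.cat([f1, f2], dim=1))

if __name__ == "__main__":
    model = ParallelAlexVGG(num_classes=2)
    dummy = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed mammogram patch
    print(model(dummy).shape)             # torch.Size([1, 2])
```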
  • Awad Kadhim Hammoud, Hatem Nahi Mohaisen, Mohammed Q Mohammed Pages 1283-1291

    In this research paper, we present how to hide confidential information in a color image at random positions using a mathematical equation. This equation is applied to the image bytes after converting the image into a digital image; the number of randomly selected bytes depends on the length of the secret message. After specifying the bytes, we embed the secret message in those selected bytes using the least significant bit (LSB) steganography technique and return the new bytes to the same positions in the original image using the same mathematical equation. After the hiding process, we encrypt the image and send it to the recipient. Several statistical measures were applied to the original image and compared with the image after embedding and after encryption, and the results obtained are very good. The statistical measures used were the histogram, mean square error (MSE), and peak signal-to-noise ratio (PSNR). The system designed to perform these processes consists of two stages, a hiding stage and an extraction stage. The hiding stage contains four steps: the first step reads the image, converts it to a digital image, indexes each image byte, and applies the mathematical equation to select bytes at random; the second step hides the secret message in the selected bytes and returns those bytes to their original locations; the third step calculates the statistical measures to determine the amount of distortion after embedding the confidential message; and the fourth step encrypts the message-carrying image and measures the distortion after encryption compared with the original image. The extraction process consists of three steps: the first step uses the private key to decrypt, the second step applies the same mathematical equation to locate the bytes carrying the confidential message, and the third step uses the same hiding method in reverse to extract the confidential message.

    Keywords: Steganography, Encryption, Decryption, Cryptography, Image, LSB, Randomly
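    The MSE and PSNR measures used above to quantify the distortion introduced by embedding have standard definitions. A minimal sketch for 8-bit images follows; the toy arrays are assumptions.

```python
import numpy as np

def mse(original: np.ndarray, modified: np.ndarray) -> float:
    """Mean squared error between two images of the same shape."""
    diff = original.astype(np.float64) - modified.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, modified: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    err = mse(original, modified)
    if err == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10((max_val ** 2) / err)

if __name__ == "__main__":
    cover = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    stego = cover.copy()
    stego[::2, ::2] ^= 1              # toy LSB change on a quarter of the pixels
    print(f"MSE  = {mse(cover, stego):.4f}")
    print(f"PSNR = {psnr(cover, stego):.2f} dB")
```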
  • Elaf S Abdulwahid Pages 1293-1298

    The purpose of this paper is to introduce a new class of operators on a separable Hilbert space. The operator T ∈ B(H) is called a D(T)-operator if there exists U ∈ B(H), U ≠ 0, I, such that T*TU = UT*T, where T* is the adjoint operator of T. Some main properties of the class of D(T)-operators are then studied in this research.

    Keywords: D(T)-operator, quasi-normal operator, Spectrum of normal operator, Hilbert space
  • Natarpha Satchawatee, Sutana Boonlua, Palan Jantarajaturapath Pages 1299-1309

    The main objective of this research is to investigate the effects of technology transfer capability (comprising learning capability, acceptance orientation, innovation focus, exchange competency, and change awareness) on sustainable organizational competitiveness. The key research question is: how does technology transfer capability relate to the sustainable organizational competitiveness of ICT companies? A total of 286 questionnaires were usable for analysis, an effective response rate of 76.06 percent. The key informants are top executives of information and communication technology businesses across Thailand. The results show that innovation focus and exchange competency positively affect sustainable organizational competitiveness, meaning that technology transfer capability is important for a firm's competitiveness. Innovation focus is an essential component of technology transfer capability that enhances new product development, valuable operational enhancement, remarkable business efficiency, and firm performance.

    Keywords: Technology transfer, Sustainable competitiveness, Information and communication technology companies
  • Seyed Majid Tabatabaei, Seyed Mohammad Seyed Hosseini *, Alireza RashidiKomijan Pages 1311-1336

    The main purpose of this research is to manage disruptions that occur in the supply chain of Iran Khodro, in such a way that the supply chain can maintain the service provided to the customer in the event of a disturbance, or at least minimize changes in customer service. For this purpose, the concepts and theories associated with disruption and recovery in the supply chain are first examined, and then the mathematical model of the research is designed and presented. Finally, the model is validated and solved based on data collected from Iran Khodro for one of the company's real problems. Based on the results, it is determined that a disruption at one of the suppliers can disturb the supply of all supply chain members; when the disruption is corrected, the delayed production is transferred to subsequent periods, and its recovery continues alongside the production disturbance of those subsequent periods for up to 15 periods, after which its impact is resolved. Taking one-year production periods, this means that the impact of sanctions and the shortage of production parts remains in the system for 15 years; without an increase in production capacity, it can disrupt the system, and the production system takes 15 years to recover.

    Keywords: Supply chain, Supply chain disruption, Supply chain recovery, Mathematical modeling, LP metric technique, NSGA-II algorithm
  • Saeed Amiri *, Ahmad Mosallanejad, Amir sheikhahmadi Pages 1337-1354

    Integrating multimodal medical imaging has many advantages for diagnosis and clinical analysis because it enables physicians to make more accurate diagnoses. To the best of our knowledge, current image fusion methods still have some disadvantages. First, the fused image often has low contrast due to the weighted-average rule used to combine the low-frequency components. The second problem is the loss of accurate information in the merged image. This paper presents a method based on the wavelet transform and equilibrium optimization for MRI and PET medical image fusion to obtain a high-quality fused image. In the proposed method, the equilibrium optimization algorithm finds the appropriate common points in the MRI and PET images and performs the combination with the help of the wavelet transform. This allows the fused image to retain, to a significant degree, the details transferred from the MRI images. Experimental results show that the proposed approach significantly increases the quality of the fused image while preserving the significant information transferred from the input images.

    Keywords: Medical Image, Image Fusion, Equilibrium Optimization, Wavelet Transform
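    A rough sketch of wavelet-domain fusion of two co-registered images is given below, assuming PyWavelets is available: both images are decomposed, the low-frequency (approximation) coefficients are averaged, the high-frequency (detail) coefficients are fused by maximum absolute value, and the result is reconstructed. The equilibrium-optimization step of the paper is not reproduced, and the wavelet name and decomposition level are assumptions.

```python
import numpy as np
import pywt

def fuse_wavelet(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Fuse two grayscale images of equal shape in the wavelet domain."""
    coeffs_a = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    coeffs_b = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)

    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]           # average the approximation coefficients
    for da, db in zip(coeffs_a[1:], coeffs_b[1:]):        # detail coefficients at each level
        fused.append(tuple(np.where(np.abs(ca) >= np.abs(cb), ca, cb)  # keep the stronger detail
                           for ca, cb in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

if __name__ == "__main__":
    mri = np.random.rand(128, 128)   # stand-ins for registered MRI and PET slices
    pet = np.random.rand(128, 128)
    fused = fuse_wavelet(mri, pet)
    print(fused.shape)
```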
  • Rana Mohammed Rasheed, Shmoos Abd Al Sattar Jabbar, Nada Mahdi AL Hussiny, Ahamed Mohammed Rasheed Pages 1355-1366

    The solar still method desalinates salt water by using direct solar radiation from the sun and operates on the evaporation and condensation principle. The main component is a water basin (which receives the seawater) that is fully insulated on all sides and sealed with a clear glass top that allows solar energy to enter. When sunlight strikes the salt water in the basin, evaporation occurs, and the vapor rises to the top and is caught by the cover glass above the basin; condensate therefore develops on the underside of the cover glass and is collected as the distillate. The purpose of this project is to determine the performance of a solar still under the real weather conditions of Baghdad. In this work, a single-slope solar still was fabricated and tested during April-May under outdoor weather conditions in Baghdad. The experimental results show that the daily production of fresh water by a solar still with an area of 0.3375 m2 is about 800 ml.

    Keywords: Solar energy, solar radiation, evaporation & condensation process
  • Faryal Shahzad, Fazeel Abid, Ahmed J Obaid, Bipin Kumar Rai, Mohsin Ashraf, Azmi Shawkat Abdulbaqi Pages 1367-1396

    Hepatitis-related liver diseases are a leading cause of mortality and morbidity among people with HIV/AIDS taking highly active antiretroviral therapy, due to shared transmission routes. An estimated 2-4 million HIV-infected persons have chronic HBV co-infection and 4-5 million have HCV co-infection worldwide, with 14,000 new infections each day. The purpose of this study was to determine the prevalence and associated factors of HBV and HCV co-infection in HIV-positive patients. A cross-sectional study was conducted among 235 HIV/AIDS patients seeking medical care at special clinics of two public hospitals in Lahore, Pakistan, from February 2018 to May 2018. A structured questionnaire was used to collect information on socio-demographic and clinical characteristics of the HIV/AIDS patients after obtaining their written informed consent. Chi-square, Fisher's exact, and two independent sample t-tests were used, as appropriate, to find the association between risk factors and HBV and HCV co-infection with HIV. Further, a forward stepwise logistic regression model was used to evaluate the predictors of HBV and HCV co-infection with HIV. A P-value < 0.05 was regarded as significant. Of the 235 HIV-positive patients, 9% were co-infected with HBV, 41 were HCV co-infected, and 6% had HBV-HCV triple infection. The highest prevalence of HBV co-infection (55%), HCV co-infection (70%), and HBV-HCV triple infection (85%) was observed in intravenous drug users, followed by heterosexual routes. Being male, hypertensive, an alcohol consumer, and a smoker was statistically significantly associated with HBV co-infection (P-value < 0.05). Being male, never married, having <1 year since HIV diagnosis, having <200 CD4 counts (cells/mm3), the presence of physical disability, having been infected through sexual routes, being an injecting drug user, an alcohol consumer, and a smoker were statistically significantly associated with HCV co-infection (P-value < 0.05), whereas heterosexual transmission, intravenous drug use, alcohol use, smoking, and the presence of physical disability were statistically significantly associated with HBV-HCV triple infection (P < 0.05). The adjusted odds ratios obtained by the fitted logistic regression model showed that HIV transmission through sexual routes (both hetero and homo) and being never married were associated with lower odds of HCV co-infection, whereas persons with HIV transmission through intravenous drug use, smokers, and those aged more than 30 years had greater odds of HCV co-infection. Co-infection with hepatitis B and C virus is common among this studied sample of HIV-infected patients. The study's findings reaffirm the need for routine baseline screening for these markers, as there is a higher chance of co-infection with these hepatitis viruses due to the enhanced immunodeficiency caused by HIV and the shared routes of transmission, and highlight the need for timely initiation of HAART. Furthermore, those found to be negative should be immunized with HBV and HCV vaccines to improve protection.

    Keywords: HBV, HCV, HIV, logistic regression
  • Somayeh Askarpour, Maryam Saberi Anari Pages 1397-1410

    Metaheuristic algorithms are effective ways to solve optimization problems and use phenomena existing in nature to solve them. Because metaheuristic algorithms are independent of gradient information, the objective function can be used directly to solve large-scale problems. The behavior of organisms in nature as they interact with each other is one of the optimization approaches modeled as swarm-based algorithms. Swarm-based algorithms are a set of metaheuristic algorithms modeled on the group behavior and social interactions of organisms. The behavior of wildebeests in nature lends itself to a swarm-based algorithm, because these organisms migrate in groups and try to ensure survival for themselves and their herd. In this paper, a new metaheuristic algorithm (WOA) based on the migratory and displacement behavior of wildebeests is presented for solving optimization problems. In this algorithm, candidate solutions are defined as wildebeest herds that search the problem space for suitable habitat. The results of running a set of benchmark functions with the Wildebeest Optimization Algorithm, the Whale Optimization Algorithm, BAT, Firefly and Particle Swarm Optimization (PSO) algorithms show that the proposed algorithm has a lower error rate in finding the global optimum and is also trapped in local optima less often than the other methods.

    Keywords: Wildebeest optimization algorithm, Swarm-Based algorithms, Optimization problems, Metaheuristic algorithm
  • Huaxiang Liu, Kuok Tiung Lee Pages 1411-1421

    Wechat is the most widely and frequently used mobile social media platform and has become profoundly integrated into the daily life of many people. Sustainable development is a common challenge for all, and against this background, how to promote public participation in environmental communication has become an important topic. In this paper, a method is proposed to understand the motivating mechanism behind Wechat users' environmental information-sharing behavior, taking China's unique social and cultural background into account. A comprehensive theoretical model for this study is constructed based on the theory of use and satisfaction and the theory of planned behavior (TPB). Initially, the dataset is collected and annotated. The data is preprocessed using a normalization method. The theory of use and satisfaction and the TPB are employed to predict the research status of Wechat utilization. To enhance the accuracy of prediction, we employ the Improved Grasshopper Optimization Algorithm (IGOA). The performance of the proposed system is analyzed and compared with conventional approaches.

    Keywords: Wechat, Theory of use and satisfaction, Theory of planned behavior (TPB), Normalization method, Improved grasshopper optimization algorithm (IGOA)
  • Huaxiang Liu, Kuok Tiung Lee Pages 1423-1437

    With over one billion monthly active users, the Chinese social networking and multipurpose application Wechat has become one of the world's most popular social media platforms, and it has gradually risen to the top of the social media heap among Chinese teenagers. Despite this, little is known about how people's usage of Wechat is influenced by psychological factors. In order to build an integrated model that can predict and explain a person's ongoing use of Wechat, researchers utilised the theory of use and satisfaction (TUS) and the theory of planned behaviour (TPB). Researchers used an upgraded version of the TPB model that incorporated the extra variables of self-identity and belongingness to predict long-term Wechat usage intentions and behaviour in a sample of Chinese adolescents. Further analyses looked at the impact of Chinese adolescents' sense of self-identity and belongingness on their usage of Wechat. Regression analyses partly confirmed the TPB: attitude and subjective norm substantially predicted intentions to continue using Wechat, and intention significantly predicted behaviour. Intention and, perhaps surprisingly, behaviour were strongly predicted by self-identity, but not by belongingness. Prior behaviour also had a strong influence on both intention and behaviour, and Wechat addiction was shown to be strongly correlated with feelings of self-identity and belongingness. A fuzzy-based theory of use and satisfaction and a hidden Markov-based theory of planned behaviour are also used in this research to investigate why adolescents use Wechat on their mobile phones. Consumer involvement is examined for its effect on motivating demands and as a mediating factor in problematic usage. All hypotheses are tested using techniques such as factor analysis, correlation analysis, and structural equation modelling, based on surveys of Wechat users. These results may guide efforts aimed at modifying Chinese teenagers' ongoing usage of Wechat or their addictive tendencies towards it.

    Keywords: Wechat, Chinese teenagers, Fuzzy based theory of use and satisfaction, Hidden Markov based theory of planned behavior (TPB)
  • T. Jerry Alexander*, S. Suresh Kumar, N. R. Krishnamoorthy Pages 1439-1447

    Handwritten manuscripts contain much ancient information related to astrology, medicine, grammar, etc. They exist in various forms such as palm leaves, paper and stones. These manuscripts are preserved by digitization, which introduces noise. By using proper filtering and denoising methods these noises are eliminated and the images are restored. Finally, the handwritten characters must be recognized automatically from the restored image, enabling researchers and enthusiasts to go through the document easily. This proposed work deals with the creation of a handwritten character dataset, covering all the characters within a specific dimensional area, and the recognition of handwritten characters using a deep learning method. First, the handwritten dataset is created from different human handwriting samples in a specific format and scanned, and each character of suitable dimension is obtained by labeling the characters in sequence. Then various forms of convolutional networks are applied for character recognition, and the results are compared to obtain the most suitable network for Tamil character recognition from handwritten documents.

    Keywords: Character recognition, Convolutional Neural Network, Historical Manuscripts
  • Ali Ehsani, Seyed Abdollah Amin Mousavi*, Mahmood Alborzi, Maryam Rastgarpour Pages 1449-1468

    A question answering system answers questions using natural language processing and a database or document set, and returns an accurate answer to the user's question. A large number of efforts have been made to design such systems. However, limited studies have been conducted on the Persian language to extract answers to questions with the subjects "why" or "how". The scarcity of such studies is attributed to the complexity and time-consuming analysis and processing of text structure when going beyond the boundaries of a sentence. The primary purpose of the present study was to analyze Persian text to create a set of linguistic patterns that can retrieve related information from causal/explanatory sentences in a general domain. Information retrieval and text structure recognition algorithms based on Rhetorical Structure Theory were used for data and text analysis. In addition, 70 "why" questions and 20 "how" questions were selected for evaluating the system performance. Finally, the .NET programming language, a relational database, and Persian language interpreters were used to design the software system. Eventually, a system was designed and published to answer questions with the subjects "why" or "how" over a general data domain. The system answered 61 questions with a recall rate of 68%. About 55% of the items were answered correctly according to signs of inter-sentence relations, while the correct answers to 13% of the questions were related to rhetorical relations among the sentences.

    Keywords: Question Answering System, Natural Language Processing, Text mining, causal, explanatory relation, Rhetorical structure theory
  • M. Shashidharan*, C. Srinivasan, Yogesh Mahajan, Shameem Anwar Pages 1469-1475

    Warehouse (Hubs) management represents one of the key elements of logistics: it stores goods between the point of manufacture or production and the point of consumption in order to ensure a continuous flow of goods. It plays a crucial part in the supply chain of an organization for stocking products effectively in Material Management practices. The potential of Warehouse (Hubs) has recently received much attention and is considered a growing and novel approach among the best-performing organizations. The warehousing operation is vital, as it serves as the hub to which the respective spokes are linked. An effective strategy adopted in the Warehouse (Hubs) enables an organization to satisfy genuine customer needs, as demonstrated earlier. Hence, in the present study, we investigate the level of warehousing characteristics or attributes associated with supply chain Warehouse (Hubs) efficacy among selected employees of logistics and Warehouse (Hubs) management firms functioning in Chennai city, with a sample size of 417 respondents, using descriptive statistics. Results showed a significant positive correlation, with a moderate frequency of 35.4% followed by a high frequency of 30.7% among the examined respondents, reflecting their intention and quest for sustained, continuously effective Warehouse (Hubs) management measures to improve on and target the unique demands of customers. The descriptive analysis showed that mean values above 3 clearly indicate a preference for a hub-and-spoke model strategy that ensures the adoption of safety parameters, with a higher standard deviation (1.169); a low standard deviation indicates low variability, and our findings point to greater opportunities for the integration of efficient Warehouse (Hubs) management. The study inferred that effective and continuous Warehouse (Hubs) management provides a promising platform for enhanced data management, cost optimization and demand forecasting, in light of value-added approaches in the supply chain management of the examined Warehouse (Hubs) sectors.

    Keywords: Warehouse (Hubs) operation, Supply chain management, Hub and spoke, Safety parameters, Logistic management
  • Chen Cai Pages 1477-1495

    Nowadays, every sector strives to provide its customers with an exceptional experience. With an online-to-offline retailing structure, higher profits may be made while still providing customers with dependable service. Offline and online techniques are also covered in this framework for improving production management. Since information technology plays a vital role in retail production management, online-to-offline (O2O) models are included here for various retail sales of particular items from the sector. Industries must innovate or die if they want to remain competitive in the future marketplace; because of this, innovation has become a popular term in the business world. An innovative product cost management technique is presented that incorporates idea development, dissemination, and conversion, using an approach that does not rely on distribution. The model's usefulness in real time is shown via numerical examples, and a sensitivity analysis is performed on the critical parameters. Graphical illustrations and special cases give further evidence of the global optimality of the current study.

    Keywords: Information technology, Retail industry, Online to offline, Innovation strategy
  • J .Josly Priyatharsni, A. Uma Pages 1497-1509

    This paper proposes an area- and power-efficient technique for the design of an ECG detector. In biomedical applications like ECG detectors for implantable cardiac pacemaker systems, area and power consumption play a major role. Thus, an area-efficient ECG detector with a folded pipelined FIR filter is proposed. In the conventional wavelet filter bank structure, the decimated wavelet filter bank uses 3 LPFs and 1 HPF with a pipelined architecture, and this pipelined filter structure requires more hardware. In the proposed architecture, the folding transformation technique is applied to the pipelined filter structure in order to reduce the hardware. The decimated wavelet filter bank, consisting of the filter structures followed by downsamplers, is used to denoise the ECG signal. The QRS complex detector, consisting of a comparator, a counter and a threshold block, is used to find the correct location of the QRS complex. In order to further reduce the number of registers introduced by the folding transformation, folding with a register minimization technique is applied to the pipelined filter, resulting in lower hardware utilization. The proposed technique is implemented using Xilinx System Generator. A total area saving of 22.78% is achieved using the proposed method, and a low power consumption of 115 mW is also achieved, which makes it useful for high-performance medical applications.

    Keywords: Implantable cardiac pacemaker (ICP), Wavelet filter bank (WFB), ElectroCardiogram (ECG), Detection error rate (DER)
  • S. Shanthini, A .S. Aarthi Mai Pages 1511-1517

    ABZU is an explainable non-linear model that has reimagined artificial intelligence to completely change the way problems are solved. It offers a new standard of interpretability, with simple visual depictions and mathematical expressions for the models developed, while yielding high accuracy. As a result, a developed model can be highly accurate while its algorithm remains understandable to all. This makes more complicated predictions easier to interpret and allows new features of the model to be explored, which is achieved by presenting and leveraging the results in the form of graphs. Applying this algorithm in the fields of Artificial Intelligence / Machine Learning yields accurate results, improves efficiency and increases the model's reusability.

    Keywords: Machine learning algorithm, Data science
  • V. B .Ameer Rasel, A. Uma, P. Kalpana Pages 1519-1528

    Cardiovascular diseases are a major threat to human survival. The electrocardiogram is used in the diagnosis of heart diseases, but many types of noise degrade the quality of ECG signals. These noises must therefore be filtered using a digital filter such as an FIR (Finite Impulse Response) filter. There are different approaches to designing an FIR filter; one of the prominent approaches is the window method, which is also one of the fastest. In this paper different filter orders have been analyzed and the best order for filtering the ECG is found. Using that order, FIR filters with different windows have been designed using Xilinx System Generator and used to filter the noisy ECG signal. Various windows have also been compared for filtering noises in the ECG such as baseline wander noise and muscle artifact noise, and the best window among them is determined using MATLAB Simulink.

    Keywords: Finite impulse response (FIR), Electrocardiogram (ECG), Muscle artifacts (MA), Baseline wander (BW)
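    Window-based FIR design of the kind compared above can be prototyped in a few lines with SciPy before moving to a hardware flow. The sketch below is an assumption-laden illustration: the 360 Hz sampling rate, the 0.5 Hz high-pass cutoff for baseline wander, and the filter length are not taken from the paper.

```python
import numpy as np
from scipy import signal

FS = 360.0      # assumed ECG sampling rate (Hz)
NUMTAPS = 101   # assumed filter length (odd, so the high-pass is a Type-I linear-phase FIR)

def design_highpass(window) -> np.ndarray:
    """High-pass FIR (0.5 Hz cutoff) to suppress baseline wander, designed by the window method."""
    return signal.firwin(NUMTAPS, cutoff=0.5, fs=FS, pass_zero=False, window=window)

if __name__ == "__main__":
    for window in ("hamming", "hann", "blackman", ("kaiser", 8.0)):
        taps = design_highpass(window)
        w, h = signal.freqz(taps, worN=8192, fs=FS)
        gain = lambda f: 20 * np.log10(abs(h[np.argmin(np.abs(w - f))]) + 1e-12)
        name = window if isinstance(window, str) else "kaiser"
        # Compare how strongly each window suppresses 0.2 Hz wander while passing 5 Hz ECG content.
        print(f"{name:9s}: gain at 0.2 Hz = {gain(0.2):6.1f} dB, gain at 5 Hz = {gain(5.0):5.1f} dB")
```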
  • M .Panneerselvam, S. Chinmaya Narayany, M .Farah Mariam, K .Hari Priya, K. Narmatha Pages 1529-1539

    Interactions between input and output variables are a prevalent challenge in the design of multi-loop controllers for multivariable processes, and they can be a major stumbling block to obtaining good overall performance of a multi-loop control system. A decomposed dynamic interaction analysis is proposed to address this limitation by decomposing the multi-loop control system into a series of n independent SISO systems, each with its own PID controller. The multivariable decoupler and multi-loop PID controller are applied to the Two Tank Conical Interacting System (TTCIS), which is chosen because it is a benchmark problem used by many researchers. First, the mathematical model of the TTCIS is derived from first principles. The non-linear system is linearized using the Jacobian matrix and decomposed into multiple SISO systems. The controller design for the process is then obtained, and an RGA matrix is constructed to minimise the interaction effects. Simulation results using the TTCIS are provided to demonstrate the efficiency of the suggested strategy.

    Keywords: Multi-loop PID controller, Decoupling, Linearization, Two Tank Conical Interacting System
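    The RGA used above to assess loop pairings is computed from the steady-state gain matrix as the element-wise product of G and the transpose of its inverse. A minimal sketch follows with an illustrative 2x2 gain matrix; the numbers are assumptions, not the TTCIS model.

```python
import numpy as np

def relative_gain_array(G: np.ndarray) -> np.ndarray:
    """RGA = G .* (G^-1)^T, computed element-wise from the steady-state gain matrix."""
    return G * np.linalg.inv(G).T

if __name__ == "__main__":
    # Illustrative 2x2 steady-state gain matrix of a two-input, two-output process.
    G = np.array([[2.0, 0.5],
                  [0.8, 1.5]])
    rga = relative_gain_array(G)
    print(np.round(rga, 3))
    # Rows and columns of the RGA each sum to 1; elements near 1 suggest good loop pairings.
    print("row sums:", np.round(rga.sum(axis=1), 6))
```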
  • S .Navaneethan, U. Swetha Pages 1541-1552

    Human motion tracking is a significant problem in the rehabilitation phase of people with leg injuries. To monitor and analyze them reliably and at low cost, the knee and thigh angles of the human leg are estimated using sensors. The human leg is modeled as a two-link revolute-joint robot. Initially, switched linear models of the human leg are considered and, since the models are linear, the Kalman filtering algorithm is applied to obtain the estimates. Results are obtained for the Kalman filtering algorithm, and it is observed that reliable estimates cannot be obtained with it. Considering the non-linearity of the human leg, the nonlinear model is then obtained and the parameters are estimated using the Extended Kalman filtering algorithm. The results obtained are reliable. Based on these values, the rate of recovery of the patient during the rehabilitation phase can be assessed. Furthermore, this data can be sent to physicians over the Internet of Things.

    Keywords: Switched linear model, Extended Kalman Filter
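    As a compact illustration of the linear estimation step discussed above, the sketch below runs a constant-velocity Kalman filter on a noisy joint-angle measurement. The process and measurement noise values and the simulated angle trajectory are assumptions, and the full nonlinear EKF of the paper is not reproduced here.

```python
import numpy as np

def kalman_constant_velocity(z: np.ndarray, dt: float, q: float = 1.0, r: float = 4.0) -> np.ndarray:
    """Estimate [angle, angular velocity] from noisy angle measurements z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition (constant-velocity model)
    H = np.array([[1.0, 0.0]])                  # only the angle is measured
    Q = q * np.eye(2)                           # process noise covariance (assumed)
    R = np.array([[r]])                         # measurement noise covariance (assumed)
    x, P = np.zeros(2), np.eye(2)
    estimates = []
    for zk in z:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = zk - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

if __name__ == "__main__":
    dt = 0.01
    t = np.arange(0, 5, dt)
    true_angle = 30 + 20 * np.sin(0.8 * np.pi * t)                  # toy knee-angle trajectory (deg)
    noisy = true_angle + np.random.normal(0, 2.0, size=t.size)      # noisy sensor readings
    est = kalman_constant_velocity(noisy, dt)
    print("RMS error, raw vs filtered:",
          np.sqrt(np.mean((noisy - true_angle) ** 2)).round(2),
          np.sqrt(np.mean((est - true_angle) ** 2)).round(2))
```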
  • Arun Kumar Pinagapani, S. Iniya Pages 1553-1560

    A Proton Exchange Membrane Fuel Cell (PEMFC) converts the chemical energy liberated during the electrochemical reaction of hydrogen and oxygen into electrical energy and has huge potential in transport, stationary and portable applications. It is an open-loop stable system, but it does not settle at the given set point. Hence controllers are required to make the system settle at the set point and also to prevent the PEMFC from oxygen starvation. A simulation-based approach to controlling the PEMFC with a Proportional Integral Derivative (PID) controller and a Fractional Order PID (FOPID) controller is investigated in this paper. The FOPID controller is tuned using two approaches, namely KC auto-tuning and the Oustaloup approximation. The performance of the two FOPID controllers is compared with the PID controller. The FOPID controller results in better closed-loop time-domain characteristics than the PID controller, as shown by simulations in MATLAB.

    Keywords: PEMFC, FOPID, PID, Controller, Fractional order
  • Lala Septem Riza, Muhammad Ilham Nurfathiya, Jajang Kusnendar, Khyrina Airin Fariza Abu Samah Pages 1561-1572

    The objective of this research is to design and implement a computational model to determine DNA barcodes by utilizing the Particle Swarm Optimization (PSO) algorithm implemented on big data platforms, namely Apache Hadoop and Apache Spark. The steps are as follows: (i) inputting DNA sequences to the Hadoop Distributed File System (HDFS) in Apache Hadoop, (ii) pre-processing the data, (iii) implementing PSO by utilizing User Defined Functions (UDF) in Apache Spark, (iv) collecting the results and saving them to HDFS. After obtaining the computational model, the following two simulations were carried out: the first scenario uses 4 cores and several worker nodes, while the second consists of a cluster with 2 worker nodes and several cores. In terms of computational time, the results show a significant acceleration of the big data platforms over the standalone implementation in both experimental scenarios. This study shows that the computational model built on the big data platform extends the features and accelerates the computations of previous research.

    Keywords: Big data, Algorithm, Particle swarm optimization, Similarity check, Motif discovery, DNA barcoding
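    The serial core of the PSO step is small enough to show directly; in a Spark deployment the fitness evaluations of the particles would essentially be distributed as a parallel map. Below is a minimal NumPy sketch minimizing the sphere function; the fitness function, swarm size and coefficients are illustrative assumptions, not the DNA-barcoding objective of the paper.

```python
import numpy as np

def pso(fitness, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Plain particle swarm optimization; returns the best position and value found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))            # particle positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])           # on Spark this map would be distributed
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

if __name__ == "__main__":
    sphere = lambda p: float(np.sum(p ** 2))               # toy fitness function
    best, best_val = pso(sphere)
    print("best value:", round(best_val, 6))
```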
  • Kalyani Chaudhari, Shruti Oza Pages 1573-1583

    Nuchal translucency (NT) detection and thickness measurement is a milestone in the prediction of abnormalities, in addition to chromosomal disorders, in a fetus in ultrasound imagery. Nuchal translucency is an accumulation of fluid at the back of the foetal neck which is closely associated with chromosomal abnormalities and cardiac defects within the first trimester of pregnancy. At the hospital, sonographers manually estimate the thickness in the mid-sagittal plane of the nuchal translucency, which is a significant marker for prenatal screening. This conventional manual process is quite time-consuming and requires a skilled technician. In this methodology, an automatic NT detection method based on SIFT keypoints and a GRNN is proposed for the mid-sagittal plane. This non-invasive approach is crucial not just for the assessment of NT, but also for the detection of severe deformities and the identification of high-risk pregnancies. The proposed method is tested on a large image dataset, which shows that the proposed technique has better accuracy than well-known state-of-the-art methods. The proposed SIFT- and GRNN-based method has an error of 0.02, which is much lower than that of SVM, ANN, NB and KNN.

    Keywords: GRNN, Nuchal translucency, NT, SIFT
  • Neerajkumar S. Sathawane, Ulhaskumar Gokhale, Dinesh Padole, Sanjay Wankhede Pages 1585-1594

    Cardiovascular diseases are the world's principal cause of death, accounting for about 17.9 million deaths per year as reported by the World Health Organization (WHO). Arrhythmia is a heart condition characterized by irregularity of the heartbeat. The goal of this study is to develop a new deep learning technique to accurately interpret arrhythmia using one-second segments. This paper introduces a novel method for automatic GAN-based arrhythmia classification. The input ECG signal is derived from the fusion of the well-known MIT-BIH PhysioNet dataset and some hospital ECG databases. The ECG segments over time are used to detect 15 different classes of arrhythmia. The GAN network uses an attention-based generator to learn local essential features and to maintain data integrity in both the time and frequency domains. The highest accuracy obtained is 98%. It can be inferred from the results that the proposed approach is able to make meaningful predictions and produces excellent performance on the related metrics.

    Keywords: Electrocardiogram, ECG classification, Inception, GAN, Generative adversarial network
  • Pankaj Pramod Chitte, Ulhaskumar Gokhale, Vivek Kapur, Dinesh Padole Pages 1595-1604

    Cervical spondylosis is a recurring spinal syndrome in which the spine progressively stiffens and can eventually become fully rigid. Early diagnosis is an efficient way of improving the recovery rate and reducing costs. Due to the difficult and involved procedure for recognizing cervical spondylosis in its initial stages, this area has remained largely unaddressed. Strong correlations between the vertebrae make the automatic detection procedure challenging, and the minor variations in the X-ray image make visual interpretation a challenging task requiring skilled examiners. Even so, the problem remains open, and the feasibility of an automatic detection framework has not yet been addressed for this application. Thus, a deep learning-based method was used to predict potential indicators of cervical spondylosis. The proposed system can be used to detect the onset of cervical spondylosis in the early stages using deep learning techniques.

    Keywords: Cervical spine, Cervical spondylosis, Deep learning, X-Ray imaging, inception
  • S. Prabhakaran, B. Vinod, S. Suresh, K. Asokkumar Pages 1605-1609

    Many robots were installed in the manufacturing industry in the last decade, which shows the importance and value of robot systems in production. But many robots are not collaborative with humans. Collision detection is a fundamental issue for the safety of a robotic cell. All existing collision detection sensors work by opposing force (contact between robot and human), which is injurious to humans due to the high collision force of the robot at maximum speed. This research work demonstrates the use of a capacitive sensor for collision detection, and the experiment shows collision avoidance in a non-contact manner. The sensors are mounted on the links of the robot. If the robot comes close to a human, the capacitive sensor detects the human presence and sends a signal to the robot controller before the robot touches the human. The controller will turn off or hold the robot in position immediately based on the sensor signal. This technique therefore ensures greater human safety compared to the current robot system. The sensor can be installed on new robots and retrofitted to old robots.

    Keywords: Robot collision, Capacitance sensor, Non-contact sensor, Human safety
  • A. Bhuvaneswari, T.A. Venetia Pages 1611-1630

    Today, online shopping has evolved into a prominent business, and vendors have few opportunities to improve their sales. A machine learning algorithm can be used to predict what should be sold in a particular month so that sales can be increased. Once the prediction is done, a dashboard is created to display which products should be offered to achieve high sales. Sales are usually billed and analyzed with the help of an expert, but not everyone has the resources to get help from experts; vendors rely on their experience, and people who started their businesses only a few years ago lack experience and need support. To help vendors improve their business, a sales prediction is made for each month and a dashboard displays the items to be put on offer in a particular month. For the prediction, the Random Forest machine learning algorithm is used; it is well suited to this task and is based on decision trees. The scope of this project is to develop the random forest model for predicting monthly product sales from January 2013 to October 2015.

    Keywords: E-commerce, Machine learning, Artificial intelligence, Online advertising, Random forest algorithm
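    A minimal scikit-learn sketch of the kind of monthly sales model described above follows; the feature names, the synthetic data, and the train/test split are illustrative assumptions rather than the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 1000
# Toy features: month index (1-34 spans Jan 2013 - Oct 2015), item id, item price.
X = np.column_stack([
    rng.integers(1, 35, n),        # month index
    rng.integers(1, 50, n),        # item id
    rng.uniform(5, 500, n),        # price
])
# Toy target: monthly units sold, loosely dependent on season and price.
y = 20 + 5 * np.sin(X[:, 0] / 12 * 2 * np.pi) - 0.02 * X[:, 2] + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE:", round(mean_absolute_error(y_test, pred), 3))
```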
  • B. Sai Lakshmi Chaitanya, K. Rajalakshmi Pages 1631-1638

    The present trend towards low power and area has led to the design of many efficient algorithms for speech processing in portable devices. In biomedical applications like hearing aids, high performance is not the key area of interest; area and power also play a major role. There are many factors which affect market penetration, so the system can run as slowly as possible to trade speed for power and area. An area-efficient folded IIR filter is designed. The folded architecture uses the concept of time multiplexing and has only one multiplier and one adder to perform all the operations. Thus the area is reduced, and this filter is converted into an IP. The area and resource utilization of the folded IIR filter is compared with the normal IIR filter design. A FIR filter is also designed and converted into an IP. This FIR IP is used in the block design to process a corrupted audio signal. The ZedBoard is based on the Xilinx Zynq-7000 All Programmable SoC and consists of a processing system and programmable logic. The filter design is implemented in the programmable logic part. The design is synthesized and the bitstream is generated. The generated bitstream, along with the hardware, is exported to the SDK, where the corrupted audio signal is processed on the ZedBoard.

    Keywords: Low power, IP, multiplexing, SoC, Processing system, Programmable logic
  • R. Vidhypriya Pages 1639-1648

    The growth of the Internet of Things and mobile devices relies heavily on independent and distributed operation in wireless networks. A focus on the allocation of spectrum for effective communication, the mitigation of interference and the reduction of energy consumption in the wireless environment is essential. The non-availability of spectrum in a wireless network can be overcome by spectrum reuse in Cognitive Radio Femtocell Networks (CRFN), which improves indoor communication coverage. The spectrum is sensed at regular intervals by the secondary user (SU) to detect the presence of the primary user (PU), but sensing the spectrum reduces the performance and throughput of the secondary users. To overcome the above, in this research a novel multichannel spectrum allocation (MSA) technique combined with a decode-and-forward (DF) based cooperative spectrum sensing scheme is proposed. The information rate that can be transmitted over a given bandwidth is greatly enhanced by the proposed multichannel resource allocation (MRA) technique. It is evident from the simulation results that the throughput of the SUs is boosted compared with established techniques.

    Keywords: Cognitive radio femtocell network, Resource allocation, Cooperative spectrum sensing, Primary user, Licensed band, Secondary user allocation
  • C.D. Anisha, K.G. Saranya Pages 1649-1654

    A stroke occurs when the blood supply to the brain is blocked, leading to a lack of oxygen reaching the brain. Early diagnosis of stroke is needed to handle stroke emergencies efficiently. Integrating Artificial Intelligence (AI) into the early diagnosis of stroke provides efficiency and flexibility. Artificial Intelligence, which mimics human intelligence, has a wide range of applications from small-scale systems to high-end enterprise systems and has emerged as an efficient and accurate decision-making tool in healthcare. Machine Learning (ML) is a subset of Artificial Intelligence, and incorporating machine learning techniques into stroke diagnosis systems provides faster and more precise decisions. The proposed system aims to develop an early diagnosis of stroke disorder using a homogeneous logistic regression ensemble classifier. Logistic regression is a linear algorithm that uses maximum likelihood estimation for its predictions and is a standard machine learning model for two-class problems. The prediction is improved by combining the predictions of two or more logistic regression models using a bagging ensemble classifier, thereby increasing the accuracy of the stroke diagnosis system. The combination of the predictions of two or more models of the same type is known as a homogeneous ensemble classifier. The results obtained show that the proposed homogeneous logistic regression ensemble model has higher accuracy than a single logistic regression.

    Keywords: Stroke, machine learning, logistic regression, Homogenous logistic regression ensemble classifier
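    A minimal scikit-learn sketch of a homogeneous bagging ensemble of logistic regression models, of the kind described above; the synthetic two-class data and the number of base estimators are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-in for a tabular stroke dataset: 1000 patients, 10 features, 2 classes.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

single = LogisticRegression(max_iter=1000).fit(X_train, y_train)
ensemble = BaggingClassifier(
    estimator=LogisticRegression(max_iter=1000),  # homogeneous base learner
    # note: scikit-learn >= 1.2 uses `estimator=`; older versions use `base_estimator=`
    n_estimators=25,
    random_state=0,
).fit(X_train, y_train)

print("single LR accuracy :", round(accuracy_score(y_test, single.predict(X_test)), 3))
print("bagged LR accuracy :", round(accuracy_score(y_test, ensemble.predict(X_test)), 3))
```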
  • M. Aishwarya, N. Ilayaraja, R.M. Periakaruppan Pages 1655-1663

    The raw data obtained in the form of search results may be large for any particular problem, but often only a relatively small subset of the data is relevant, and a search engine does not enable discovery of that relevant subset of text data in a large text collection. In this paper, a solution to a problem called conformity to truth, which studies how to find the websites with the maximum number of true facts from a large amount of conflicting information on a user-defined topic, is proposed. Two algorithms, called ParaSearch and FactFinder, are proposed, which help in identifying the best web links for searching general information and for finding individual facts, respectively. In ParaSearch, latent Dirichlet allocation (LDA) is used to identify the top 10 frequent terms, from which a similarity matrix is constructed to identify the best web pages. In FactFinder, semantic processing is used to identify the best web pages, building upon the existing PageRank algorithm to further optimize the search results. The results show that ParaSearch can identify web pages with the maximum number of facts conforming to the truth much better than popular search engines, and the ambiguity of individual facts is decreased to a great extent by using the FactFinder algorithm. Thus these algorithms increase the accuracy of identifying relevant web links for a given search word compared with most popular search engines.

    Keywords: frequency analysis, Latent Dirichlet allocation, text mining
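    The LDA step of ParaSearch (extracting representative frequent terms from retrieved pages) can be sketched with scikit-learn as follows; the tiny document collection and the number of topics and terms are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "solar panels convert sunlight into electricity using photovoltaic cells",
    "wind turbines generate electricity from moving air",
    "photovoltaic cells and wind power are renewable electricity sources",
    "fossil fuels release carbon dioxide when burned for electricity",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)          # term-frequency matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:10]]   # top 10 terms per topic
    print(f"topic {topic_idx}: {', '.join(top)}")
```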
  • N. Geetha Pages 1665-1671

    The Internet of Things (IoT) has emerged as a critical technology for environmental applications. Water is a natural resource and a basic need for living things, yet in the current world there is a scarcity of water due to overpopulation and the contamination of water resources. It is crucial to safeguard our drinking water for current and future generations. An IoT-based smart water quality monitoring system is proposed to monitor the quality of water. Different sensors are used to monitor parameters such as the pH value and turbidity of the water. A microcontroller unit is interfaced with the sensors to process the data, and GSM is used to send the information to the monitoring centre so that further actions can be taken. The results show that the water quality is verified and an SMS is sent to the higher authority.

    Keywords: pH sensor, GSM module, monitoring, pollution
  • B. Nivedetha, P. Thamaraikannan Pages 1673-1678

    A wireless-enabled helmet paired with a smartphone can effectively reduce the chance of accidents. Many youngsters choose bike riding for fun, economy, the smaller parking space required and easier manoeuvring in traffic compared to other vehicles. The proposed work uses IR sensors to ensure that riders are wearing helmets during travel, and communicates the accident location to a nearby hospital and to family members. This wearable device consists of several sensors which send their readings to the Arduino application through WiFi. Most importantly, the motorbike ignition system is connected in such a way that it will not start unless the rider is wearing a helmet and has not consumed alcohol. The system is also useful during night travel to detect whether the rider is tired or sleepy. The work is further enhanced with a built-in navigation system to provide a stress-free ride.

    Keywords: Smart Helmet, Accident, Alcohol detection, Vibration Sensor, GPS, GSM, Arduino, Solar panel
  • V. Krishnaveni *, R. Keethana Pages 1679-1691

    In this work, a robust rain removal algorithm is proposed for removing rain from still images. The algorithm uses a deep network architecture called DerainNet for effective rain removal. The proposed network directly learns the mapping relationship between rainy and clean image detail layers from the given training data. To modify the objective function and improve the deraining process, other deep CNN based architectures increase the width or depth of the network, which in turn increases its complexity. This work instead makes use of image processing domain knowledge, which reduces the complexity of the network: instead of training on the entire image, only the detail layer of the image is trained on. The detail layer of the image is obtained using two low-pass filters applied one after the other, a guided filter and an L0-smoothing filter. The results obtained show that the proposed network performs better deraining on images with light rain streaks in comparison to paper [2]. Python version 3.8 is used for this work.

    Keywords: CNN, Guided filter, L0-Smoothing filter, Detail layer
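    The "detail layer" idea above amounts to low-pass filtering the image and subtracting: detail = image - base. A minimal sketch follows using a simple box filter as a stand-in low-pass stage; the guided filter and L0-smoothing used in the paper would replace it, and the kernel size is an assumption.

```python
import numpy as np
import cv2

def split_base_detail(img: np.ndarray, ksize: int = 15):
    """Split an image into a smooth base layer and a high-frequency detail layer."""
    img = img.astype(np.float32) / 255.0
    base = cv2.blur(img, (ksize, ksize))    # low-pass stand-in; a guided / L0 filter would go here
    detail = img - base                      # rain streaks and edges live mostly in this layer
    return base, detail

if __name__ == "__main__":
    rainy = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in for a rainy image
    base, detail = split_base_detail(rainy)
    print(base.shape, float(detail.min()), float(detail.max()))
    # In a DerainNet-style setup, only the detail layer would be fed to the CNN during training.
```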
  • N. Ilayaraja *, R.M. Periakaruppan Pages 1693-1702

    Personalized local mobile search aims at finding the right on-the-spot information that is most relevant to the user's request. It is implemented as a mobile application where the user can access nearby places based on his/her current location. In today's technology-driven world, user profiles are the virtual representation of each user and include a variety of user information such as personal, interest and preference data. These profiles are the outcome of the user profiling process and are essential for service personalization. The user-profile-based personalization approach can be applied to enhance the power of mobile local search for local spots and contributes significant convenience to location-based mobile searching. The system takes user information such as personal, health, entertainment and preference data, and these parameters are passed to the Google Maps API for personalized query processing. As a result, the user will receive the most relevant services rather than simply the closest one.

    Keywords: Robot collision, Capacitance sensor, Non–Contact Sensor, Human safety
  • S. Karthik*, Ila. Vennila Pages 1703-1712

    The need for power in today's society is growing by the day, and the use of natural resources to generate power is being maximized. This work is designed to help increase the energy efficiency of a home's electrical equipment. To reduce energy usage, appliances are connected to a simple network that can be accessed remotely over the internet. The assessment of the home's environmental conditions is gathered on the web server and can be accessed precisely via smartphone or the internet. This technology can also regularly provide real-time electrical parameters in the form of a web application.

    Keywords: IoT, Energy management, Home automation, Arduino Mega 2560, WiFi module
  • J. Agalya*, P. Prabavathi, S. Subha Rani Pages 1713-1717

    A dual-band monopole antenna for passive RFID tags, operating at 2.45 GHz and 5.8 GHz, is presented and simulated, with suitable bandwidth and gain for RFID applications. The monopole is optimized using HFSS to obtain dual-band characteristics. The antenna is designed on a low-cost, low-profile substrate to achieve a reasonable gain. It has a total area of 38×28 mm2 and is mounted on an FR4 substrate with a thickness of 1.6 mm, a dielectric permittivity constant of 4.4 and a loss tangent of 0.025.

    Keywords: Monopole antenna, RFID, Passive tag
  • Hossein Bastin *, Seyed Abdul Hamid Sabet, Masoud Salehi Razouh, Abdolkarim Hosseinpour Pages 1719-1730

    Extensive studies have been conducted on the relationship between monetary policy transmission channels and economic growth; however, there is no comprehensive theory in the literature. The relationship between monetary policy channels and economic growth is influenced by many economic and non-economic factors that vary depending on a country's economic and political conditions. A key factor in Iran over the last four decades has been economic sanctions. Accordingly, the purpose of this study was to investigate the non-linear effects of the sanctions index on the relationship between monetary policy transmission channels and economic growth in Iran during the period 1391-1397, using a non-linear approach and the NARDL method. The findings of this study showed that, in the period under review, the exchange rate had a negative and significant effect on economic growth, which became stronger when sanctions were introduced into the model. Similar results were obtained for the bank credit variable, with the difference that bank credit had a positive and significant effect on economic growth, strengthened by the sanctions index in the model. The results also showed that interest rates did not have a significant effect on economic growth.

    Keywords: Sanctions index, Monetary policy, Transmission channels, Economic growth
  • R. Subalakshmi *, B. Radhakrishnan Pages 1731-1743

    In this paper, the authors establish the approximate and exact controllability of semilinear non-autonomous impulsive neutral stochastic evolution integrodifferential systems with variable delay in a real separable Hilbert space. The results are established using a fixed point approach. Finally, an illustrative example is provided.

    Keywords: Impulsive neutral stochastic evolution equation, Fractional power operator, Approximate controllability, Exact controllability, Banach fixed point theorem
  • M. Subathra *, V. Umarani Pages 1745-1751

    In recent years of Natural Language Processing research, the name ambiguity problem has remained unresolved when retrieving author name information from bibliographic citations in a digital library system. In this paper, a feature ranking model is investigated that resolves the ambiguity problem with the Analytic Hierarchy Process (AHP). The AHP procedure prioritizes and assigns weights to certain criteria, which form a judgement matrix called the pairwise comparison matrix. The AHP analysis is used to determine the preprocessing level based on the Levenshtein distance. Finally, the AHP identifies the co-author criterion as having the highest priority among the criteria taken from the digital library data set.

    Keywords: NLP, citations, digital library, Levenshtein distance, AHP
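    The Levenshtein distance used in the preprocessing step has a standard dynamic-programming form. A minimal sketch comparing two author-name strings follows; the example names are illustrative only.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or substitutions turning a into b."""
    prev = list(range(len(b) + 1))            # distances from the empty prefix of `a`
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]

if __name__ == "__main__":
    print(levenshtein("A. J. Obaid", "Ahmed J. Obaid"))   # small distance -> likely the same author
    print(levenshtein("A. J. Obaid", "B. Barzegar"))      # large distance -> different authors
```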
  • B. Vinod *, B. Bindu, V. E. Jayanth Akash, S. Dinesh Kumar Pages 1753-1760

    Material handling robots are replacing human workers on most manufacturing shop floors. The Robot Operating System (ROS) is an open-source framework that enables the visualization and implementation of various complex robots and their functions. A 6-DoF robotic manipulator with a gripper is designed to perform pick and place operations. The aim is to integrate the designed robot with the Robot Operating System. The integrated system is then visualized and controlled using Gazebo and RViz to perform pick and place operations.

    Keywords: Robotic manipulator, Pick, place, Robot operating systems, Motion planning
  • V. Vijayanand *, R. Jayachitra, P. Guhapranav Pages 1761-1774

    Productivity improvement is vital to sustaining a position in the market. This work outlines the significance of productivity improvement encompassing quick changeovers, fixture design modifications and improved material handling, thereby meeting the demand requirements. There are a total of 10 component varieties: D88 U1, D88 L1, D110 U2, D110 L2, D132 U3, D132 L3, D150 U4, D150 L4, D168 U5 and D168 L5. D88 contributes 82% of part production and is hence termed a high-volume component, whereas the D110, D132, D150 and D168 components are termed low-volume components. In the current state there are a total of 7 machines, but in the future state there would be only 4. Of these 4 machines, 2 will deliver the same output of 220 D88 parts per day as of Mar-2021 and the other 2 will satisfy fluctuating order arrivals on the low-volume production line. Apart from effective utilization of resources, deploying the latest technologies is equally important in this competitive environment, and industries need to incorporate emerging technologies to sustain their market position. The Internet of Things (IoT) connects the physical world with the digital world and simplifies life by automating a wide variety of processes across diverse environments. This work highlights the use of the Industrial Internet of Things (IIoT) for replenishment systems in a small-scale industry environment and outlines its effect on productivity improvement. This idea is beneficial if implemented at low cost, unlike inventory management software, which is priced considerably higher and is often unaffordable for small-scale industries.

    Keywords: IIoT, Productivity improvement, Cycle time, Capacity plan, Order quantity, Safety stock, Replenishment plan
  • R. Suresh Kumar *, P.K. Rajesh, S. Neelakrishnan Pages 1775-1791

    Electric vehicles will become an inevitable part of future transportation because of the increasing concerns about global warming and the climate change effects caused by gasoline and diesel vehicles. Lithium-ion cells are the primary candidates for energy storage in electric vehicles. Lithium-ion cells are sensitive to operating temperature; operating them beyond the optimum temperatures reduces their lifetime and can lead to thermal runaway under extreme conditions. Hence, a thermal management system is required. In this work, a simple 'On-Off' control is used and the upper and lower thresholds are optimized to reduce the energy consumption and the temperature difference between the cells (a minimal sketch of the switching logic follows the keywords). Three coolant flow rates are selected and analyzed for each pair of upper and lower thresholds. A MATLAB Simulink model and a spreadsheet are used for the analysis, and the models are validated by experiments. It is found that a control strategy of 32 °C to 35 °C, with a coolant flow rate of 0.67 kg/s, is the best of the selected strategies at reducing energy consumption and temperature difference. Running the cells at relatively higher temperatures, within the optimum range, helps reduce energy consumption and temperature difference.

    Keywords: Lithium-ion battery, Battery Thermal Management, On-Off control, Energy consumption, Temperature difference, Electric Vehicle
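    A minimal sketch of the 'On-Off' (hysteresis) switching logic described above, using the 32 °C / 35 °C thresholds from the abstract; the temperature trace in the example is illustrative, and the full thermal model is not reproduced.

```python
def on_off_coolant_command(cell_temp_c, pump_on, lower_c=32.0, upper_c=35.0):
    """Hysteresis ('On-Off') decision for the coolant pump.

    The pump switches on when the hottest cell exceeds the upper threshold and
    switches off once it falls below the lower threshold; in between, the
    previous state is kept so the pump does not chatter."""
    if cell_temp_c >= upper_c:
        return True          # start coolant flow
    if cell_temp_c <= lower_c:
        return False         # stop coolant flow
    return pump_on           # inside the band: keep the previous state

# Example: sweep a synthetic temperature trace through the controller.
state = False
for t in [30.0, 33.0, 35.2, 34.0, 31.9, 33.5]:
    state = on_off_coolant_command(t, state)
    print(f"{t:5.1f} degC -> pump {'ON' if state else 'OFF'}")
```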
  • K. Anitha *, R. Aruna Devi, M. Munir, K.S. Nisar Pages 1793-1806

    The unification of rough sets with graphs has found notable applications across the fields of engineering. With the rapid, exponential growth of the worldwide web, it is necessary to organize data, and a major part of it, such as Google links and social networks, can be represented as graphs. In the presence of uncertainty, however, the concepts of classical graph theory cannot handle complex networks. To resolve these issues, Tong He introduced the concept of rough graphs in 2006. In this paper, we introduce metric dimensions in rough graphs along with their mathematical properties.

    Keywords: Rough Sets, Graphs, Set Approximations, Metric Dimension, Resolving set
  • N. G. Bathrinath, M. Yuvaraja * Pages 1807-1825

    In the era of electric vehicles, a regenerative shock absorber can be fitted in place of a normal shock absorber to continuously produce electricity that can be amplified and fed to the battery; a piezoelectric generator can be used to harvest the mechanical vibration energy available on an electric vehicle. The piezoelectric material can be placed in the shock absorber housing above the piston head. Due to bumps and pits on the road, continuous generation of electricity is possible; it can then be amplified and fed back to the battery regeneratively. A 3D CAD model of the piezo housing was modelled, the force acting on the housing of the piezoelectric materials was analyzed and found to be 103.5 kN, and the equivalent power produced was calculated. Various piezoelectric materials were studied; PZT 5A was found to be the best material to act as transducer and quartz the best material to act as actuator. A proof of concept showed the power produced by the piezo material, and a comparison with PZT 5A gave a produced energy of 19 J, which is 9% of the power produced by the charger of an electric vehicle. The mileage calculation showed an increase of 3 km, which is 4% of the maximum distance covered by an average electric bike.

    Keywords: Electric vehicle (EV’s), Regenerative, Shock absorber, Transducer, Quartz, Actuator
  • V. Chowthri, A. Uma *, P. Kalpana Pages 1827-1833

    An adaptive filter is a real-time computational device that iteratively models the relationship between its input and output signals. It is based on an adaptive algorithm that iteratively self-adjusts the linear filter coefficients to minimize the power of the error signal e(n). The LMS method is one of the most widely used adaptive algorithms for adjusting the coefficients of adaptive filters (the basic recursion is sketched after the keywords). The error-computation block and the weight-update block, which determine the filter's efficiency, are the two key computing blocks of the direct-form LMS adaptive filter. In this paper, the adaptive filter is implemented in two different architectures, namely a zero-adaptation-delay adaptive filter and a two-adaptation-delay adaptive filter, which results in low power consumption and less area complexity. The zero-adaptation-delay adaptive filter provides nearly 52% savings in area, and the delay decreases by 26% in the two-adaptation-delay adaptive filter over the conventional adaptive filter. Hence, based on the speed and area required for the application, either of the proposed structures can be used.

    Keywords: Adaptive filter, Least mean square algorithms, LMS adaptive filter, Adaptation delay, Area, Delay
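    A minimal software sketch of the direct-form LMS recursion (error computation plus weight update) that the hardware architectures above implement; the system-identification example, tap count and step size are illustrative assumptions.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """Direct-form LMS adaptive filter.

    x: input signal, d: desired signal, mu: step size.  Returns the output y
    and the error e(n) = d(n) - y(n); the update w <- w + mu*e(n)*x(n) is the
    standard LMS weight-update recursion."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        x_n = x[n - n_taps + 1:n + 1][::-1]   # regressor: x(n), x(n-1), ...
        y[n] = w @ x_n                        # error-computation block
        e[n] = d[n] - y[n]
        w += mu * e[n] * x_n                  # weight-update block
    return y, e

# Example: identify an unknown 4-tap FIR system from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.6, -0.3, 0.2, 0.1])
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
_, e = lms_filter(x, d, n_taps=8, mu=0.01)
print("final MSE:", float(np.mean(e[-500:] ** 2)))
```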
  • V. Naren Thiruvalar *, E. Vimal Pages 1835-1843

    The main objective of this project is to detect driver drowsiness and alert the driver, an important precautionary measure to avoid accidents. Two different algorithms based on Convolutional Neural Networks (CNN) were applied and their results compared. "Highway hypnosis" is a serious issue to be addressed while driving, especially on highways; drivers who travel on highways continuously for more than 3 hours must be aware of this serious problem, and with proper awareness of it, fatalities could be drastically reduced. In this project, a dedicated detection system coupled with an alarm is provided to alert the driver in case of drowsiness. CNNs are used since they are very effective in analyzing images and videos. A live video feed is used to detect drowsiness with suitable algorithms (a minimal PERCLOS-style sketch follows the keywords).

    Keywords: CNN, Drowsiness Detection, Viola-Jones, PERCLOS
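    A minimal sketch of the PERCLOS idea named in the keywords: the fraction of recent frames in which the eye is judged closed. The eye-aspect-ratio threshold, window length and alarm level are illustrative assumptions, not the paper's tuned values, and the CNN eye-state detector itself is not shown.

```python
from collections import deque

class PerclosMonitor:
    """Rolling PERCLOS estimate: the fraction of recent frames in which the
    eye is judged closed (eye aspect ratio below a threshold)."""
    def __init__(self, window_frames=900, ear_closed_threshold=0.2, alarm_level=0.4):
        self.window = deque(maxlen=window_frames)   # e.g. ~30 s at 30 fps
        self.ear_closed_threshold = ear_closed_threshold
        self.alarm_level = alarm_level

    def update(self, eye_aspect_ratio):
        self.window.append(eye_aspect_ratio < self.ear_closed_threshold)
        perclos = sum(self.window) / len(self.window)
        return perclos, perclos >= self.alarm_level

# Example with a synthetic eye-aspect-ratio stream (open ~0.3, closed ~0.1).
monitor = PerclosMonitor(window_frames=10)
for ear in [0.31, 0.30, 0.12, 0.11, 0.10, 0.29, 0.10, 0.09, 0.11, 0.10]:
    perclos, drowsy = monitor.update(ear)
print(f"PERCLOS={perclos:.2f}, drowsy alarm={drowsy}")
```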
  • K.V. Anusuya *, M. Sakthi Priya Pages 1845-1854

    Spectrum shortage in vehicular communications has become an area of concern due to the diverse nature of the applications served. Although the 5.9 GHz Dedicated Short Range Communication (DSRC) spectrum allocated for vehicular communication is inefficiently utilized, congestion creates the impression that the spectrum is always scarce. The issue becomes critical when a safety message is unable to access the channel or is dropped due to congestion. This paper proposes an approach to prioritize channel access for safety messages through dynamic spectrum sharing. When a safety message finds all the DSRC sub-channels occupied, one of the ongoing non-safety messages is switched over to the adjacent 5.8 GHz ISM/UNII-3 band, allowing the safety message to access the DSRC band. The proposed approach shows increased QoS and reduced channel access delay for safety messages compared to previous works.

    Keywords: Channel delay, DSRC, Safety message, Spectrum sharing, VANET
  • J. Kanchana*, S. Sindhya Pages 1855-1862

    Manual training of new employees requires considerable time and labour hours. Virtual reality and modern digital manufacturing techniques can be used to reduce the time required, thereby reducing the product cycle time. Virtual Reality (VR) provides a completely immersive and interactive experience of a simulated environment to the user and helps in better understanding the manufacturing process without consuming much of the employees' working hours. The study aims at developing a virtual reality application for the machining process of a centrifugal pump impeller. This application helps trainees understand the machining process involved and the process parameters. The factory setup and the impeller are modelled using Solidworks. The virtual reality application is developed using Unity 3D and played using Android smartphones and VR glasses. The application can be used as an alternative to traditional training processes in industries.

    Keywords: Virtual reality, Unity, Solidworks, VR glasses, Android smartphone
  • M. Manimozhi*, A. Angeline Rajathi Pages 1863-1871

    Due to nonlinear actuators or changing environmental conditions, the parameters of a process change, resulting in the failure of a conventional controller. To overcome this failure and achieve the desired output, an adaptive mechanism called a Model Reference Adaptive Controller (MRAC) can be designed. Since the conventional MRAC is aimed at first-order systems with two-variable parameter adjustment, it cannot be applied to most practical systems, including the DC motor. To overcome this, a modified MRAC, which combines the conventional MRAC with a PID controller, is designed in this paper (a minimal MIT-rule adaptation sketch follows the keywords).

    Keywords: Conventional controllers, PID, desired output, Model Reference Adaptive Controller (MRAC), modified MRAC
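    A minimal sketch of the classical MIT-rule adaptation that underlies conventional MRAC, shown here for a single feedforward gain on a first-order plant; this is not the paper's combined MRAC+PID design, and the plant and adaptation parameters are illustrative assumptions.

```python
# MIT-rule sketch: plant y' = -a*y + k*u, reference model ym' = -a*ym + k0*r,
# control u = theta*r, adaptation theta' = -gamma*e*ym with e = y - ym.
a, k, k0 = 1.0, 2.0, 1.0            # plant pole, unknown plant gain, model gain
gamma, dt, steps = 0.5, 0.01, 20000
y = ym = theta = 0.0
for n in range(steps):
    r = 1.0 if (n * dt) % 20 < 10 else -1.0   # square-wave reference
    u = theta * r
    y  += dt * (-a * y  + k * u)              # plant (explicit Euler step)
    ym += dt * (-a * ym + k0 * r)             # reference model
    e = y - ym
    theta += dt * (-gamma * e * ym)           # MIT rule gradient update
print(f"adapted gain theta = {theta:.3f} (ideal value k0/k = {k0/k:.3f})")
```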
  • Esraa Faiq*, Ali M. Ahmed, Lujain N. Yousif, Mohammed Q. Mohammed Pages 1873-1883

    This research describes the thermopower and electrical conductivity of CdTe films produced using thermal evaporation technology; the activation energies of these electrical and thermoelectric properties are estimated. The CdTe sheets have an electrical resistivity of about 10^7 Ω·cm. The electrical conductivity was studied and found to increase with temperature. Because of this dependence there are two activation energies: at low temperatures Ea1 = 0.337 eV, while at high temperatures Ea2 = 0.702 eV is the only activation energy (an illustrative Arrhenius fit is sketched after the keywords). The Seebeck coefficient (thermopower) was found to be temperature-dependent, decreasing as the temperature increased. From the thermoelectric power results, the activation energy was measured to be ES = 0.561 eV, and the CdTe film was shown to be p-type conductive.

    Keywords: Energy, CdTe, Thermal Evaporation, Seebeck coefficient
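    A small sketch of how an activation energy is extracted from conductivity-temperature data via the Arrhenius relation sigma = sigma0 * exp(-Ea/(kB*T)), i.e. from the slope of ln(sigma) versus 1/T; the (T, sigma) values below are synthetic, generated with Ea = 0.337 eV, and are not the paper's measurements.

```python
import numpy as np

K_B = 8.617333262e-5                 # Boltzmann constant, eV/K

# Synthetic conductivity data following sigma = sigma0 * exp(-Ea / (kB * T)).
T = np.array([300., 320., 340., 360., 380., 400.])     # temperature, K
sigma = 1e3 * np.exp(-0.337 / (K_B * T))                # arbitrary units

# ln(sigma) = ln(sigma0) - (Ea / kB) * (1/T), so the slope gives -Ea/kB.
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea = -slope * K_B
print(f"estimated activation energy: {Ea:.3f} eV")
```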
  • Mohammed Mahdi Hashim, Ali A. Mahmood, Mohammed Q. Mohammed Pages 1885-1904

    In the era of information and communications technology, the security of processes and methodologies is essential, and the security of information should be a key priority in the secret exchange of information between two parties. To guarantee the information's security, strategies such as steganography, watermarking, and cryptography are used. In cryptography, the secret message is converted into unintelligible text, but the existence of the secret message is noticeable; watermarking and steganography, on the other hand, hide the secret message so that its presence cannot be noticed. Presently, the design and development of an effective image steganography system faces several challenges, such as low capacity, poor robustness and imperceptibility. To surmount these challenges, a new secure image steganography method called the Pixels Contrast (PC) method is proposed, along with the eight-neighbours method and the Huffman coding algorithm, to overcome the imperceptibility and capacity issues. In the proposed method, a new image partitioning with a Henon map is used to increase security. The method has three main stages (preprocessing, embedding, and extracting), each with a different process. Different standard images were used, such as medical images and the SIPI dataset. The experimental results were evaluated with different measurement parameters such as histogram analysis, the Structural Similarity Index (SSIM), and the Peak Signal-to-Noise Ratio (PSNR). The proposed method was compared with previous works and proved to be better than existing methods. In short, the proposed steganography method outperformed the commercially available data hiding schemes, thereby resolving the existing issues.

    Keywords: Eight neighbors, Compression method, Image steganography, Security, imperceptibility
  • Samir Alshebly, Ameera Alkiffai, Athraa Kadhim Pages 1905-1911

    The strongly generalized differentiability notion is used to study the fuzzy Aboodh transform formula for the fuzzy nth-order derivative in this paper. It is also employed in an analytic technique for fuzzy fifth-order differential equations, and the related theorems and properties are demonstrated in detail. A few solved instances demonstrate the procedure (the underlying crisp transform is recalled after the keywords).

    Keywords: Fuzzy fifth-order differential equation, Fuzzy nth-order differential equation, Fuzzy number, Fuzzy Aboodh transform, Strongly generalized differentiable
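    For reference, a sketch of the standard crisp Aboodh transform and its first-derivative rule, assumed here in the usual form from Aboodh's 2013 definition; the paper's fuzzy version applies this level-wise to the lower and upper branches of a fuzzy-valued function, which is not reproduced here.

```latex
% Crisp Aboodh transform (standard definition, assumed):
\[
  \mathcal{A}\{f(t)\}(v) \;=\; K(v) \;=\; \frac{1}{v}\int_{0}^{\infty} f(t)\,e^{-vt}\,dt,
  \qquad v \in [k_1,\,k_2].
\]
% First-derivative rule, used repeatedly when transforming nth-order equations:
\[
  \mathcal{A}\{f'(t)\}(v) \;=\; v\,K(v) \;-\; \frac{f(0)}{v}.
\]
```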
  • Marwa Ayad Mohammed Badr*, Wedad Ador Wadi Pages 1913-1935

    The study aims to determine the factors affecting the world oil markets, describe the significant changes in oil prices in the short and long term, and predict the optimal price within and outside the sample limits until 2025, using data on the study variables for the years 1970-2020. This is done by fitting the price equation and the price-determining factors identified by the Committee of Oil and Energy Experts, estimated as a single-equation model in double-logarithmic form, which is more appropriate for describing the large fluctuations in world oil prices, under the assumptions of rational expectations theory and the (ARD) methodology. The model was tested according to the standard specifications, and the study found that the parameter estimates under the model constraints were consistent with the model's restrictions, so that these factors have an impact on the determination of world oil prices in both the short and the long term; the model also fits the data well for prediction and for assessing economic policies and the accuracy of oil and energy experts. Determining crude oil prices according to changes in their determining factors leads us to accept the rational expectations hypothesis and its significant role in explaining changes in oil prices.

    Keywords: oil prices, rational expectations theory, world market
  • Farah Farman∗, Hind A. Jasim, Ahmed Sameer, Asmaa Abdulrahman Haneen Pages 1937-1946

    The full synchronization of a 2 × 2 optoelectronic device network is studied numerically and experimentally. In the 2 × 2 oscillator network, each oscillator is an optocoupler (i.e., an LED coupled with a photodetector). By fixing the feedback strength (E) and increasing the bias current (δ) of each oscillator, dynamic sequences such as chaotic and periodic mixed-mode oscillations are observed. Synchronization of the unidirectionally coupled optoelectronic device network appears when the bias current equals 4.4 × 10^-4. Transitions between the synchronized and non-synchronized states are investigated through spatiotemporal distributions.

    Keywords: LED, chaos, feedback, synchronization
  • Ehsan Amiri, Ahmad Mosallanejad*, Amir Sheikhahmadi Pages 1947-1955

    One of today's challenges is detecting fake images. One type of image forgery is copy-move forgery, in which a part of the image is copied and pasted at the most similar point. Given existing algorithms and processing software, detecting forged areas is not easy, which creates challenges in various applications. Based on the Bat algorithm, the proposed method helps detect fake images by finding the forged areas. The proposed method consists of a simple image segmentation and detection of forged areas with the help of the Bat algorithm with mutation (a generic sketch of this search loop follows the keywords). According to the proposed algorithm, the image is first converted to grayscale and then divided into 100 pieces. The Bat algorithm randomly selects some image segments and performs a similarity search. The mutation operator is used to avoid getting stuck in local optima. With the help of the mutation operator, the proposed algorithm avoids local optima and can find forged areas with a precision of about 81.39% for the IMD dataset and about 81.04% for the MICC-F600 dataset.

    Keywords: Image Forgery, Copy-Move Forgery, BAT Algorithm, Mutation
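    A generic sketch of a bat-algorithm search loop with a simple mutation step, of the kind described above; the block-similarity fitness used for forgery detection is not shown, so a toy objective is minimised instead, and all algorithm parameters are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def bat_with_mutation(fitness, dim, bounds, n_bats=20, iters=200,
                      f_min=0.0, f_max=2.0, loudness=0.5, pulse_rate=0.5,
                      mutation_prob=0.1, seed=0):
    """Simplified bat algorithm with a uniform mutation operator, minimising
    `fitness` over a box `bounds = (lo, hi)`."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_bats, dim))      # bat positions
    v = np.zeros((n_bats, dim))                 # bat velocities
    fit = np.apply_along_axis(fitness, 1, x)
    best = x[fit.argmin()].copy()

    for _ in range(iters):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > pulse_rate:       # local walk around the best bat
                cand = np.clip(best + 0.01 * rng.standard_normal(dim), lo, hi)
            if rng.random() < mutation_prob:    # mutation: escape local optima
                j = rng.integers(dim)
                cand[j] = rng.uniform(lo, hi)
            f_cand = fitness(cand)
            if f_cand <= fit[i] and rng.random() < loudness:
                x[i], fit[i] = cand, f_cand
            if f_cand < fitness(best):
                best = cand.copy()
    return best, fitness(best)

# Example on a toy objective (sphere function).
best, val = bat_with_mutation(lambda z: float(np.sum(z ** 2)), dim=5, bounds=(-5, 5))
print(best.round(3), round(val, 6))
```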
  • Alyaa Abdulhussein Al-Joda*, Enas Fadhil Abdullah, Suad A. Alasadi Pages 1957-1964

    Medical data mining has recently become a widespread data mining area. In particular, diagnosing cancer is one of the most important topics that many researchers have studied to develop intelligent decision support systems to help doctors. In this research, three different classifiers are used to improve performance in terms of accuracy: Support Vector Machine (SVM), Adaptive Boosting (AdaBoost), and Random Forest (RF). Two machine learning repository datasets are used to evaluate and verify the classification methods. The classifiers are trained using a 10-fold cross-validation strategy, which splits the original sample into training and testing sets (a minimal version of this protocol is sketched after the keywords). To assess classifier efficiency, accuracy (AC), precision, recall, specificity, F1, and the area under the curve (AUC) are used. The experiments showed that the AdaBoost classifier achieved an accuracy of 100%, superior on both datasets to SVM and RF with an accuracy of 97%. The accuracy is also compared with a previous study that uses the same datasets, and the results demonstrate that the current research achieves better accuracy.

    Keywords: Classifier, AdaBoost, SVM, RF, ROC, Breast Cancer
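    A minimal sketch of the evaluation protocol described above (three classifiers under 10-fold cross-validation), assuming scikit-learn is available; the built-in breast cancer data is used as a stand-in for the repository datasets, and default hyperparameters are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "AdaBoost": AdaBoostClassifier(),
    "RandomForest": RandomForestClassifier(random_state=0),
}

# 10-fold cross-validated accuracy for each classifier.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name:12s} mean accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```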
  • Muna S. Abbas*, Fadhel S. Fadhel Pages 1965-1985

    There are two key points in this work as the main objectives. The first is how to convert an nth-order fuzzy differential equation into a first-order system of fuzzy differential equations using the notion of upper and lower bounds of the fuzzy solution to constitute the so-called interval fuzzy solution. The second is to solve the system obtained in the first step using a powerful method (the backstepping method) to provide an asymptotically stable solution by applying direct methods of stability (the Lyapunov direct method).

    Keywords: Backstepping method, Fuzzy differential equations, Uncertainty interval, Control problems, Lyapunov functions, Quadratic form
  • Hossein Yarahmadi, Mohammad Ebrahim Shiri∗, Hamidreza Navidi, Arash Sharif Pages 1987-2018

    Multi-agent systems (MAS) are one of the prominent symbols of artificial intelligence (AI) that, despite being composed of smaller entities called agents, have many applications in software development, complex system modeling, intelligent traffic control, etc. Learning in MAS, which is commonly based on Reinforcement Learning (RL), is one of the problems that play an essential role in the performance of such systems in an unknown environment. A major challenge in Multi-Agent Reinforcement Learning (MARL) is the credit assignment problem. In this paper, in order to solve the Multi-agent Credit Assignment (MCA) problem, we present a bottom-up method based on the bankruptcy concept for effective distribution of the credits received from the environment in a MAS, so that its performance is increased (a generic bankruptcy division rule is sketched after the keywords). In this work, considering the Task Start Threshold (TST) of the agents as a new constraint and a multi-score environment, as well as giving priority to agents with lower TST, three methods, PTST, T-MAS and T-KAg, are presented, which are based on the bankruptcy concept as a sub-branch of game theory. To evaluate these methods, seven criteria were used, among which density was new. The simulation results indicated that the performance of the proposed methods was enhanced in comparison with existing methods in six parameters, while it was weaker in only one parameter.

    Keywords: Multi-agent Systems, Credit assignment problem, Bankruptcy, Reinforcement Learning, Game Theory, Global Reward Game, Machine Learning
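    For intuition only, a sketch of the simplest bankruptcy rule (proportional division of a scarce reward among claimants); the paper's PTST, T-MAS and T-KAg methods additionally use task-start thresholds and agent priorities, which this generic rule does not capture.

```python
def proportional_bankruptcy(estate, claims):
    """Divide a scarce reward ('estate') among agents in proportion to their
    claims: the baseline proportional bankruptcy rule from game theory."""
    total = sum(claims)
    if total <= estate:                 # not a bankruptcy situation
        return list(claims)
    return [estate * c / total for c in claims]

# Example: a global reward of 10 shared by agents claiming 8, 4 and 2.
print(proportional_bankruptcy(10.0, [8.0, 4.0, 2.0]))
```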
  • Mohammed Ali Mohammed*, Karim Q. Hussein, Mustafa Dhiaa Al-Hassani Pages 2019-2023

    There are many audio reading systems to help blind persons read texts, but a reading system alone is not enough. This paper aims to design and implement a book recommendation and meaning system appended to an audio reading system proposed previously. The book recommendation system is based on the title of the book that the blind person reads. In addition, the meaning system is used to obtain the meaning of the most frequent word in the text that is read. Since the proposed system is used by blind persons, it uses text-to-speech tools to convert the result into audio.

    Keywords: Book recommendation, Machine learning, Meaning system, Blind persons
  • Marzieh Molavi Ghalani*, Mohammadreza Yousefzadeh Choosari, Farhad Seraji Pages 2025-2042

    This research aimed to analyze the content of the science textbooks of the fourth to sixth grades of elementary school based on their emphasis on the components of the health curriculum. The research population consisted of the elementary school science textbooks, and the statistical sample size was equal to the population, using the census sampling method. The research method was categorical content analysis, and the units of analysis were sentence and theme. The measurement tools were a reverse questionnaire and a researcher-made checklist; selective, axial, and open coding and MAXQDA software were used to analyze the data. The results showed that, among the health curriculum components (physical, mental, environmental, social, and spiritual health), the mental and spiritual health components are not mentioned in the fourth- and fifth-grade science textbooks. All the components of the health curriculum were mentioned in the sixth-grade science textbook. Among the mentioned components, the most emphasis was on environmental health and the least on mental and spiritual health.

    Keywords: Health curriculum, Components, Content analysis, Textbooks, Fourth, fifth, and sixth grade of elementary school
  • Fatemeh Hassannejad, Hassan Esmailpour *, Hamidreza Saeednia Pages 2043-2055

    This research aimed to evaluate customers' in-store shopping experience in hypermarkets in Tehran. The conceptual model consisted of 17 sub-components, including the experience of interaction with the community, friends, and family members, hedonistic value, profit-minded value, in-store sentiment, volatility, advertising, comfort, price, product, learning, design, staff, and visual, olfactory, hearing, and touch experience. Social, value, emotional, practical, intellectual, and sensory experience are examined as the six primary components of the customers' in-store experience. The measurement instruments were developed based on the qualitative findings and multi-stage screening. Data were collected with questionnaires of 72 and 66 items from 638 statistical samples obtained through stratified relative random sampling, and analyzed with exploratory factor analysis, structural equation modeling, and path analysis. The results of structural equation modeling confirmed the effect of each of the 17 components of customer experience on customer satisfaction and the effect of customer satisfaction on the intention to return to the store. According to the path analysis results, the effects of all six primary components of customer experience on customer satisfaction and the intention to return to the store, as well as the effect of customer satisfaction on the intention to return to the store, are positively supported.

    Keywords: Shopping experience, Store, Hypermarket, Structural equations
  • Johanes Fernandes Andry, Julia Gunadi, Glisina Dwinoor Rembulan, Hendy Tannady Pages 2057-2066

    In this study, we analyze how big data is implemented in the TESLA Company, using sales data. With the growth of big data and the need for its use in companies, big data is nowadays everywhere. TESLA is an American automobile and energy storage company founded by engineers Martin Eberhard and Marc Tarpenning in July 2003 under the name Tesla Motors. The company name is a tribute to inventor and electrical engineer Nikola Tesla. Eberhard said that he wanted to build an automobile manufacturer that is also a technology company whose core technologies are batteries, computer software and proprietary electric motors. As the amount of data that companies must process continues to increase, companies must keep up with the times by using big data, which can be used to move, store, and access large amounts of unstructured and disparate data in a timely manner. The method we use is quantitative, and the calculations are performed with the RapidMiner software. The results of this study are: the data comprise 2,146 units, the total volume ranges from 118,500 to 47,065,000 based on the number of existing sales, and the classification results range from 2,621,300 to 18,766,300.

    Keywords: Big data, Classification, Data, TESLA
  • C. R. Mamatha, M. Ramakrishna Pages 2067-2076

    The efficiency of routing in ad-hoc networks depends on the node traffic. One of the methods of improving network efficiency in MANETs is predicting the network traffic; it is very important to predict the characteristics of future network traffic from previous parameters. In this paper, we propose TPAM, a multipath-routing-based traffic prediction algorithm that uses an RNN architecture. TPAM consists of two modules: a multipath routing algorithm and network congestion discovery using an RNN. It is clear from the simulation results that the RNN architecture provides promising results in predicting the network traffic under varying conditions. Further, the algorithm has improved routing efficiency through the multipath selection method. Finally, the proposed algorithm has a lower end-to-end delay, lower overhead and a high success ratio.

    Keywords: multi-path route, network traffic, traffic prediction
  • K. Suresh Kumar Reddy, V. Tamizhazhagan, V. Senthil Murugan Pages 2077-2086

    VANETs play a key role in intelligent transportation systems. To avoid accidents and convey traffic information, the VANET infrastructure plays an important role. This paper concentrates on V2I communication, that is, information exchange among vehicles, which can be done with a Roadside Unit (RSU). The main function of an RSU is to act as a data repository for information exchanged to and from vehicles, such as traveller information, traffic information, accident zones, etc. An RSU is a standalone device that communicates with vehicles as well as with communication networks. RSU deployment needs to be cost-effective; once installed, the RSU equipment cannot be relocated, so the right placement of RSUs is very important. This paper proposes a cost-effective RSU deployment strategy in the VANET environment based on the principles of the adaptive beamforming method; by applying these principles, we select an effective emplacement of RSUs in different vehicular environment topologies.

    Keywords: VANETs, RSU, MUSIC, DOA
  • R.D. Sivakumar, K. Ruba Soundar Pages 2087-2092

    Image compression and image processing are two aspects that affect image-rich e-learning environments, and various methods have been proposed to process and compress images effectively. Recent works mainly concentrate on the memory complexity and processing complexity of the various techniques, and accordingly block truncation models are widely applied in e-learning fields. The Block Truncation Model (BTM) treats an image as a collection of individual blocks to be processed; these blocks are extracted and evaluated for image compression, and to compress the images the least important blocks are ignored or suppressed. At this stage, standard BTC, Absolute Moment BTC (AMBTC), Machine Learning (ML) based BTC and Deep Learning (DL) based BTC techniques have emerged from various resources (the standard per-block coding step is sketched after the keywords). This work analyzes various BTC models in terms of time efficiency, memory efficiency and computation efficiency. The results reveal detailed comparisons of e-learning based block truncation models.

    Keywords: E-learning, BTC, Functions, Comparison, Performance Evaluations
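    A minimal sketch of standard block truncation coding for a single block, in which the block is reduced to a bit-plane plus two reconstruction levels that preserve the block mean and standard deviation; the 4x4 random block is illustrative, and the ML/DL variants compared in the paper are not shown.

```python
import numpy as np

def btc_block(block):
    """Standard BTC encoding of one image block: a bit-plane (pixel >= mean)
    plus two levels a (for '0' pixels) and b (for '1' pixels)."""
    block = block.astype(float)
    m = block.size
    mean, std = block.mean(), block.std()
    bitplane = block >= mean
    q = int(bitplane.sum())                       # number of '1' pixels
    if q in (0, m):                               # flat block: one level suffices
        return bitplane, mean, mean
    a = mean - std * np.sqrt(q / (m - q))         # reconstruction level for '0'
    b = mean + std * np.sqrt((m - q) / q)         # reconstruction level for '1'
    return bitplane, a, b

def btc_decode(bitplane, a, b):
    return np.where(bitplane, b, a)

# Example on a random 4x4 block.
rng = np.random.default_rng(1)
blk = rng.integers(0, 256, (4, 4))
bp, a, b = btc_block(blk)
print(np.round(btc_decode(bp, a, b), 1))
```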
  • Abed Ali Hamad, Zakariya Yahya Algamal Pages 2093-2104

    Modelling the relationship between the response variable and several explanatory variables is a challenge in real applications when collinearity exists. Traditionally, several shrinkage estimators have been proposed to avoid this issue; among them is the Kibria and Lukman (K-L) estimator. In this study, a jackknifed version of the K-L estimator is proposed for the generalized linear model, combining the jackknife procedure with the K-L estimator to reduce the bias. Our Monte Carlo simulation results and a real data application related to the inverse Gaussian regression model suggest that the proposed estimator can bring significant improvement relative to other competing estimators in terms of absolute bias and mean squared error.

    Keywords: Collinearity, K-L estimator, Inverse Gaussian regression model, Jackknife estimator, Monte Carlo simulation
  • Surekha B. Puri, Shrinivas P. Mahajan Pages 2105-2123

    Learning music requires a two-pronged approach that includes theoretical study and practical exposure to the instrument to be learnt. While previous literature has focused on developing technologies for determining the notes of different musical instruments, the harmonium has not been popular in this research area. This research uses a hybrid approach for polyphonic triad recognition of harmonium music. Over 21,000 audio samples of harmonium, including notes and triads, were taken for training the Convolutional-Recurrent Neural Network (CRNN) model. The recorded audio samples were also used to train Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) models to comparatively analyze the efficiency of these models. The results indicate that the CRNN model is more efficient, accurate, and precise on score-based transcription. The proposed system produced 94% accurate results for triad recognition of the harmonium. The recognized triads were represented as sheet music using LilyPond. Possible applications of this output are a better understanding of triad sequences by students and Automatic Music Transcription of performances.

    Keywords: Acoustic modeling, Music language modeling (MLM), Music analysis, Recurrent neural networks, Convolutional neural network
  • K. Banuroopa, D. Shanmuga Priyaa Pages 2125-2136

    In this paper, a novel audio fingerprinting methodology for audio classification is proposed. The fingerprint of an audio signal is a unique digest used to identify the signal. The proposed model uses audio fingerprinting to create a unique fingerprint of the audio files: the fingerprints are created by extracting an MFCC spectrum, taking the mean of the spectra, and converting the result into a binary image (a minimal sketch of this step follows the keywords). These images are then fed to an LSTM network to classify the environmental sounds in the UrbanSound8K dataset, producing an accuracy of 98.8% across all 10 folds of the dataset.

    Keywords: Audio fingerprinting, MFCC, Audio Classification, LSTM
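    A minimal sketch of the MFCC-mean fingerprinting step described above, assuming the librosa and numpy packages; the number of coefficients and the mean-based binarisation threshold are illustrative assumptions, and the LSTM classifier is not shown.

```python
import librosa
import numpy as np

def mfcc_fingerprint(path, n_mfcc=40):
    """Mean MFCC vector of a clip, binarised into a compact fingerprint."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, frames)
    profile = mfcc.mean(axis=1)                             # average over time
    return (profile > profile.mean()).astype(np.uint8)      # binary fingerprint row

# Fingerprints like these would then be stacked and fed to the LSTM classifier.
```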
  • B. Nithya*, G. Anitha Pages 2137-2151

    Data mining techniques have been applied to analyze, predict and diagnose diseases, but the prediction of a disease becomes meaningless when no proper drug is recommended to the patient. A drug recommendation method based on an Artificial Neural Network (ANN) with side-effect constraints was proposed to recommend drug names for multiple diseases such as Chronic Kidney Disease (CKD), diabetes and heart disease, based on the interaction between drug and disease and their side effects. In this drug recommendation method, multiple attributes of drugs and patients were collected from different sources and the hidden relationships between the attributes were predicted using a Hidden Markov Model (HMM). In addition, statistical features were calculated and added as additional features. The collected and calculated features were used in the ANN classifier with side-effect constraints, which predicted drug names for multiple diseases while considering side effects. However, the recommendation method suffers from a high dimensionality problem due to the large number of features, which leads to more computational and space complexity in the ANN classifier. In this paper, an efficient Krill Herd (KH) optimization algorithm is introduced to solve the above-mentioned problems in the drug recommendation method. According to the herding behavior of the krill individuals, KH selects the optimal features. The multiple attributes of drugs and patients are collected in different time slots, and the KH algorithm is also used to select the optimal time slot. Then, the optimal time slot and features are given as input to the ANN, which predicts drug names for multiple diseases with high accuracy and low computational complexity.

    Keywords: Drug recommendation, Feature selection, Optimization algorithm, Krill herd algorithm
  • C. R. Mamatha, M. Ramakrishna* Pages 2153-2166

    Recently, Mobile Ad-hoc Networks (MANETs) have been deployed extensively in many areas. The mobility of nodes is used to obtain important information in difficult terrain, and the effectiveness of the information collected by these nodes can be improved by setting up communication among them. Since these mobile nodes operate on limited battery power, energy conservation is very important. Reactive routing protocols are mostly used over proactive routing protocols due to node mobility and limited energy: reactive routing protocols update routing information on demand, whereas proactive routing protocols update routing information whenever the network changes. Reactive routing protocols collect a huge amount of data, and routing this data can easily drain the energy, so it is very important for the QoS mechanism to be made more energy-efficient. In this paper, an energy-efficient routing mechanism is proposed to achieve this goal; it does so by selecting a suitable neighbour node in the routing path to route the information load with efficient energy conservation.

    Keywords: Reactive routing, QoS, Energy efficiency, Application prioritization
  • Md Khurshid Alam*, S. Surendra Singh Pages 2167-2180

    The interaction of dark energy in the LRS Bianchi type-I line element is explored in the framework of f(R, T) gravity, where R and T denote the Ricci scalar and the trace of the energy-momentum tensor of matter, respectively. The modified field equations are calculated using f(R, T) = f_1(R) + f_2(T) together with the inhomogeneous equation of state (EoS) p = ωρ − Λ(t), where ω is constant. The modified Einstein field equations (EFE) are solved by taking a periodically time-varying deceleration parameter (DP). Our model shows periodic behaviour, with the Big Bang prevailing at time t = 0. The energy conditions are investigated, and the null and strong energy conditions are found to be violated. We analyse the geometrical and physical behaviour of these models.

    Keywords: Anisotropic, Dark energy, Inhomogeneous equation of state, Modified theory of gravity
  • Ali Khalaf Hasan*, Dalal Naji Hameed Pages 2181-2188

    Reduced electric quadrupole transition probabilities B(E2)↓ in the mixed configuration of 64Ni with two holes have been calculated within the nuclear shell model. In the present work, the modified surface delta interaction (MSDI) within the model space (1p3/2, 0f5/2) has been used for the two neutron holes, and the closed nuclear core is represented by the 66Ni nucleus. A theoretical study has been used to find a relationship between the semi-classical coupling angle θ(a,b) and the energy levels of different orbitals within the hole-hole configuration. We observed good agreement between the theoretical energy levels and the experimental data; new values have been specified for both the excited energy levels and the reduced electric quadrupole transition probabilities B(E2)↓. These values are considered a proposal that advances the theoretical understanding of the energy levels and the expected transition probabilities.

    Keywords: Modified surface delta interaction, 64Ni, Reduced Transition Probabilities B(E2)
  • Emad A. Kuf, Sara Falih Maktoof* Pages 2189-2196

    In this paper, we introduce the definition of a new general integral transform, which we call the "new general polynomial transform". We also present its properties, theorems, proofs, and the transforms of logarithmic functions, polynomial functions and other functions. In addition, we discuss how this integral transform and its inverse can be applied to solve ordinary differential equations with variable coefficients.

    Keywords: Integral transforms, Inverse integral transforms, Polynomial transform, Inverse polynomial transform
  • Ali Hameed Yousif*, Ahlam Hanash Gatea Pages 2197-2202

    Variable selection in Poisson regression with high-dimensional data has been widely studied in recent years. In this paper, we propose using a penalty function known as the Atan penalty. The Atan estimator is compared with the Lasso and adaptive Lasso. A simulation study and a real application show that the Atan estimator has an advantage in coefficient estimation and variable selection.

    Keywords: Poisson regression, Lasso, Adaptive Lasso, Atan
  • Shireen Jawad*, Dina Sultan, Matthias Winter Pages 2203-2210

    Wind flow is one of the biosphere components that can change the amount of predation. This paper proposes and analyses a prey-predator model that includes wind in the predation term. The Holling-Tanner functional response is considered to illustrate the global dynamics of the proposed model under changing wind intensity (the classical model form is recalled after the keywords). Persistence conditions are provided to reveal a threshold that allows the coexistence of all species. Numerical simulations are provided to back up the theoretical analysis. The system's coexistence can be achieved in abundance as long as the wind flow increases.

    Keywords: dynamic, Holling-Tanner functional, Wind flow
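    For reference, the classical Holling-Tanner prey-predator system in its standard form; the paper's wind-dependent modification of the predation term is not reproduced here, and the symbol names below are the usual textbook ones rather than the paper's notation.

```latex
% Classical Holling-Tanner system: x is prey density, y is predator density.
\[
  \frac{dx}{dt} \;=\; r\,x\Bigl(1-\frac{x}{K}\Bigr) \;-\; \frac{q\,x\,y}{m + x},
  \qquad
  \frac{dy}{dt} \;=\; s\,y\Bigl(1-\frac{h\,y}{x}\Bigr),
\]
% with intrinsic growth rates r and s, carrying capacity K, capture rate q,
% half-saturation constant m, and h the number of prey needed to support one predator.
```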
  • Alaa A. Sharhanl *, Alaa Al-Muslimawi Pages 2211-2222

    In this investigation, shear-thinning and shear-thickening inelastic fluids through a contraction channel are studied based on a power-law inelastic model (recalled after the keywords). The Navier-Stokes partial differential equations are used to describe the motion of the fluids; these equations include a time-dependent continuity equation for the conservation of mass and time-dependent equations for the conservation of momentum. Numerically, a time-stepping Taylor Galerkin-pressure correction finite element method is used to treat the governing equations. Start-up Poiseuille flow through an axisymmetric 4:1 contraction channel for an inelastic fluid is taken as a test case for the analysis. The impacts of different parameters, such as the Reynolds number (Re), the consistency parameter (k), and the power-law index (n), are examined, with the main focus on their effect on the convergence levels of the solution components. The findings demonstrate that the inelastic parameters have a significant influence on the rates of temporal convergence of velocity and pressure. Fundamentally, the rate of convergence for shear-thickening flow is found to be greater than that for shear-thinning flow. In addition, the critical level of the Reynolds number is determined for the shear-thinning and shear-thickening situations; we find that the critical level of Re for the shear-thickening case is much higher than that found for the shear-thinning case.

    Keywords: Taylor Galerkin-pressure correction finite element method, Inelastic fluid, Viscosity, Power-law model
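    For reference, the standard power-law (Ostwald-de Waele) inelastic model that the consistency parameter k and power-law index n above refer to; the cases n < 1 and n > 1 correspond to the shear-thinning and shear-thickening regimes compared in the paper.

```latex
% Power-law apparent viscosity as a function of the shear rate:
\[
  \eta(\dot{\gamma}) \;=\; k\,\dot{\gamma}^{\,n-1},
  \qquad
  \begin{cases}
    n < 1 & \text{shear-thinning},\\
    n = 1 & \text{Newtonian},\\
    n > 1 & \text{shear-thickening}.
  \end{cases}
\]
```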
  • Ban M. Tuij, Ahmed Al-Adilee * Pages 2223-2230

    This study is concerned with generating an odd distribution by combining the Rayleigh distribution with the uniform distribution, denoted RUOD. We derive the distribution function (df) and its probability density function (pdf), and we discuss some other properties such as the survival function, moments, and graphs of these functions. We also estimate the parameters of the generated RUOD distribution using the maximum likelihood method to find the approximate values that fit the desired distribution. Finally, we analyze a data set with the generated RUOD and compare the results, using goodness-of-fit measures, with the Rayleigh distribution to decide which distribution best fits the data set.

    Keywords: Continuous distributions, Odd techniques, Moments, Estimation method, Data analysis
  • Sajda K. Mohammed *, Buthyna N. Shihab Pages 2231-2241

    In this paper, the concept of fuzzy visible submodules, a new type of fuzzy submodule, is introduced. Some results and characterizations of fuzzy visible submodules are established, namely the homomorphic image of a fuzzy visible submodule, the sum of two fuzzy visible submodules, and the relation between a fuzzy visible submodule and its submodules. Fuzzy quotient modules in the sense of fuzzy visibility are also presented. We prove that the intersection of a collection of fuzzy visible submodules is a fuzzy visible submodule, while the converse is not true. We also define strong cancellation fuzzy modules and establish some of their results with respect to fuzzy visible submodules, along with many other properties of fuzzy visible submodules.

    Keywords: Fuzzy visible submodule, Pure fuzzy submodule, T-pure fuzzy submodule, Fully cancellation fuzzy module, Strongly cancellation fuzzy module
  • Sarab Dakhil Theyab *, Waggas Galib Atshan, Habeeb Kareem Abdullah Pages 2243-2255

    A form for the fourth Hankel determinant is given in this paper as $H_4(1)=\begin{vmatrix} 1 & a_2 & a_3 & a_4 \\ a_2 & a_3 & a_4 & a_5 \\ a_3 & a_4 & a_5 & a_6 \\ a_4 & a_5 & a_6 & a_7 \end{vmatrix}$. The modern concept of the fourth Hankel determinant is studied for the subclass of analytic functions μ(β,λ,t) defined here using the concept of subordination. Bounds on the coefficients |a_n| for n = 2, 3, 4, 5, 6, 7 for functions in this newly introduced class are given, and the upper bound of the fourth Hankel determinant for this class is obtained. The lemmas used by the authors improve the results of a previously published paper. Interesting particular cases are given in the corollaries of the main theorems.

    Keywords: Subordination, Analytic Function, Fourth Hankel Determinant, Coefficient Bounds
  • Ahmed Aladilee, Hiba Abbas AL-Asadi * Pages 2257-2264

    In this paper, we analyze a COVID-19 data set in two ways. The first depends on the calculation of the correlation coefficient via its classical mathematical representation, and the second depends on a modern technique associated with copula function concepts and their relationship to measures of association. Afterwards, we compare the obtained results to decide which approach is better for analyzing the examined dataset.

    Keywords: Statistical inferences, Probability concepts, Correlation coefficients, Copula functions, Data analysis
  • Hawraa A. AL-Challabi, Noor N. Rasoul *, Ahmed Aladilee Pages 2265-2272

    In this paper, we propose an extension of the bivariate FGM copula with a polynomial function of degree one (the baseline FGM form is recalled after the keywords). The desired extension depends on the modification shown by Sriboonchitta-Kreinovich [12]. We also illustrate a general form of this extension of degree n. We examine various necessary and sufficient conditions which prove that the illustrated function within the extension is a copula. Finally, we present several calculations of the most popular dependence measures for the proposed FGM copula of degree one.

    Keywords: Copulas, Distribution functions, FGM copula family, Nonparametric measures
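    For reference, the baseline bivariate FGM copula that the extension above starts from; the exact extended, degree-one polynomial form and the Sriboonchitta-Kreinovich modification are given in the paper and are not reproduced here.

```latex
% Baseline Farlie-Gumbel-Morgenstern (FGM) copula:
\[
  C_{\theta}(u,v) \;=\; u\,v\bigl[\,1 + \theta\,(1-u)(1-v)\,\bigr],
  \qquad u,v \in [0,1],\quad \theta \in [-1,1].
\]
```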
  • Saad Raheem Bakheet*, Waggas Galib Atshan Pages 2273-2283

    In the present paper, we obtain sandwich theorems for univalent functions by using some results of differential subordination and superordination involving the Rafid-Jassim operator.

    Keywords: Analytic function, Integral Operator, Differential Subordination, Superordination, Sandwich theorem
  • Waggas Galib Atshan, Sarah Jalawi Abd * Pages 2285-2296

    In the present paper, we obtain some subordination and superordination results involving the Hadamard product operator $D_{\mu,b}^{\alpha,c}$ for certain normalized analytic univalent functions in the open unit disk. These results are applied to obtain sandwich results.

    Keywords: Analytic function, Integral Operator, Differential Subordination, Superordination, Sandwich results
  • Amal Mohammed Darweesh, Waggas Galib Atshan, Ali Hussein Battor Pages 2297-2313

    In this paper, we obtain some applications of fourth-order differential subordination and superordination results involving the multiplier transformation $H_p(\tau,\psi)$ for p-valent functions. Also, we obtain several sandwich-type results.

    Keywords: Hadamard Product, The Multiplier Transformations, Differential Subordination, Differential Superordination, Fourth-Order
  • Somayeh Imani, Sina Kheradyar, Keyhan Azadi Hir Pages 2315-2323

    This article examines the effect of fair value accounting and some other influential variables, as control variables, on the risk of a drop in the stock prices of banks listed on the Tehran Stock Exchange during the period 2007-2021. The negative skewness coefficient of daily returns is used to measure the risk of a drop in the company's stock price, and the formula of Hsu et al. [16] is used for fair value. The analysis of corporate financial information was performed with an Autoregressive Distributed Lag (ARDL) model in the Eviews software environment. The findings of the present study showed that fair value accounting had no significant effect on stock price drops, neither in the short run nor in the long run.

    Keywords: Stock Price Drop Risk, Fair Value Accounting, Autoregressive Distributed Lag
  • Tahereh Rameh, Rezvan Abbasi, Mohamadreza Sanaei Pages 2325-2337

    One of the most complex and controversial issues in financial markets is the prediction of price and stock returns, which is always of interest to shareholders. The stock market is vulnerable to various factors that affect price fluctuations. Developing a strong stock market algorithm that can accurately predict stock behaviour is important for maximizing profits and minimizing investors' losses. Since, in addition to the history of each share, psychological factors also affect its value, this research proposes an artificial intelligence model based on long short-term memory and text embedding. Besides attending to the stock market in the form of time-series data, the model extracts features from news sites in order to capture the psychological forces of the market. Finally, based on the combination of the features extracted from news sites and the time-series data, it predicts the future of the stock market. The evaluation results show that the proposed model can predict the market's future accurately.

    Keywords: Long Short-term Memory, Text Embedding, Stock Market
  • Bassim Kareem Mihsin, Waggas Galib Atshan, Shatha S. Alhily Pages 2339-2352

    In this paper, we give an upper bound for the fourth Hankel determinant H4(1) for a new class S#C associated with the sine function.

    Keywords: analytic functions, univalent functions, fourth-order Hankel determinant, subordination, convex function, sine function
  • Heidar Kadivar Zinkanlou, Golam Abbas Shekari*, Saeid Emamgholizadeh Pages 2353-2366

    Research has indicated that the ability of organizations in a society to manage crises and challenges has a significant impact on that society's ability to cope with crises and on the quality of recovery from challenges. Organizations, with the essential services they provide in times of crisis, play a very important role in the speed with which society returns to a normal situation. Therefore, organizational resilience, meaning the capacity of the organization to predict a crisis, reduce its effects and resist its damage, respond appropriately, and finally recover from the crisis, has recently become an important topic in organizational research. Business agility is also important for organizations in responding appropriately to environmental changes and challenges. The most recent issue is receiving appropriate signals from the environment and the organization, which can be properly analyzed so that the organization can respond to crises and challenges. In this regard, this article attempts to help organizations achieve resilience by identifying the indices and dimensions of organizational resilience and finally designing a model for it. The present study aimed to design an organizational resilience model based on the components of business agility with a signalling theory approach. A combined qualitative-quantitative method and expert interviews are used to design the model. The statistical population of the research consists of banking industry experts and the managers and deputies of Sepah Bank branches in North Khorasan. The data collection tools were a semi-structured interview and a researcher-made questionnaire, whose content validity was assessed and confirmed by the opinions of five experts and professors. The reliability of the questionnaire was assessed using Cronbach's alpha test; the alpha coefficient is greater than 0.7, so the questionnaire has good reliability. Data analysis was performed using SMART PLS software.

    Keywords: Organizational resilience, Agility, Signaling theory
  • Morvarid Khan Mohammadi, Khosro Faghani Makrani, Jamadori Gorganli Doji Pages 2367-2384

    Professional judgment is the essence of auditing. As a result, the quality of auditing depends on the quality of professional judgment, and reaching more desirable professional judgments requires identifying the factors affecting auditors' judgment and decision-making. The purpose of this article is to model the professional judgment of the auditor. For this purpose, the questionnaire method was used, with a sample of 8 accounting experts. The research is based on a pairwise comparison test, and the data analysis was conducted with the DEMATEL method. The results indicate that all the key components have direct and indirect effects on professional judgment in auditing. Financial criteria have the strongest effect on professional judgment, followed by management criteria, financial performance, the independent auditor, the audit committee and the internal auditor; the auditor's personality characteristics have the least effect.

    Keywords: Professional judgment, Auditor, Experts
  • Seyede Roghaye Hashemi, Hossein Moeini, Hossein Shirazi Pages 2385-2401

    This research aimed to identify and rank the factors affecting the customer experience of purchasing luxury brands. A phenomenological approach was used in the qualitative stage to identify the components of the customer experience of purchasing luxury brands. The model was then tested and ranked using the SWARA method and structural equations. The data collection instrument was a semi-structured interview with sales managers of Apple brand products. To ensure the return of sufficient questionnaires (10% confidence margin), a sample of 384 customers and sales managers of the Apple brand was determined, and the questionnaire was distributed randomly. The researcher reached theoretical saturation after 18 interviews and finally provided 130 concept codes in six main dimensions: luxury brand, consumer experience, brand prestige pricing, brand trust, brand thinking, and brand satisfaction.

    Keywords: Customer experience, Brand, Luxury brand, Phenomenology, Ranking
  • Waggas Galib Atshan, Suad Hassan Mahdy Pages 2403-2412

    In the present paper, we obtain some subordination and superordination results involving the operator $T_a$ for functions of the form $f(z)=z^{-1}+\sum_{k=1}^{\infty}a_k z^k$, which are meromorphic and univalent in the punctured open unit disk. These results are applied to obtain sandwich results.

    Keywords: Analytic function, Univalent, Subordination, Superordination, Hadamard (convolution), Sandwich theorems
  • Samad Talebi Anzab, Mohammadreza Mehrabanpour, Hossein Jahangirnia, Bahman Banimahd Pages 2413-2427

    The purpose of this study is to explain the effect of personality types, social psychology, and motivation on the formation of a company's social responsibility reporting. This research is practical in terms of results, analytical in terms of purpose, and cross-sectional in terms of time period. The statistical population of the research is the members of the boards of directors of companies accepted on the Tehran Stock Exchange; based on Cochran's formula, a sample of 130 people was selected. Data analysis based on structural equations was carried out using SPSS version 23 and AMOS software. The content analysis results show the extraction of 212 subcategories grouped into 9 main categories as components influencing the formation of corporate social responsibility reporting. The analysis based on structural equations also showed that personality psychology, social psychology and motivational psychology had a positive and significant effect on the formation of social responsibility reporting.

    Keywords: Personality Psychology, Social Psychology, Motivation Psychology, Social Responsibility Reporting
  • Pouria Yazdani, Hassan Ghodarti, Hossein Panahian, Meysam Arabzadeh, Esmail Mazroui Nasrabadi Pages 2429-2447

    The present study is a quasi-experimental, applied study conducted in several stages with a qualitative, judgmental method. The qualitative part of the research includes interviews with experts. The statistical population consisted of experts familiar with the subject of the research (university professors in the fields of accounting and finance), relevant managers and experts of auditing firms, and those in charge of affairs in the auditing organization. After collating the comments, the research data were analyzed using the fuzzy Delphi method and fuzzy network analysis. Based on the results of the fuzzy Delphi method, external audit quality, legal requirements, professional standards, auditors' professional independence, professional ethics standards in auditing, audit clients, the professional staff of auditing firms, competition in the auditing market and, finally, the general public were the factors affecting the quality of auditing firms. The findings indicated that professional ethics standards and legal requirements had the first and second priorities, respectively.

    Keywords: Professional ethics, Fuzzy Delphi, Fuzzy network analysis, Quality of auditing institutions
  • Hossein Shoja, Mohammadreza Mehrabanpour, Hossein Jahangirnia, Reza Gholami Jamkarani Pages 2449-2460

    Given that auditors, as economic agents, avoid actions that would impair their long-term economic well-being, moral capital stands alongside the economic outlook. In addition, since company auditors are professionals with a social duty to owners and other stakeholders, social capital leads them not only to pursue their own economic benefit but also to commit to acting independently of the interests of any department or individual. Cultural capital, in turn, plays a unique role in the accountant's professional integrity when performing audit and judgment work. Therefore, the purpose of this study was to design a model of the impact of cultural, social, and moral capital on auditor quality. Sampling in this study is a census, so the statistical sample comprises all 350 people. The results show that cultural capital (89%), social capital (85%), and moral capital (86%) have a significant effect on auditor quality.

    Keywords: Cultural capital, Social capital, Moral capital, Auditor quality
  • Marzieh Dormishi, Masoud Taherinia *, Azar Moslemi, Ibrahim Givaki Pages 2461-2474

    The auditor's performance and the quality of the services provided rely heavily on the behavior of the individuals carrying out the audit fieldwork. Auditors' failure to perform analytical procedures correctly can be attributed to dysfunctional auditor behavior: under the pressure of complex work, the auditor does not carry out the audit procedures properly in order to reduce his responsibility, and the quality of the audit declines. Accordingly, the purpose of this study is to investigate the relationship between auditors' inefficient behavior and audit quality. The research instrument is a 17-item questionnaire on a 5-point Likert scale, and the survey was conducted over 3 months in the year 1400 (Iranian calendar). A sample of 380 auditors working in the audit organization and in private-sector firms that are members of the Iranian Society of Certified Public Accountants was selected, and statistical analysis was performed on the 276 questionnaires returned. Structural equation modeling in PLS software was used to test the research hypotheses. The results showed that auditors' inefficient behavior reduces audit quality. In other words, because of auditors' non-adherence to the code of ethical conduct and their engagement in inefficient behaviors during the audit, the benefits identified in the auditing standards are limited or neutralized and the quality of the auditor's work is undermined.
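
    As a rough, illustrative stand-in for the PLS structural-equation analysis described above (not the authors' actual model), the sketch below forms a composite score from simulated Likert responses and tests its association with a simulated audit-quality score; all data and variable names here are assumptions.

```python
# Simplified stand-in for the reported PLS analysis: composite Likert scores
# and a simple correlation test on simulated data (not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_respondents, n_items = 276, 17          # matches the abstract's questionnaire size

# Hypothetical 1-5 Likert responses measuring "inefficient behavior".
responses = rng.integers(1, 6, size=(n_respondents, n_items))
inefficient_behavior = responses.mean(axis=1)          # composite score per auditor

# Hypothetical audit-quality score, constructed to decline with inefficient behavior.
audit_quality = 5 - 0.6 * inefficient_behavior + rng.normal(0, 0.3, n_respondents)

r, p = stats.pearsonr(inefficient_behavior, audit_quality)
print(f"Pearson r = {r:.2f}, p-value = {p:.3g}")       # expect a negative r here
```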

    Keywords: Auditors' inefficient behavior, Auditing quality, Analytical methods, Structural equations
  • Amir Rahmani, Mohsen Rostamy-Malkhalifeh *, Farhad Hosseinzadeh Lotfi, Toofigh Allahviranloo Pages 2475-2491

    Cross-efficiency evaluation in data envelopment analysis (DEA) is an effective tool for measuring the performance of decision-making units. Cross-efficiency models generally assume that decision makers are completely rational and therefore ignore the risk attitude that plays an important role in the evaluation process. To fill this gap, cross-efficiency evaluation in DEA is performed here based on prospect theory. In the real world, many inputs and outputs are not known exactly; such data are called imprecise, and if even one data point is imprecise, the resulting efficiency score will generally be imprecise as well. To address this problem, the present study proposes models that evaluate prospect cross-efficiency with interval data and establishes the feasibility of these models by proving supporting theorems.
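
    For context, the following is a minimal sketch of the classical input-oriented CCR multiplier model that cross-efficiency evaluation builds on; it does not implement the paper's prospect-theoretic or interval extensions, and the small data set is a made-up example.

```python
# Classical input-oriented CCR (multiplier form) DEA model, the building block
# that cross-efficiency evaluation extends. Data below are made-up.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])   # inputs  (units x m)
Y = np.array([[1.0], [2.0], [1.5]])                   # outputs (units x s)

def ccr_efficiency(o):
    """Efficiency of unit o: max u'Y_o  s.t. v'X_o = 1, u'Y_j - v'X_j <= 0, u, v >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])          # linprog minimizes, so negate
    A_ub = np.hstack([Y, -X])                         # u'Y_j - v'X_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v'X_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```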

    Keywords: Data Envelopment Analysis, Fuzzy numbers, Interval Efficiency, Performance evaluation, Prospect Cross Efficiency
  • Yousef Sedaghat, Naser Parhizgar *, Ahmad Keshavarz Pages 2493-2508

    Ensuring the production of defect-free, high-quality tires is essential in the tire industry, and X-ray inspection is one of the best methods for detecting tire defects. This paper presents a new approach for detecting tire defects in X-ray images based on an entropy filter, the extraction of patch texture features with the Local Binary Pattern (LBP), and, finally, the classification of defects with a Support Vector Machine (SVM). In the proposed method, an entropy filter is first applied to the input image; regions with distinctive patterns are then selected as candidate regions and classified by a patch classifier, so that all defects are detected and classified and the efficiency of the algorithm is evaluated. On the dataset used, the best performance was obtained with the LBP descriptor and a linear SVM classifier, achieving 98% defect-localization accuracy and 97% defect-detection accuracy. To analyze performance further, a deep model was also used as the classifier, demonstrating that deep models are highly capable of learning complex patterns. The proposed method is sensitive to local texture and describes texture information well, which is appropriate for most kinds of tire defects.
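
    The pipeline described above (entropy filtering, LBP patch features, linear SVM) can be sketched roughly as follows; the random images, patch size, and thresholds are placeholders standing in for the paper's X-ray data and tuned parameters.

```python
# Rough sketch of the entropy-filter + LBP + linear-SVM pipeline on placeholder data.
import numpy as np
from skimage.filters.rank import entropy
from skimage.feature import local_binary_pattern
from skimage.morphology import disk
from sklearn.svm import LinearSVC

P, R = 8, 1                                   # LBP neighbors and radius (assumed)

def lbp_histogram(patch):
    """Uniform LBP histogram used as the texture feature vector of a patch."""
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def candidate_patches(image, patch=32, thresh=4.0):
    """Select high-entropy patches as defect candidates (threshold is assumed)."""
    ent = entropy(image, disk(5))
    for r in range(0, image.shape[0] - patch + 1, patch):
        for c in range(0, image.shape[1] - patch + 1, patch):
            if ent[r:r + patch, c:c + patch].mean() > thresh:
                yield image[r:r + patch, c:c + patch]

# Placeholder training data: random patches labeled 0 (normal) / 1 (defect).
rng = np.random.default_rng(0)
train_patches = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)
clf = LinearSVC().fit([lbp_histogram(p) for p in train_patches], labels)

test_image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
preds = [clf.predict([lbp_histogram(p)])[0] for p in candidate_patches(test_image)]
print(f"candidate patches: {len(preds)}, flagged as defects: {sum(preds)}")
```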

    Keywords: Tire Defects Detection, Local Binary Pattern, Entropy Filter, Patch Classification, Support Vector Machine
  • Mohanad Kudham Salman, Salah Abdal Qader Al Nuaimi * Pages 2509-2520

    This paper aims to assess the extent to which performance management, in its dimensions (performance planning, performance measures, performance standards, performance reports), enhances the strategic response. Design/Methodology/Approach: A descriptive-analytical study with a purposive sample of 122 senior-, middle-, and operational-level managers was conducted, and structural equation modeling was used to test the hypotheses. Both the performance management variable and strategic response attained a high level of importance, indicating the respondents' agreement that the company is committed to providing the interaction and cooperation needed to raise performance, improve productivity, and adapt to changes in the internal and external environment. Research constraints/implications: the sample targets the large oil sector, which includes more than 15 companies distributed across the country, and the research variables were applied in a refinery company. Using the stepwise method to test the significance of the dimensions and deleting the non-significant ones, the final model depends on two dimensions of the performance management variable (performance planning and performance reports), which have a significant impact on enhancing strategic response operations. This paper seeks to support the company's proactive business strategy and rapid response to environmental challenges and to current and future crises, and to provide the basic requirements for confronting competitors by developing perceptions and expectations associated with potential environmental changes.
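
    The stepwise dimension screening mentioned above is commonly done by iteratively dropping the least significant predictor from a regression; the sketch below is a generic illustration on simulated data (the variable names and the 0.05 cut-off are assumptions, not the study's specification).

```python
# Generic backward-elimination "stepwise" screening on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 122                                             # sample size from the abstract
X = pd.DataFrame(rng.normal(size=(n, 4)),
                 columns=["planning", "measures", "standards", "reports"])
# Simulated response driven mainly by planning and reports (an assumption).
y = 0.8 * X["planning"] + 0.6 * X["reports"] + rng.normal(scale=0.5, size=n)

cols = list(X.columns)
while cols:
    model = sm.OLS(y, sm.add_constant(X[cols])).fit()
    pvals = model.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] <= 0.05:                        # all remaining predictors significant
        break
    cols.remove(worst)                              # drop the least significant dimension

print("retained dimensions:", cols)
```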

    Keywords: Performance management, Strategic response processes
  • Waggas Galib Atshan, Salam Abdulhussein Sehen * Pages 2521-2530

    In this paper, we study some differential subordination and superordination results for a certain subclass of univalent functions in the open unit disc $U$ using the generalized operator $H^{\eta,\mu}_{\lambda,\delta}$. Also, we derive some sandwich theorems.

    Keywords: Analytic function, Subordinate, Differential Subordination, Dominant, Generalized Operator, Sandwich Theorems