Table of Contents

Journal of Electrical and Computer Engineering Innovations - Volume:10 Issue: 2, Summer-Autumn 2022


  • Publication date: 1401/03/15
  • Number of titles: 20
  • M. Hosseini Shirvani *, A. Akbarifar Pages 259-272
    Background and Objectives

    Among miscellaneous networks, onion-routing technologies such as Tor (The Onion Router), the Invisible Internet Project (I2P), and Riffle are used by users worldwide to communicate anonymously for security, privacy, and safety requirements. Sometimes these networks sacrifice anonymity for the sake of efficient communication, or vice versa. This paper surveys the aforementioned networks to investigate their potential and challenges.

    Methods

    Onion-routing networks encapsulate messages in several layers of encryption, similar to the layers of an onion. Anonymous communication networks involve the dining cryptographers (DC) problem, realized in so-called DC-nets, which must deliver anonymous messages with unconditional sender anonymity and untraceable receipt. DC-nets must therefore be resistant to traffic-analysis attacks, although this resistance attenuates network bandwidth. Along this line, Tor is free software that provides anonymous communication; I2P is based on the Invisible Internet Project, which uses tunnelling for anonymous communication; and Riffle networks include a small set of camouflaging servers that provide anonymity for authorized users. This paper presents a comparative study of the anonymizing Tor, I2P, and Riffle networks in terms of the prominent parameters associated with them.
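    The layering idea can be sketched in a few lines. The toy Python sketch below is our illustration, not the protocol of Tor, I2P, or Riffle, and its XOR keystream is not cryptographically sound; it only shows how a sender wraps a message once per relay key so that each relay peels exactly one layer:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Toy keystream from iterated SHA-256; NOT secure, illustration only.
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR is symmetric, so the same call adds or removes a layer.
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def onion_wrap(message: bytes, relay_keys) -> bytes:
    # Wrap innermost layer first, so the entry relay's layer is outermost.
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message

def onion_peel(cipher: bytes, relay_keys) -> bytes:
    # Each relay on the path removes exactly its own layer, in order.
    for key in relay_keys:
        cipher = xor_layer(cipher, key)
    return cipher

keys = [b"entry", b"middle", b"exit"]   # hypothetical per-relay keys
wrapped = onion_wrap(b"hello", keys)
```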

    Results

    The comparison is based on similarities, differences, and challenges in network behavior. It is beneficial for further research and future improvements.

    Conclusion

    The review in the current paper reveals that Riffle networks are more resilient and offer stronger confidentiality and integrity than the other onion-based routing networks.

    Keywords: Tor Project, I2P Networks, Riffle, Dark-web, Anonymity
  • K. Kiaei, H. Omranpour * Pages 273-286
    Background and Objectives
    Time series classification (TSC) means classifying data over time based on its behavior. TSC is one of the main machine learning tasks related to time series. Because classification accuracy is of particular importance, this research aims to increase it.
    Methods
    In this paper, we propose a simple method for TSC problems that achieves higher classification accuracy than other existing methods. The fast Fourier transform (FFT) is used to preprocess the raw time-series data: we apply the FFT to the raw datasets and then use the polar form of the resulting complex coefficients to create a histogram. The proposed method consists of three steps: preprocessing using the FFT, feature extraction by histogram computation, and decision making using a random forest classifier.
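    The three steps above can be sketched as follows. This is our reading of the pipeline; the exact histogram construction (here, a magnitude-weighted phase histogram) is an assumption, since the abstract does not fix it. The resulting feature vectors would then feed a random forest classifier as the decision-making step:

```python
import numpy as np

def fft_polar_histogram(series, n_bins=16):
    """FFT the raw series, take each coefficient's polar form
    (magnitude r, phase theta), and build a magnitude-weighted
    phase histogram as the feature vector."""
    coeffs = np.fft.rfft(np.asarray(series, dtype=float))
    mag = np.abs(coeffs)       # r in r * e^{i*theta}
    phase = np.angle(coeffs)   # theta in [-pi, pi]
    hist, _ = np.histogram(phase, bins=n_bins,
                           range=(-np.pi, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Feature vectors like these would be fed to e.g. scikit-learn's
# RandomForestClassifier in the final decision step.
rng = np.random.default_rng(0)
features = np.stack([fft_polar_histogram(rng.standard_normal(128))
                     for _ in range(5)])
```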
    Results
    The presented method was tested on 12 datasets from the UCR time series classification archive, drawn from different domains. Our method was evaluated using k-fold cross-validation and classification accuracy. The experimental results show that our model achieves classification accuracy higher than or comparable to related methods. Computational complexity has also been significantly reduced.
    Conclusion
    In recent years, interest in TSC problems has increased. In this work, we proposed a simple method based on features extracted from the fast Fourier transform that efficiently achieves high accuracy.
    Keywords: Time-series, Classification, Fast Fourier transform, Polar form, Random forest classifier
  • S.V. Moravvej *, M.J. Maleki Kahaki, M. Salimi Sartakhti, M. Joodaki Pages 287-298
    Background and Objectives
    Text summarization plays an essential role in reducing time and cost in many domains such as medicine and engineering. Manual summarization, however, requires much time, so an automated summarization system is needed. How sentences are selected is critical in summarization. Summarization techniques introduced in recent years are usually greedy in their choice of sentences, which reduces the quality of the summary. In this paper, a non-greedy method for selecting essential sentences from a text is presented.
    Methods
    The present paper proposes a method based on a generative adversarial network and an attention mechanism, called GAN-AM, for extractive summarization. A generative adversarial network consists of two networks, a generator and a discriminator, whose parameters are independent of each other. First, the features of the sentences are extracted in two ways: traditional features and embeddings. We extract 12 traditional features, some from the words of a sentence and others from the sentence as a whole, and use the well-known Skip-Gram model for embedding. The features are then fed into the generator as a condition, and the generator calculates the probability of each sentence appearing in the summary. A discriminator checks the summary produced by the generator and strengthens its performance. We introduce a new loss function for discriminator training that includes the generator output and the real and fake summaries of each document. During training and testing, each document enters the generator with different noise. This allows the generator to see many combinations of sentences that are suitable for quality summaries.
    Results
    We evaluate our results on the CNN/Daily Mail and Medical datasets. Summaries produced by the generator show that our model performs better than the compared methods on the ROUGE metric. We apply noise of different sizes to the generator to check its effect on our model; the results indicate that the noise-free model performs poorly.
    Conclusion
    Unlike recent works, in our method the generator selects sentences non-greedily. Experimental results show that the generator with noise can produce summaries that are relevant to the main subject.
    Keywords: Text summarization, non-greedily, Generative adversarial network, attention mechanism, extractive summarization
  • A. Teymouri, H. Dorosti *, M. Ersali Salehi Nasab, S.M. Fakhraie Pages 299-310
    Background and Objectives
    The future demands of multimedia and signal processing applications force IC designers to employ efficient, high-performance techniques in ever more complex SoCs to achieve higher computing throughput along with energy/power efficiency improvements. In recent technologies, variation effects and leakage power strongly affect the design specifications, and designers need to consider these parameters at design time. Addressing both challenges while boosting computation throughput makes the design more difficult.
    Methods
    In this article, we propose a simple serial core for higher energy/power efficiency and utilize data-level parallel structures to achieve the required computation throughput.
    Results
    Using the proposed core, we obtain 35% energy and 75% power improvement, and the parallel structure yields 8x higher throughput. The proposed architecture provides 76 MIPS of computation throughput while consuming only 2.7 pJ per instruction. An outstanding feature of this processor is its resiliency against variation effects.
    Conclusion
    The simple serial architecture reduces the effect of variations on design paths; furthermore, the effect of process variation on throughput loss and energy dissipation is negligible. The proposed processor architecture is suitable for energy/power-constrained applications such as the Internet of Things (IoT) and mobile devices, enabling easy energy harvesting for a longer lifetime.
    Keywords: Massive Parallel Processing, SSTA, ultra-low-energy, Variation-aware, High-throughput
  • M. Rahmanian, R. Nassiri *, M. Mohsenzadeh, R. Ravanmehr Pages 311-328
    Background and Objectives
    The area of enterprise architecture encompasses various domains, the most complicated of which is developing an enterprise business architecture. Although many state-of-the-art enterprise architecture frameworks describe the architecture at abstract levels, they still fail to provide accurate syntactic and semantic descriptions. Several previously conducted studies, pursuing different objectives, have elaborated on modeling enterprise architectures. However, none of them tried to develop a model that generates test cases to be used later for validation and/or verification. The main contribution of this study is therefore generating a set of test cases, based on descriptions of the enterprise business processes, in the early steps, so that the amount of later review and change can be significantly reduced.
    Methods
    Pursuing accurate validation and/or verification of the enterprise business processes within an enterprise's architecture development, this paper proposes a new method based on enterprise architecture design. Throughout the iterative cycle of the proposed method, the enterprise goals are first extracted based on the TOGAF framework. They are then modeled syntactically in the ArchiMate language. Next, semantics are added to the syntactic model of the enterprise business processes based on the WSMO framework, and the result is formalized manually into the B language using defined transition rules. Finally, to discover test cases, a set of coverage criteria is applied to the formal model.
    Results
    The proposed method has been implemented in the marketing and sales department of a petrochemical corporation, where the results show the validity and effectiveness of the method. In the implementation on the selected case study, the details of the business process were defined at the enterprise level, the level of abstraction was decreased by syntactic and semantic modeling of the enterprise architecture description, and formal descriptions were created using the proposed transition rules for test-case sampling.
    Conclusion
    The proposed method starts from the goals of the enterprise; therefore, the generated test cases are precise. By adding semantics to the syntactic models of the enterprise architecture, the degree of abstraction is decreased, and by creating a formal model, the model can be subjected to test-case sampling. For future work, it is suggested to use the proposed method for the automatic generation of code.
    Keywords: Enterprise Architecture, Business Process, Syntactic, Semantic Modeling, Test Case Generation
  • N. Khatami, M. Majidi * Pages 329-340
    Background and Objectives
    The purpose of a wireless body area network (WBAN) is to collect and send vital body signals to the physician in order to make timely decisions, improve the efficiency of medical informatics systems, and save costs. The sensors of a WBAN are limited in size and energy; hence, to extend their lifetime, they can be powered wirelessly. Our focus in this paper is a two-tier full-duplex (FD) cooperative WBAN in which the sensors, in addition to transmitting physiological information, harvest energy from the radio-frequency (RF) signals of the coordinator and from body sources. Our goal is to maximize the average weighted sum throughput (AWST) under per-sensor constraints, including a minimum data rate, a delay limit, and energy and transmission-power constraints.
    Methods
    The resources allocated in this optimization problem are the time slots, the transmission rates of the sensors and the coordinator, and the transmission powers of the sensors in each time slot. The time-scheduling problem in the first step is modeled as a mixed-integer linear programming (MILP) problem, and the second-step problem is convex. Karush–Kuhn–Tucker (KKT) conditions are derived for the power and rate allocation.
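    The abstract does not spell out the KKT solution, but classic water-filling is the textbook instance of KKT-based power allocation and conveys the flavor. The sketch below is our illustration with hypothetical channel gains, not the paper's formulation; it allocates a power budget across slots to maximize the sum log-rate:

```python
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    """Maximize sum log(1 + g_i * p_i) subject to sum p_i = p_total,
    p_i >= 0. The KKT conditions give p_i = max(0, mu - 1/g_i), where
    the 'water level' mu is set so the powers exhaust the budget."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + inv.max()
    while hi - lo > tol:                  # bisection on the water level
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > p_total:
            hi = mu                        # too much power: lower the level
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

# Three hypothetical slot gains sharing a budget of 3 power units:
p = water_filling([2.0, 1.0, 0.5], p_total=3.0)
```

Note how the best slot (highest gain, smallest 1/g) receives the most power, exactly as the KKT stationarity condition dictates.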
    Results
    In the optimal allocation (OA) mode, contrary to the equal time allocation (ETA) mode, the AWST increases with increasing relay power despite the increased self-interference (SI). Energy harvesting from the body, despite the power consumed for transmission, makes the slope of the instantaneous energy curve positive for the motion sensor and reduces the corresponding slope for the electrocardiogram (ECG) sensor. Comparison with previous methods shows that the proposed method has better control over the information flow of the sensors and satisfies fairness in allocating rates to users.
    Conclusion
    According to the simulation results, our method performs better than the equal time allocation mode. We also used the FD technique and, with the help of the optimal time-scheduling index, were able to control the SI.
    Keywords: Wireless body area network, Simultaneous Wireless Information and Power Transfer, Full-duplex, Resource Allocation, Body Energy Harvesting
  • H. Sardari, B. Mozafari *, H. A. Shayanfar Pages 341-350
    Background and Objectives
    Distortion of current and voltage signals caused by faults in the power system negatively affects the operation of protective devices. One influencing factor is the exponentially decaying DC component, which can significantly distort the signals and lead to malfunction of protective devices, especially distance and over-current relays. The main problem is the lack of clarity about this component, because its time constant and initial amplitude depend on the configuration of the electrical grid and on the location and resistance of the fault point. This makes it hard to extract the fundamental-frequency phasors of the voltage and current.
    Methods
    Considering the importance of fast fault clearance, this paper offers a method for the effective and fast removal of the decaying DC component that employs a data window whose length equals half a cycle of the fundamental frequency, whereas conventional methods mostly use data from one cycle or more. The proposed method is based on extracting the parameters of the decaying DC component.
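    As an illustration of extracting decaying-DC parameters from a half-cycle window (our sketch, not the paper's algorithm, whose details the abstract does not give), one can fit a fundamental-plus-exponential model by linear least squares over a grid of candidate time constants:

```python
import numpy as np

fs, f0 = 3200.0, 50.0                   # sampling and fundamental frequency
n = int(fs / f0 / 2)                    # half-cycle data window (32 samples)
t = np.arange(n) / fs

# Synthetic fault current: fundamental phasor plus decaying DC component
A, phi, B, tau = 1.0, 0.6, 0.8, 0.05
x = A * np.cos(2 * np.pi * f0 * t + phi) + B * np.exp(-t / tau)

best = None
for tau_c in np.linspace(0.01, 0.2, 100):       # grid over candidate taus
    # With tau fixed, the model is linear in [cos, sin, exp] coefficients
    M = np.column_stack([np.cos(2 * np.pi * f0 * t),
                         np.sin(2 * np.pi * f0 * t),
                         np.exp(-t / tau_c)])
    coef, *_ = np.linalg.lstsq(M, x, rcond=None)
    err = float(np.linalg.norm(x - M @ coef))
    if best is None or err < best[0]:
        best = (err, coef, tau_c)

_, (c, s, b_dc), tau_hat = best
amp = np.hypot(c, s)        # recovered fundamental amplitude (true value: 1.0)
```

Once the DC parameters are known, the exponential term can be subtracted before phasor estimation, which is the general idea behind decaying-DC removal.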
    Results
    The efficiency of this method is compared to the conventional Half-Cycle Fourier Algorithm (HCFA) and to the mimic filter combined with the HCFA.
    Conclusion
    The outcomes show that the proposed method performs better in terms of both speed and accuracy of convergence to the final results.
    Keywords: Phasor Estimation, Digital filter, Half-Cycle Fourier algorithm, Mimic filter, Exponential decaying-DC component
  • M. Abdollahi *, Z. Boujarnezhad Pages 351-362
    Background and Objectives
    As cities develop and populations increase significantly, one of the most important challenges for city managers is the urban transportation system. An Intelligent Transportation System (ITS) uses information, communication, and control techniques to assist the transportation system. An ITS includes a large number of traffic sensors that collect high volumes of data to provide information supporting and improving traffic management operations. Due to the high traffic volume, classic traffic control methods are unable to satisfy the requirements imposed by the variable and dynamic nature of traffic. Artificial intelligence and the Internet of Things meet this demand as a decentralized solution.
    Methods
    This paper presents an optimal method to find the best route and compares it with previous methods. The proposed method has three phases: first, the service area is clustered; second, requests are predicted using a time-series neural network; and third, the Whale Optimization Algorithm (WOA) is run to select the best route.
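    A minimal WOA, sketched below on a stand-in cost function (the real cost would score candidate routes), shows the encircling, bubble-net spiral, and random-search phases the algorithm alternates between. The hyperparameters are illustrative, not the paper's:

```python
import numpy as np

def woa(cost, dim, n_whales=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Minimal Whale Optimization Algorithm (minimization)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    fitness = np.apply_along_axis(cost, 1, X)
    best = X[fitness.argmin()].copy()
    for it in range(iters):
        a = 2.0 - 2.0 * it / iters           # a decreases linearly 2 -> 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):    # exploit: encircle best solution
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                        # explore: move around random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                            # bubble-net spiral toward best
                l = rng.uniform(-1, 1)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
        fitness = np.apply_along_axis(cost, 1, X)
        if fitness.min() < cost(best):
            best = X[fitness.argmin()].copy()
    return best

# Sphere function as a stand-in for a route cost:
best = woa(lambda x: float(np.sum(x ** 2)), dim=3)
```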
    Results
    To evaluate the parameters, different scenarios were designed and implemented. The simulation results show that the service-time parameter of the proposed method is improved by about 18% and 40% compared with the Grey Wolf Optimizer (GWO) and random movement methods, respectively. The difference in this parameter between the Harris Hawks Optimizer (HHO) and the WOA is about 5%, with the HHO performing better.
    Conclusion
    The interaction of AI and IoT can lead to solutions that improve ITS and increase client satisfaction. We use the WOA to improve service time and throughput. The simulation results show that this method can increase client satisfaction.
    Keywords: Grey Wolf Optimization, Whale optimization algorithm, Internet of Things, Intelligent Transportation System
  • A.H. Mehrfar, A. Eslami Majd * Pages 363-370
    Background and Objectives
    The use of two-dimensional materials in photodetector fabrication has received much attention in recent years. Graphene is a two-dimensional material that has been extensively researched for making photodetectors. The responsivity of graphene photodetectors is limited by the low optical absorption of graphene (~2.3% for a single layer). Therefore, graphene has been combined with other materials to fabricate photodetectors with the desired properties. Here, graphene is used to improve the platinum silicide photodetector.
    Methods
    A platinum silicide photodetector with graphene has been experimentally fabricated and characterized; all steps of device fabrication and characterization are provided, along with the equations required for device analysis. A graphene layer is transferred onto the platinum silicide layer and creates photoconductive gain in the platinum silicide photodetector.
    Results
    In the proposed device, near-infrared light is detected in the platinum silicide, and placing a layer of graphene on the platinum silicide increases the photocurrent and responsivity compared to the platinum silicide photodetector without graphene. Experimental results show that the photocurrent, external quantum efficiency, and responsivity all increase in the platinum silicide photodetector with graphene. The graphene not only functions as the charge-transport channel but also works as a photoconductor.
    Conclusion
    The photocurrent and responsivity are increased by the platinum silicide photodetector with graphene. In our photodetector, the highest responsivity is 120 mA/W at a wavelength of 1310 nm, and the photocurrent is 100 nA at an applied voltage of 8 V. Our photodetector has twice the photocurrent, responsivity, and external quantum efficiency of the platinum silicide photodetector without graphene. The experimental results show the good performance of the graphene-on-platinum-silicide photodetector.
    Keywords: Two-dimensional, Near-infrared, Optoelectronic
  • M. Rashidinejad *, S. Dorahaki, S. S. Zadsar, M.R. Salehizadeh Pages 371-380
    Background and Objectives
    The smart energy hub framework encompasses physical assets such as thermal storage, a boiler, a wind turbine, a PV panel, water storage, and a water desalination unit to ensure continuity of electricity, water, thermal, and gas provision in the case of unexpected outages in the upstream networks. In this regard, the smart energy hub, as an integrated structure, provides a suitable platform for energy supply. Considering drinking water resources in the smart hub structure can improve operational efficiency.
    Methods
    This paper proposes an integrated scheduling model for energy and water supply. To increase operational flexibility, new technologies such as Compressed Air Energy Storage (CAES) and a Power-to-Gas (P2G) system are incorporated. The energy price is modeled as an uncertain parameter using a robust optimization approach. The proposed model is formulated as a Mixed-Integer Linear Program (MILP) and implemented with the CPLEX solver in GAMS. The model is simulated in different energy hub scenarios, and the optimization results are compared with each other to validate the proposed method.
    Results
    The results show that using CAES technology and the P2G system can reduce the operating costs to a desirable level. Moreover, the impact of the P2G unit on the operation cost is greater than that of the CAES unit.
    Conclusion
    The energy hub operator should trade off between robustness and the operation cost of the system. The obtained results confirm that the proposed methodology is robust, optimal, and economical for energy hub scheduling.
    Keywords: Energy Hub, Demand Response Programs, Power-to-Gas (P2G), Compressed Air Energy Storage, Robust Optimization
  • H. Nunoo-Mensah *, S. Kuseh, J. Yankey, F. Acheampong Pages 381-392
    Background and Objectives
    To a large extent, low maize production can be attributed to diseases and pests. Accurate, fast, and early detection of maize plant disease is critical for efficient maize production. Early detection of a disease enables growers, breeders, and researchers to effectively apply the appropriate control measures to mitigate its effects. Unfortunately, the lack of expertise in this area and the cost involved often result in incorrect diagnosis of maize plant diseases, which can cause significant economic loss. Over the years, many techniques have been developed for the detection of plant diseases. In recent years, computer-aided methods, especially machine learning (ML) techniques combined with crop images (image-based phenotyping), have become dominant for plant disease detection. Among machine learning approaches, deep learning (DL) techniques have demonstrated high accuracy in performing complex cognitive tasks, much like humans. This paper presents a comprehensive review of state-of-the-art DL techniques used for detecting disease in maize leaves.
    Methods
    To achieve the aims of this paper, we divided the methodology into two main sections: article selection and detailed review of the selected articles. An algorithm was used to select the state-of-the-art DL techniques for maize disease detection spanning 2016 to 2021. Each selected article is then reviewed in detail, taking into consideration the DL technique, the dataset used, and the strengths and limitations of each technique.
    Results
    DL techniques have demonstrated high accuracy in maize disease detection. It was revealed that transfer learning reduces training time and improves model accuracy. Models trained on images taken in a controlled environment (single leaves) perform poorly when deployed in the field, where there are several leaves per image. Two-stage object detection models show superior performance when deployed in the field.
    Conclusion
    From the results, the lack of experts to annotate accurately, model architecture, hyperparameter tuning, and training resources are some of the challenges facing maize leaf disease detection. DL techniques based on two-stage object detection algorithms are best suited for images with several plant leaves and complex backgrounds.
    Keywords: Convolutional Neural Network, Deep Learning, Maize Disease, Transfer Learning
  • M. Kalantari *, M. Mohammadpour Tuyserkani, S.H. Amiri Pages 393-402
    Background and Objectives
    The operating frequency range of a microphone array is limited by the array configuration. Spatial aliasing occurs at frequencies outside the microphone array's operating range, leading to sidelobes in the array beam pattern and consequently degrading the array's performance. In this paper, a general approach for increasing the operational bandwidth of a spherical microphone array, without physical changes to the array, is proposed.
    Methods
    Recently, Alon and Rafaely proposed a beamforming method with aliasing cancellation and formulated it for some well-known beamformers such as maximum directivity (MD), maximum white noise gain (WNG), and minimum variance distortionless response (MVDR), yielding the MDAC, MGAC, and MVDR-AC beamformers, respectively. In this paper, we derive the MDAC method from a different point of view. Then, based on our perspective, we propose a new method that is easily applicable to any beamforming algorithm.
    Results
    Compared with the MDAC and MGAC beamformers, the performance measures for our approach show improvements in directivity index (DI) and white noise gain (WNG) of nearly 19% and 15%, respectively.
    Conclusion
    Aliasing, and the unwanted sidelobes it produces, is the main factor determining the operational bandwidth of spherical microphone arrays. Most methods previously presented to reduce aliasing demand physical changes to the array structure, which comes at a cost. In this paper, we proposed a new method based on Alon and Rafaely's approach, designing a constrained optimization problem that uses the orthogonality of spherical harmonics to achieve better performance.
    Keywords: Spherical Microphone arrays, Aliasing, Beamforming, Operational Bandwidth
  • M. Bod *, F. Geran Pages 403-410
    Background and Objectives
    Self-supported rear-radiating feeds have been widely used as reflector antenna feeds for small terrestrial and satellite links. While most terrestrial and satellite links require a dual-polarized antenna for transmit and receive, all reported works on this topic present a single-polarized self-supported reflector antenna. In this paper, a dual-polarized hat-feed reflector antenna with low sidelobe and low cross-polarization levels is presented.
    Methods
    The proposed antenna consists of an orthogonal mode transducer (OMT), a 60 cm ring-focus reflector, and a rear-radiating waveguide feed known as the hat feed. Twenty-one parameters of the hat-feed structure are selected and optimized with a genetic algorithm (GA). A predefined ring-focus curve is used as the reflector in the optimization procedure. Dual polarization for transmit and receive is obtained by an OMT at the rear side of the reflector antenna.
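    Since the real objective requires an electromagnetic simulation of the feed, the toy GA below optimizes a stand-in 21-parameter cost (our illustration, with hypothetical bounds and operators) just to make the optimization loop concrete:

```python
import numpy as np

def ga_optimize(cost, dim=21, pop=40, gens=150, lb=0.0, ub=1.0,
                mut_rate=0.1, seed=1):
    """Toy real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation, and elitism. In the paper, cost() would be an
    EM simulation of the hat-feed geometry; here it is a stand-in."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lb, ub, (pop, dim))
    for _ in range(gens):
        f = np.apply_along_axis(cost, 1, P)
        # Tournament selection: the fitter of two random individuals wins
        idx = rng.integers(pop, size=(pop, 2))
        parents = P[np.where(f[idx[:, 0]] < f[idx[:, 1]],
                             idx[:, 0], idx[:, 1])]
        # Uniform crossover between shuffled parent pairs
        mates = parents[rng.permutation(pop)]
        mask = rng.random((pop, dim)) < 0.5
        children = np.where(mask, parents, mates)
        # Gaussian mutation on a fraction of genes
        mut = rng.random((pop, dim)) < mut_rate
        children = np.clip(children + mut * rng.normal(0, 0.1, (pop, dim)),
                           lb, ub)
        children[0] = P[f.argmin()]        # elitism: keep best individual
        P = children
    f = np.apply_along_axis(cost, 1, P)
    return P[f.argmin()]

target = np.full(21, 0.5)                  # hypothetical optimum
best = ga_optimize(lambda x: float(np.sum((x - target) ** 2)))
```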
    Results
    A prototype of the proposed hat-feed reflector antenna was fabricated, and the measurement results are compared with simulations. The proposed antenna has a return loss better than 15 dB for both polarizations in the 17.7-19.7 GHz frequency range. The 60 cm reflector antenna has 40 dBi gain, which corresponds to about 70% radiation efficiency. A sidelobe level of about 20 dB and cross-polarization isolation of more than 40 dB have also been realized in the measured patterns.
    Conclusion
    A dual-polarized hat-feed reflector antenna with excellent radiation efficiency, a low sidelobe level, and a low cross-polarization level is proposed. The proposed antenna is a good candidate for high-frequency terrestrial and satellite communications.
    Keywords: Self-supported Feed, Ring-focus Reflector, Genetic Algorithm, Orthogonal Mode Transducer
  • I. Zabbah *, K. Layeghi, R. Ebrahimpour Pages 411-424
    Background and Objectives
    COVID-19 still has a devastating effect on public health. The use of X-ray images is one of the most important methods of diagnosing the disease, yet one of the challenges specialists face is failure to diagnose in time. Deep learning can reduce the diagnostic error for COVID-19 and help specialists in this field.
    Methods
    The aim of this study is to provide a method based on a combination of deep learners in parallel, so that gathering their opinions leads to more accurate results for COVID-19 diagnosis. In this research, four pre-trained (fine-tuned) deep models are used. The dataset of this study consists of X-ray images from GitHub containing 1125 samples in three classes: normal, COVID-19, and pneumonia.
    Results
    In all networks, 70% of the samples were used for training and 30% for testing. To ensure reliable accuracy estimates, k-fold cross-validation was used in the training process. After modeling, comparing the generated models, and recording the results, the diagnosis accuracy for COVID-19 was 84.3% when the learners were not combined and 87.2% when the experts were combined.
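    Combining the experts can be illustrated with a small sketch. The averaging rule and the probability values below are our assumptions, standing in for the outputs of the four fine-tuned networks:

```python
import numpy as np

# Hypothetical per-class probability outputs of 4 experts for 3 samples
# over the 3 classes (normal, COVID-19, pneumonia); in the study these
# would come from the fine-tuned deep networks.
expert_probs = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]],
    [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.2, 0.2, 0.6]],
    [[0.5, 0.4, 0.1], [0.3, 0.5, 0.2], [0.4, 0.1, 0.5]],
    [[0.8, 0.1, 0.1], [0.1, 0.7, 0.2], [0.1, 0.3, 0.6]],
])  # shape: (experts, samples, classes)

# Soft combination: average the experts' probabilities, then take the
# argmax per sample as the ensemble's diagnosis.
combined = expert_probs.mean(axis=0)
predictions = combined.argmax(axis=1)
```

Even when individual experts disagree (as on the third sample), averaging their opinions yields a single, usually more reliable decision, which is the intuition behind the reported accuracy gain.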
    Conclusion
    The use of machine learning techniques can lead to the early diagnosis of COVID-19 and help physicians to accelerate the healing process. This study shows that a combination of deep experts leads to improved diagnosis accuracy.
    Keywords: COVID-19, Deep Networks, Data mining, Machine Learning, Mixture of Experts
  • R. Havangi * Pages 425-436
    Background and Objectives
    The target tracking problem is an essential component of many engineering applications. The extended Kalman filter (EKF) is one of the most well-known suboptimal filters for target tracking. However, since the EKF uses only the first-order terms of the Taylor series expansion of the nonlinear functions, it often makes large errors in the state estimates. As a result, target tracking based on the EKF may diverge.
    Methods
    In this manuscript, an adaptive square-root cubature Kalman filter (ASRCKF) is proposed to solve the maneuvering target tracking problem. In the proposed method, the covariances of the process and measurement noises are estimated adaptively. Thus, the performance of the proposed method does not depend on the noise statistics, and it is robust to unknown prior knowledge of the noise statistics. Moreover, it has consistently improved numerical stability, because the covariance matrices are guaranteed to remain positive semi-definite. The performance of the proposed method is compared with the EKF and the unscented Kalman filter (UKF) on the target tracking problem.
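    The square-root idea can be illustrated by how the cubature points of a CKF are generated: they are built directly from the square root S of the covariance (P = S @ S.T), so the covariance never has to be re-factorized, which is what keeps the filter numerically stable. A small sketch (our illustration of the standard cubature rule, not the paper's full filter):

```python
import numpy as np

def cubature_points(x, S):
    """Generate the 2n cubature points of the CKF from the state mean x
    and the square root S of the covariance (P = S @ S.T)."""
    n = x.size
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # unit directions
    return x[:, None] + S @ xi                             # shape (n, 2n)

x = np.array([1.0, 2.0])
P = np.array([[4.0, 1.0], [1.0, 3.0]])
S = np.linalg.cholesky(P)
pts = cubature_points(x, S)

# The equally weighted sample mean and covariance of the points
# recover x and P exactly, which is the basis of the cubature rule.
mean = pts.mean(axis=1)
cov = (pts - mean[:, None]) @ (pts - mean[:, None]).T / (2 * x.size)
```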
    Results
    To evaluate the proposed method, many experiments were performed. The proposed method was evaluated on both non-maneuvering and maneuvering target tracking.
    Conclusion
    The results show that the proposed method has lower estimation errors and a faster convergence rate than the other methods. The proposed method can track the states of a moving target effectively and improve the accuracy of the system.
    Keywords: Target tracking, Cubature Kalman filter, Square root cubature Kalman filter
  • S. Mukherjee * Pages 437-446
    Background and Objectives
    The increasing requirement for different voltage and power levels in various power electronics applications, especially those based on renewable energy, is accelerating the development of different DC-DC converter topologies. Besides single-input single-output (SISO) topologies, multi-input multi-output (MIMO) topologies have become popular. In this paper, a single-input multi-output (SIMO) boost converter based on the Single-Ended Primary Inductance Converter (SEPIC), Cuk, and Canonical Switch Cell (CSC) topologies is proposed, together with a maximum power point tracking (MPPT) controller.
    Methods
    The design of the SIMO topology based on the three different DC-DC converters is developed, and the operation of the proposed converter is then verified with a solar photovoltaic (SPV) source connected as its input. An MPPT controller is also developed to extract maximum power from the SPV. Finally, the converter's transfer function is derived using small-signal analysis, and the system's stability is analyzed with and without compensation.
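    A minimal PSO-based MPPT loop can be sketched as follows; the P-V curve is a stand-in with its maximum placed at the panel's quoted Vmpp of 18.2 V (our illustration, not the simulated converter model):

```python
import numpy as np

def pv_power(v):
    """Stand-in P-V curve with a single maximum power point of 65 W at
    18.2 V (the panel figures quoted in the results); not a physical
    diode-based panel model."""
    return np.maximum(0.0, 65.0 - 0.5 * (v - 18.2) ** 2)

def pso_mppt(n_particles=10, iters=60, v_lo=0.0, v_hi=22.0, seed=0):
    """Minimal PSO: each particle is a candidate operating voltage; the
    swarm converges on the voltage that maximizes the measured power."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(v_lo, v_hi, n_particles)       # particle positions
    vel = np.zeros(n_particles)
    pbest, pbest_val = v.copy(), pv_power(v)       # personal bests
    gbest = pbest[pbest_val.argmax()]              # global best
    w, c1, c2 = 0.7, 1.5, 1.5                      # illustrative weights
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - v) + c2 * r2 * (gbest - v)
        v = np.clip(v + vel, v_lo, v_hi)
        p = pv_power(v)
        better = p > pbest_val
        pbest[better], pbest_val[better] = v[better], p[better]
        gbest = pbest[pbest_val.argmax()]
    return gbest

v_mpp = pso_mppt()      # converges near the 18.2 V maximum power point
```

In a real MPPT controller, pv_power() would be replaced by measuring the panel's output while the converter's duty cycle sets the operating voltage.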
    Results
    A MATLAB simulation was performed to verify the theoretical analysis. Successful extraction of the maximum power from the SPV panel (65 W, Vmpp = 18.2 V, Impp = 3.55 A) with Particle Swarm Optimization (PSO) is verified. The SEPIC- and Cuk-based DC-DC converters operate successfully in boost mode with a gain of 2.66. A significant reduction in the Cuk converter's capacitor voltage ripple is also established.
    Conclusion
    So, this paper presents an SPV-fed SIMO boost converter based on the SEPIC, Cuk, and CSC topologies. In addition, a PSO-based MPPT controller is introduced for maximum power extraction. Verification of the theoretical analysis with simulation results is also described.
    Keywords: Particle swarm optimization-MPPT, SPV-based applications, Single-input multiple-output, DC-DC power conversion
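Both the SEPIC and Cuk stages share the ideal continuous-conduction-mode voltage gain M = D/(1-D), so the reported boost gain of 2.66 pins down the duty cycle. A minimal numeric sketch, assuming ideal lossless operation (illustrative only, not the paper's design procedure):

```python
def sepic_cuk_gain(duty):
    # Ideal (lossless, CCM) voltage gain shared by the SEPIC and Cuk converters
    return duty / (1.0 - duty)

# Invert the gain formula: D = M / (1 + M) for the reported gain of 2.66
duty = 2.66 / (1 + 2.66)
gain = sepic_cuk_gain(duty)
```

With M = 2.66 this gives a duty cycle of roughly 0.73, consistent with boost-mode operation (D > 0.5).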
  • I. Behravan, S.M. Razavi * Pages 447-462
    Background and Objectives
    Stock markets play a key role in the economic situation of countries. Thus, one of the major ways of boosting the economy is to encourage people to invest their money in the stock market. For this purpose, reducing the risk of investment can persuade people to trust the market and invest. Hence, productive tools for predicting the future of the stock market have an undeniable effect on investors' and traders' profit.
    Methods
    In this research, a two-stage method is introduced to predict the next week's index value of the market, with the Tehran Stock Exchange selected as a case study. In the first stage of the proposed method, a novel clustering method is used to divide the data points of the training dataset into different groups, and in the second stage, for each cluster's data, a hybrid regression method (HHO-SVR) is trained to detect the patterns hidden in each group. For unknown samples, after determining their cluster, the corresponding trained regression model estimates the target value. In the hybrid regression method, HHO is employed to select the best feature subset and to tune the parameters of the SVR.
    Results
    The experimental results show the high accuracy of the proposed method in predicting the market index value of the next week. Also, the comparisons made with other metaheuristics indicate the superiority of HHO in solving such a hard and complex optimization problem. Using the historical information of the last 20 days, our method has achieved 99% accuracy in predicting the market index of the next 7 days, while PSO, MVO, GSA, IPO, linear regression, and fine-tuned SVR have achieved 67%, 98%, 38%, 4%, 5.6%, and 98% accuracy, respectively.
    Conclusion
    In this research, we have tried to forecast the market index of the next m (from 1 to 7) days using the historical data of the past n (from 10 to 100) days. The experiments showed that increasing the number of days (n) used to create the dataset will not necessarily improve the performance of the method.
    Keywords: Tehran Stock Market, Harris Hawks Optimization (HHO), Support Vector Regression (SVR), APSO-Clustering, Metaheuristics
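The two-stage pipeline described above — cluster the training data, then train one regressor per cluster and route each test sample to the model of its nearest cluster — can be sketched as follows. The centroids and the stand-in models are hypothetical placeholders for the APSO-clustering and HHO-SVR stages:

```python
def nearest_cluster(x, centroids):
    # Squared Euclidean distance to each centroid; return the closest index
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(centroids)), key=lambda k: d2(x, centroids[k]))

def predict(x, centroids, models):
    # Stage 2 dispatch: route the sample to the regressor trained on its cluster
    return models[nearest_cluster(x, centroids)](x)

centroids = [[0.0, 0.0], [10.0, 10.0]]          # placeholder cluster centers
models = [lambda x: 1.0, lambda x: 2.0]         # stand-ins for per-cluster HHO-SVR models
y = predict([9.0, 11.0], centroids, models)     # routed to the second cluster's model
```

The point of the structure is that each regressor only has to fit the (presumably more homogeneous) patterns within its own cluster.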
  • H. Salimi, A. Zakipour *, M. Asadi Pages 463-476
    Background and Objectives
    Permanent magnet synchronous motors (PMSM) have received much attention due to their high torque as well as low noise. However, several PI blocks are needed for field, torque, and speed control of the PMSM, which complicates controller design in the vector control approach. To cope with these issues, a novel analytical approach for time-response shaping of the PI controllers in the field-oriented control (FOC) of the PMSM is presented in this manuscript. In the proposed method, it is possible to design the controlling loops based on pre-defined dynamic responses of the motor speed and currents in the dq axes. It should be noted that, as a decoupled model of the motor is employed in the controller development, the closed-loop system has a linear model; hence, the designed PI controllers are able to stabilize the PMSM over a wide range of operation.
    Methods
    To design the controllers and choose the PI gains, the characteristics of the closed-loop response are formulated analytically. According to pre-defined dynamic responses of the motor speed and currents in the dq axes, e.g., desired maximum overshoot and rise-time values, the gains of the controllers are calculated analytically. As the extracted equation set for controller tuning includes a nonlinear term, the Newton-Raphson numerical approach is employed to solve the nonlinear equation set. In addition, the designed system is evaluated under different tests, such as step changes of the references. Finally, it should be noted that, as decoupled models are employed for the PMSM system, the exact behavior of the closed-loop system can be expressed via a linear model. As a result, the stability of the proposed approach can be guaranteed over the whole operational range of the system.
    Results
    Controlling loops of the closed loop system are designed for speed control of the PMSM. To evaluate accuracy and effectiveness of the controllers, it has been simulated using MATLAB/Simulink software. Moreover, the TMS320F28335 digital signal processor (DSP) from Texas Instruments is used for experimental investigation of the controllers.
    Conclusion
    Considering the simulation and practical results, it is shown that the proposed analytical approach is able to select the controller gains with negligible error. It is shown that the proposed approach for rise-time and overshoot calculations has at most 0.01% error for the step response of the motor speed at 500 rpm.
    Keywords: Electrical Motor Drive, Field Oriented Control (FOC), Permanent Magnet Synchronous Motors (PMSM), Controller Tuning, Dynamic Responses
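The Newton-Raphson step mentioned in the Methods section can be illustrated on one classical time-response relation: the second-order overshoot formula Mp = exp(-πζ/√(1-ζ²)), solved here for the damping ratio that yields 10% overshoot. This is a hedged sketch of the numerical approach, not the paper's actual equation set:

```python
import math

def overshoot(zeta):
    # Peak overshoot (as a fraction) of an underdamped second-order step response
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))

def newton_raphson(f, x0, tol=1e-10, max_iter=50):
    # Scalar Newton-Raphson with a central-difference derivative estimate
    x = x0
    for _ in range(max_iter):
        h = 1e-7
        dfdx = (f(x + h) - f(x - h)) / (2 * h)
        step = f(x) / dfdx
        x -= step
        if abs(step) < tol:
            break
    return x

# Damping ratio giving a pre-defined 10% maximum overshoot
zeta = newton_raphson(lambda z: overshoot(z) - 0.10, 0.5)
```

The same iteration applies to any scalar nonlinear tuning equation where the gain cannot be isolated in closed form.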
  • H. Dorosti * Pages 477-486
    Background and Objectives
    Considering the fast-growing low-power Internet of Things, power/energy and performance constraints have become more challenging at both design time and operation time. Static and dynamic variations make the situation worse in terms of reliability, performance, and energy consumption. In this work, a novel slack measurement circuit is proposed to enable precise frequency management based on timing violation measurement.
    Methods
    The proposed slack measurement circuit is based on measuring the delay difference between the clock pulse edge and a possible transition on path end-points (primary outputs of the design). The output of the proposed slack monitoring circuit is a digital code related to the current delay of the target critical path. In order to convert this digital code to an equivalent delay difference, the delay of a reference gate, which is the basic unit of the proposed monitor, is required. This monitor enables the design to have more precise and efficient frequency management while maintaining correct functionality in low-power mode.
    Results
    Applying this method to a MIPS processor reduces the performance penalty and recovery energy overhead by up to 30% with only 2% additional hardware. Results for benchmark applications in low-power mode show a 7-30% power improvement in normal execution mode. If the application is resilient against errors that occur due to timing violations, the proposed method achieves a 20-60% power reduction under approximate computation, as long as the application shows resilience. The performance of the proposed method depends on the degree of application resilience against timing errors. In order to keep the proposed monitor general for different applications, the resilience threshold is user-programmable and can be configured according to the requirements of each application.
    Conclusion
    The results show that precise frequency scheduling is more energy/power efficient for static and dynamic variation management. Utilizing a proper monitor capable of measuring the amount of violation helps achieve finer frequency management. On the other hand, this method helps exploit the resilience of the application according to an estimate of the possible error value based on the measured violation amount.
    Keywords: Timing Slack Monitoring, Negative Slack Measurement, Clock Stretching, Frequency Scaling, ultra-low-energy
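The code-to-delay conversion described above — each count of the monitor's digital output corresponds to one reference-gate delay — admits a toy numeric model. The function names and the margin parameter below are illustrative assumptions, not the paper's interface:

```python
def slack_from_code(code, ref_gate_delay_ps):
    # Each count of the monitor's digital code equals one reference-gate delay,
    # so the measured slack is simply code * t_ref
    return code * ref_gate_delay_ps

def tightened_period(period_ps, slack_ps, margin_ps):
    # Frequency management: shrink the clock period by the measured positive
    # slack while keeping a safety margin; never lengthen the period here
    return period_ps - max(slack_ps - margin_ps, 0)

slack = slack_from_code(5, 20)                 # 5 counts of a 20 ps reference gate
period = tightened_period(1000, slack, 20)     # 1 ns clock tightened by 80 ps
```

A finer reference-gate delay gives a finer quantization of the measured slack, which is what enables the more precise frequency scheduling the Conclusion refers to.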
  • A. Tajary *, E. Tahanian Pages 487-496
    Background and Objectives
    Wireless Network on Chip (WNoC) is one of the promising interconnection architectures for future many-core processors. Besides the architectures and topologies of these WNoCs, designing an efficient routing algorithm that uses the provided frequency band to achieve better network latency is one of the challenges.
    Methods
    Using wireless connections reduces the number of hops for sending data in a network, which can lead to lower latency for data delivery and higher throughput in WNoCs. On the other hand, since wireless links reduce the number of hops for data transfer, they can cause congestion around the wireless nodes. This congestion may add delay to data transfers, which reduces the network throughput of WNoCs. Although there are some good routing algorithms that balance traffic between wired and wireless connections for synthetic traffic patterns, they cannot deal with the dynamic traffic patterns that exist in real-world applications. In this paper, we propose a new routing algorithm that uses the wireless connections as much as possible and, in the case of congestion, uses the wired connections instead.
    Results
    We investigated the proposed method using eight applications from the PARSEC benchmark suite. Simulation results show that the proposed method can achieve up to 13.9% higher network throughput with a power consumption reduction of up to 1.4%.
    Conclusion
    In this paper, we proposed an adaptive routing algorithm that uses wireless links to deliver data over the network on chip. We investigated the proposed method on real-world applications. Simulation results show that the proposed method can achieve higher network throughput and lower power consumption.
    Keywords: Wireless Network on Chip, Routing, NoC
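The congestion-aware selection rule — prefer the wireless shortcut unless its injection queue is backed up — can be sketched as below. The queue-occupancy threshold is a hypothetical stand-in for whatever congestion metric the router actually exposes:

```python
def choose_link(wireless_queue_occupancy, congestion_threshold,
                hops_wired, hops_wireless):
    # Take the wireless shortcut only when it both saves hops and is
    # uncongested; otherwise fall back to the ordinary wired mesh route.
    if (hops_wireless < hops_wired
            and wireless_queue_occupancy < congestion_threshold):
        return "wireless"
    return "wired"

a = choose_link(2, 8, hops_wired=6, hops_wireless=2)   # uncongested shortcut
b = choose_link(9, 8, hops_wired=6, hops_wireless=2)   # congested: fall back
```

Falling back to the wired path trades a longer hop count for relief of the hotspot around the wireless node, which is the throughput/latency balance the abstract describes.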