Table of Contents

Journal of Electrical and Computer Engineering Innovations
Volume: 12, Issue: 1, Winter-Spring 2024

  • Publication date: 1402/10/04
  • Number of titles: 20
  • N. Ghanbari, S. H. Zahiri *, H. Shahraki Pages 1-14
    Background and Objectives
    In this paper, a new version of the particle swarm optimization (PSO) algorithm using a linear ranking function is proposed for clustering uncertain data. In the proposed method, called Uncertain Particle Swarm Clustering (UPSC), triangular fuzzy numbers (TFNs) are used to represent uncertain data. TFNs are among the most widely used fuzzy numbers and have many real-world applications.
    Methods
    In the UPSC method, the input data are fuzzy numbers. Therefore, upgrading the standard version of PSO requires calculating the distance between fuzzy numbers. For this purpose, a linear ranking function is applied in the fitness function of the PSO algorithm to describe the distance between fuzzy vectors (a minimal sketch of this idea follows this entry).
    Results
    The performance of the UPSC is tested on six artificial and nine benchmark datasets. The features of these datasets are represented by TFNs.
    Conclusion
    The experimental results on fuzzy artificial datasets show that the proposed clustering method (UPSC) clusters fuzzy datasets as well as or better than other standard uncertain-data clustering methods such as the Uncertain K-Means (UK-means) and Uncertain K-Medoids (UK-medoids) algorithms. The experimental results on fuzzy benchmark datasets also demonstrate that, on all datasets except Libras, the UPSC method is more accurate than the other methods. For example, on the Iris data, clustering accuracy increases by 2.67% compared to the UK-means method; on the Wine data, UPSC increases accuracy by 1.69%; and on the Abalone data, the accuracy gain is 4%. Comparison using the Rand index (RI) also shows the superiority of the proposed clustering method.
    Keywords: Heuristic clustering, Particle swarm optimization, Uncertain data, Fuzzy dataset, Ranking function
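    A minimal sketch of the core ingredient above: measuring the distance between fuzzy vectors through a linear ranking function inside a clustering fitness. The specific ranking function R(l, m, u) = (l + 2m + u)/4, the toy data, and the encoding of cluster centers are assumptions for illustration; the abstract does not give the paper's exact formulas.

```python
import numpy as np

# A triangular fuzzy number (TFN) is stored as (l, m, u): lower, mode, upper.
def rank(tfn):
    """Linear ranking function R(l, m, u) = (l + 2m + u) / 4.
    One common choice; the paper's exact ranking function is not
    given in the abstract."""
    l, m, u = tfn[..., 0], tfn[..., 1], tfn[..., 2]
    return (l + 2.0 * m + u) / 4.0

def fuzzy_distance(x, c):
    """Distance between two fuzzy vectors (arrays of TFNs, shape (d, 3)),
    measured through the ranking function."""
    return np.sum(np.abs(rank(x) - rank(c)))

def clustering_fitness(particle, data):
    """PSO fitness: each particle encodes k cluster centers of shape
    (k, d, 3); fitness is the total distance of every fuzzy sample to
    its nearest center (lower is better)."""
    return sum(min(fuzzy_distance(x, c) for c in particle) for x in data)

# toy data: four two-dimensional fuzzy samples forming two groups
data = np.array([[[0.8, 1.0, 1.2], [0.9, 1.0, 1.1]],
                 [[1.0, 1.2, 1.4], [0.8, 0.9, 1.0]],
                 [[4.8, 5.0, 5.2], [4.9, 5.0, 5.1]],
                 [[5.0, 5.2, 5.4], [4.8, 4.9, 5.0]]])
centers = np.array([[[0.9, 1.1, 1.3], [0.85, 0.95, 1.05]],
                    [[4.9, 5.1, 5.3], [4.85, 4.95, 5.05]]])
print(clustering_fitness(centers, data))  # small value -> good clustering
```

    In a full UPSC-style run, a PSO loop would move such particles (center sets) to minimize this fitness.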
  • S. M. Hashemi, A. Sahafi *, A. M. Rahmani, M. Bohlouli Pages 15-38
    Background and Objectives

    Today, the increasing number of Internet-connected smart devices requires powerful processing servers such as cloud and fog and necessitates fulfilling requests and services more than ever before. The geographical distance of IoT devices from fog and cloud servers has turned issues such as delay and energy consumption into major challenges. In response, fog computing has emerged as a promising technology in this field.

    Methods

    In this paper, service/energy management approaches are surveyed. We then explain our motivation, the systematic literature review (SLR) procedure, and how the related works were selected.

    Results

    This paper identifies four domains of service and energy management: Architecture, Resource Management, Scheduling Management, and Service Management. Scheduling Management is used in 38% of the papers, making it the most common domain. Resource Management is second, attracting about 26% of the papers on service and energy management.

    Conclusion

    About 81% of the fog computing papers simulated their approaches, and the others implemented their schemes on a testbed in a real environment. Furthermore, 30% of the papers presented an architecture or framework alongside their research. In this systematic literature review, papers were extracted from five reputable databases (IEEE Xplore, Wiley, Science Direct (Elsevier), Springer Link, and Taylor & Francis) covering 2013 to 2022. We obtained 1596 papers related to the subject, filtered them, and arrived at 47 distinct studies. We then analyze and discuss these studies, review the quality-of-service parameters reported in the papers, and finally present the benefits, drawbacks, and innovations of each study.

    Keywords: Fog Computing, Internet of Things (IoT), Systematic literature review (SLR), Service management, Energy management
  • S.H. Zahiri *, S. M. Naji Esfahani, M. Delshad Pages 39-56
    Background and Objectives
    The interleaved approach has a long history of use in power electronics, particularly for high-power systems, where the voltage and current stress can exceed the tolerance limits of a single power element. The present paper introduces an improved version of an interleaved boost converter that uses voltage-mode control. The objective of this research is to improve the interleaved boost converter's performance in terms of settling time, rise time, and overshoot.
    Methods
    An improved PI (proportional-integral) controller is used to regulate the proposed converter's output voltage. In the present work, the Grey Wolf Optimization algorithm with an aggregation-function definition (GWO_AF) is utilized to tune the free coefficients of the PI controller; designing and implementing an optimized PI controller improves the closed-loop dynamic performance and stability (a toy tuning sketch follows this entry).
    Results
    The increased degree of freedom of the interleaved boost converter results from having several power switches in parallel channels in the proposed circuit. An additional advantage of the interleaved boost converter over the conventional one is its lower output-voltage ripple.
    Conclusion
    The usage of multi-objective optimization algorithms in designing a PI controller can significantly improve the performance parameters of an interleaved boost converter. Also, our findings indicated the excellent stability of the proposed converter when connected to the network.
    Keywords: Interleaved Boost Converter, Non-Minimum Phase System, Optimized Proportional Integral Controller, Grey Wolf Optimization algorithm with aggregation function definition (GWO_AF), Switch-Mode Power Supply
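    The sketch below illustrates the tuning loop named above: a standard GWO minimizing an aggregation (weighted-sum) function of overshoot, settling time, and rise time. The toy plant G(s) = 1/(s^2 + s + 1), the weights, and all GWO parameters are illustrative assumptions; the converter's actual small-signal model is not given in the abstract.

```python
import numpy as np
from scipy import signal

def step_metrics(kp, ki, t=np.linspace(0, 30, 3000)):
    """Closed-loop step response of a PI controller around a toy
    second-order plant G(s) = 1/(s^2 + s + 1)."""
    num = [kp, ki]
    den = [1.0, 1.0, 1.0 + kp, ki]          # s^3 + s^2 + (1+kp)s + ki
    _, y = signal.step((num, den), T=t)
    if not np.all(np.isfinite(y)) or np.max(np.abs(y)) > 10:
        return 1e6, 1e6, 1e6                # unstable: heavy penalty
    overshoot = max(0.0, y.max() - 1.0)
    err = np.abs(y - 1.0)
    settle = t[np.where(err > 0.02)[0][-1]] if np.any(err > 0.02) else 0.0
    i10, i90 = np.where(y >= 0.1)[0], np.where(y >= 0.9)[0]
    rise = t[i90[0]] - t[i10[0]] if len(i10) and len(i90) else 1e6
    return overshoot, settle, rise

def fitness(x, w=(2.0, 1.0, 1.0)):
    """Aggregation function: weighted sum of overshoot, settling time
    and rise time (weights are illustrative assumptions)."""
    ov, st, rs = step_metrics(*x)
    return w[0] * ov + w[1] * st + w[2] * rs

def gwo(fit, bounds, n_wolves=10, n_iter=30, rng=np.random.default_rng(0)):
    """Standard Grey Wolf Optimization: wolves follow alpha/beta/delta."""
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for it in range(n_iter):
        scores = np.array([fit(x) for x in X])
        alpha, beta, delta = X[np.argsort(scores)[:3]]
        a = 2.0 - 2.0 * it / n_iter          # linearly decreasing coefficient
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                cand.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(cand, axis=0), lo, hi)
    scores = np.array([fit(x) for x in X])
    return X[np.argmin(scores)]

kp, ki = gwo(fitness, (np.array([0.1, 0.1]), np.array([20.0, 20.0])))
print("tuned gains:", kp, ki, "metrics:", step_metrics(kp, ki))
```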
  • S. Nemati * Pages 57-68
    Background and Objectives
    Twitter is a microblogging platform for expressing assessments, opinions, and sentiments on different topics and events. While there have been several studies on sentiment analysis of tweets and on their popularity in the form of retweet counts, predicting the sentiment of first-order replies has remained a neglected challenge. Predicting the sentiment of tweet replies is helpful for both users and enterprises. In this study, we define a novel problem: given just a tweet's text, the goal is to predict the overall sentiment polarity of its upcoming replies.
    Methods
    To address this problem, we propose a graph convolutional neural network model that exploits the text's dependencies. The proposed model contains two parallel branches. The first branch extracts the contextual representation of the input tweets; the second extracts structural and semantic information. Specifically, a Bi-LSTM network and a self-attention layer are used in the first branch to extract syntactic relations, and an affective knowledge-enhanced dependency tree is used in the second branch to extract semantic relations. A graph convolutional network is then applied on top of these branches to learn the joint feature representation (a minimal graph-convolution sketch follows this entry), and a retrieval-based attention mechanism is used on its output to learn essential features from the final affective picture of the tweets.
    Results
    In the experiments, we only used the original tweets of the RETWEET dataset for training the models and ignored the replies of the tweets in the training process. The results on three versions of the RETWEET dataset showed that the proposed model outperforms the LSTM-based models and similar state-of-the-art graph convolutional network models.
    Conclusion
    The proposed model showed promising results, confirming that using only the content of a tweet, we can predict the overall sentiment of its replies. Moreover, the proposed model achieves results comparable to simpler deep models when trained on a public tweet dataset such as the ACL 2014 dataset, while outperforming both simple deep models and state-of-the-art graph convolutional models when trained on the RETWEET dataset. This shows the proposed model's effectiveness in extracting structural and semantic relations in tweets.
    Keywords: Sentiment Analysis, Deep Learning, Social media, Twitter, Graph Convolutional Neural Networks
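    A minimal sketch of the building block named above: one graph-convolution layer applied to token embeddings over a dependency graph. The layer form (Kipf-Welling normalized adjacency), the toy parse edges, and all dimensions are illustrative assumptions; the paper's two-branch architecture and attention layers are not reproduced.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph-convolution layer, H' = ReLU(A_norm @ H @ W), where A_norm
    is the symmetrically normalized adjacency with self-loops."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        a = adj + torch.eye(adj.size(0))            # add self-loops
        d = a.sum(dim=1).rsqrt()                    # D^{-1/2}
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        return torch.relu(a_norm @ self.lin(h))

# toy dependency graph over 5 tokens, each with a 16-d embedding
adj = torch.zeros(5, 5)
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]            # hypothetical parse edges
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0
h = torch.randn(5, 16)
out = GraphConv(16, 8)(h, adj)
print(out.shape)                                    # torch.Size([5, 8])
```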
  • M. Karimi *, D. Dideban Pages 69-78
    Background and Objectives
    Designing high-efficiency H-bridge (HB) drivers is one of the most challenging issues in power systems that drive AC/DC loads. HB driver circuits based on complementary MOSFETs are used to drive DC motors, power converters, and battery chargers for electric vehicles. When driving DC motors, dead-time (DT) generation is considered a major design factor because it prevents power-line short circuits (shoot-through) across the high- and low-side MOSFETs. In this paper, an HB driver is designed for linear actuators with shoot-through prevention in mind.
    Methods
    The propagation delay of logic gates is used to postpone the arrival of the gate-drive signals for the high/low-side MOSFETs, eliminating short circuits on the DC source (a toy timing sketch follows this entry).
    Results
    As mentioned, the propagation delay of the logic gates introduces an interval between the high- and low-side gate-drive signals, eliminating shoot-through. Although DT influences the rotational speed and output torque of a DC motor by increasing distortion and pulse intervals, linear actuators, owing to their low-velocity linear motion, do not require PWM control; therefore, DT has no substantial effect on driver performance.
    Conclusion
    Simulation and experimental results validate the method proposed in this paper. According to the specifications of the circuit designed in this paper, for loads that do not need rotational speed control, logic gates with proper propagation delay can be chosen to eliminate short circuits in complementary MOS switches without requiring DT compensation methods.
    Keywords: H-Bridge, Dead-time, Shoot-through, Logic gates, Propagation delay
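    The sketch below illustrates the timing idea in software: delaying only the turn-on edge of each complementary gate signal (standing in for a logic-gate delay chain) guarantees an interval in which neither switch conducts. The PWM shape and delay length are illustrative assumptions.

```python
import numpy as np

def gate_signals(pwm, delay_samples):
    """Derive complementary high/low-side gate signals from one PWM input.
    Each turn-ON edge is postponed by `delay_samples` (a stand-in for the
    propagation delay of a chain of logic gates), while turn-OFF is
    immediate, so the two switches are never on simultaneously."""
    def delay_rising(sig):
        out = sig.copy()
        for k in range(1, delay_samples + 1):
            out &= np.roll(sig, k)      # ON only if already ON for k samples
        return out
    return delay_rising(pwm), delay_rising(1 - pwm)

t = np.arange(200)
pwm = ((t // 50) % 2).astype(int)       # toy 50%-duty PWM
high, low = gate_signals(pwm, delay_samples=3)
assert not np.any(high & low)           # no shoot-through interval
print("dead-time samples per record:", int(np.sum((high | low) == 0)))
```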
  • E. Shamsinejad, T. Banirostam *, M. M. Pedram, A. Rahmani Pages 79-98
    Background and Objectives
    Nowadays, with the rapid growth of social networks, extracting valuable information from their voluminous data while protecting privacy and preventing the disclosure of identifying data is among the most challenging problems. In this paper, a model for maintaining privacy in big data is presented.
    Methods
    The proposed model is implemented with the Spark in-memory big-data framework in four steps. The first step is to load the raw data from HDFS into RDDs. The second step is to determine m clusters and cluster heads. The third step is to place the produced tuples in separate RDDs in parallel. The fourth step is to release the anonymized clusters. The suggested model is based on the K-means clustering algorithm, sits within the Spark framework, and uses the capacities of the RDD and MLlib components (a minimal clustering-step sketch follows this entry). Determining optimized cluster heads for each tuple's content, considering the data type, and using the formula of the suggested solution lead to releasing data in the optimized cluster with the lowest rate of data loss and identity disclosure.
    Results
    In the proposed model, which uses Spark framework facilities and optimized clusters in the K-means algorithm, the execution time over different megabyte-scale data volumes depends on multiple expiration times and the purposeful elimination of clusters, and data-loss rates are based on two-level clustering. According to the simulation results, as the data volume increases, the data-loss rate decreases compared to the FADS and FAST clustering algorithms, owing to the larger number of records handled by the proposed model. With the formula presented in the proposed model, determining the multiple selected attributes is simplified. Under 2-anonymity, the cost factor reaches its lowest value of 0.20 at k=9.
    Conclusion
    The proposed model strikes the right balance between high-speed execution, minimal data loss, and minimal data disclosure. It also presents a parallel algorithm that increases the efficiency of anonymizing data streams while simultaneously decreasing the information-loss rate.
    Keywords: big data, Anonymity, Confidentiality, Data Disclosure, Privacy
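    A minimal sketch of the clustering step described above, using Spark MLlib's KMeans and replacing each record's numeric quasi-identifiers with its cluster centroid as a simple generalization. The column names, the value of m, the HDFS paths, and the centroid-generalization rule are illustrative assumptions; the paper's formula for choosing cluster heads is not reproduced.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("anon-sketch").getOrCreate()
df = spark.read.csv("hdfs:///data/records.csv", header=True, inferSchema=True)

quasi = ["age", "zip", "income"]                 # hypothetical quasi-identifiers
vec = VectorAssembler(inputCols=quasi, outputCol="features")
m = 8                                            # number of clusters (illustrative)
model = KMeans(k=m, seed=1, featuresCol="features").fit(vec.transform(df))
clustered = model.transform(vec.transform(df))   # adds a 'prediction' column

centers = model.clusterCenters()
# generalize: replace each quasi-identifier with its cluster-centroid value
gen = clustered
for i, col in enumerate(quasi):
    mapping = F.create_map(*[x for c in range(m)
                             for x in (F.lit(c), F.lit(float(centers[c][i])))])
    gen = gen.withColumn(col, mapping[F.col("prediction")])
gen.drop("features").write.parquet("hdfs:///data/anonymized")
```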
  • M. Farmani, S. Farnam, M. Khani, Z. Torabi, Z. Shirmohammadi * Pages 99-114
    Background and Objectives
    With global population growth, shrinking natural resources and agricultural land, and increasingly unpredictable environmental conditions, food supply has become a serious concern for all countries of the world. The agricultural industry has therefore moved toward smart agriculture. Smart agriculture uses the Internet of Things: various sensors collect data (such as temperature, humidity, and light), a communication network sends and receives the data, and information systems manage and analyze it. Smart agriculture deals with a huge amount of data collected from farms, which poses fundamental challenges for analysis with legacy systems, such as insufficient storage space and processing delay. Computing paradigms are a key solution to the problems of time delay, security, storage-space management, and real-time analysis. They include cloud, fog, and edge computing, and combining them within smart agriculture has greatly transformed the industry. The purpose of this article is to provide a comprehensive review of the architecture of computing paradigms in smart-agriculture applications.
    Methods
    To achieve the goals of this article, the methodology is divided into two parts: article selection and review of the selected articles. The selected articles, covering computing paradigms, span 2019 to 2022. Each selected paper is then reviewed in detail in terms of the categories of computing paradigms, architectures, key points, advantages, and challenges.
    Results
    Computing paradigms have significant advantages, and combining them in a complementary way addresses many challenges. An architecture based on combined edge-fog-cloud computing is one of the best matches for smart agriculture.
    Conclusion
    Combining computing paradigms with smart agriculture overcomes the challenges of traditional, legacy systems; the paradigms compensate for one another's limitations.
    Keywords: intelligent agriculture, computing technology, cloud computing, Fog Computing, edge computing
  • M. Yousefzadeh, A. Golmakani *, G. Sarbishaei Pages 115-132
    Background and Objectives
    Designing an efficient tracker for crowded environments based on artificial intelligence and image processing faces several challenges, such as occlusion, fast motion, in-plane rotation, and variations in target illumination. Other challenges of online tracking are the time complexity of the algorithm, growing memory requirements, and the tracker's dependence on the target model. In this paper, for the first time, sketch-matrix theory in ridge regression is applied to video sequences.
    Methods
    A new object-tracking method based on element-wise matrix operations with online training is proposed, combining the kernelized correlation filter (KCF), circulant matrices, and a sketch matrix (a minimal KCF sketch follows this entry). The proposed algorithm is not only model-free but also more robust to scale variation, occlusion, and fast motion, and it reduces KCF drift.
    Results
    The simulation results demonstrate that the proposed sketch kernel correlation filter (SHKCF) increases the computational speed of the algorithm and reduces both the time complexity and the memory footprint. The proposed tracker is implemented and experimentally evaluated on video sequences from the OTB50, OTB100, and VOT2016 benchmarks.
    Conclusion
    The experimental results show that the SHKCF ranks first against state-of-the-art works not only in the OPE attribute evaluations for out-of-view, occlusion, and motion blur in terms of accuracy, but also in the evaluations for illumination variation, out-of-plane rotation, scale variation, out-of-view, occlusion, in-plane rotation, background clutter, fast motion, and deformation in terms of overlap. The accuracy, robustness, and speed obtained are 0.929, 0.93, and 35.4, respectively.
    Keywords: Artificial Intelligence, Video analysis, Object tracking, SHKCF, KCF, online tracker
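    A minimal 1-D sketch of the KCF machinery the method builds on: kernel ridge regression over all cyclic shifts of a template, solved in the Fourier domain. The Gaussian-kernel form, the toy template, and the desired response are illustrative assumptions; the paper's sketch-matrix extension is not reproduced.

```python
import numpy as np

def gaussian_correlation(x, z, sigma=0.5):
    """Kernel correlation of all cyclic shifts of x with z, computed in the
    Fourier domain (the circulant trick that underlies KCF)."""
    cross = np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(z))))
    d2 = np.maximum(np.sum(x**2) + np.sum(z**2) - 2.0 * cross, 0.0)
    return np.exp(-d2 / (sigma**2 * len(x)))

def train(x, y, lam=1e-4):
    """Ridge regression in the dual: alpha_hat = y_hat / (k_hat + lambda)."""
    k = gaussian_correlation(x, x)
    return np.fft.fft(y) / (np.fft.fft(k) + lam)

def detect(alpha_hat, x, z):
    """Response map over all cyclic shifts of the search window z."""
    k = gaussian_correlation(z, x)
    return np.real(np.fft.ifft(np.fft.fft(k) * alpha_hat))

# toy 1-D "template": a bump at position 12, desired Gaussian response at 0
n = 64
x = np.exp(-0.5 * ((np.arange(n) - 12) / 2.0) ** 2)
y = np.roll(np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0), n // 2)
alpha_hat = train(x, y)
z = np.roll(x, 7)                            # target shifted by 7 samples
print("estimated shift:", np.argmax(detect(alpha_hat, x, z)))   # ~7
```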
  • E. Heydari, M. Shams Esfand Abadi *, S.M. Khademiyan Pages 133-146
    Background and Objectives
    To improve the performance of the normalized subband adaptive filter (NSAF) algorithm for identifying block-sparse (BS) systems, this paper introduces a novel adaptive algorithm called BS-NSAF. An improved multiband-structured subband adaptive filter (IMSAF) algorithm for BS system identification, BS-IMSAF, is also proposed and converges faster than BS-NSAF. Since the computational complexity of BS-IMSAF is high, selective-regressor (SR) and dynamic-selection (DS) approaches are utilized, and BS-SR-IMSAF and BS-DS-IMSAF are introduced. Furthermore, the theoretical steady-state performance of the presented algorithms is analyzed.
    Methods
    All algorithms are derived by adding an L2,0-norm constraint to the proposed cost function, and the method of Lagrange multipliers is used to optimize it (an illustrative block-sparse update sketch follows this entry).
    Results
    The good performance of the proposed algorithms is demonstrated through several simulations in a system-identification setup. The algorithms are compared in various scenarios, optimum parameter values are obtained, and the computational complexity of the different algorithms is studied. In addition, the theoretical steady-state mean square error (MSE) values are compared with simulated values.
    Conclusion
    The BS-NSAF algorithm outperforms NSAF for BS system identification, and BS-IMSAF converges faster than BS-NSAF. To reduce the computational complexity, the BS-SR-IMSAF and BS-DS-IMSAF algorithms are introduced.
    Keywords: Sparse, L2,0-norm, IMSAF, Selective Regressors, Dynamic Selection
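    The sketch below conveys the flavor of a block-sparse update: a plain (fullband) NLMS step followed by a block-wise zero attractor derived from a smooth L2,0 surrogate, sum_b (1 - exp(-beta*||w_b||)). This fullband stand-in, the surrogate, and all step sizes are assumptions for illustration; the paper's subband (NSAF/IMSAF) derivations are not reproduced.

```python
import numpy as np

def bs_nlms(x, d, n_taps=32, block=4, mu=0.5, rho=1e-3, beta=10.0):
    """NLMS with a block-sparsity-promoting term: the gradient of the
    smooth L2,0 surrogate adds a block-wise zero attractor."""
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]           # regressor [x_n ... x_{n-31}]
        e = d[n] - u @ w
        w += mu * e * u / (u @ u + 1e-8)            # NLMS step
        wb = w.reshape(-1, block)                   # blocks of the filter
        norms = np.linalg.norm(wb, axis=1, keepdims=True) + 1e-12
        wb -= rho * beta * np.exp(-beta * norms) * wb / norms
        w = wb.ravel()
    return w

# toy block-sparse system: only taps 8..11 are nonzero
rng = np.random.default_rng(0)
h = np.zeros(32); h[8:12] = [0.9, -0.5, 0.3, 0.2]
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print("error norm:", np.linalg.norm(bs_nlms(x, d) - h))
```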
  • Z. Ghasemi-Naraghi, A. Nickabadi *, R. Safabakhsh Pages 147-162

    Background and Objectives

    Multi-task learning is a widespread mechanism for improving the learning of multiple objectives with a shared representation in one deep neural network. In multi-task learning, it is critical to determine how to combine the tasks' loss functions. The straightforward way is to optimize a weighted linear sum of the objectives with equal weights. Despite several studies that have attempted to solve real-time multi-person pose estimation from a 2D image, major challenges remain unresolved.

    Methods

    The prevailing solutions are two-stream, learning two tasks simultaneously. They intrinsically use a multi-task learning approach, predicting the confidence maps of body parts and the part affinity fields that associate the parts with each other. They optimize the average of the two tasks' loss functions, even though the two tasks have different levels of difficulty and uncertainty. In this work, we overcome this problem by applying a multi-task objective that captures task-based uncertainties without any additional parameters (a sketch of uncertainty-weighted loss combination follows this entry). Since the estimated poses can be more certain, the proposed method is called "CertainPose".

    Results

    Experiments are carried out on the COCO keypoints dataset. The results show that capturing the task-dependent uncertainty makes the training procedure faster and yields some improvements in human pose estimation.

    Conclusion

    The highlighted advantage of our method is improving real-time multi-person pose estimation without increasing computational complexity.

    Keywords: Realtime Multi-Person Pose Estimation, Multi-Task Learning, Loss Function, Task-Dependent Uncertainty
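    A sketch of the closest standard formulation of task-uncertainty weighting (Kendall et al., 2018), where each task loss is scaled by a learned log-variance: total = sum_i exp(-s_i) * L_i + s_i. Note the abstract states the paper's objective adds no extra parameters, so this two-scalar version is an illustrative baseline, not the paper's exact loss.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Homoscedastic task-uncertainty weighting: s_i = log(sigma_i^2)
    is learned; easy tasks get up-weighted, uncertain tasks down-weighted."""
    def __init__(self, n_tasks=2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, losses):
        total = 0.0
        for i, li in enumerate(losses):
            total = total + torch.exp(-self.log_vars[i]) * li + self.log_vars[i]
        return total

# toy usage: confidence-map loss and part-affinity-field loss (made-up values)
criterion = UncertaintyWeightedLoss(n_tasks=2)
l_conf = torch.tensor(0.8, requires_grad=True)
l_paf = torch.tensor(2.3, requires_grad=True)
loss = criterion([l_conf, l_paf])
loss.backward()
print(float(loss), criterion.log_vars.grad)
```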
  • Z. Raisi *, J. Zelek Pages 163-174
    Background and Objectives
    Signage is everywhere, and a robot should be able to take advantage of signs to help it localize (including Visual Place Recognition (VPR)) and map. Robust text detection and recognition in the wild is challenging due to pose, irregular text instances, illumination variations, viewpoint changes, and occlusion.
    Methods
    This paper proposes an end-to-end scene text spotting model that simultaneously outputs the text string and bounding boxes. The proposed model leverages a pre-trained Vision Transformer (ViT)-based architecture combined with a multi-task transformer-based text detector better suited to the VPR task. Our central contribution is introducing an end-to-end scene text spotting framework to adequately capture the irregular and occluded text regions in different challenging places. We first equip the ViT backbone with a masked autoencoder (MAE) to capture partially occluded characters, addressing the occlusion problem. Then, we use a multi-task prediction head to handle arbitrarily shaped text instances with polygon bounding boxes.
    Results
    The proposed architecture's performance for VPR was evaluated through several experiments on the challenging Self-Collected Text Place (SCTP) benchmark dataset, using the well-known precision-recall metrics. The final model achieved Recall = 0.93 and Precision = 0.8 on this benchmark.
    Conclusion
    The initial experimental results show that the proposed model outperforms state-of-the-art (SOTA) methods on the SCTP dataset, confirming the robustness of the proposed end-to-end scene text detection and recognition model.
    Keywords: Text detection, Text Recognition, Robotics Localization, Deep Learning, Visual Place Recognition
  • A. Bijari *, M. A. Mallaki Pages 175-186
    Background and Objectives
    In wireless communications, receivers play an essential role. Among receiver architectures, the direct-conversion receiver (DCR) is widely chosen for its high level of integration and low cost. However, it suffers from DC offset due to self-mixing, I/Q imbalance, and flicker noise.
    Methods
    This paper presents a new LNA-mixer with variable conversion gain (VG-LM) for wireless local area network (WLAN) applications. A low-noise transconductance amplifier (LNTA) is used as the transconductance stage of a Gilbert-cell mixer. The wide variable conversion-gain range is achieved by changing the LNTA's transconductance and the transconductance of the mixer's switching transistors.
    Results
    The proposed LNA-mixer is designed and simulated in 0.18-µm CMOS technology in Cadence Spectre RF. Post-layout simulations show that the proposed circuit operates at 2.4 GHz with a bandwidth of 10 MHz. The conversion gain varies from -3.9 dB to 23.9 dB as the control DC voltage varies from 0.5 V to 1.8 V. At high gain, the double-sideband noise figure (DSB-NF) is less than 3.7 dB, and the input third-order intercept point (IIP3) is -9 dBm. The power consumption is 22 mW from a 1.8 V supply, and the circuit occupies 743 µm × 775 µm of core chip area.
    Conclusion
    With the proposed circuit, the RF front-end receiver needs neither a separate low-noise amplifier (LNA) nor a variable-gain amplifier (VGA).
    Keywords: Variable Conversion Gain, Low Noise Transconductance Amplifier (LNTA), Active mixer, Noise figure
  • A. Matani, A. Sahafi *, A. Broumandnia Pages 187-216
    Background and Objectives

    Blockchain technology, as a distributed and tamper-proof data ledger, is attracting more and more attention from various fields around the world. As blockchains continuously grow in both transaction data and the number of nodes joining the network, scalability emerges as a challenging issue.

    Methods

    In this survey, existing blockchain scalability solutions are discussed under five categories: on-chain scalability, off-chain scalability, scalable consensus mechanisms, DAG-based scalability, and horizontal scalability through sharding. The novelties they introduce in the fundamental layers of the blockchain architecture are also investigated.

    Results

    The advantages and disadvantages of the discussed mechanisms are pointed out, and they are compared in terms of scalability metrics such as throughput, latency, bandwidth, and storage usage. This study thus provides a comprehensive understanding of the various aspects of blockchain scalability and the available solutions. Finally, the research directions and open issues in each category are discussed to motivate further work on blockchain scalability.

    Conclusion

    Scalability allows a blockchain system to sustain its performance as it grows; its absence hampers mass adoption of blockchains in practical environments. This paper presents a profound analysis of the existing scalability solutions, the issues and challenges they address, and those that remain unresolved, inspiring novel ideas for more scalable and efficient blockchains in the future.

    Keywords: Blockchain, Scalability, consensus, Sharding, Throughput
  • H. Karim Tabbahfar, F. Tabib Mahmoudi * Pages 217-226
    Background and Objectives
    Considering drought and global warming, monitoring changes in water bodies is very important for surface-water management and for preserving water resources in the natural ecosystem. Appropriate spectral indices are highly capable of distinguishing surface water bodies from other land covers. This research pays special attention to the effect of the different land-cover types around water bodies; for this reason, two different water bodies, a lake and a wetland, are used to evaluate the results.
    Methods
    The main objective of this research is to evaluate the capability of the genetic algorithm to select an optimal subset of spectral indices extracted from Sentinel-2 satellite imagery for distinguishing surface water bodies in two case studies: 1) the pure water behind the Karkheh dam, and 2) the Shadegan wetland, where water is mixed with vegetation. The set of optimal indices is obtained with the genetic algorithm followed by a support vector machine (SVM) classifier (a toy feature-selection sketch follows this entry).
    Results
    Evaluation of the classification results based on the selected spectral indices shows that the overall accuracy and kappa coefficient of the recognized surface water bodies are 98.18% and 0.9827 for the Karkheh dam, and 98.04% and 0.93 for the Shadegan wetland, respectively. Each spectral index was also evaluated in both study areas using a decision tree (DT) classifier; relative to the best DT results, the indices selected by the genetic algorithm followed by the SVM classifier improve overall accuracy by 1.42% in the Karkheh dam area and 1.56% in the Shadegan wetland area. Moreover, the classification results are superior to those of a Random Forest classifier using the optimized set of spectral features.
    Conclusion
    Applying the genetic algorithm to the spectral indices yields two optimal sets of effective indices that achieve the highest accuracy in separating water bodies from other land covers in the study areas. By considering collective performance, the genetic algorithm selects an optimal set of indices that detects water bodies more accurately than any single index.
    Keywords: Genetic Algorithm, Spectral Indices, Water Bodies, Classifier, Optimization
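    A toy sketch of genetic-algorithm feature selection with an SVM-based fitness, the pipeline named above. The chromosome encoding (one bit per candidate index), the GA operators and parameters, and the synthetic stand-in data are all illustrative assumptions; the paper's spectral indices and GA settings are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Fitness of a binary chromosome = cross-validated SVM accuracy
    on the selected spectral indices (columns of X)."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop=20, gens=30, p_mut=0.05, rng=np.random.default_rng(1)):
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5                 # random initial masks
    for _ in range(gens):
        f = np.array([fitness(m, X, y) for m in P])
        P = P[np.argsort(f)[::-1]]                 # sort best-first
        kids = []
        while len(kids) < pop - 2:                 # keep 2 elites
            a, b = P[rng.integers(0, pop // 2, 2)] # parents from top half
            cut = rng.integers(1, n)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut         # bit-flip mutation
            kids.append(child)
        P = np.vstack([P[:2], kids])
    f = np.array([fitness(m, X, y) for m in P])
    return P[np.argmax(f)]

# toy stand-in: 120 pixels, 10 candidate indices; only 2 and 5 matter
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 10))
y = (X[:, 2] + X[:, 5] > 0).astype(int)
print("selected indices:", np.flatnonzero(ga_select(X, y)))
```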
  • A. Kiani, F. Geran *, S. M. Hashemi Pages 227-234
    Background and Objectives
    In this paper, a closed-form mathematical formula is presented based on the E-field distribution of the proposed periodic structure; it helps designers calculate the width of the slots in a quasi-non-uniform leaky-wave antenna (QNULWA).
    Methods
    This method is based on two steps. In the first step, some important parameters of the proposed antenna design are extracted using simulation. In the second step, by solving a discrete difference equation, a general relation is obtained for these types of antennas. The method is investigated for the family of slot LWAs.
    Results
    A leaky-wave antenna is synthesized in the 15.5-18 GHz frequency range for a Gaussian radiation pattern. The simulation and design results agree closely over the 2.5 GHz bandwidth (15.5-18 GHz), which shows the accuracy of the formula. Also, as the frequency sweeps across this 2.5 GHz range, the main-lobe direction scans approximately 10 degrees (from 63 to 73 degrees). The antenna has an SLL of about -25 dB and 13 dB gain over the whole 15.5-18 GHz band.
    Conclusion
    The obtained formula helps antenna designers calculate the dimensions of this type of antenna for any pattern distribution.
    Keywords: Leaky Wave Antenna (LWA), Quasi Uniform LWA, Quasi Non-Uniform LWA, Waveguide Antenna
  • G. Asadi Ghiasvand, M. Zare *, M. Mahdavi Pages 235-246
    Background and Objectives
    Quantum-dot cellular automata (QCA) technology is a new method for designing digital circuits and systems. It is attractive to researchers due to special features such as low power consumption, high computation speed, and small dimensions.
    Methods
    This paper aims to design a three-bit counter with minimum area and delay. As circuit dimensions are reduced, the area and, consequently, the delay decrease as well. The proposed counter contains 96 cells, is designed in three layers, and has the least area and delay compared to prior designs.
    Results
    The circuit simulation shows an area occupation of 0.08 µm² and a delay of one clock cycle. In comparison with the best previous design, which includes 110 cells, the cell count, area, and delay are decreased by 12.72%, 27.27%, and 33.33%, respectively, and the circuit cost is improved by 54.32%. Power analysis shows a 13% reduction in total energy dissipation compared to the best prior work. The circuit's behavior under temperature variations was simulated and shows suitable stability, and the circuit continues to operate normally under displacement faults.
    Conclusion
    As the counter is an element used in many digital systems, reducing its area reduces the area of the whole system. The circuit delay has also decreased significantly, meaning the circuit can be employed in high-speed systems.
    Keywords: Digital circuit design, Quantum-Dot Cellular Automata, Multi-layer design, Three-Bit Counter
  • H. Hajibabaei, V. Seydi *, A. Koochari Pages 247-258
    Background and Objectives
    Community detection in networks is an important tool for revealing hidden structure in network analysis. One sign that a community exists in a network is the neighborhood density between nodes. The existence of a concept called a motif also indicates that a community with high edge density has correlations between nodes that go beyond their close neighbors. Motifs are repetitive edge patterns frequently seen in the network.
    Methods
    By estimating the triangular motif in the network, our proposed probabilistic motif-based community detection model (PMCD) helps find the communities. The idea of the proposed model is network analysis based on the structural density between nodes, detecting communities by estimating motifs with probabilistic methods (a minimal motif-counting sketch follows this entry).
    Results
    The suggested model outputs the strength of each node's affiliation with the communities and detects overlaps between communities. To evaluate the performance and accuracy of the proposed method, experiments are conducted on real-world and synthetic networks. The findings show that, compared to other algorithms, the proposed method detects communities more accurately and more densely.
    Conclusion
    The advantage of PMCD's probabilistic generative model is that it speeds up the computation of the hidden parameters and establishes communities based on the likelihood of triangular motifs. In fact, the proposed method shows a probabilistic correlation between observing node pairs in different communities and the increased presence of motif structure in the network.
    Keywords: community detection, Motif, Complex Networks, Probabilistic Model
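    A minimal sketch of the structural signal PMCD builds on: counting triangular motifs per node, here with networkx on a classic benchmark graph. Relating a node's triangle count to its degree gives the local clustering coefficient, a crude proxy for how deeply the node sits inside a dense community; the paper's probabilistic affiliation model itself is not reproduced.

```python
import networkx as nx

G = nx.karate_club_graph()                 # classic benchmark network
tri = nx.triangles(G)                      # triangles through each node
for v in sorted(G, key=tri.get, reverse=True)[:5]:
    deg = G.degree(v)
    # fraction of possible neighbor pairs that close a triangle
    density = 2 * tri[v] / (deg * (deg - 1)) if deg > 1 else 0.0
    print(f"node {v}: {tri[v]} triangles, local clustering = {density:.2f}")
```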
  • V. Esmaeili, M. Mohassel Feghhi *, S. O. Shahdi Pages 259-270
    Background and Objectives
    The world we live in every day is accompanied by enormous numbers of minute variations that affect us and our surroundings in several ways. These variations, so-called micro-changes, are low in intensity and brief in duration, which makes them almost invisible to the naked eye. Nonetheless, revealing them could open up a wide range of applications, from security, business, engineering, medicine, and seismology to psychology.
    Methods
    In this paper, we adopt a novel autonomous approach comprising partial differential equations (PDE) and the Cubic Uniform Local Binary Pattern (Cubic-ULBP) to spot micro-changes. Cubic-ULBP extracts 15 planes containing features robust to noise and illumination variations; the PDE then picks a single plane out of the 15 to reduce computation time (a single-plane LBP sketch follows this entry).
    Results
    The proposed method is not only optimized to get the job done but also provides promising results compared with most state-of-the-art methods: accuracy increases by about 36% and 40% on the CASME and CASME II databases, respectively.
    Conclusion
    The combination of the PDE and the Cubic-ULBP creates a strong, efficient method for detecting the apex frame and micro-movements. The method's performance is also promising when training samples are scarce.
    Keywords: Apex, Cubic-ULBP, micro-changes, micro-expression, Partial Differential Equations (PDE)
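    A minimal single-plane sketch of the LBP machinery above: uniform LBP histograms per frame, with the apex taken as the frame whose histogram deviates most (chi-square distance) from the onset frame. The single-plane simplification, the toy sequence, and the distance choice are illustrative assumptions; the full 15-plane Cubic-ULBP and the PDE step are not reproduced.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(frame, P=8, R=1):
    """Uniform LBP histogram of one grayscale frame (a single-plane
    stand-in for the 15-plane Cubic-ULBP descriptor)."""
    codes = local_binary_pattern(frame, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def apex_index(frames):
    """Apex = frame whose LBP histogram deviates most from the first
    (onset) frame, using chi-square distance."""
    ref = lbp_hist(frames[0])
    def chi2(h): return np.sum((h - ref) ** 2 / (h + ref + 1e-12))
    return int(np.argmax([chi2(lbp_hist(f)) for f in frames]))

# toy sequence: a faint ring grows then fades (peak at frame 5 of 0..9)
rng = np.random.default_rng(0)
base = rng.random((48, 48))
r = np.hypot(*np.mgrid[-24:24, -24:24])
frames = [base + 0.3 * np.exp(-(r - 6) ** 2) * (1 - abs(t - 5) / 5.0)
          for t in range(10)]
print("apex frame:", apex_index(frames))   # expected ~5
```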
  • M. Moradi, R. Havangi * Pages 271-282
    Background and Objectives
    Traction systems and wheel-rail adhesion are fundamental aspects of rail transportation. Depending on the vehicle's running conditions, different levels of adhesion are needed. Low adhesion between wheel and rail can be caused by leaves on the line or other contaminants, such as rust or grease; it can occur at any time of year, especially in autumn, disrupting passenger journeys. Increased wheel-rail adhesion in transit rail services yields better operating performance and system cost savings, whereas deceleration under low adhesion extends the braking distance, a safety issue. Because of many uncertain or even unknown factors, adhesion modelling is a time-consuming task. Furthermore, as direct measurement of the adhesion force poses inherent challenges, state observers emerge as the most viable choice for indirect estimation. An adequate level of adhesion between wheel and rail leads to reliable, efficient, and economical operation.
    Methods
    This study introduces an approach that leverages the behavior of traction motors to support control of wheel slip and adhesion in railway applications. The proposed method aims to better exploit the available adhesion, minimize wheel deterioration, and mitigate high creep levels. The wheel-rail adhesion force is estimated indirectly by focusing on the parameters of the induction motor serving as the railway traction system and on the dynamic relationships. In this study, we develop and apply a sixth-order extended Kalman filter (EKF) to create a highly efficient sensorless re-adhesion control system for railway vehicles (a generic EKF sketch follows this entry).
    Results
    The EKF-based design is compared with an unscented Kalman filter (UKF)-based design and with actual conditions, and is implemented in Matlab to check the accuracy and performance of state and parameter estimation. Experimental results show fast convergence, high precision, and low error for the EKF.
    Conclusion
    The proposed technique can identify and assess the current state of local adhesion while also providing real-time predictions of wear. Moreover, in combination with control methods, this approach can be very useful for achieving high wheel-rail adhesion performance under variable, complex track conditions.
    Keywords: Extended Kalman Filter, Adhesion Model, Railway Traction, Torque, Estimation
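    A generic predict/update EKF sketch, shown on a 2-state constant-velocity toy model rather than the paper's sixth-order induction-motor model (which the abstract does not specify). All matrices and noise levels are illustrative assumptions.

```python
import numpy as np

class EKF:
    """Generic extended Kalman filter: predict with a nonlinear model f
    (Jacobian F), update with a nonlinear measurement h (Jacobian H)."""
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        self.x = f(self.x)                       # nonlinear state transition
        self.P = F @ self.P @ F.T + self.Q       # F: Jacobian of f at x

    def update(self, z, h, H):
        y = z - h(self.x)                        # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# demo: estimate position/velocity from noisy position measurements
dt = 0.01
F = np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])
ekf = EKF(x0=np.zeros(2), P0=np.eye(2), Q=1e-4 * np.eye(2), R=np.array([[0.01]]))
rng = np.random.default_rng(0)
true = np.array([0.0, 1.0])                      # starts at 0, moves at 1 m/s
for _ in range(500):
    true = F @ true
    z = H @ true + 0.1 * rng.standard_normal(1)
    ekf.predict(lambda x: F @ x, F)
    ekf.update(z, lambda x: H @ x, H)
print("estimated state:", ekf.x, "truth:", true)
```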
  • H. Roodaki * Pages 283-293
    Background and Objectives
    Free-viewpoint video provides flexible viewpoint navigation over multiview recorded video, so many views must be sent to the receivers in encoded form. Making the coded bitstream scalable is one method of lowering the data volume, but meeting the constraints of the free-viewpoint application depends heavily on the chosen scalable modality, which significantly affects both the perceptual quality of the received sequences and the efficiency of compression.
    Methods
    To address the primary issues with free-viewpoint video, namely high bandwidth requirements and computational complexity, this paper proposes a scalable framework with two components: 1) a suitable scalable modality with data assignment to the base and enhancement layers, and 2) bit-budget allocation to the base and enhancement layers using a rate-control algorithm (a toy allocation sketch follows this entry). In our novel scalable modality, termed tile-based scalability, the idea of region of interest (ROI) is employed, and the ROI is extracted using the tile-coding concept first introduced in MV-HEVC.
    Results
    Compared to state-of-the-art techniques, our approach reduces computational complexity by an average of 44%, thanks to tile coding with parallel-processing capabilities. Furthermore, compared to standard MV-HEVC, our rate control achieves an average 17.7% bandwidth reduction and a 1.2 dB quality improvement on the Bjøntegaard-Bitrate and Bjøntegaard-PSNR scales.
    Conclusion
    Using the new tile-based scalability, a novel scalable framework for free-viewpoint video applications is proposed. It assigns appropriate regions to the base and enhancement layers based on the unique features of free-viewpoint scalability, and a rate-control strategy then allocates a suitable bitrate to both layers. Experimental results show that the proposed method achieves good coding efficiency with significantly less computational complexity than state-of-the-art techniques based on the λ-domain rate-control method.
    Keywords: Tile-based Scalability, Region of Interest, λ-domain rate control algorithm, MV-HEVC, Parallel processing
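    A toy sketch of λ-domain bit allocation, the rate-control family named above: a per-picture bit budget is split between base and enhancement layers, and each layer's bits-per-pixel is mapped to λ and QP via the R-λ model (Li et al.). The split ratio and the α, β, and QP-mapping constants (HM initialization values) are illustrative; real encoders adapt α and β after every coded picture.

```python
import math

ALPHA, BETA = 3.2003, -1.367          # common HM initialization values
C1, C2 = 4.2005, 13.7122              # QP = C1*ln(lambda) + C2 (HM mapping)

def allocate(target_kbps, fps, width, height, base_share=0.6):
    """Split the per-picture bit budget between base and enhancement
    layers, then map each layer's bits-per-pixel to lambda and QP."""
    bits_per_pic = target_kbps * 1000.0 / fps
    for name, share in (("base", base_share), ("enh", 1.0 - base_share)):
        bpp = share * bits_per_pic / (width * height)
        lam = ALPHA * bpp ** BETA     # R-lambda model: lambda = a * bpp^b
        qp = round(C1 * math.log(lam) + C2)
        print(f"{name}: {share * bits_per_pic / 1000:.1f} kbit, "
              f"bpp={bpp:.4f}, lambda={lam:.1f}, QP={qp}")

allocate(target_kbps=2000, fps=30, width=1920, height=1080)
```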