Table of Contents

  • Volume: 16, Issue: 2, 2020
  • Publication date: 1398/11/22
  • Number of articles: 6
  • Abdollah Eshghi, Mehrdad Kargari* Pages 1-13

    Every year, a large amount of money is lost to fraud in financial institutions. Detecting fraud is a complicated task, and limiting fraud detection systems to certain kinds of detection methods, such as purely supervised or purely unsupervised ones, does not seem efficient. In this paper, a combined framework for fraud detection systems is proposed, consisting of both supervised and semi-supervised methods in three main components: a rule-based component, a trend-analysis-based component, and a scenario-based component. The rule-based component is the supervised part of the framework; a decision tree, which is a cost-insensitive classification algorithm, is used for this component. In the trend-analysis-based component, which is a semi-supervised part of the proposed framework, the normal behavior of users is modeled and the degree of dissimilarity of newly arrived transactions is calculated. Finally, in the scenario-based component, the other semi-supervised part of the proposed framework, the degree of similarity between the sequence of transactions and known fraud scenarios is calculated. The final result is obtained by combining the results of all three components using a bagging method. Beyond detecting more frauds, combining the outputs of these components with the proposed bagging model makes the results more stable and reduces the number of false alarms remarkably.

    Keywords: Fraud detection, Supervised methods, Semi-supervised methods, Trend analysis, Bagging
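
    The final combination step above can be illustrated with a minimal sketch, assuming each component reduces to a boolean fraud flag and the bagging step acts as a majority vote (the paper's actual combiner may be more elaborate; the function name and threshold here are hypothetical).

```python
# Minimal sketch (not the paper's implementation) of combining the three
# fraud-detection components by majority vote. Component outputs are
# assumed to be boolean flags for a single transaction.

def combine_alerts(rule_flag: bool, trend_flag: bool, scenario_flag: bool) -> bool:
    """Raise a fraud alert only when at least two of the three
    components agree, which suppresses isolated false alarms."""
    votes = sum([rule_flag, trend_flag, scenario_flag])
    return votes >= 2

# A transaction flagged by only one component raises no alert:
print(combine_alerts(True, False, False))  # False
print(combine_alerts(True, True, False))   # True
```

    Requiring agreement of two components is one simple way to trade a few missed frauds for the stability and false-alarm reduction the abstract reports.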
  • Soheila Ramezani Pour, Marjan Naderan, SaeidAllah Mortazavi* Pages 14-29

    The emergence of the digital information era and the rapid development of the Internet have gradually shifted information from paper to electronic form, enabling users to search news and books electronically. Thus, systems for information retrieval appear to be essential. This paper proposes a system for text classification by means of semi-supervised fuzzy clustering with a weighted feature vector. In the proposed method, after a preprocessing phase, a genetic algorithm together with the TF-IDF method is used for dimensionality reduction. Accordingly, the features with the highest discriminating power are chosen, and finally the documents are classified with the clustering algorithm C-W-FCM. In fact, the proposed clustering algorithm applies the Euclidean distance with different weights for different dimensions. To evaluate the proposed approach, a prominent clustering validity criterion, the Fukuyama and Sugeno (FS) index, is used on the Reuters dataset. It is assumed that a small number of documents, called the seeded set, have labels. Simulation results show that the proposed approach is 27 to 33% superior to conventional clustering algorithms in determining clusters, based on the evaluation criteria. In addition, the proposed clustering algorithm increases the system's effectiveness, especially when documents are highly similar to each other.

    Keywords: Text classification, fuzzy clustering, semi-supervised, genetic algorithm
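
    The per-dimension weighting that distinguishes C-W-FCM from plain fuzzy C-means can be sketched as a weighted Euclidean distance. The vectors and weights below are illustrative only, not values from the paper.

```python
import math

# Sketch of the weighted Euclidean distance underlying the C-W-FCM
# clustering step: each feature dimension has its own weight, so
# discriminative TF-IDF features influence the distance more.

def weighted_euclidean(x, c, w):
    """Distance between document vector x and cluster center c,
    with per-dimension weights w (higher weight = more discriminative)."""
    return math.sqrt(sum(wi * (xi - ci) ** 2 for xi, ci, wi in zip(x, c, w)))

x = [0.9, 0.1, 0.4]   # TF-IDF vector of a document (hypothetical)
c = [1.0, 0.0, 0.5]   # a cluster center
w = [2.0, 0.5, 1.0]   # feature weights, e.g. tuned by the genetic algorithm
print(round(weighted_euclidean(x, c, w), 4))
```

    With all weights equal to 1 this reduces to the ordinary Euclidean distance, so the weighting is a strict generalization.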
  • Saeed Hajizadeh, MohammadAmin Fazli, Jafar Habibi* Pages 30-37

    Scoring rules are common methods for incentivizing experts to report opinions consistent with their beliefs. Information markets, such as prediction markets and decision markets, use scoring rules to elicit the most accurate predictions experts can make. In these markets, experts are invited to buy and sell contracts under which they are paid if their prediction about an uncertain future event turns out to be true. The final trading prices can be interpreted as an aggregation of their predictions of that event. In this paper, we propose new mechanisms (prediction-oriented and decision-oriented) for predicting the future value of uncertain continuous variables. In their basic form, these mechanisms are scoring rules with a new paradigm. This paper also includes the results of a case study (a prediction-oriented mechanism) predicting the outcome of the 11th presidential election of Iran in 2013, used to analyze the performance of our model. Besides the fact that the mechanism's average absolute error in predicting the nominees' percentages was low, about 6.53%, it also predicted the final ranking of all nominees correctly.

    Keywords: Scoring Rule, Decision Scoring Rule, Variable Prediction, Election Prediction
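
    The incentive property the abstract relies on can be illustrated with a classic proper scoring rule, the quadratic (Brier) score. This is a textbook rule shown for intuition, not the paper's new mechanism.

```python
# Sketch of the quadratic (Brier) scoring rule for a binary event:
# an expert maximizes expected score by reporting their true belief,
# which is the "properness" that scoring-rule mechanisms build on.

def brier_score(report: float, outcome: int) -> float:
    """Score for reported probability `report`, given the realized
    outcome (1 or 0). Higher is better."""
    return 1.0 - (outcome - report) ** 2

def expected_score(report: float, belief: float) -> float:
    """Expected score when the expert's true probability is `belief`."""
    return belief * brier_score(report, 1) + (1 - belief) * brier_score(report, 0)

# Reporting the true belief (0.7) beats any misreport:
print(expected_score(0.7, 0.7) > expected_score(0.9, 0.7))  # True
print(expected_score(0.7, 0.7) > expected_score(0.5, 0.7))  # True
```

    The paper's contribution extends this idea from probabilities of discrete events to the value of a continuous variable, such as a vote percentage.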
  • Zeinab Torabi, Armin Belghadr* Pages 38-43

    It is common to speed up addition, subtraction, and multiplication via Residue Number Systems (RNS). The key advantage of RNS is its limited carry propagation, which comes from breaking long-word operations down into a number of independent small-word parallel operations. Applications with repeated additions and multiplications have benefited most from RNS, while the lack of efficient realizations for other operations has prevented a wider range of applications. Comparison, as a basic building block for other difficult RNS operations such as division, has been the subject of numerous studies. Comparison via dynamic range partitioning has been shown to be the most successful implementation for the 3-moduli set {2^n, 2^n ± 1}. In this paper, we propose an efficient RNS comparator for the moduli set {2^n − 1, 2^n, 2^(n+1) − 1} via the dynamic range partitioning technique. Synthesis results reveal 27.5% delay, 64% area, 59.7% power dissipation, and 71% energy consumption reductions for the proposed design against a straightforward comparator (i.e., reverse conversion followed by a binary comparator).

    Keywords: Computer Arithmetic, Residue Number System, Difficult Operations, Dynamic Range Partitioning, Comparison
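
    The carry-free parallelism that motivates RNS can be sketched for the paper's moduli set {2^n − 1, 2^n, 2^(n+1) − 1}. The comparator itself is not reproduced here; this only shows the independent per-channel arithmetic, with n = 4 chosen arbitrarily for illustration.

```python
# Sketch of residue representation and channel-wise addition for the
# moduli set {2**n - 1, 2**n, 2**(n+1) - 1}. Each residue channel is an
# independent small-word operation: no carry propagates between channels.

n = 4
MODULI = (2**n - 1, 2**n, 2**(n + 1) - 1)   # (15, 16, 31)

def to_rns(x):
    """Residue representation of x for the chosen moduli set."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Add two RNS numbers channel by channel."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

a, b = 100, 57
assert rns_add(to_rns(a), to_rns(b)) == to_rns(a + b)
print(to_rns(a), to_rns(b), to_rns(a + b))
```

    Magnitude comparison is hard in this representation precisely because no single channel reveals which operand is larger; that is the gap the paper's dynamic range partitioning comparator addresses.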
  • MohammadAmin Ahmad Akhoundi, Ehsan Valavi* Pages 44-53

    This paper proposes a new approach to multi-sensor data fusion. It suggests that aggregation of data from multiple sensors can be done more efficiently when information about the sensors' different characteristics is taken into account. Like most research on effective sensor characteristics, especially in control systems, our focus is on sensor accuracy and frequency response. A rule-based fuzzy system is presented for fusing raw data obtained from sensors with complementary characteristics in accuracy and bandwidth. Furthermore, a fuzzy predictor system is suggested, aimed at the extreme accuracy commonly needed in highly sensitive applications. The advantages of the proposed sensor fusion system are shown by simulating a control system that uses the fusion system for output estimation.

    Keywords: Sensor data fusion, Fuzzy Systems
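
    The idea of exploiting complementary accuracy and bandwidth can be illustrated with a classical first-order complementary filter, shown here instead of the paper's fuzzy rule base; the blend factor `alpha` and the sensor sequences are hypothetical.

```python
# Sketch of fusing two complementary sensors: a slow but accurate sensor
# supplies the low-frequency component, while increments from a fast but
# drifting sensor supply the high-frequency component.

def complementary_filter(slow_seq, fast_seq, alpha=0.9):
    """First-order complementary filter over two synchronized readings:
    est = alpha * (est + fast increment) + (1 - alpha) * slow reading."""
    est = slow_seq[0]
    prev_fast = fast_seq[0]
    out = []
    for s, f in zip(slow_seq, fast_seq):
        est = alpha * (est + (f - prev_fast)) + (1 - alpha) * s
        prev_fast = f
        out.append(est)
    return out

# When both sensors agree on a constant signal, the estimate stays put:
print(complementary_filter([5.0] * 5, [5.0] * 5))
```

    A fuzzy rule base, as in the paper, can be viewed as replacing the fixed `alpha` with a blend that adapts to the operating conditions of each sensor.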
  • MohammadSadegh Sadeghi, Siavash Bayat Sarmadi, Shaahin Hessabi* Pages 54-68

    On-chip network security is a dominant necessity for secure many-core platforms that leverage the concepts of the cloud and embedded systems-on-chip. In fact, there is a need for secure, low-latency, and highly utilized on-chip communication. While various techniques exist to mitigate and eliminate timing side-channel attacks, these attacks still present a security threat for networks-on-chip (NoCs). This threat stems from the latencies experienced by a malicious application, which can divulge information while victim applications are accessing shared resources. We propose an approach to eliminating timing side-channel attacks using isolation-based dynamic mapping. To ensure non-interference between the flows of malicious and victim applications and the security of the NoC, our proposed schemes (Isolated and Liso) employ several techniques, including (1) a scalable strategy that maximizes utilization of the system, and (2) a low-overhead isolation approach for the NoC through rectangular and L-shape mappings. We show that the Isolated mapping scheme degrades the throughput of our secure NoC system, while the Liso scheme improves it without leaking information. Our best solution offers throughput that is on average 16% lower than that of an optimized non-secure baseline, and requires no router modification, in contrast to the best known competing schemes.

    Keywords: Non-interference, Security, On-Chip Networks, Timing Side-Channel Attacks
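
    The non-interference condition behind rectangular isolation mapping can be sketched as a disjointness check on mesh regions; the mesh size, coordinate convention, and region placements below are hypothetical, not taken from the paper.

```python
# Sketch of the isolation idea: applications mapped to disjoint
# rectangular regions of a 2-D mesh share no routers, so a malicious
# application cannot observe latency variations caused by a victim,
# provided routing keeps each flow inside its own region.

def rects_disjoint(r1, r2):
    """Each region is (x0, y0, x1, y1) in inclusive mesh coordinates.
    Returns True when the two regions share no router."""
    x0a, y0a, x1a, y1a = r1
    x0b, y0b, x1b, y1b = r2
    return x1a < x0b or x1b < x0a or y1a < y0b or y1b < y0a

victim   = (0, 0, 1, 3)   # left two columns of a 4x4 mesh
attacker = (2, 0, 3, 3)   # right two columns
print(rects_disjoint(victim, attacker))  # True
```

    A dynamic mapper can run such a check when placing a newly arriving application, rejecting placements that would overlap an already mapped security domain.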