Table of Contents

International Journal of Information Security
Volume 6, Issue 1, January 2014

  • Publication date: 1393/06/13 (Solar Hijri)
  • Number of titles: 6
  • Saeed Shokrollahi, Fereidoon Shams, Javad Esmaeili Page 1
    The primary characteristic of an Ultra-Large-Scale (ULS) system is its ultra-large size along any related dimension. A ULS system is generally considered a system-of-systems with heterogeneous nodes and autonomous domains. As the size of a system-of-systems grows and interoperability demands between sub-systems increase, achieving a more scalable and dynamic access control system becomes an important issue. The Attribute-Based Access Control (ABAC) model is a proper candidate for such an access control system. The correct deployment and enforcement of ABAC policies in a ULS system requires secure and scalable collaboration among different distributed authorization components. A large number of these authorization components should be able to join different domains dynamically and communicate with each other anonymously. Dynamic configuration and reconfiguration of authorization components makes the authorization system more complex to manage and maintain in a ULS system. In this paper, an access control middleware is proposed to overcome the complexity of deploying and enforcing ABAC policies in ULS systems. The proposed middleware is data-centric and consists of two layers. The lower layer is a Data Distribution Service (DDS) middleware used for loosely coupled communication among authorization components. The upper layer is used for secure configuration and reconfiguration of authorization components; it exploits the OASIS model to provide a mechanism that allows only authorized components to obtain the related configuration and reconfiguration information. In the proposed middleware, the notions of combining ABAC policies, combining decisions, and combining information are used to achieve multi-policy, multi-decision, and multi-information capabilities. An executable model of the proposed middleware is also presented as a Colored Petri Net (CPN) model and is used to analyze the behavior of the proposed middleware.
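    As a rough illustration of the ABAC idea the middleware builds on, the following minimal Python sketch evaluates an access request against a set of attribute predicates. The policy structure, attribute names, and the evaluate helper are hypothetical and are not taken from the paper or the proposed middleware.

    ```python
    # Hypothetical ABAC sketch: a request carries attribute values; a policy is a
    # set of attribute predicates that must all hold for access to be permitted.

    def evaluate(policy, request):
        """Return "Permit" if every attribute predicate is satisfied, else "Deny"."""
        ok = all(pred(request.get(attr)) for attr, pred in policy.items())
        return "Permit" if ok else "Deny"

    # Illustrative policy: subjects with the "operator" role in the "power-grid"
    # domain may read sensor data during working hours.  All names are made up.
    policy = {
        "role":   lambda v: v == "operator",
        "domain": lambda v: v == "power-grid",
        "action": lambda v: v == "read",
        "hour":   lambda v: v is not None and 8 <= v < 18,
    }

    request = {"role": "operator", "domain": "power-grid", "action": "read", "hour": 10}
    print(evaluate(policy, request))  # Permit
    ```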
  • Shahram Rasoolzadeh, Zahra Ahmadian, Mahmoud Salmasizadeh, Mohammad Reza Aref Page 2
    An AES-like lightweight block cipher, namely Zorro, was proposed at CHES 2013. While it has a 16-byte state, it uses only 4 S-boxes per round. This weak nonlinearity was widely criticized, insofar as it has been directly exploited in all the attacks on Zorro reported so far, including weak-key, reduced-round, and even full-round attacks. In this paper, using some properties discovered by Wang et al., we present new differential and linear attacks on Zorro, both of which recover the full secret key with practical complexities. These attacks are based on very efficient distinguishers that have only two active S-boxes per four rounds. The time complexities of our differential and linear attacks are 2^55.40 and 2^45.44, and the data complexities are 2^55.15 chosen plaintexts and 2^45.44 known plaintexts, respectively. The results clearly show that the block cipher Zorro does not have enough security against differential and linear attacks.
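    To put the quoted figures in perspective, the short sketch below (an illustration only, using the complexities stated in the abstract) compares the attack costs with exhaustive search over Zorro's 128-bit key.

    ```python
    # Compare the reported attack complexities with brute force over a 128-bit key.
    # The exponents (55.40, 45.44, 55.15) are the figures quoted in the abstract.
    differential_time = 2 ** 55.40   # time complexity of the differential attack
    linear_time       = 2 ** 45.44   # time complexity of the linear attack
    brute_force       = 2 ** 128     # exhaustive key search

    print(f"differential attack: 2^{128 - 55.40:.2f} times cheaper than brute force")
    print(f"linear attack:       2^{128 - 45.44:.2f} times cheaper than brute force")
    ```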
  • Fatemeh Raji, Ali Miri, Mohammad Davarpanah Jazi Page 3
    There are critical privacy concerns in current online social networks (OSNs). Users' information is disclosed to entities that were never meant to access it. Furthermore, the notion of friendship is inadequate in OSNs, since the degree of social relationship between users changes dynamically over time. Additionally, users may define similar privacy settings for their friends in an OSN. In this paper, we present a centralized privacy-preserving framework for OSNs to address these issues. Using the proposed approach, users enforce confidentiality and access control on their shared data while their connections/relationships with other users are kept anonymous in the OSN. In this way, users themselves create and modify personalized privacy settings for their shared data while employing each other's privacy settings. Detailed evaluations of the proposed framework show the advantages of the proposed architecture compared to the most analogous recent approach.
  • Hamzeh Ghasemzaedh, Ali Payandeh, Mohammad Reza Aref Page 4
    Due to their wireless nature and hostile operating environments, security is a critical task in wireless sensor networks (WSNs). It is known that key management is an integral part of a secure network. Unfortunately, in most previous methods, security is compromised in favor of reducing energy consumption. Consequently, they lack perfect resiliency and are not fit for applications with high security demands. To improve the security of key management, a novel key management system based on broadcast messages from the base station is proposed. Another problem in WSNs is dead nodes that still hold cryptographic material (such as their keying material). An adversary may exploit these nodes to mount more effective attacks, so any secure key management system should also address this problem. It is argued that in the proposed method the keying material of dead nodes loses its validity and is therefore of no use to an adversary. Finally, simulation shows that the proposed method is almost three times more energy-efficient than conventional certificate-based key management systems.
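    The abstract does not detail the construction, but one plausible reading is that session keys are refreshed from authenticated base-station broadcasts, so material held by dead nodes expires once they stop following the broadcasts. The sketch below illustrates that reading only; the key derivation shown is an assumption, not the scheme proposed in the paper.

    ```python
    # Hedged illustration: refresh a node's session key from base-station broadcasts.
    # The HMAC-based derivation below is assumed for illustration purposes only.
    import hashlib
    import hmac

    def refresh_key(node_secret: bytes, round_no: int, round_nonce: bytes) -> bytes:
        """Derive the session key for one broadcast round."""
        msg = round_no.to_bytes(4, "big") + round_nonce
        return hmac.new(node_secret, msg, hashlib.sha256).digest()

    node_secret = b"preloaded-secret-of-node-17"   # installed before deployment
    k1 = refresh_key(node_secret, 1, b"nonce-from-base-station-round-1")
    k2 = refresh_key(node_secret, 2, b"nonce-from-base-station-round-2")

    # A dead node never receives the round-2 broadcast, so the key an adversary
    # extracts from it (k1) is no longer valid once the network moves to round 2.
    print(k1 != k2)  # True
    ```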
  • Asghar Tavakoly, Reza Ebrahimi Atani Page 5
    The Tor network is probably one of the most popular online anonymity systems in the world. It is built on volunteer relays from all around the world and has a strong scientific basis; it is designed to operate with low latency, which makes it suitable for tasks such as web browsing. Despite these advantages, the low latency also makes Tor vulnerable to timing and traffic analysis attacks, which have been the dominant attacks on the Tor network in recent years. In this paper, attacks on the Tor network are first classified, and timing and traffic analysis attacks are then described in more detail. We then present a new circuit scheduling scheme for the Tor network that preserves two properties, fairness and randomness, both of which make pattern and timing analysis attacks more difficult and in some cases impractical. Our scheduler distorts timing patterns and packet sizes in a random way (randomness) without imposing artificial delays or padding (fairness). Finally, using the new scheduler, one of the most powerful attacks in this area is weakened, and it is shown that analyzing traffic patterns and packet sizes becomes considerably more difficult.
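    A minimal sketch of a randomized, fairness-preserving circuit scheduler in the spirit described above: each round drains one cell from a circuit chosen uniformly at random among circuits with pending cells, adding no padding and no artificial delay. The data structures and function names are illustrative assumptions, not the scheduler implemented in the paper.

    ```python
    # Illustrative randomized circuit scheduler: ordering is randomized (randomness)
    # while no cell is delayed artificially and no padding is injected (fairness).
    import random
    from collections import deque

    def schedule_round(circuits):
        """Pick a random circuit with pending cells and emit its oldest cell."""
        ready = [cid for cid, queue in circuits.items() if queue]
        if not ready:
            return None
        cid = random.choice(ready)            # every pending circuit has an equal chance
        return cid, circuits[cid].popleft()   # oldest cell first, so nothing is held back

    circuits = {
        "circ-a": deque([b"cell-a1", b"cell-a2"]),
        "circ-b": deque([b"cell-b1"]),
    }
    while (emitted := schedule_round(circuits)) is not None:
        print(emitted[0], emitted[1])
    ```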
  • Mahdieh Zabihi, Majid Vafaei Jahan, Javad Hamidzadeh Page 6
    Today, the world's dependence on the Internet and the emergence of Web 2.0 applications have significantly increased the need for web robots that crawl sites to support services and technologies. Despite their advantages, robots may occupy bandwidth and reduce the performance of web servers. Despite a variety of studies, there is no accurate method for classifying huge data sets of web visitors in a reasonable amount of time; moreover, such a technique should be insensitive to the ordering of instances and produce deterministic, accurate results. Therefore, this paper presents a density-based clustering approach using Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to classify the web visitors of two real, large data sets. We propose two new features based on the behavioral patterns of visitors to describe them. In addition, we consider 12 common features and use the significance-of-difference test (t-test) to reduce the dimensionality and overcome one of the disadvantages of DBSCAN. Based on supervised evaluation metrics, the proposed algorithm achieves a Jaccard score of 95% and produces two clusters with entropy and purity of 0.024 and 0.97, respectively. Furthermore, in terms of clustering quality and accuracy, the proposed method performs better than state-of-the-art algorithms. Finally, it can be concluded that some known web robots imitate human users, which makes them difficult to identify.
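    The following sketch shows, in broad strokes, how sessions could be screened with a two-sample t-test and then clustered with DBSCAN using SciPy and scikit-learn. The toy features, thresholds, and the labelled sample used for the t-test are illustrative assumptions, not the data or the exact procedure of the paper.

    ```python
    # Hedged sketch: screen session features with a two-sample t-test, then cluster
    # the retained features with DBSCAN.  All feature values below are synthetic.
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Toy session-feature matrix: rows = visitor sessions, columns = features
    # (e.g. request rate, image-to-page ratio, robots.txt hits).
    humans = rng.normal(loc=[2.0, 0.6, 0.0], scale=0.3, size=(200, 3))
    robots = rng.normal(loc=[9.0, 0.1, 0.8], scale=0.3, size=(40, 3))
    X = np.vstack([humans, robots])

    # Keep only features whose means differ significantly between the two
    # behavioural groups in a small labelled sample (p < 0.05).
    keep = [j for j in range(X.shape[1])
            if ttest_ind(humans[:, j], robots[:, j], equal_var=False).pvalue < 0.05]
    X_reduced = StandardScaler().fit_transform(X[:, keep])

    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X_reduced)
    print("clusters:", sorted(set(labels) - {-1}), "noise points:", int((labels == -1).sum()))
    ```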