Table of Contents

International Journal of Web Research
Volume: 1, Issue: 1, Spring-Summer 2018

  • Publication date: 1397/01/12
  • Number of articles: 7
  • Fatemeh Ebadifard, Seyed Morteza Babamir * Pages 1-10

    A workflow consists of a set of independent tasks, and workflow scheduling in a cloud environment is a proper permutation of these tasks over virtual machines. Selecting the permutation with the minimum completion time from among all arrangements is an NP-hard problem, and it becomes harder as the number of requests and the diversity of virtual machines increase. Given that, in addition to the makespan, other objectives must be considered in a real environment, and that these objectives in most cases conflict with one another, the scheduling problem becomes even more complicated. Therefore, multi-objective heuristic algorithms are a suitable approach to these problems. To this end, we extended a recent heuristic algorithm known as black hole optimization (BHO) and present a multi-objective scheduling method for workflow applications based on the Pareto optimizer algorithm. Since multi-objective algorithms return a set of permutations with an optimal trade-off among conflicting objectives, we then apply a decision-making method, the weighted aggregated sum product assessment (WASPAS), to select a solution that offers a suitable permutation from the Pareto-optimal set (a minimal sketch of the WASPAS scoring follows the keywords below). Our proposed method is able to take into account user requirements as well as the interests of service providers. Using balanced and unbalanced workflows, we compare our proposed method with the SPEA2 and NSGA2 algorithms on three conflicting objectives: (1) makespan, (2) cost, and (3) resource efficiency.

    Keywords: Cloud, Makespan, Cost, Efficiency, WASPAS
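
    A minimal sketch of the WASPAS decision step described above: each Pareto solution is scored as Q = λ·WSM + (1−λ)·WPM over min/max-normalized criteria, and the highest-scoring schedule is returned. The criterion weights, λ, and the sample Pareto points are illustrative assumptions, not values from the paper.

```python
# WASPAS scoring over a hypothetical Pareto front of schedules.
# Criteria: makespan (lower is better), cost (lower is better),
# resource efficiency (higher is better); weights are assumed.

def waspas(solutions, weights, benefit, lam=0.5):
    """Return the index of the solution with the highest WASPAS score Q."""
    cols = list(zip(*solutions))                      # criterion-wise columns
    norm = []
    for j, col in enumerate(cols):
        if benefit[j]:                                # benefit criterion
            norm.append([x / max(col) for x in col])
        else:                                         # cost criterion
            norm.append([min(col) / x for x in col])
    scores = []
    for i in range(len(solutions)):
        wsm = sum(weights[j] * norm[j][i] for j in range(len(weights)))
        wpm = 1.0
        for j in range(len(weights)):
            wpm *= norm[j][i] ** weights[j]
        scores.append(lam * wsm + (1 - lam) * wpm)
    return max(range(len(scores)), key=scores.__getitem__)

# Hypothetical Pareto set: (makespan [s], cost [$], resource efficiency [0-1])
pareto = [(120, 9.5, 0.62), (150, 7.0, 0.70), (135, 8.2, 0.75)]
best = waspas(pareto, weights=[0.4, 0.3, 0.3], benefit=[False, False, True])
print("selected schedule:", pareto[best])
```
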
  • Vahid Bashiri, Hossein Rahmani *, Hamid Bashiri Pages 11-17
    Discovering important nodes in graph data has attracted considerable attention. Social networks are good examples of graph data, in which each node represents a person and each edge represents a relationship between two people. Several methods exist for discovering important nodes in graph data. In this paper, important people are defined by their roles in a society or organization, and we propose an efficient method to discover leaders in a graph network. For this purpose, both a structural feature, entropy, and inherent features of the social network, namely the sender (from), recipient (to), subject, and time of messages, are used to build a novel method for discovering important nodes in social networks (a small entropy sketch follows the keywords below). The proposed method was applied to the Enron dataset and compared with previous methods. It succeeded in, first, discovering more important roles in the Enron dataset; second, identifying the CEO as the leader of the Enron Corporation; and third, discovering two out of four CEOs among the top VIPs.
    Keywords: Graph Mining, Social Network, Important Node, Entropy
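
    One way to read the structural entropy feature mentioned above: a node that spreads its messages evenly across many contacts has a high-entropy interaction distribution. The sketch below ranks nodes of a toy message graph by this entropy; it omits the message-metadata features the paper combines with it, and all data are hypothetical.

```python
# Rank nodes of a message graph by the Shannon entropy of their
# outgoing-interaction distribution (one possible structural feature).
import math
from collections import defaultdict

def interaction_entropy(messages):
    """messages: iterable of (sender, recipient) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for src, dst in messages:
        counts[src][dst] += 1
    entropy = {}
    for src, nbrs in counts.items():
        total = sum(nbrs.values())
        entropy[src] = -sum((c / total) * math.log2(c / total)
                            for c in nbrs.values())
    return entropy

# Toy example: 'a' spreads messages widely, 'b' writes to one person only.
msgs = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "a"), ("b", "a")]
for node, h in sorted(interaction_entropy(msgs).items(), key=lambda kv: -kv[1]):
    print(node, round(h, 3))
```
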
  • Mohsen Arab, Mahdieh Hasheminezhad * Pages 18-29
    The discovery and analysis of community structures in networks has attracted increasing attention in recent years. Although there are several well-known quality metrics for detecting and evaluating communities, each of them has its own limitations. In this paper, we first discuss these limitations for community detection and evaluation in depth, based on the definitions and formulations of the quality metrics. We then perform experiments on artificial and real-world networks to demonstrate these limitations. The quality metrics analyzed in this paper are modularity, performance, coverage, normalized mutual information (NMI), conductance, internal density, triangle participation ratio, and cut ratio (a small modularity sketch follows the keywords below). Compared with previous works, we examine the limitations of modularity in much greater detail. Moreover, for the first time, we present some limitations of NMI. In addition, although it is known that performance tends to take high values on large graphs, we explore this limitation through its formulation and discuss several specific cases in which performance obtains high scores even on small graphs.
    Keywords: Limitations of Quality Metrics, Community Detection, Quality Function, Social Networks, Data Mining
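
    As a concrete reference point for one of the metrics discussed above, the sketch below computes Newman-Girvan modularity, Q = Σ_c [ e_c/m − (d_c/2m)² ], for a toy partition; the graph and partition are illustrative only and are not taken from the paper's experiments.

```python
# Newman-Girvan modularity of a partition of an undirected graph.

def modularity(edges, communities):
    """edges: list of undirected (u, v) pairs; communities: dict node -> label."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # observed fraction of edges falling inside communities
    inside = sum(1 for u, v in edges if communities[u] == communities[v]) / m
    # expected fraction under the configuration (null) model
    expected = 0.0
    for c in set(communities.values()):
        d_c = sum(d for n, d in degree.items() if communities[n] == c)
        expected += (d_c / (2 * m)) ** 2
    return inside - expected

# Two triangles joined by a single bridge edge.
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
part = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
print(round(modularity(edges, part), 3))   # ~0.357
```
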
  • Hosniyeh S.Arian *, Omid R.B. Speily Pages 30-38

    Social media has brought major changes to various areas, one of which is marketing in e-commerce systems. The relationship between customers and businesses is highly valued by marketers, and customers' use of social media has given marketers the opportunity to obtain more information from customer feedback. Recently, marketers have been looking for the customers on social media who have the greatest influence on other customers: such users can shape other customers' opinions about a new product. In addition, influential users may have the greatest impact within specific domains, such as the domain of a particular product or service. Therefore, in this article, influential users on social media are studied in terms of their impact in different domains. The proposed approach identifies influential users through a social knowledge management approach. The knowledge cycle consists of knowledge organization, storage, retrieval, and discovery, and across this cycle all explicit and implicit knowledge is used to accurately reveal influential users. In this paper, the problem is first mapped onto the knowledge management cycle, and within the steps of this cycle, artificial intelligence techniques such as Bayesian networks are used to classify and identify influential users (a simplified classification sketch follows the keywords below). To examine the proposed method, various scenarios based on a variety of datasets are used for evaluation, and the results of these studies show the high accuracy of the proposed method in identifying influential users.

    Keywords: Influential Users, Publishing, Knowledge Management, Social Networks, E-shops, Domains
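
    To make the classification step above concrete, the sketch below labels users as influential or not from simple activity features. The paper uses Bayesian networks; a naive Bayes classifier is used here only as the simplest Bayesian stand-in, and the features and labels are entirely hypothetical.

```python
# Hedged stand-in for the Bayesian classification of influential users.
from sklearn.naive_bayes import GaussianNB

# feature vector per user: [posts per week, replies received, reshare ratio]
X_train = [[2, 1, 0.05], [30, 120, 0.60], [5, 3, 0.10],
           [25, 90, 0.55], [1, 0, 0.00], [40, 200, 0.70]]
y_train = [0, 1, 0, 1, 0, 1]        # 1 = influential, 0 = not influential

model = GaussianNB().fit(X_train, y_train)
print(model.predict([[28, 100, 0.5], [3, 2, 0.02]]))   # expected: [1 0]
```
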
  • Farzaneh Norouzi, Fatemeh Azimzadeh Pages 39-50

    With the increasing growth of science, various methods have been proposed to retrieve more, and more relevant, scientific documents according to users' needs and requests. Since complete information is not available for some documents, users have to obtain metadata, including the authors' names and affiliations, the publication date, and the references used in a document, by accessing the documents themselves. Therefore, extracting information based on the structural and geometrical characteristics of a document can be very helpful in retrieving relevant documents. In this paper, after extracting metadata using the geometrical features of documents and a graph-based model, the relationships between different entities, such as documents, authors, journals, and conferences, are modeled for more efficient information retrieval. The extracted and refined data, stored in the graph model, are accessible through a web-based user interface. To produce the results of each query, the related documents are retrieved based on the graph's relationships, the quality of each document, and their citation scores. The PubMed and D2SPR databases are used to evaluate the proposed method. The results show that the number of retrieved documents with the proposed method is 60% higher than with the PubMed search engine and 80% higher than with D2SPR. Moreover, the nDCG of the proposed approach, 0.824 on average, is well above the 0.30 average of the PubMed search engine (a short nDCG sketch follows the keywords below). Finally, the average F-measure of the proposed system on the D2SPR dataset is 0.834, compared with 0.71 for the existing system.

    Keywords: Metadata Extraction, Information Retrieval, Knowledge Graph Data Model, Structural Data
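
    For reference, the sketch below computes nDCG, the ranking metric reported above, for a single ranked result list; the relevance grades are made-up demonstration values, not the paper's data.

```python
# Normalized discounted cumulative gain (nDCG) for one query.
import math

def dcg(relevances):
    return sum((2 ** rel - 1) / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# relevance grades of the top-5 results returned for one query (hypothetical)
ranked = [3, 2, 3, 0, 1]
print(round(ndcg(ranked), 3))
```
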
  • Hamid Jafary*, MohammadTaghi Taghavifard, Payam Hanafizadeh, Abolfazl Kazazi Pages 51-61

    Effective measurement and accurate analysis of the quality of news sites are necessary steps in improving their performance and reporting methods. The present research was conducted to propose an intelligent quality assessment method for news sites, and it consists of two parts. In the first part, the Delphi technique, a qualitative method, is used to identify indicators for assessing the quality of news sites. For this purpose, 5 dimensions, 69 indicators, and several sub-criteria were identified from five perspectives: design and graphics, content, technical system, services, and executive management of news sites. Subsequently, the quality of 61 news sites was measured, and the quality score of each site was obtained using a questionnaire. In the second part, a neural network model with 9 inputs and one output was designed (a small sketch of such a model follows the keywords below). The results and the Mean Squared Error (MSE) values indicate the validity of the proposed model.

    Keywords: Intelligent, Evaluation, Service Quality, News Sites, Neural Network, ANFIS
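
    A minimal sketch of the second part described above: a feed-forward network that maps 9 aggregated quality indicators to a single quality score and is evaluated with MSE. The hidden-layer size, training split, and synthetic data are assumptions, not the paper's configuration.

```python
# 9-input, 1-output regression network evaluated with MSE (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((61, 9))                                  # 61 sites x 9 indicators (synthetic)
y = X @ rng.random(9) + 0.1 * rng.standard_normal(61)    # synthetic quality scores

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:50], y[:50])                                # train on 50 sites, hold out 11
print("MSE:", mean_squared_error(y[50:], model.predict(X[50:])))
```
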
  • Soghra Lazemi*, Hossein Ebrahimpour Komleh Pages 62-67

    Emotions are part of people's everyday communication and one of the important elements of human nature. We can recognize a person's emotions from outward behaviors such as speech, facial expressions, body movements, and gestures. Another such behavior is a person's grammar and writing style, which reflects his or her inner state. Since people nowadays are more likely to use textual tools to communicate, emotion extraction from text has attracted much attention. Most methods treat emotion extraction from text as a classification problem; consequently, most studies depend on a large number of handcrafted features and on feature engineering to enhance classification performance. Considering that a text may include more than one emotion, of which only one is the dominant emotion, we model emotion extraction as a multi-label classification problem by removing the fixed boundaries between emotions. We then recognize all the emotions present in a sentence as well as its dominant emotion. Our goal is to achieve better performance with only minimal feature engineering. To this end, we propose a hybrid deep learning model that benefits from both CNN and RNN architectures (a minimal sketch of such a hybrid follows the keywords below). The experiments are conducted on a multi-label dataset of 629 sentences with eight emotion categories. Based on the results, our proposed method performs better (by about 0.12%) than available multi-label learning methods (e.g., BR, RAKEL, and MLkNN).

    Keywords: Emotion Extraction, Multi-Label Classification, Machine Learning, Structural, Semantic Information, Deep Learning, Natural Language Processing
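
    A minimal sketch of a CNN+RNN hybrid for multi-label emotion classification, in the spirit of the model described above: a Conv1D layer extracts local n-gram features, an LSTM captures sequence context, and a sigmoid output emits one independent probability per emotion. The vocabulary size, sequence length, and layer sizes are assumptions, not the paper's actual architecture.

```python
# Hedged sketch of a hybrid CNN+RNN multi-label emotion classifier (Keras).
from tensorflow.keras import layers, models

NUM_EMOTIONS = 8            # eight emotion categories, as in the dataset above
VOCAB, MAX_LEN = 10000, 60  # assumed vocabulary size and sentence length

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),                    # token-id sequence
    layers.Embedding(VOCAB, 128),
    layers.Conv1D(64, 3, activation="relu"),           # local n-gram features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                   # sequence context
    layers.Dense(NUM_EMOTIONS, activation="sigmoid"),  # one probability per label
])
model.compile(optimizer="adam", loss="binary_crossentropy")  # multi-label loss
model.summary()

# At prediction time, labels above a threshold give the sentence's emotions,
# and the arg-max probability can be taken as the dominant emotion.
```
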