Table of Contents

International Journal of Information and Communication Technology Research
Volume: 1, Issue: 4, Autumn 2009

  • Publication date: 1388/06/15
  • Number of titles: 10
  • Nasrin Dastranj Mamaghani, Masoumeh Sadeghi, Kolsoom Abbasi Shahkooh, Kambiz Badie Page 1
    This paper presents a framework for enterprise interoperability. The aim of this framework is to develop and improve interoperability between organizations in order to deliver electronic services effectively. For this purpose, different interoperability frameworks were reviewed and their main concepts were described. After that, the key dimensions and components of the enterprise interoperability framework were extracted using the meta-synthesis approach. Then, a framework was proposed for improving interoperability between organizations, in which the goals and components of each dimension were specified. Finally, a survey of expert opinions was used to validate and analyze the proposed framework.
  • Masomeh Azimzadeh, Alireza Yari, Abolfazl Aleahmad Page 15
    The huge content and dynamic nature of the web create new challenges in the area of web crawling. Crawling methods are categorized into general and focused crawling. In the general crawling method, all web pages are gathered, while in focused crawling only those web pages that belong to a specific subject are gathered. Language-specific crawling is a type of focused crawling that collects web pages and documents written in a target language. Since the web contains a vast amount of unstructured data written in different languages, crawling the web for a specific language is a significant challenge in web information retrieval. In this paper, a Persian language-specific crawler is presented and described. The results of our experiments show that the proposed crawler improves the performance of Persian web page crawling.
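    The abstract above does not reproduce the crawler itself; purely to illustrate the general idea of language-specific focused crawling, the following Python sketch keeps and expands only pages whose share of Persian-script characters exceeds a threshold. The seed URL, the 0.3 threshold, and the Unicode-range heuristic (instead of a trained language classifier) are assumptions made for this example.

```python
# Minimal sketch of a Persian language-specific (focused) crawler.
# Assumptions for illustration: the seed URL, the 0.3 Persian-character
# threshold, and a simple Unicode-range heuristic instead of a trained
# language detector.
import re
import urllib.request
from collections import deque
from urllib.parse import urljoin

PERSIAN_CHARS = re.compile(r"[\u0600-\u06FF]")   # Arabic/Persian Unicode block
LINK_PATTERN = re.compile(r'href=["\'](.*?)["\']', re.IGNORECASE)

def persian_ratio(text: str) -> float:
    """Rough fraction of letters that fall in the Persian/Arabic Unicode block."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return len(PERSIAN_CHARS.findall(text)) / len(letters)

def crawl(seed_urls, max_pages=50, threshold=0.3):
    """Breadth-first crawl that only keeps and expands Persian pages."""
    queue, visited, kept = deque(seed_urls), set(), []
    while queue and len(kept) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="ignore")
        except Exception:
            continue  # skip unreachable or non-HTTP links
        if persian_ratio(html) < threshold:
            continue  # not a Persian page: do not keep, do not expand
        kept.append(url)
        for link in LINK_PATTERN.findall(html):
            queue.append(urljoin(url, link))
    return kept

if __name__ == "__main__":
    # Hypothetical seed; any Persian-language site could be used here.
    print(crawl(["https://fa.wikipedia.org/"], max_pages=5))
```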
  • Ali Ahmadi, Mahdi Zamanian, Mohsen Mohammadi Takami Page 29
    Intelligent approaches have recently been applied to the classification of Web pages and, as a specific application, to the filtering of pornographic Web pages. The existing methods are mainly based on textual features and, in some cases, visual features, but the main problem with these approaches is their high rate of over-blocking. In this paper, we propose a new intelligent model for filtering immoral pages which employs three types of features (structural, textual, and visual) in a hierarchical structure of Bayesian and neural network classifiers. For the structural and textual features, a list of keywords is extracted from the training pages and, after a correlation and statistical analysis of the features, the most effective ones are selected. For the visual features, we use a skin-color feature together with a set of part-based features extracted from the images within the page. The algorithm was tested on 1,295 Web pages, including 700 porn pages (containing both text and images) in English and Persian and 505 non-porn pages covering medical, health, sports, and other topics, and a 90% accuracy rate was obtained.
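    The paper's hierarchical combination of Bayesian and neural-network classifiers is not given here; the toy sketch below only illustrates the cascade idea of a cheap textual screen followed by a visual check. The keyword list, weights, and thresholds are invented purely for illustration.

```python
# Toy sketch of a hierarchical (cascade) filter combining textual and visual cues.
# The keyword list, weights, and thresholds below are hypothetical; they do not
# reproduce the paper's Bayesian/neural-network classifiers.
import math

KEYWORD_WEIGHTS = {"porn": 2.0, "xxx": 1.5, "adult": 0.8}   # hypothetical textual features

def textual_score(text: str) -> float:
    """Naive keyword-based score (stand-in for the textual/structural stage)."""
    tokens = text.lower().split()
    return sum(KEYWORD_WEIGHTS.get(t, 0.0) for t in tokens)

def visual_score(skin_pixel_ratio: float) -> float:
    """Logistic score from a skin-color feature (stand-in for the visual stage)."""
    return 1.0 / (1.0 + math.exp(-10.0 * (skin_pixel_ratio - 0.4)))

def is_blocked(text: str, skin_pixel_ratio: float,
               text_threshold: float = 1.0, final_threshold: float = 0.6) -> bool:
    """Stage 1: cheap textual screen. Stage 2: visual check only for suspicious pages."""
    if textual_score(text) < text_threshold:
        return False                      # clearly benign: no image analysis needed
    return visual_score(skin_pixel_ratio) > final_threshold

print(is_blocked("health sports news", 0.1))   # False
print(is_blocked("xxx adult content", 0.7))    # True
```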
  • Maryam Mohamedpour, Mahdi Fasanghari, Kambiz Badie Page 45
    Research implies that modeling is a suitable tool for designing and implementing foresight in organizations. Modeling is needed, on the one hand, to create an accurate understanding and to ensure the right actions according to the given model and, on the other hand, to plan, program, and implement the foresight survey. Considering the widespread research in the field of foresight and the importance of experts' cooperation, a framework for clarifying the dimensions and key concepts of foresight is needed. Thus, ontology can be used as a tool for integrating the different dimensions of the models proposed in the literature. In fact, an ontology presents the concepts of each dimension while expressing the relations between the concepts symbolically and with different degrees of formality. In this paper, a framework for technology foresight is developed based on a review of the most well-known previous frameworks and the preparation of their ontologies. Finally, the opinions of 40 experts confirmed the accuracy of the proposed framework.
  • Ali Mohammad Zareh Bidoki, Mohammad Azadnia, Nasser Yazdani, Amir Hossein Keyhanipour Page 59
    The proliferation and huge volume of information on the web make responding to user queries a challenging problem in information retrieval. Various ranking algorithms, such as PageRank and BM25, have been proposed. In this paper, an adaptive combinational ranking algorithm is proposed to achieve higher precision and efficiency. This algorithm combines coarse-grained features such as BM25 and TF-IDF with fine-grained features such as term frequency and the in-degree of pages in the learning process to produce better results. In the learning process, the Ordered Weighted Aggregation (OWA) operator and expert judgments are used. For evaluation, we used a standard test collection called LETOR, including “WEB TREC 2004”, and obtained notable results.
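    As a small illustration of the aggregation step, the sketch below applies an OWA-style operator to a vector of feature scores. The weight vector and feature values are placeholders; in the paper, the weights are tuned in a learning process with expert judgments.

```python
# Minimal sketch of OWA aggregation of ranking features.
# The weight vector and the feature scores are placeholders chosen for
# illustration; they are not the paper's learned values.
def owa(scores, weights):
    """OWA: sort the (normalized) feature scores in descending order and
    take the dot product with position-dependent weights."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(scores) == len(weights)
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

# Hypothetical normalized feature scores for one document:
# [BM25, TF-IDF, term frequency, in-degree]
doc_features = [0.82, 0.64, 0.40, 0.55]
weights = [0.4, 0.3, 0.2, 0.1]           # assumed "optimistic" OWA weight vector
print(owa(doc_features, weights))        # aggregated relevance score for ranking
```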
  • Maryam Mahmoudi, Alireza Yari, Mojgan Farhoodi, Majid Hadizadeh Page 71
    There have been many attempts to assess websites based on their information, format, functionality, or even the impact they have on the web space. For this purpose, numerous indices have been introduced, some of which consider web space features while others relate to the link structure of websites. The aim of this research is to identify quantitative and qualitative characteristics of government websites from different aspects in order to evaluate the situation of Iranian e-government from a web viewpoint. Many studies on the quantitative analysis of e-government have been carried out internationally, in the form of country rankings by international organizations, private-sector consultancies, and academic researchers. In this paper, we propose a new method that combines various features for evaluating and comparing Iranian government websites and also study the status of the related web domains.
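    As a minimal illustration of combining several evaluation indices into a single score, the sketch below computes a weighted sum of normalized indices. The index names, values, and weights are placeholders and are not the paper's actual feature set or weighting scheme.

```python
# Toy sketch of combining website-evaluation indices into one score.
# All index names, values, and weights below are hypothetical.
def combined_score(indices: dict, weights: dict) -> float:
    """Weighted sum of normalized indices (each index assumed to lie in [0, 1])."""
    return sum(weights[name] * value for name, value in indices.items())

site = {"content_quality": 0.7, "functionality": 0.6,
        "inlink_count_norm": 0.4, "update_freq": 0.8}
weights = {"content_quality": 0.35, "functionality": 0.25,
           "inlink_count_norm": 0.2, "update_freq": 0.2}
print(combined_score(site, weights))   # sites could then be ranked by this score
```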
  • Abbas Rasoolzadegan, Mohammad Reza Meybodi Page 79
    Active database systems (ADBS) can react automatically to the occurrence of predefined events by defining a collection of active rules. One of the most important modules of an ADBS is the rule scheduler, which has a considerable impact on the performance and efficiency of the system. The job of the rule scheduler is to select a rule for execution from the set of rules that are ready for execution. In this paper, we propose a new approach based on learning automata to improve rule scheduling performance in terms of average response time, response time variance, and throughput. Learning automata are used to obtain better estimates of rule execution probabilities. The experimental results show that the proposed method outperforms the most effective existing rule scheduling method.
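    The sketch below illustrates the general idea of a learning-automaton-based scheduler with a linear reward-inaction (L_RI) update; the reward criterion, learning rate, and toy workload are assumptions and do not reproduce the paper's scheme.

```python
# Minimal sketch of a learning-automaton-based rule scheduler.
# A linear reward-inaction (L_RI) automaton keeps a probability per rule and
# reinforces rules whose execution yielded a short response time.  The reward
# criterion and learning rate are assumptions for illustration.
import random

class RuleSchedulerLA:
    def __init__(self, num_rules: int, learning_rate: float = 0.1):
        self.p = [1.0 / num_rules] * num_rules   # action (rule) probabilities
        self.a = learning_rate

    def select_rule(self) -> int:
        """Pick a ready rule according to the current probability vector."""
        return random.choices(range(len(self.p)), weights=self.p)[0]

    def update(self, chosen: int, rewarded: bool) -> None:
        """L_RI update: on reward, move probability mass toward the chosen rule;
        on penalty, leave the probabilities unchanged."""
        if not rewarded:
            return
        for i in range(len(self.p)):
            if i == chosen:
                self.p[i] += self.a * (1.0 - self.p[i])
            else:
                self.p[i] -= self.a * self.p[i]

# Tiny simulation: rule 2 tends to give the shortest response time.
sched = RuleSchedulerLA(num_rules=4)
for _ in range(500):
    rule = sched.select_rule()
    response_time = random.gauss(1.0 if rule == 2 else 2.0, 0.2)
    sched.update(rule, rewarded=response_time < 1.5)
print([round(p, 2) for p in sched.p])   # probability mass concentrates on rule 2
```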
  • Hassan Zareian, Vahid Tabataba Vakili Page 93
    The power amplifier (PA) linearization performance of an adaptive digital predistorter (PD) is degraded by imperfections in the quadrature modulator (QM), such as in-phase and quadrature (IQ) imbalance, in the direct up-conversion transmitter chain. In this paper, we first propose a new adaptive algorithm to estimate and compensate the IQ imbalance in the QM. The effectiveness of the proposed IQ imbalance compensator is validated by MATLAB simulations, and the results clearly show that the performance of the MOP PD is enhanced significantly by adding the proposed compensator. Then, based on the previously presented method, we propose a modified joint adaptive algorithm to compensate the IQ imbalance of the QM in conjunction with the nonlinearity of the PA. The computer simulation results clearly demonstrate that the proposed method achieves linearization performance similar to the previous method while having lower computational complexity.
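    To illustrate the kind of compensation involved, the sketch below models QM IQ imbalance with a simple widely-linear model, estimates its parameters with LMS, and pre-compensates the input so the modulator output matches the desired sample. The imbalance values, step size, and training length are assumptions; the paper's joint PA/QM algorithm is not reproduced here.

```python
# Minimal sketch of adaptive estimation and compensation of quadrature-modulator
# IQ imbalance, using a widely-linear model z = K1*x + K2*conj(x).
# The gain/phase imbalance, LMS step size, and training length are assumptions.
import numpy as np

rng = np.random.default_rng(0)
g, phi = 1.05, np.deg2rad(3.0)                 # assumed gain/phase imbalance
K1 = 0.5 * (1 + g * np.exp(1j * phi))
K2 = 0.5 * (1 - g * np.exp(-1j * phi))

def modulator(x):                               # imbalanced quadrature modulator
    return K1 * x + K2 * np.conj(x)

# 1) LMS estimation of K1, K2 from a known training sequence.
x_train = (rng.standard_normal(2000) + 1j * rng.standard_normal(2000)) / np.sqrt(2)
k1_hat, k2_hat, mu = 0j, 0j, 0.05
for x in x_train:
    e = modulator(x) - (k1_hat * x + k2_hat * np.conj(x))
    k1_hat += mu * e * np.conj(x)
    k2_hat += mu * e * x

# 2) Pre-compensation: solve K1*x + K2*conj(x) = d for x, so that the
#    modulator output equals the desired sample d.
def precompensate(d):
    det = abs(k1_hat) ** 2 - abs(k2_hat) ** 2
    return (np.conj(k1_hat) * d - k2_hat * np.conj(d)) / det

d = 0.7 - 0.2j                                  # desired modulator output
print(abs(modulator(precompensate(d)) - d))     # residual error close to zero
```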
  • Majid Komeili, Narges Armanfard, Ehsanollah Kabir Page 107
    In this paper, we propose a fuzzy inference system by which the reliability of features can be measured. Reliability is defined as the discriminative power of a feature in separating the target from the background. We use a particle filter because of its power and versatility in visual tracking. Reliability is measured based on the diversity of observations and the spatial scattering of particles. The efficiency of our algorithm is demonstrated using color, edge, and texture features. Experimental results on a set of real-world sequences show that our method performs better than four common feature weighting methods.
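    The fuzzy inference system itself is not reproduced here; as a crude stand-in, the sketch below scores a feature's reliability from the two quantities named in the abstract, the diversity of particle observations and the spatial scattering of likelihood-weighted particles, and combines them with a hand-crafted rule. The particle count, target position, and combination rule are assumptions.

```python
# Toy sketch of scoring a tracking feature's reliability from the particle set.
# The paper uses a fuzzy inference system; the hand-crafted combination below
# (observation diversity divided by weighted spatial spread) is only a stand-in.
import numpy as np

def feature_reliability(positions, likelihoods):
    """positions: (N, 2) particle positions; likelihoods: (N,) feature response
    per particle. High diversity and low spatial spread -> reliable feature."""
    w = likelihoods / likelihoods.sum()
    diversity = likelihoods.std() / (likelihoods.mean() + 1e-9)      # observation diversity
    center = (w[:, None] * positions).sum(axis=0)
    spread = np.sqrt((w * ((positions - center) ** 2).sum(axis=1)).sum())  # weighted scatter
    return diversity / (1.0 + spread)

rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, size=(200, 2))
# A discriminative feature: strong response only near a hypothetical target at (50, 50).
good = np.exp(-((pos - 50) ** 2).sum(axis=1) / 50.0) + 1e-3
# A weak feature: roughly uniform response everywhere.
weak = rng.uniform(0.4, 0.6, size=200)
print(feature_reliability(pos, good) > feature_reliability(pos, weak))   # True
```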
  • Maryam Barshan, Mahmoud Fathi, Saleh Yousefi Page 117
    In this paper, we propose a 3-tier hierarchical architecture based on the peer-to-peer model for network management. The main focus of the proposed architecture is providing fault tolerance, which in turn increases the availability of the Network Management System (NMS). In each tier of the architecture, we use redundancy to achieve this goal. However, we do not use redundant peers, so no peer redundancy is imposed on the system; instead, we use some selected peers in several roles and therefore add only software redundancy. We examined the effect of the failure of peers playing different roles in the architecture on the availability of the system by means of an extensive simulation study. The results show that the proposed architecture offers higher availability than previously proposed peer-to-peer NMSs, as well as lower sensitivity to node failure.
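    As a toy illustration of why assigning backup roles to existing peers raises availability, the Monte Carlo sketch below compares a three-tier layout with one role-holder per tier against one where existing peers double as backups. The per-peer availability of 0.95, the tier layout, and the trial count are assumptions, not the paper's simulation setup.

```python
# Toy Monte Carlo sketch: software redundancy (existing peers taking on extra
# roles) versus no redundancy.  Per-peer availability, tier layout, and trial
# count are assumptions for illustration only.
import random

PEER_UP_PROB = 0.95
TRIALS = 100_000

def available(tiers, trials=TRIALS):
    """The NMS counts as available when every tier has at least one live peer
    holding that tier's role."""
    ok = 0
    for _ in range(trials):
        peer_state = {}  # each peer is sampled once, even if it holds several roles

        def up(peer):
            if peer not in peer_state:
                peer_state[peer] = random.random() < PEER_UP_PROB
            return peer_state[peer]

        if all(any(up(p) for p in tier) for tier in tiers):
            ok += 1
    return ok / trials

# Without redundancy: one peer per tier.  With redundancy: an existing peer
# from another tier doubles as a backup, so no extra peers are added.
no_redundancy   = [["top1"], ["mid1"], ["leaf1"]]
with_redundancy = [["top1", "mid1"], ["mid1", "leaf1"], ["leaf1", "top1"]]
print(round(available(no_redundancy), 3))    # ~0.857 (0.95 ** 3)
print(round(available(with_redundancy), 3))  # noticeably higher
```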