Table of Contents

Soft Computing and Information Technology - Volume 9, Issue 1 (Spring 1399 / 2020)
  • Volume 9, Issue 1 (Spring 1399 / 2020)
  • Publication date: 1399/04/11 (Solar Hijri)
  • Number of articles: 6
  • Arefeh Khanlari, Mehdi Ezoji * Pages 1-6

    This paper addresses image denoising based on the minimization of a suitable energy function. This energy function consists of a data fidelity term and a smoothness term for the target image. The paper considers an optimal balance between these two terms. This approach removes noise from the image through a mechanism that is independent of the input image, while simultaneously preserving the edges and details of the image. Experimental results confirm the effectiveness of this approach.

    Keywords: Image Denoising, Energy Function, Local Balance, Data Fidelity, Smoothness
  • Mirsaeid Hosseini Shirvani * Pages 7-17

    Outsourcing an organization's information technology to cloud datacenters is more promising in terms of cost-effectiveness than traditional information technology. In this regard, making a sustainable decision on cloud adoption requires a deep understanding of the costs and of environmental and social issues. Policymakers face many challenges and concerns when choosing among different information technology options. Although migration to the cloud potentially has advantages owing to the reduction of the total cost of ownership (TCO), weaknesses remain depending on the particular circumstances of the industry or organization adopting the cloud, such as the adopter's concerns about the degree of uncertainty regarding data privacy, security, and network communication delay. This paper presents a four-phase cloud adoption decision model for information technology outsourcing that resolves the concerns and challenges of adopting organizations and industries by considering the costs imposed by each possible option and applying the concept of the net present value (NPV) of each possible solution over the investment period, together with an analysis of non-economic issues. The model also uses Moore's law and Delphi-panel interviews to estimate future prices of information technology equipment and to weight the determining and inhibiting parameters, respectively. To evaluate the performance of the proposed model, the new services of the Telecommunication Company of Mazandaran province, a large-scale industry in Iran, were considered as a case study over a six-year investment period. Implementing the model for the telecommunication company shows that the best option is to establish a private datacenter and to use a hybrid cloud deployment at peaks of instantaneous demand for computing resources.

    Keywords: Cloud Computing, Total Cost of Ownership, Economies of Scale, Net Present Value
  • Zoleikha Jahanbakhsh Nagadeh, Arash Sharifi, MohammadReza Feizi Derakhshi * Pages 18-27
  • Kiumars Ghazipour, Asadollah Shahbahrami * Pages 28-35
  • Ali Fahmi Jafargholkhanloo, Mousa Shamsi * Pages 36-45
  • Arefeh Khanlari, Mehdi Ezoji * Pages 1-6

    This paper addresses the image denoising problem based on the minimization of an appropriate energy function. This energy function consists of a data fidelity term and a targeted smoothness term. In this paper, a locally optimal balance between these two terms is considered. This strategy leads to denoising whose mechanism is independent of the input image while simultaneously preserving edges and details. Experimental results verify the performance of this approach.

    Keywords: Image Denoising, Energy Function, Local Balance, Data Fidelity, Smoothness
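    The fidelity/smoothness balance described above can be illustrated with a hedged sketch (ours, not the authors' formulation): gradient descent on a 1D energy with a data fidelity term and a quadratic smoothness term, where a parameter lam stands in for the balance the abstract discusses.

```python
# Hedged sketch (not the authors' formulation): denoise a 1D signal by
# gradient descent on E(u) = sum((u - f)^2) + lam * sum((u[i+1] - u[i])^2),
# a data-fidelity term plus a quadratic smoothness term. The parameter
# `lam` plays the role of the fidelity/smoothness balance.

def denoise(f, lam=2.0, step=0.1, iters=500):
    u = list(f)  # start from the noisy input
    n = len(u)
    for _ in range(iters):
        g = []
        for i in range(n):
            gi = 2.0 * (u[i] - f[i])                 # fidelity gradient
            if i > 0:
                gi += 2.0 * lam * (u[i] - u[i - 1])  # left smoothness term
            if i < n - 1:
                gi += 2.0 * lam * (u[i] - u[i + 1])  # right smoothness term
            g.append(gi)
        u = [u[i] - step * g[i] for i in range(n)]
    return u

noisy = [0.0, 1.2, -0.3, 0.9, 0.1, 1.1, -0.2, 1.0]
smooth = denoise(noisy)
```

    Larger lam favors smoothness over fidelity; the quadratic smoothness term here blurs edges, which is exactly the limitation the paper's edge-preserving balance is meant to avoid.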
  • Mirsaeid Hosseini Shirvani * Pages 7-17

    Information technology outsourcing (ITO) of organizations to cloud datacenters promises cost-effectiveness over traditional on-premises deployment. In this regard, making a sustainable decision on cloud adoption requires a profound understanding of cost implications as well as social and environmental issues. Policymakers face several concerns and challenges when they encounter the dilemma of choosing among IT options. Although cloud migration potentially has merits on account of the reduction in total cost of ownership (TCO), there may be demerits in the particular situation of each industry and organization, such as the degree of uncertainty about privacy, security, and communication delay. This paper introduces an iterative four-phase cloud adoption decision model for IT outsourcing that addresses industries' and organizations' concerns and challenges by considering the cost implications of each contingent option and applying the net present value (NPV) of each alternative over the investment period, along with an analysis of non-economic issues. The model also leverages Moore's law and interviews with a Delphi panel to estimate future prices of IT devices and to weight cloud adoption determinants and inhibitors, respectively. The new services of the Telecommunication Company of Mazandaran province (TCM), a large-scale industry in Iran, are used as a case study to evaluate the effectiveness of the proposed model over six years of investment. Implementation of the model for TCM shows that it is better to establish a private on-premises datacenter and apply hybrid deployment during bursts of resource demand.

    Keywords: Cloud Computing, Total Cost of Ownership (TCO), Economies of Scale, Net Present Value
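    The NPV comparison at the heart of such a model can be sketched as follows. The figures and discount rate are illustrative, not the paper's TCM data, and the moore_price helper is a hypothetical stand-in for the Moore's-law price projection.

```python
# Hedged sketch of the model's cost comparison (illustrative figures, not the
# paper's data): compare the net present value of yearly cash flows for two
# IT options over a six-year horizon, with a Moore's-law-style projection of
# future hardware prices.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def moore_price(price_now, years, halving_period=2.0):
    """Projected device price after `years`, halving every `halving_period` years."""
    return price_now * 0.5 ** (years / halving_period)

rate = 0.10  # assumed discount rate
# Option A: private on-premises datacenter -- large upfront cost, low yearly cost.
on_prem = [-80_000] + [-8_000] * 6
# Option B: full cloud outsourcing -- no upfront cost, higher yearly fees.
cloud = [0] + [-30_000] * 6

options = [("on-premises", npv(on_prem, rate)), ("cloud", npv(cloud, rate))]
best = max(options, key=lambda kv: kv[1])  # least negative NPV = cheaper option
```

    With these assumed numbers the on-premises option wins, mirroring the paper's conclusion for TCM; different cost structures can flip the decision, which is why the model weighs each contingent option.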
  • Zoleikha Jahanbakhsh Nagadeh, MohammadReza Feizi Derakhshi *, Arash Sharifi Pages 18-27

    Speech acts (SAs) are an important area of pragmatics: they give us a better understanding of people's state of mind and convey an intended language function. Knowledge of the SA of a text can be helpful in analyzing that text in natural language processing applications. This study presents a dictionary-based statistical technique for Persian SA recognition. The proposed technique classifies a text into seven SA classes based on four kinds of criteria: lexical, syntactic, semantic, and surface features. WordNet is utilized as the tool for extracting synonyms and enriching the feature dictionaries. To evaluate the proposed technique, we used four classification methods: random forest (RF), support vector machine (SVM), naive Bayes (NB), and k-nearest neighbors (KNN). The experimental results demonstrate that the proposed method, with RF and SVM as the best classifiers, achieves state-of-the-art performance with an accuracy of 0.95 for the classification of Persian SAs. A further aim of this work is to introduce an application of SA recognition to social media content, especially identifying the SAs common in rumors and their application to rumor detection. The proposed system was therefore used to determine the common SAs in rumors. The results showed that Persian rumors are often expressed in three SA classes, namely narrative, question, and threat, and in some cases with the request SA. The evaluation results also indicate that the SA, as a distinguishing feature between rumors and non-rumors, improves the accuracy of rumor identification from 0.762 (based on common context features) to 0.791 (the combination of common context features and the four SA classes).

    Keywords: Speech Act, Persian text classification, Feature Extraction, WordNet, Rumor detection
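    As a toy illustration of the dictionary-based lexical features, a cue-word classifier can be sketched as below. The word lists are hypothetical English stand-ins, not the paper's Persian dictionaries, and cover only four of the seven classes.

```python
# Toy dictionary-based speech-act classifier in the spirit of the lexical
# features described above. The cue-word lists are hypothetical English
# stand-ins (the paper works on Persian with seven SA classes).

SA_LEXICON = {
    "question": {"what", "why", "how", "when", "who", "?"},
    "request": {"please", "could", "would", "kindly"},
    "threat": {"must", "else", "warning", "consequences"},
}

def classify_speech_act(sentence):
    tokens = sentence.lower().replace("?", " ? ").split()
    scores = {sa: sum(t in cues for t in tokens) for sa, cues in SA_LEXICON.items()}
    best = max(scores, key=scores.get)
    # fall back to the narrative class when no cue word fires
    return best if scores[best] > 0 else "narrative"
```

    In the paper these dictionary scores are features fed to RF, SVM, NB, or KNN classifiers rather than used for direct voting as here.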
  • Asadollah Shahbahrami *, Kiumars Ghazipour Pages 28-35

    Optimization is the process of making something as good or as effective as possible. Optimization problems arise in many fields, such as economics, science, industry, and engineering. The growing use of optimization makes it essential for researchers in every branch of science and technology. Many algorithms have been introduced to solve optimization problems, yet achieving higher-quality results in terms of accuracy and robustness remains an issue. Metaheuristics are widely recognized as efficient approaches for many hard optimization problems. In this study, to achieve higher-quality results in numerical function optimization, two new operators named N-digit lock search (NLS) and Two-Math crossover are introduced to enhance the genetic algorithm (GA), a widely used metaheuristic. The NLS operator is inspired by the N-digit combination lock pattern and enhances the exploitative behavior of the GA by calibrating the current best solution. The Two-Math crossover operator combines the two-point and arithmetic crossover techniques to better guide the overall search process. The proposed enhanced genetic algorithm (EGA) is tested on 33 benchmark mathematical functions, and the results are compared with population-based algorithms, particle swarm optimization (PSO2011) and artificial bee colony (ABC), and with single-solution-based algorithms, simulated annealing (SA), pattern search (PS), and vortex search (VS). A problem-based test is performed to compare the performance of the algorithms; its results show that the proposed EGA outperforms all the other algorithms: SA, PS, VS, PSO2011, and ABC. In addition, it finds the global best points for almost all 33 test functions with a constant value for two of the three EGA operators.

    Keywords: Metaheuristics, Genetic Algorithm, Function optimization, global optimization
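    One possible reading of the Two-Math crossover, reconstructed from the description above rather than taken from the authors' code, is to blend the middle segment arithmetically and swap an outer segment as in two-point crossover. The blend weight alpha is an assumption.

```python
import random

# A possible reading of the Two-Math crossover, reconstructed from the
# description above (not the authors' code): pick two cut points, blend the
# middle segment arithmetically, and swap the tail segment as in two-point
# crossover. `alpha` is the assumed blend weight.

def two_math_crossover(p1, p2, alpha=0.5, rng=random):
    n = len(p1)
    i, j = sorted(rng.sample(range(1, n), 2)) if n > 2 else (0, n)
    child1, child2 = list(p1), list(p2)
    for k in range(i, j):  # arithmetic blend inside the crossover segment
        child1[k] = alpha * p1[k] + (1 - alpha) * p2[k]
        child2[k] = alpha * p2[k] + (1 - alpha) * p1[k]
    child1[j:], child2[j:] = p2[j:], p1[j:]  # two-point-style tail swap
    return child1, child2

c1, c2 = two_math_crossover([1.0, 2.0, 3.0, 4.0, 5.0], [5.0, 4.0, 3.0, 2.0, 1.0])
```

    A useful property of this construction is that each gene's sum across the two children equals the parents' sum, so the population's center of mass is preserved by crossover alone.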
  • Ali Fahmi Jafargholkhanloo, Mousa Shamsi * Pages 36-45

    Analysis of facial color images is very important because of its numerous applications in facial surgery. The development of various tools for facial surgery analysis has helped surgeons before and after surgery. In this article, an active contour model (ACM) based on local Gaussian distribution fitting (LGDF) is introduced for contour extraction of the ear region. The LGDF model is a region-based active contour method that, unlike other models such as Chan-Vese, is not sensitive to intensity inhomogeneity in the image. After contour extraction of the ear region, in the second step, four landmarks are detected and the ear parameters, consisting of the length, width, and external angle of the ear, are measured for analysis in otoplasty surgery. The proposed algorithm was evaluated on the AMI and Sahand University of Technology (SUT) databases. It achieves accuracies of 96.432%, 97.423%, and 85.546% on the AMI database and 98.381%, 97.237%, and 87.864% on the SUT database for the length, width, and external angle parameters of the ear, respectively.

    Keywords: Active Contour Model, Color Images Analysis, Analysis of Ear Anthropometry, Facial Surgery Analysis, Otoplasty Surgery
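    The landmark-based measurement step can be sketched as follows. The landmark names and coordinates are hypothetical, not from the AMI or SUT databases, and the external-angle convention, the inclination of the top-bottom axis from vertical, is our assumption.

```python
import math

# Hedged sketch of the measurement step: given four hypothetical ear
# landmarks (top, bottom, front, back), compute length, width, and an
# external angle, here taken as the inclination of the top-bottom axis from
# vertical. Coordinates are illustrative only.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def ear_parameters(top, bottom, front, back):
    length = dist(top, bottom)  # vertical extent of the ear
    width = dist(front, back)   # horizontal extent of the ear
    dx, dy = top[0] - bottom[0], top[1] - bottom[1]
    angle = math.degrees(math.atan2(abs(dx), abs(dy)))  # tilt from vertical
    return length, width, angle

length, width, angle = ear_parameters(top=(12, 0), bottom=(0, 60),
                                      front=(0, 30), back=(30, 30))
```

    In the paper these landmarks come from the LGDF-extracted ear contour, so measurement accuracy depends directly on the contour extraction stage.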
  • Majid Ziaratban * Pages 46-60

    Text line segmentation is an important stage of optical character recognition (OCR) algorithms. To analyze and recognize a document, text lines have to be segmented accurately. Text line segmentation of handwritten documents is more difficult than that of machine-printed ones: curved and multi-skewed text lines, overlapping text lines, and very small text lines are the main challenges. Most previously proposed approaches do not consider local features of the text lines in a document image. In our proposed method, both global and local features are considered. The method is based on directional 2D anisotropic filters, and its parameters are tuned from a main global parameter computed separately for each document; hence, the proposed method is dataset independent. A document is divided into several blocks, for each of which some local characteristics are calculated; in each block, text regions are detected using local characteristics such as the block skew. To estimate the skew of text regions in a block, a novel text-block skew estimation algorithm is proposed in this paper. Experimental results show that the proposed method outperforms the state-of-the-art methods on three standard datasets: our final F-measures are 0.54%, 0.03%, and 0.02% higher than those of the winner of the ICDAR2013 text line segmentation contest on the ICDAR2013, ICDAR09, and HIT-MW datasets, respectively. The experiments show that the proposed method can accurately segment text lines in complicated handwriting.

    Keywords: Text line segmentation, handwritten documents, script-independent method, directional 2D filters
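    The idea of scoring candidate skews, which underlies block-skew estimation, can be illustrated with a simple projection-profile search. This is our sketch, not the paper's algorithm, which instead builds on directional 2D anisotropic filters.

```python
import math

# Illustrative sketch (not the paper's algorithm): score candidate skew
# angles of a text block by shearing the foreground pixels and measuring how
# concentrated the horizontal projection profile becomes; the best-scoring
# angle is the estimated block skew.

def profile_score(points, angle_deg):
    """Energy of the row-projection profile after de-skewing by `angle_deg`."""
    t = math.tan(math.radians(angle_deg))
    rows = {}
    for x, y in points:
        r = round(y - x * t)  # row index after shearing
        rows[r] = rows.get(r, 0) + 1
    return sum(c * c for c in rows.values())  # high when lines collapse into few rows

def estimate_skew(points, angles=range(-15, 16)):
    return max(angles, key=lambda a: profile_score(points, a))

# Synthetic block: two parallel text lines skewed by about 5 degrees.
slope = math.tan(math.radians(5))
line = [(x, round(x * slope)) for x in range(60)]
points = line + [(x, y + 20) for x, y in line]
skew = estimate_skew(points)
```

    Estimating the skew per block rather than per document is what lets such a method follow the multi-skewed and curved lines named above as the main challenges.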