Articles matching the keyword "algorithm" in electrical engineering journals
  • Hadis Rezaei Mirghaed, Ladan Khedri Gharibvand *
    Urban land use maps, in addition to showing different land use classes and their spatial patterns, specify the type and intensity of land use; therefore, they can be used for current and future planning of urban land. In this study, land use changes in Lali city over 30 years (1987-2017) were investigated. To evaluate the land use changes in this interval, spectral images from Landsat satellites 5, 7, and 8 for the years 1987, 2001, and 2017 were utilized. After the data were collected and the necessary pre-processing was applied, data analysis for preparing the land use maps of the specified time intervals was carried out with the Maximum Likelihood Classification algorithm. The findings for each year were monitored and controlled through field operations, and land use maps with 7 classes (agriculture, rangeland, forest, mountain, residential, river, and other areas) were produced. Then, the changes in each land use were determined for the periods 1987 to 2001, 2001 to 2017, and finally 1987 to 2017. The final changes illustrate that the overall level of vegetation declined markedly compared to the beginning of the period, indicating deforestation in the region, while urban areas, agriculture, and rangelands maintained an ascending trend, which can be attributed to increasing urban development and rural expansion and the residents' growing need for housing, agriculture, and gardens.
    Keywords: Lali City, Maximum likelihood, algorithm, land use
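The classification step described above can be sketched generically. The following is a minimal per-pixel Gaussian maximum likelihood classifier, not the authors' actual processing chain; the band values, class names, and training statistics are invented for illustration.

```python
import numpy as np

def ml_classify(pixels, class_stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood.

    pixels      : (n, b) array of b-band spectral values
    class_stats : dict name -> (mean vector, covariance matrix)
    """
    names = list(class_stats)
    scores = []
    for name in names:
        mu, cov = class_stats[name]
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu                                 # deviations from class mean
        # Log-likelihood up to a constant: -0.5 * (log|C| + d^T C^-1 d)
        maha = np.einsum('ij,jk,ik->i', d, inv, d)
        scores.append(-0.5 * (logdet + maha))
    return [names[i] for i in np.argmax(np.column_stack(scores), axis=1)]

# Toy training statistics for two hypothetical classes over 2 bands.
stats = {
    'forest':    (np.array([30.0, 80.0]), np.eye(2) * 25.0),
    'rangeland': (np.array([70.0, 40.0]), np.eye(2) * 25.0),
}
print(ml_classify(np.array([[32.0, 78.0], [68.0, 42.0]]), stats))
# -> ['forest', 'rangeland']
```

In a real workflow the per-class means and covariances would be estimated from training polygons drawn over the imagery.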
  • Abbas Bashiri *, Hasan Hasanimoghaddam, Adel Tabeshkar
    Most autonomous underwater vehicles and marine remotely operated vehicles are equipped with optical cameras for underwater imaging. However, because of the properties of water and its impurities, the quality of the images taken by these devices is often poor: water attenuates light, and the deeper the water, the more the light intensity decreases. Since different wavelengths behave differently when interacting with the water column, processing and studying these wavelengths is very important for obtaining the desired image. The spectral signature can be used for underwater applications. In this research, to increase the quality of underwater images, a new method is introduced to improve image contrast. In this method, first, with structured lighting, different wavelengths are irradiated onto an underwater target in a laboratory environment; then the underwater images are processed by the proposed algorithm; and finally, a multispectral image is obtained by stacking images of different wavelengths. The results showed the relative superiority of the proposed method over other methods.
    Keywords: Underwater imaging, Spectral, Underwater target, algorithm
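The final stacking step can be sketched as follows. This is a minimal illustration assuming a simple min-max contrast stretch per wavelength channel (the paper's actual contrast enhancement algorithm is not specified here), with made-up pixel values.

```python
import numpy as np

def stretch(img, eps=1e-9):
    """Min-max contrast stretch of one wavelength channel to [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + eps)

def build_multispectral(channels):
    """Stack per-wavelength captures (each h x w) into an (h, w, k) cube."""
    return np.stack([stretch(c) for c in channels], axis=-1)

# Three hypothetical wavelength captures of a 2x2 scene.
caps = [np.array([[10., 20.], [30., 40.]]),
        np.array([[5., 5.], [6., 7.]]),
        np.array([[0., 1.], [2., 3.]])]
cube = build_multispectral(caps)
print(cube.shape)   # -> (2, 2, 3)
```

Stacking after per-channel stretching keeps each wavelength's dynamic range comparable in the resulting cube.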
  • Mohammadnabi Omidvar, Samad Nejatian*, Hamid Parvin, Karamolla Bagherifard, Vahideh Rezaie

    Optimization is an important and decisive activity in structural design. Designers can produce better designs when they can save design time and cost using optimization methods. Many optimization problems in engineering are by nature too complex and difficult to be solved by conventional optimization methods such as mathematical programming and the like. The world around us can be the basis of many purposeful behaviors; careful attention to the objects around us helps us recognize these behaviors and their goal-directed order. In this paper, a new optimization algorithm based on the pattern of the domino game is presented. A domino run consists of a set of pieces that, through group unity, form a goal-directed order. The attempt to create a new optimization algorithm based on the theory of this game led us to this research. The domino movement pattern was implemented in a simulator environment, and the results showed that the resulting algorithm is a suitable model for finding optimal solutions to complex problems.

    Keywords: algorithm, optimization, domino, cost function, behavior
    Mohammadnabi Omidvar, Samad Nejatian*, Hamid Parvin, Karamolla Bagherifard, Vahideh Rezaie

    Optimization is a very important process in engineering. Engineers can create better designs only if they use optimization tools to reduce costs, including design time. Many real-world engineering problems cannot be solved by mathematical programming solvers, so metaheuristic optimization algorithms are needed. On this basis, many new metaheuristic optimization algorithms inspired by natural phenomena have been proposed, such as IWO [58], BBO [59], WWO [61], and so on. Inspired by domino toppling theory, we propose an optimization algorithm. Using domino pieces, we can create countless complex structures. To simulate domino movement in the search space of a problem, we treat the particles in the search space as domino pieces and, by creating an optimal path, try to direct the dominoes along it. The optimal paths are updated in each iteration. After the dominoes are initialized randomly at the beginning of each evaluation, the picking piece, i.e., the first moving piece, is identified, and then the particles are selected by the optimal path. Applying a motion equation to each domino moves the dominoes forward in that direction. At first, a predefined number of dominoes are randomly distributed in the problem space. Choosing the optimal path accelerates the convergence of the domino particles toward the target. After choosing the path in the current iteration, the domino movement is performed. The particles move to a new location by applying the new-location equation; through this equation, each domino piece lands on the track ahead of itself. The front piece also moves to a new location by applying a separate equation. After the dominoes have moved to their new locations, the worst domino of the previous iteration is removed from the problem space.
In each new iteration, the optimal domino path, the new locations of the domino pieces, and the global optimum are updated. At the end of the algorithm, the global optimum is returned as the optimal solution. The method is implemented in a simulator environment. To evaluate the performance of the Domino Optimization (DO) algorithm, we use a complete benchmark of 30 single-objective numerical functions, CEC 2014 [67]. In all cases, we set the population size to 50, the dimension to 30, and the number of fitness-function evaluations to 150,000. We compare the proposed DO algorithm with LOA [57], ICS [62], NPSO [63], MOHS [64], BCSO [65], and FFFA [66]. The results on the 3 unimodal functions show that, with equal resources, the proposed method achieves a better solution than any of the state-of-the-art algorithms. On the multimodal functions, the proposed method has the best performance in finding the optimal solution in all 13 functions of this section. In all 6 functions of the hybrid section, the quality of the proposed method is better than all of the state-of-the-art algorithms with equal resources. The standard-deviation values of the proposed method, which are often small, indicate convergence of the algorithm around the optimal solution. Among the competing methods, two algorithms, NPSO and LOA, obtain the best results after the proposed method. In the convergence analysis of the dominoes, the diversity of the objective-function values over 100 distinct iterations is large at the beginning of the algorithm and low at the end.

    Keywords: Algorithm, Optimization, Dominoes, Cost Function, Behavior
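Since the paper's exact motion equations are not given here, the loop described above (random initialization, a first moving piece, a motion step, removal of the worst piece, and a global-optimum update) can be sketched with hypothetical update rules; the step size and overshoot factor below are invented for illustration.

```python
import random

def sphere(x):
    """Benchmark objective with its minimum (0) at the origin."""
    return sum(v * v for v in x)

def domino_optimize(f, dim=5, pop=30, iters=200, lo=-5.0, hi=5.0):
    # Random initial placement of the domino pieces in the search space.
    doms = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = list(min(doms, key=f))
    for _ in range(iters):
        front = min(doms, key=f)              # the "first moving piece"
        target = list(front)                  # freeze its position for this step
        for d in doms:
            for j in range(dim):
                # Hypothetical motion equation: step toward (and past) the target.
                d[j] += random.uniform(0.0, 2.0) * (target[j] - d[j])
        # Remove the worst piece of this iteration and re-seed it randomly.
        worst_i = max(range(pop), key=lambda i: f(doms[i]))
        doms[worst_i] = [random.uniform(lo, hi) for _ in range(dim)]
        cand = min(doms, key=f)
        if f(cand) < f(best):
            best = list(cand)                 # update the global optimum
    return best

random.seed(7)
best = domino_optimize(sphere)
print(round(sphere(best), 4))
```

The global best is only ever replaced by a strictly better candidate, so the returned objective value decreases monotonically over iterations.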
  • Reihane Masjoudi Shachani *

    In today's world, the neural processing unit is used in many devices: smartphones, computers, self-driving cars, and information technology. In the past few years, however, incredible changes and innovations have occurred in machine learning. A neural processing unit is hardware used for specialized neural processing. Typically, a neural processing unit can work efficiently only when a software model called an artificial neural network is available. Deep learning is a subset of artificial neural networks that is mainly used for handling complex algorithms. This article provides an overview of the relationship and coordination between the neural processing unit, the artificial neural network, and deep learning; it also focuses on how deep learning works, its cooperation with the artificial neural network and the neural processing unit, its various applications, and its advantages and disadvantages.

    Keywords: neural processing unit, artificial neural network, deep learning, algorithm, deep neural network
    Reihane Masjoudi Shachani

    The neural processing unit is used in many devices in today's world: smartphones, computers, automobiles, and information technology. In the past few years, however, there have been incredible changes and innovations in machine learning. A neural processing unit is hardware that is used for specialized neural processing. Normally, a neural processing unit can work efficiently when a software model called an artificial neural network is available. Deep learning is a subset of artificial neural networks that is primarily used to manipulate complex algorithms. This article provides an overview of the relationship and coordination between the neural processing unit, the artificial neural network, and deep learning. It also focuses on how deep learning works, its cooperation with the artificial neural network and the neural processing unit, its different applications, and its advantages and disadvantages.

    Keywords: Neural Processing Unit, Artificial Neural Network, Deep Learning, Algorithm, Deep Neural Network
  • Mosa Hajipoor

    This work proposes a new metaheuristic method called the arithmetic optimization algorithm, which uses the distributive behavior of the main arithmetic operators in mathematics, including division, subtraction, and addition. It is applied here to M(n), the number of distinct entries in the n × n multiplication table, so there is some interest in algorithms for calculating M(n) exactly or approximately. We compare several algorithms for the exact calculation of M(n) and arrive at a new algorithm with a lower-order execution time. We also present two Monte Carlo algorithms to approximate M(n). We give exact calculation results for values of n to 230 and compare our experimental results with Ford's order-of-magnitude results. Experimental results show that INTEGERS provides very promising results in solving difficult optimization problems compared with 11 other well-known optimization algorithms.

    Keywords: Integer, calculation, algorithm
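As a point of reference for the quantity discussed above, M(n) can be computed by brute force by collecting the products in a set; the paper's faster exact algorithm and its Monte Carlo approximations are not reproduced here.

```python
def multiplication_table_size(n):
    """M(n): the number of distinct entries in the n-by-n multiplication table.

    Brute-force baseline: products i*j for 1 <= i <= j <= n are collected
    in a set, so each repeated table entry is counted only once.
    """
    return len({i * j for i in range(1, n + 1) for j in range(i, n + 1)})

print([multiplication_table_size(n) for n in range(1, 6)])
# -> [1, 3, 6, 9, 14]
```

This quadratic-time baseline is only practical for small n, which is exactly why faster exact and approximate algorithms are of interest.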
  • Z. Dai, K. Gao, X. Zheng

    In this paper, two inventory models for perishable products in a supply chain, one with starting shortages and one without shortages, are proposed. The demand for the perishable products depends on price and stock. The supply chain is composed of one manufacturer, one distribution center, and one retailer. The objective of the two models is to maximize the average profit per unit time by determining the optimal replenishment cycle, frequency, and quantity. The properties of the optimal solutions for two cases of the two models are discussed to verify their existence. Algorithms for searching for optimal solutions are presented. To investigate the effect of the parameters on the optimal solutions and obtain some management insights, computational experiments with sensitivity analyses are carried out. Finally, conclusions and directions for future research are provided.

    Keywords: Optimizing, Algorithm, Multi-echelon inventory systems, Perishable products with price- and stock-dependent demand, Supply chain
  • Ali Naseri*, Reza Tabatabaei Manesh

    In an electronic component manufacturing process, the produced components are never exactly identical. This causes the output signals of radio transmitters (even identical transmitters) to differ from one another. These differences are negligible, so they do not violate telecommunication standards, but these inherent and unique differences can be used as physical-layer features to authenticate radio transmitters. Software-based authentication can typically be forged. Forging authentication based on the inherent characteristics of radios is very difficult or impossible, because it requires considerable knowledge, cost, and time. The power amplifier is the last stage of the transmitter, so it has the greatest effect on the transmitter's output signal. In previous studies, the receiver, assuming the SNR of the received signal was known, used the coefficients of the power amplifier's nonlinear model to classify transmitters. In this paper, a new feature is presented so that authentication can be performed independently of the SNR of the received signal. Simulation results show that, using the proposed feature, radio transmitter authentication is performed well, with a success probability of over 80%, for low (at least 15 dB) to high SNR values, without any need to measure the SNR.

    Keywords: authentication, radio, power amplifier, model, nonlinear, transmitter, physical layer, intrinsic feature, algorithm, SNR
    Ali Naseri*

    In an electronic component manufacturing process, the produced components are never exactly identical. This causes the output signals of radio transmitters (even similar ones) to differ. The differences are negligible, so they do not violate telecommunication standards, but these inherent and unique differences can be treated as physical-layer features and used to authenticate radio transmitters. Software-based authentication can normally be forged, whereas forgery based on the inherent characteristics of radios is very difficult or impossible, because it requires a great deal of knowledge, money, and time. The power amplifier is the last stage of the transmitter, so it has the greatest effect on the transmitter's output signal. In previous research, the receiver, assumed to know the SNR of the received signal, used the coefficients of the power amplifier's nonlinear model to classify transmitters. In this paper, a new feature is provided to perform authentication independently of the SNR of the received signal. The simulation results show that, using the proposed feature, the radio transmitter authentication process is performed well, with a success probability of more than 80%, for low (at least 15 dB) to high SNR values, without the need for SNR measurement.

    Keywords: authentication, radio, power amplifier, model, nonlinear, transmitter, physical layer, intrinsic feature, algorithm, SNR
  • Samad Nejatian, Ruhollah Omidvar, Hamid Parvin*, Vahideh Rezaei, Milad Yasrebi
    Optimization is an important and decisive activity in structural design. Many optimization problems in engineering are by nature too complex and difficult to be solved by conventional optimization methods such as mathematical programming and the like. Nature is the basis of many optimization algorithms, so researchers have framed the behavioral patterns of natural phenomena and organisms as goal-directed structures. In this paper, a new metaheuristic algorithm based on the behavioral pattern of wild mice is presented. Studying the purposeful and beneficial behaviors of wild mice within colonies motivated the idea that these behaviors could serve as a model for moving toward the solution of a complex non-deterministic problem. In this research, based on laboratory results obtained for this animal, the behavior of the mice was implemented in the phases of population generation, mating, and the struggle for survival. The mice are organized into several colonies that fight for survival under the command of an elite colony head. The movement pattern of the mice is defined based on the location of the colony head and the other colony members, which proved effective for optimal search in the problem space. The behavioral pattern of this animal was implemented in a simulator environment, and the results showed that the resulting algorithm is a suitable model for finding optimal solutions to complex problems.
    Keywords: algorithm, optimization, wild mice, colony, cost function, behavior
    S. Nejatian, R. Omidvar, H. Parvin *, V. Rezaei, M. Yasrebi
    Optimization is an important and decisive task in structural design. Better designs will be achieved if designers are able to reduce design time and cost using optimization methods. Many optimization problems in engineering are naturally too complicated and difficult to be solved by conventional optimization methods such as mathematical programming. Nature is the basis of many optimization algorithms, so researchers focus on the behavioral patterns of organisms and events in nature, framed as a structure directed toward a target. In this study, a new optimization algorithm is proposed based on the behavioral pattern of wild mice. Studying the targeted and beneficial behaviors of wild mice in colonies motivated the idea that these kinds of behaviors could be a pattern for solving an uncertain complex problem. In this research, based on experimental results for this animal, the behavior of the mice has been implemented in the phases of population generation, mating, and the struggle for survival. The mice are organized in several colonies that fight for survival under the command of an elite colony head. Also, the motion pattern of the mice is defined based on the location of the colony head and the other colony members, which proved effective for an optimal search in the problem space. The behavioral pattern of this organism was implemented in a simulation environment, and the results show that the proposed algorithm is a suitable pattern for finding an optimal solution to complicated problems.
    Keywords: Algorithm, optimization, wild Mice, colony, cost function, behavior
  • Maryam Maleki Shahraki, Alireza Bagheri *, Mahdis Nayeri
    The covering problem is one of the important and widely used problems in computational geometry, in which points must be covered by at least one geometric shape. A special type of covering problem is the point separation problem, in which there are at least two classes of points distinguished by color (for example, blue and red points), and the points must be separated by a geometric shape called a separator. In this paper, the problem of separating blue and red points by a separating double-wedge with a given angle is investigated. The proposed algorithm reports all separating double-wedges with the given angle in optimal O(n log n) time.
    Keywords: computational geometry, covering, separation, double-wedge, algorithm, bichromatic points
    M. Maleki Shahraki, A. Bagheri *, M. Nayeri
    The point-set covering problem is one of the important problems in computational geometry and has many applications. In this problem, the given points should be covered by at least one geometric shape. A variant of the problem is point-set separation, in which there are at least two kinds of points colored with different colors, and the geometric shapes, called separators, should cover only the points of the same color. In this paper, the separation of blue and red points by a double-wedge of a given angle θ is considered. The proposed algorithm reports all separating double-wedges of angle θ in optimal O(n log n) time.
    Keywords: Computational geometry, covering, separation, double-wedge, algorithm, bichromatic point set
  • Meysam Bayat, Mohammad Hossein Madani *
    Various error sources, the most important of which is continuous wave interference (CWI), cause errors in time-of-arrival measurement and ultimately reduce navigation accuracy in the Loran system. This paper presents a new algorithm for removing synchronous CWI by analyzing the structure of Loran pulses in the time domain and introducing a new phase coding. The algorithm has a very low computational load. Simulation results show that the SIR at the output of the proposed algorithm is nearly independent of the SIR of the received signal, and the only limiting factor in this algorithm is the SNR of the received signal. By using the new phase coding for time-of-arrival (TOA) measurement, the proposed algorithm effectively removes the effect of synchronous interference. For a signal-to-interference ratio of -15 dB, this algorithm improves the TOA error by a factor of up to 693 compared with the conventional averaging and phase-decoding algorithm that uses the standard Loran phase coding. This significant improvement comes at the cost of at most a 3 dB reduction in the received SIR caused by asynchronous continuous wave interference, which can easily be removed with a band-pass filter.
    Keywords: Loran system, continuous wave interference, phase tracking, SIR, TOA
    Meysam Bayat, Mohammad Hossein Madani

    Continuous wave interference (CWI) is one of the most important error sources causing errors in measuring the time of arrival (TOA) and thereby decreasing the navigation accuracy of the Loran system. This paper presents a novel algorithm for removing CWI by analyzing the structure of Loran pulses in the time domain and introducing a new phase code. The algorithm has a low computational load, performs better than conventional methods, and improves the effectiveness and accuracy of the navigation system. The simulation results show that the signal-to-interference ratio (SIR) at the output of the proposed algorithm is largely independent of the SIR of the received signal, and the only limiting factor is the signal-to-noise ratio (SNR) of the received signal. By using the new phase code for TOA measurement, the proposed algorithm essentially rejects the effect of synchronous interference. For a signal-to-interference ratio of -15 dB, it improves the TOA error by a factor of up to 693 compared with the conventional averaging and phase-decoding algorithm. This improvement comes at the cost of at most a 3 dB reduction in the received SIR, caused by asynchronous CWI, which can easily be rejected using a band-pass filter.
    Keywords: SIR, TOA, phase code, CWI, Algorithm
  • * F.J. Ardakani, M. M. Ardehali

    Electricity demand is forecast to double by 2035, and it is vital to address the economics of electrical energy generation for planning purposes. This study aims to examine the applicability of the Gravitational Search Algorithm (GSA) and the newly improved GSA (IGSA) for optimization of the mixed-integer non-linear electricity generation expansion planning (GEP) problem. The performance index of the GEP problem is defined as the total cost (TC), based on the sum of the costs of investment and maintenance, unserved load, and salvage. In IGSA, the search space is sub-divided to escape from local minima and decrease the computation time. Four different GEP case studies are considered to evaluate the performance of GSA and IGSA, and the results are compared with those from implementing the particle swarm optimization algorithm. It is found that IGSA results in a lower TC than GSA by 7.01%, 4.08%, 11.00%, and 6.40% for the four case studies. Moreover, the simulation results show that IGSA requires less computation time than GSA in all cases.

    Keywords: Generation expansion planning, Improved gravitational search algorithm, Optimization, Power system planning
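For readers unfamiliar with GSA, the baseline algorithm can be sketched as follows. This is a minimal sketch of the standard GSA formulation (fitness-derived masses, a decaying gravitational constant, force-driven velocity updates), not the paper's improved IGSA with its sub-divided search space; the decay rate and bounds below are illustrative choices.

```python
import math
import random

def sphere(x):
    """Toy objective: minimum 0 at the origin."""
    return sum(v * v for v in x)

def gsa_minimize(f, dim=3, agents=20, iters=100, lo=-5.0, hi=5.0, g0=100.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
    vel = [[0.0] * dim for _ in range(agents)]
    best, best_f = None, float('inf')
    for t in range(iters):
        fit = [f(p) for p in pos]
        b, w = min(fit), max(fit)
        if b < best_f:
            best_f, best = b, list(pos[fit.index(b)])
        # Masses: lower (better) fitness -> larger normalized mass.
        m = [(w - fi) / (w - b + 1e-12) for fi in fit]
        s = sum(m) + 1e-12
        mass = [mi / s for mi in m]
        g = g0 * math.exp(-20.0 * t / iters)      # gravitational constant decays
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                dist = math.dist(pos[i], pos[j]) + 1e-12
                for k in range(dim):
                    # Randomly weighted attractive force from agent j.
                    acc[k] += random.random() * g * mass[j] * \
                              (pos[j][k] - pos[i][k]) / dist
            for k in range(dim):
                vel[i][k] = random.random() * vel[i][k] + acc[k]
                pos[i][k] = min(hi, max(lo, pos[i][k] + vel[i][k]))
    return best, best_f

random.seed(3)
best, best_f = gsa_minimize(sphere)
print(round(best_f, 4))
```

Heavier (fitter) agents attract the rest, so the swarm gradually condenses around promising regions as g decays.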
  • Payam Khanteimouri, Ali Mohades, Mohammad Ali Abam, Mohammad Reza Kazemi, Saeed Sedighin
    For a set of colored points, a region is called color-spanning if it contains at least one point of each color. In this paper, we first consider the problem of maintaining the smallest color-spanning interval for a set of n points with k colors on the real line, such that the insertion and deletion of an arbitrary point takes O(log^2 n) worst-case time. We then exploit this data structure to show that there is an O(n log^2 n) time algorithm to compute the smallest color-spanning square for a set of n points with k colors in the plane. This improves the O(nk log n) time algorithm presented by Abellanas et al. when k = ω(log n). We also consider the problem of computing the smallest color-spanning square in the special case in which we have at most two points of each color. We present an O(n log n) time algorithm to solve this problem, which improves the result of Arkin et al. by a factor of log n.
    Keywords: Algorithm, location planning, dynamic data structure, color-spanning objects
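The static version of the first problem has a classic sliding-window solution, sketched below as background; the paper's contribution, a dynamic data structure with O(log^2 n) worst-case updates, is more involved and is not reproduced here. The sample points are invented.

```python
from collections import defaultdict

def smallest_color_spanning_interval(points):
    """points: list of (coordinate, color) pairs. Returns (left, right) of the
    shortest interval containing at least one point of every color."""
    pts = sorted(points)
    k = len({c for _, c in points})
    count = defaultdict(int)
    covered = 0          # number of colors currently inside the window
    best = None
    left = 0
    for right in range(len(pts)):
        c = pts[right][1]
        count[c] += 1
        if count[c] == 1:
            covered += 1
        while covered == k:              # shrink from the left while spanning
            span = (pts[left][0], pts[right][0])
            if best is None or span[1] - span[0] < best[1] - best[0]:
                best = span
            count[pts[left][1]] -= 1
            if count[pts[left][1]] == 0:
                covered -= 1
            left += 1
    return best

pts = [(0, 'r'), (1, 'b'), (2, 'g'), (9, 'r'), (10, 'b'), (10.5, 'g')]
print(smallest_color_spanning_interval(pts))
# -> (9, 10.5)
```

After the initial sort, the two pointers each advance at most n times, so the scan itself is linear.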
  • Prabir Saha, Deepak Kumar
    Vedic mathematics is an ancient computing methodology with a unique computational technique based on 16 sutras (formulae). These formulae can be directly applied to the optimization of algebraic computation. A new algorithm for computing the decimals of the inverse, based on this ancient mathematics, is reported in this paper. The Sahayaks (auxiliary fraction) sutra has been used for the hardware implementation of the decimals of the inverse. Owing to the Vedic formulae, reciprocal approximation of a number can generate "on the fly" the first n exact decimals of the inverse, where n is either arbitrarily large or, in almost all cases, at least 6. The reported algorithm has been implemented, and its functionality has been checked in T-Spice. Performance parameters such as propagation delay and dynamic switching power consumption are calculated through spice-spectre in 90 nm CMOS technology. The propagation delay of the resulting 4-digit reciprocal approximation algorithm was only ~1.8 µs, and it consumes ~24.7 mW of power. The implementation methodology offered a substantial reduction in propagation delay and dynamic switching power consumption compared with its Newton-Raphson (NR) based counterpart.
    Keywords: Algorithm, Arithmetic, Decimal inverse, T-Spice, Propagation delay, Vedic mathematics
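As context for the NR comparison mentioned above, the Newton-Raphson reciprocal iteration x_{k+1} = x_k (2 - d x_k) can be sketched as follows; the sutra-based hardware scheme itself is not reproduced, and the initial-guess rule below is a simplification.

```python
def nr_reciprocal(d, iters=5):
    """Newton-Raphson reciprocal: x_{k+1} = x_k * (2 - d * x_k).

    Converges quadratically to 1/d when the initial guess satisfies
    0 < d * x0 < 2; the crude guess below covers roughly 0 < d < 20.
    """
    assert d > 0
    x = 0.1 if d >= 1 else 1.0        # crude initial guess (simplification)
    for _ in range(iters):
        x = x * (2.0 - d * x)         # each step roughly doubles correct digits
    return x

print(round(nr_reciprocal(7.0), 6))
# -> 0.142857
```

The quadratic convergence is the point of comparison: each NR step costs two multiplications, whereas the auxiliary-fraction scheme produces decimals digit by digit.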
  • Sadaf Roostaee, Hamid Reza Ghaffary
    Data analysis in cardiovascular disease is difficult due to the massive amount of information, and not all features affect the final results, so it is very important to identify the more effective ones. In this study, feature selection with the binary cuckoo optimization algorithm is implemented to reduce the feature set. According to the results, the selected features yield the most appropriate classification by the support vector machine for diagnosing heart disease. The main purposes of this article are feature reduction and providing a more precise diagnosis of the disease. The proposed method is evaluated using three measures: accuracy, sensitivity, and specificity. For comparison, a data set from the Machine Learning Repository containing information about 303 people with 14 features was used. Current methods, despite their high accuracy, are expensive and time-consuming. The results indicate that the proposed method is superior to other algorithms in terms of performance, accuracy, and run time.
    Keywords: Heart Disease, Support Vector Machine, Binary Cuckoo Optimization Algorithm, Feature Selection
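The wrapper-style feature selection described above can be illustrated with a deliberately simplified stand-in: a random bit-flip search scored by a tiny nearest-centroid classifier instead of the paper's binary cuckoo optimization and SVM. The data, the flip rule, and the tie-break favoring fewer features are all invented for illustration.

```python
import random

def centroid_accuracy(X, y, mask):
    """Training accuracy of a nearest-centroid rule on the selected features."""
    feats = [j for j, keep in enumerate(mask) if keep]
    if not feats:
        return 0.0
    cents = {}
    for cls in set(y):
        rows = [X[i] for i in range(len(X)) if y[i] == cls]
        cents[cls] = [sum(r[j] for r in rows) / len(rows) for j in feats]
    hits = 0
    for xi, yi in zip(X, y):
        pred = min(cents, key=lambda c: sum((xi[j] - cv) ** 2
                                            for j, cv in zip(feats, cents[c])))
        hits += pred == yi
    return hits / len(X)

def select_features(X, y, iters=200):
    """Simplified stand-in for binary cuckoo search: keep the best binary mask
    found by random single-bit flips (a 1 means the feature is kept)."""
    n_feat = len(X[0])
    mask = [1] * n_feat
    best = centroid_accuracy(X, y, mask)
    for _ in range(iters):
        cand = mask[:]
        cand[random.randrange(n_feat)] ^= 1   # flip one feature in or out
        score = centroid_accuracy(X, y, cand)
        # At equal accuracy, prefer the mask with fewer features.
        if (score, -sum(cand)) >= (best, -sum(mask)):
            mask, best = cand, score
    return mask, best

random.seed(0)
# Tiny synthetic data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 5.1], [0.2, 4.9], [1.0, 5.0], [1.2, 5.2]]
y = [0, 0, 1, 1]
mask, acc = select_features(X, y)
print(mask, acc)
# -> [1, 0] 1.0
```

Any binary metaheuristic (cuckoo, genetic, PSO) can be dropped into the same wrapper loop; only the candidate-generation rule changes.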
  • Hodjat Hamidi
    In this paper, we present a framework for algorithm-based fault tolerance (ABFT) methods in computing systems. Errors that occur in data-processing algorithms are detected and corrected by employing systematic convolution codes defined over fixed-point arithmetic structures. Convolution codes make it possible to detect errors in numerical data-processing operations. Many fault-tolerance techniques for protecting numerical matrix operations use block codes with high redundancy for error-correction capability. In this paper, convolution codes are used, and their continuous-checking properties are exploited to detect errors. This algorithm-based error-detection technique relies on comparing parity values computed in two ways. Numerical data-processing errors are detected by comparing parity values associated with the convolution codes. This method is a new approach to achieving concurrent error correction in fault-tolerant computing systems. The paper presents a new computational scheme to provide fault tolerance for numerical algorithms, while output data correction is performed by a threshold decoder that is affected by round-off errors and errors caused by the computing unit. The goal is to describe a protection approach that integrates easily with data-processing techniques and leads to more effective fault tolerance. Simulation results show that the difference between round-off error and computation-induced error is large enough for the two to be distinguished from each other. This large gap between the two errors gives us a range of error-detection thresholds to choose from.
    Keywords: data processing, algorithm-based fault tolerance (ABFT), convolution codes, parity values, round-off errors, error detection
    Dr Hodjat Hamidi
    We present a framework for algorithm-based fault tolerance methods in the design of fault-tolerant computing systems. The ABFT error-detection technique relies on the comparison of parity values computed in two ways: the parallel processing of input parity values produces output parity values that are compared with parity values regenerated from the original processed outputs. Numerical data-processing errors are detected by comparing parity values associated with a convolution code. This article proposes a new computing paradigm to provide fault tolerance for numerical algorithms. The data-processing system is protected through parity values defined by a high-rate real convolution code. Parity comparisons provide error detection, while output data correction is performed by a decoding method that accounts for both round-off error and computer-induced errors. To use ABFT methods efficiently, a systematic code form is desirable, and a class of burst-correcting convolution codes is investigated. The purpose is to describe new protection techniques that are easily combined with data-processing methods, leading to more effective fault tolerance.
    Keywords: algorithm-based fault tolerance (ABFT), convolution codes, parity values, syndrome
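    The two-path parity comparison at the heart of ABFT can be illustrated on a matrix-vector product. This is a minimal sketch: it uses a plain sum checksum (weight vector of ones) instead of the paper's high-rate real convolution code, and an assumed tolerance `tol` as the error-detection threshold separating round-off from computation faults.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(6, 6))
    x = rng.normal(size=6)
    w = np.ones(6)   # checksum weights; a stand-in for the convolution-code parity

    # Path 1: parity predicted from the encoded inputs, before the operation.
    p_pred = (w @ A) @ x
    # Path 2: parity regenerated from the actual computed output.
    y = A @ x
    p_out = w @ y

    tol = 1e-9   # assumed detection threshold between round-off and faults
    assert abs(p_pred - p_out) < tol       # fault-free run: parities agree

    # Inject a single computation fault into the output and recheck.
    y_faulty = y.copy()
    y_faulty[2] += 0.5
    print("fault detected:", abs(p_pred - w @ y_faulty) >= tol)
    ```

    The fault-free difference is on the order of floating-point round-off, while the injected fault shifts the output parity by the full fault magnitude; the large gap between the two is exactly what gives a usable range of detection thresholds.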
  • Zahra Moslehi, Alireza Bagheri
    Given a set P of red points and a set Q of blue points in the plane of total size n, we investigate the problem of finding two disjoint isothetic rectangles containing all the points of Q avoiding any points of P. Such rectangles are called two separating disjoint isothetic rectangles. We first compute two separating disjoint axis-aligned rectangles in O(n log n) time. Then, we relax the axis-aligned constraint and report all combinatorially different two separating disjoint isothetic rectangles. To compute these rectangles, we introduce some events by rotating the coordinate system and process these events. Computing and processing all of the events are done in O(n^2 log n) time. Thus, our algorithm reports all combinatorially different separating rectangles in O(n^2 log n) time.
    Keywords: algorithm, computational geometry, separability, bichromatic point sets, isothetic rectangles
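    The basic primitive underlying such separability questions can be sketched directly: test whether the axis-aligned bounding box of the blue points avoids every red point. This helper is illustrative only, not the paper's full two-rectangle algorithm.

    ```python
    def blue_bbox_separates(blue, red):
        """Return True if the axis-aligned bounding box of the blue points
        contains no red point. A building block for separating-rectangle
        computations (illustrative; not the paper's O(n log n) algorithm)."""
        xs = [p[0] for p in blue]
        ys = [p[1] for p in blue]
        xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
        return not any(xmin <= x <= xmax and ymin <= y <= ymax
                       for x, y in red)

    blue = [(1, 1), (2, 3), (3, 2)]
    print(blue_bbox_separates(blue, [(0, 0), (5, 5)]))  # True: box avoids red
    print(blue_bbox_separates(blue, [(2, 2)]))          # False: (2, 2) inside
    ```

    The full algorithm must go beyond this check, since a single bounding box can fail while two disjoint rectangles still separate the blue points; the rotating-coordinate-system events in the paper handle the non-axis-aligned cases.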
  • Shahin Shafei*, Tohid Sedghi
    We developed a new semi-supervised EM-like algorithm that is given the set of objects present in each training image but does not know which regions correspond to which objects. We tested the algorithm on a dataset of 860 hand-labeled color images using only color and texture features, and the results show that our EM variant is able to break the symmetry in the initial solution. We compared two methods of combining different types of abstract regions, one that keeps them independent and one that intersects them; the intersection method performed better, as shown by the ROC curves in our paper. We extended the EM-variant algorithm to model each object as a Gaussian mixture, and this extension outperforms the original EM variant on the image data set with generalized labels. Intersecting abstract regions was the winner in our experiments on combining two different types of abstract regions. However, one issue is the tiny regions generated after intersection, which becomes more serious as more types of abstract regions are applied. Another issue is whether intersection is appropriate at all: for example, a line-structure region corresponding to a building will be broken into pieces if intersected with a color region. In future work, we will attack these issues with a two-phase classification approach.
    Keywords: Algorithm, models, mixture, segmentation
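    The core EM loop for a Gaussian mixture can be sketched in one dimension. This is a minimal illustration, not the paper's semi-supervised variant: synthetic 1-D values stand in for the color/texture features of abstract regions, and the component variances are fixed at one for brevity.

    ```python
    import math
    import random

    random.seed(0)
    # Synthetic data from two well-separated components (means 0.0 and 5.0),
    # standing in for region features (illustrative only).
    data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
           [random.gauss(5.0, 1.0) for _ in range(200)]

    mu = [min(data), max(data)]   # asymmetric start to break the symmetry
    pi = [0.5, 0.5]               # mixing weights
    for _ in range(50):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            w = [pi[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate means and mixing weights from responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            pi[k] = nk / len(data)

    print(sorted(round(m, 1) for m in mu))  # close to the true means 0.0 and 5.0
    ```

    Starting from a deliberately asymmetric initialization is one simple way to break the symmetry of the initial solution, which is the same difficulty the EM variant in the abstract addresses for region-to-object assignments.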