Search results for the keyword "unconstrained optimization" in mathematics journals
  • Seyed Hamzeh Mirzaie, Ali Ashrafi *
    This paper concerns an efficient trust region framework that exploits a new non-monotone line search method. The new algorithm avoids the sudden increase of objective function values seen in non-monotone trust region methods. Instead of resolving the trust region subproblem whenever the trial step is rejected, the proposed algorithm employs an Armijo-type line search in the direction of the rejected trial step to construct a new point. Global and superlinear convergence properties are preserved under appropriate conditions. Comparative numerical experiments demonstrate the efficiency and robustness of the new algorithm using Dolan-Moré performance profiles.
    Keywords: Unconstrained Optimization, Trust Region, Line Search, Non-Monotone Technique
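    A minimal sketch of the retry idea above, assuming placeholder callables f and grad (illustrative only, not the authors' algorithm):

        import numpy as np

        def armijo_retry(f, grad, x, s, c1=1e-4, tau=0.5, max_back=30):
            # When the trust-region trial step s is rejected, backtrack
            # along s with an Armijo test instead of re-solving the
            # trust-region subproblem.
            fx, slope = f(x), np.dot(grad(x), s)
            alpha = 1.0
            for _ in range(max_back):
                if f(x + alpha * s) <= fx + c1 * alpha * slope:
                    return x + alpha * s   # accepted point
                alpha *= tau               # shrink the step
            return x                       # give up; keep current iterate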
  • C. Souli, R. Ziadi *, I. Lakhdari, A. Leulmi
    The conjugate gradient (CG) method is an optimization technique known for its rapid convergence, and it has seen significant developments and applications. Numerous variations of CG methods have emerged to enhance computational efficiency and address real-world challenges. In this work, a novel conjugate gradient method is introduced to solve nonlinear unconstrained optimization problems. Based on a combination of the PRP (Polak–Ribière–Polyak), HRM (Hamoda–Rivaie–Mamat) and NMFR (new modified Fletcher–Reeves) algorithms, our method produces a descent direction without depending on any line search. Moreover, it enjoys global convergence under mild assumptions and is applied successfully to various standard test problems as well as image processing. The numerical results indicate that the proposed method outperforms several existing methods in terms of efficiency.
    Keywords: Unconstrained Optimization, Hybrid Conjugate Gradient, Global Convergence, Image Restoration
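    For orientation, a sketch of the standard PRP ingredient named above; the HRM and NMFR formulas and the paper's exact combination rule are its own and are not reproduced here:

        import numpy as np

        def prp_beta(g_new, g_old):
            # Classical Polak-Ribiere-Polyak conjugacy parameter.
            return np.dot(g_new, g_new - g_old) / np.dot(g_old, g_old)

        def cg_direction(g_new, d_old, beta):
            # Generic CG update combining the new gradient with the
            # previous direction.
            return -g_new + beta * d_old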
  • H. Sharma *, R.K. Nayak
    The Barzilai–Borwein method offers efficient step sizes for large-scale unconstrained optimization problems. However, it may not guarantee global convergence for nonquadratic objective functions. The simulated-annealing-based Barzilai–Borwein (SABB) method addresses this issue by incorporating a simulated annealing rule. This work proposes a novel step-size strategy for the SABB method, referred to as the SABBm method. Furthermore, we introduce two stabilized variants: SABBstab and SABBmstab. SABBstab combines a simulated annealing rule with a stabilization step to ensure convergence. SABBmstab builds upon SABBstab, incorporating the modified step size derived from the SABBm method. The effectiveness and competitiveness of the proposed methods are demonstrated through numerical experiments on CUTEr benchmark problems.
    Keywords: Unconstrained Optimization, Barzilai–Borwein Method, Simulated Annealing Method, Stabilized BB Method
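    The two classical Barzilai-Borwein step sizes underlying the methods above (standard formulas; the SABB/SABBm modifications themselves are not reproduced):

        import numpy as np

        def bb_step_sizes(s, y):
            # s = x_k - x_{k-1}, y = grad_k - grad_{k-1}
            bb1 = np.dot(s, s) / np.dot(s, y)  # "long" BB step
            bb2 = np.dot(s, y) / np.dot(y, y)  # "short" BB step
            return bb1, bb2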
  • Bachir Barrouk *, Mohammed Belloufi, Rachid Benzine, Taher Bechouat
    The conjugate gradient method plays a very important role in several fields for solving large-scale problems, and much work has been done to improve its efficiency. In this paper, we propose a new modification of the PRP method for solving large-scale unconstrained optimization problems in connection with the strong Wolfe-Powell line search; under some conditions on the latter, a global convergence result is proved. Comparisons with other known methods on 90 test problems in MATLAB show that the proposed method is more efficient in terms of the number of iterations and computation time.
    Keywords: Unconstrained Optimization, Conjugate Gradient Method, Strong Wolfe Line Search, Numerical Comparisons
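    A sketch of the strong Wolfe(-Powell) acceptance test referred to above, with the usual constants 0 < c1 < c2 < 1 (f and grad are placeholder callables):

        import numpy as np

        def strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
            # Sufficient decrease plus the strong curvature condition.
            fx, gd = f(x), np.dot(grad(x), d)
            x_new = x + alpha * d
            armijo = f(x_new) <= fx + c1 * alpha * gd
            curvature = abs(np.dot(grad(x_new), d)) <= c2 * abs(gd)
            return armijo and curvature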
  • Choubeila Souli, Raouf Ziadi *, Abdelatif Bencherif-Madani, Hisham Khudhur
    This paper presents a new hybrid conjugate gradient method for solving nonlinear unconstrained optimization problems; it is based on a combination of the RMIL (Rivaie-Mustafa-Ismail-Leong) and hSM (hybrid Sulaiman-Mohammed) methods. The proposed algorithm enjoys the sufficient descent condition without depending on any line search; moreover, it is globally convergent under the usual and strong Wolfe line search assumptions. The performance of the algorithm is demonstrated through numerical experiments on a set of 100 test functions from [1] and four image restoration problems with two noise levels. The numerical comparisons with four existing methods show that the proposed method is promising and effective.
    Keywords: Unconstrained Optimization, Hybrid Conjugate Gradient, Global Convergence, Image Restoration
    Ali Ebrahimnejad, Zohre Aminifard*, Saman Babaie-Kafaki

    To improve the classic Hestenes-Stiefel conjugate gradient method, Shengwei et al. proposed an efficient conjugate gradient method which possesses the sufficient descent property when the line search fulfills the strong Wolfe conditions (by restricting the line search parameters). Inspired by the scaled extension of the Hestenes-Stiefel method recently presented by Dong et al., a scaled modification of the conjugate gradient method of Shengwei et al. is proposed which satisfies the sufficient descent condition independently of the line search technique and without a convexity assumption on the objective function. Furthermore, the global convergence of the suggested method is discussed based on standard suppositions. In addition, a smooth approximation for the compressed sensing optimization problem is put forward. Numerical experiments are done on a set of classical problems from the CUTEr library as well as on the compressed sensing problem. Results of the comparisons illustrate the superiority of the proposed approach.

    Keywords: Compressed Sensing, Unconstrained Optimization, Sufficient Descent Property, Global Convergence, Conjugate Gradient Method
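    One common smooth surrogate for the l1 term in compressed sensing, shown only as an example of the kind of smoothing the abstract mentions (the paper's specific approximation may differ):

        import numpy as np

        def smoothed_l1(x, eps=1e-8):
            # Differentiable approximation of ||x||_1.
            return np.sum(np.sqrt(x ** 2 + eps))

        def smoothed_l1_grad(x, eps=1e-8):
            # Its gradient, usable inside a conjugate gradient solver.
            return x / np.sqrt(x ** 2 + eps)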
  • Keyvan Amini *, Mehri Rashidi
    This paper proposes two effective nonmonotone trust-region frameworks for solving nonlinear unconstrained optimization problems, along with a new effective policy to update the trust-region radius. Conventional nonmonotone trust-region algorithms apply a specific nonmonotone ratio both to accept a new trial step and to update the trust-region radius. This paper recommends using the nonmonotone ratio only as an acceptance criterion for a new trial step. In contrast, the monotone ratio or a hybrid of monotone and nonmonotone ratios is proposed as the criterion for updating the trust-region radius. We investigate global convergence to first- and second-order stationary points for the proposed approaches under certain classical assumptions. Initial numerical results indicate that the proposed methods significantly enhance the performance of nonmonotone trust-region methods.
    Keywords: Unconstrained optimization, trust-region framework, trust-region radius, nonmonotone technique
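    A sketch of the two agreement ratios discussed above; pred stands for the reduction predicted by the model and f_hist for recent objective values (placeholders, not the authors' code):

        def tr_ratios(f_curr, f_trial, f_hist, pred):
            # pred = m(0) - m(s) > 0.
            rho_mono = (f_curr - f_trial) / pred           # monotone
            rho_nonmono = (max(f_hist) - f_trial) / pred   # nonmonotone
            return rho_mono, rho_nonmono

    Per the abstract, the nonmonotone ratio decides whether the trial step is accepted, while the monotone ratio (or a hybrid of the two) drives the radius update.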
  • Mohamad Jourak, Saeed Nezhadhosein *, Farzad Rahpeymaii
    A new version of the augmented self-scaling memoryless BFGS quasi-Newton update, proposed in [Appl. Numer. Math. 167 (2021) 187-201], is suggested for unconstrained optimization problems. To set the corresponding scaling parameter, the eigenvalues of the approximate Hessian matrix are clustered about one point using three approaches: the first two are based on the trace and the determinant of the matrix, and the third on minimizing the measure function. The sufficient descent property is guaranteed for uniformly convex functions, and the global convergence of the proposed algorithm is proved separately for uniformly convex and general nonlinear objective functions. Numerical experiments on a set of test functions from the CUTEr collection show that the proposed method is robust. In addition, the proposed algorithm is effectively applied to the salt-and-pepper noise elimination problem.
    Keywords: Unconstrained optimization, augmented BFGS, noise elimination problem
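    A sketch of the search direction from a scaled memoryless BFGS update of the identity, assuming the standard inverse BFGS formula; the three eigenvalue-clustering rules for choosing the scaling parameter theta are the paper's own:

        import numpy as np

        def scaled_memoryless_bfgs_dir(g, s, y, theta):
            # d = -H g, where H is the BFGS update of theta * I.
            sy = np.dot(s, y)
            Hg = (theta * g
                  - theta * (np.dot(g, y) * s + np.dot(g, s) * y) / sy
                  + (1.0 + theta * np.dot(y, y) / sy)
                  * np.dot(g, s) * s / sy)
            return -Hg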
  • Mina Lotfi *
    In this paper, we present a new hybrid conjugate gradient method for unconstrained optimization that possesses the sufficient descent property independent of any line search. In our method, a convex combination of the Hestenes-Stiefel (HS) and Fletcher-Reeves (FR) parameters is used as the conjugacy parameter, and the hybridization parameter is determined by minimizing the distance between the hybrid conjugate gradient direction and the direction of the three-term HS method proposed by M. Li (A family of three-term nonlinear conjugate gradient methods close to the memoryless BFGS method, Optim. Lett. 12 (8) (2018) 1911-1927). Under some standard assumptions, the global convergence property on general functions is established. Numerical results on some test problems from the CUTEst library illustrate the efficiency and robustness of our proposed method in practice.
    Keywords: Unconstrained optimization, Conjugate gradient method, sufficient descent, least-squares, Global convergence
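    The convex combination described above, in a minimal sketch (theta is the hybridization parameter, which the paper picks by a least-squares distance minimization):

        import numpy as np

        def hybrid_beta(g_new, g_old, d_old, theta):
            # (1 - theta) * HS + theta * FR, with theta in [0, 1].
            y = g_new - g_old
            beta_hs = np.dot(g_new, y) / np.dot(d_old, y)
            beta_fr = np.dot(g_new, g_new) / np.dot(g_old, g_old)
            return (1.0 - theta) * beta_hs + theta * beta_fr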
  • Zohre Aminifard *

    In an effort to modify the classical Hestenes-Stiefel method, Shengwei et al. proposed an efficient conjugate gradient method which possesses the sufficient descent condition when the line search fulfills the strong Wolfe conditions (by restricting the line search parameters). Here, we develop a three-term extension of the method which guarantees the sufficient descent condition independent of the line search. Also, we establish global convergence of the method under a convexity assumption. Finally, practical merits of the proposed method are investigated by numerical experiments on a set of CUTEr test functions. The results show the numerical efficiency of the method.

    Keywords: Unconstrained optimization, Conjugate gradient method, sufficient descent condition, Global convergence
    Mohammad Ahmadvand*, Mohsen Esmaeilbeigi, Ahmad Kamandi, Farajollah Mohammadi Yaghoobi

    Optimization using radial basis functions as an interpolation tool in trust-regions (ORBIT) is a derivative-free framework based on fully linear models for unconstrained local optimization, especially when function evaluations are computationally expensive. The algorithm stores the interpolation points and their function values for use in subsequent iterations. Despite this comparatively advanced management of interpolation points, ORBIT does not sort them by function value. In this paper, we propose an improved version, SORT-ORBIT, which sorts the interpolation points and selects the point with the smallest objective value as the trust-region center. This reduces the number of function evaluations and iterations of ORBIT. Numerical results, assessed with the Dolan-Moré performance profile and the Moré-Wild data profile, indicate the efficiency of the improved version compared with the original. In addition, to estimate high-accuracy solutions, we equip ORBIT with a new gradient-free convergence test.

    Keywords: Trust-region algorithm, Radial basis function, Unconstrained optimization, Derivative-free algorithm, Center changing, Convergence test
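    The two ideas behind SORT-ORBIT admit a very small sketch (placeholder numpy arrays, not the ORBIT implementation):

        import numpy as np

        def sort_and_recenter(points, fvals):
            # Sort interpolation points by objective value, then take
            # the best point as the new trust-region center.
            order = np.argsort(fvals)
            points, fvals = points[order], fvals[order]
            return points[0], points, fvals   # center, sorted data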
  • Saman Babaie Kafaki *, Zohre Aminifard
    Recently, based on a singular value analysis of the Dai-Liao conjugate gradient method, Babaie-Kafaki and Aminifard suggested a fixed point equation. The prominent feature of the proposed equation is that its solutions may increase the numerical stability of the method while improving its global convergence. Here, the same fixed point equation is employed to upgrade previously proposed choices of the Dai-Liao parameter based on the well-known functional iteration method. Global convergence analysis is conducted and numerical experiments are done to support our discussions.
    Keywords: Unconstrained optimization, Conjugate gradient method, maximum magnification, fixed point equation, functional iteration
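    The functional (fixed-point) iteration named above is the plain scheme t_{n+1} = g(t_n); here g stands in for the paper's map defining the Dai-Liao parameter:

        def functional_iteration(g, t0, tol=1e-10, max_iter=100):
            # Iterate the map g until successive values agree to tol.
            t = t0
            for _ in range(max_iter):
                t_new = g(t)
                if abs(t_new - t) <= tol:
                    return t_new
                t = t_new
            return t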
  • Ahmad Kamandi *, Keyvan Amini
    In this paper, a new conjugate gradient-like algorithm is proposed to solve unconstrained optimization problems. The step directions generated by the new algorithm satisfy the sufficient descent condition independent of the line search. The global convergence of the new algorithm, with the Armijo backtracking line search, is proved. Numerical experiments indicate the efficiency and robustness of the new algorithm.
    Keywords: Unconstrained optimization, conjugate gradient algorithm, Global convergence, Armijo condition
  • Z. Aminifard *, S. Babaie-Kafaki
    Recently, a one-parameter extension of the Polak–Ribière–Polyak method has been suggested, having acceptable theoretical features and promising numerical behavior. Here, based on an eigenvalue analysis of the method aimed at avoiding a search direction along the direction of maximum magnification by a symmetric version of the search direction matrix, an adaptive formula for computing the parameter of the method is proposed. Under standard assumptions, the given formula ensures the sufficient descent property and guarantees the global convergence of the method. Numerical experiments on a collection of CUTEr test problems show the practical effectiveness of the suggested formula.
    Keywords: Unconstrained optimization, Conjugate Gradient Method, Maximum magnification, Line search
  • M. Kimiaei*, H. Esmaeili, F. Rahpeymaii

    In this paper, we present a nonmonotone trust-region algorithm for unconstrained optimization. We first introduce a variant of the nonmonotone strategy proposed by Ahookhosh and Amini and incorporate it into the trust-region framework to construct a more efficient approach. Our new nonmonotone strategy combines the current function value with the maximum function values over some prior successful iterates. For iterates far from the optimizer, the strategy is strongly nonmonotone; in the vicinity of the optimizer, it is weakly nonmonotone; and for iterates neither far from nor close to the optimizer, it is moderately nonmonotone. Theoretical analysis indicates that the new approach converges globally to a first-order critical point under classical assumptions. In addition, the local convergence is also studied. Extensive numerical experiments on unconstrained optimization problems are reported.

    Keywords: Unconstrained optimization, Trust-region framework, Nonmonotone technique, Theoretical convergence
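    The varying-strength nonmonotone term described above is typically a convex combination of the current value and a maximum over recent successful values, sketched below (the paper's rule for choosing eta is not reproduced):

        def nonmonotone_reference(f_curr, f_max, eta):
            # eta near 1 far from the optimizer (strongly nonmonotone),
            # eta near 0 close to it (almost monotone).
            return eta * f_max + (1.0 - eta) * f_curr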
  • AbdelRahman Ahmed, Mohamed Salim*

    This paper deals with the solution of unconstrained optimization problems on parallel computers using quasi-Newton methods. The algorithm is based on the observation that parallelism can be exploited in the function and derivative evaluation costs and in the linear algebra calculations of the standard sequential algorithm. Computational results are reported showing that the parallel algorithm is superior to the sequential one.

    Keywords: parallel algorithm, unconstrained optimization, quasi-Newton
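    An illustrative sketch of the kind of parallelism in derivative costs the abstract mentions: coordinate-wise finite differences evaluated concurrently (an assumption-laden example, not the authors' code; f must be picklable):

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def parallel_fd_gradient(f, x, h=1e-6, workers=4):
            # Forward-difference gradient, one coordinate per task.
            n = len(x)
            shifted = [x + h * np.eye(n)[i] for i in range(n)]
            with ProcessPoolExecutor(max_workers=workers) as ex:
                fvals = list(ex.map(f, shifted))
            return (np.array(fvals) - f(x)) / h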
  • Elaf Khaleel *, Eman Hamed, Huda Ahmed
    In this paper, we investigate a new spectral quasi-Newton (QN) algorithm. The new search directions of the proposed algorithm increase its stability and help it reach the optimal solution with a lower cost value; numerical applications show that both the standard Firefly Algorithm (FA) and the newly proposed algorithm are powerful in the meta-heuristic field. The proposed algorithm has wide uses in several science and engineering problems. Finally, our numerical results show that the proposed technique performs best, with accuracy higher than that of the standard FA; these results are compared using statistical analysis to evaluate the efficiency and robustness of the proposed algorithm.
    Keywords: QN-method, self-scaling QN, Conjugate gradient, Unconstrained optimization, Firefly Algorithm
  • Razieh Dehghani*, Mohmadmehdi Hosseini

    We make some efficient modifications to the modified secant equation proposed by Zhang and Xu (2001). We then introduce a modified BFGS method using the proposed secant equation and obtain some attractive results in theory and practice. We establish the global convergence property of the proposed method without a convexity assumption on the objective function. Numerical results on some test problems from the CUTEr collection show the superiority of the proposed method over some existing modified secant methods in practice.

    Keywords: Unconstrained optimization, modified secant equation, BFGS method
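    A sketch of a Zhang-Xu-style modified secant vector, using the common choice u = s; treat the exact coefficients as an assumption and consult the paper:

        import numpy as np

        def modified_secant_y(s, y, f_old, f_new, g_old, g_new):
            # y_tilde = y + (theta / s's) s, with theta built from
            # function and gradient information along the step s.
            theta = 6.0 * (f_old - f_new) + 3.0 * np.dot(g_old + g_new, s)
            return y + (theta / np.dot(s, s)) * s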
    Sahar Mohammadkhan Sartip *, Saeed Nezhadhosein

    Following the setting of the Dai-Liao (DL) parameter in conjugate gradient (CG) methods, we introduce two new parameters based on the modified secant equation proposed by Li et al. (Comput. Optim. Appl. 202:523-539, 2007), with two approaches that use an extended new conjugacy condition. The first is based on a modified descent three-term search direction, as in the descent Hestenes-Stiefel CG method. The second is based on the quasi-Newton (QN) approach. Global convergence of the proposed methods is proved for uniformly convex functions and for general functions. Numerical experiments are done on a set of test functions from the CUTEr collection and the results are compared with some well-known methods.

    Keywords: Unconstrained optimization, Modified secant equations, Dai-Liao conjugate gradient method
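    The classical Dai-Liao parameter that the entries above build on; the papers' contribution is new rules for choosing the scalar t, which is simply passed in here:

        import numpy as np

        def dai_liao_beta(g_new, d_old, s, y, t):
            # beta = g_{k+1}' (y_k - t * s_k) / (d_k' y_k)
            return np.dot(g_new, y - t * s) / np.dot(d_old, y)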
  • Saeed Nezhadhosein *
    Here, three new nonlinear conjugate gradient (NCG) methods are proposed, based on a modified secant equation introduced in (IMA J. Numer. Anal. 11 (1991) 325-332) and optimal Dai-Liao (DL) parameters (Appl. Math. Optim. 43 (2001) 87-101). First, an extended conjugacy condition is obtained, which leads to a new DL parameter. Next, to set this parameter, we use three approaches that keep the search directions close to some descent or quasi-Newton directions. Global convergence of the proposed methods is proved for uniformly convex functions and for general functions. Numerical experiments are done on a set of test functions from the CUTEr collection and the results of these NCG methods are compared with some well-known methods.
    Keywords: Unconstrained optimization, Modified secant equations, Dai-Liao conjugate gradient method
Note
  • Results are sorted by publication date.
  • The keyword was searched only in the keywords field of the articles. To exclude unrelated results, the search was limited to journals in the same subject area as the source journal.