Search results for articles with the keyword "Data Envelopment Analysis" in journals in the "Mathematics" group

  • Mahnaz Ahadzadeh Namin, Elaheh Khamseh *

    Evaluating the performance of organizations can provide managers with useful information about the status of their organization relative to other organizations, so that they can take steps toward its growth and excellence. Obviously, the number of indicators and their values affect the performance evaluation, so collecting exact indicator values provides managers with an accurate assessment. In this article, we evaluate companies investing in the stock exchange. Since the table of indices for these companies published by the Iran Stock Exchange Organization contains indices whose values have been lost for one reason or another (not available, i.e., heterogeneous indices), it is necessary to use data envelopment analysis (DEA) models in the presence of heterogeneous indicators. We have used the model from Cook et al.'s (2013) article to evaluate the companies and, for the practical use of the research, have described and implemented their method step by step. Lastly, we have analyzed the results.

    Keywords: Non-Homogeneous, Data Envelopment Analysis
  • Fatemeh Fattahi *, Farhad Hosseinzadeh Lotfi, Andrew C. Worthington
    Data envelopment analysis (DEA) is a methodology widely used for evaluating the relative performance of portfolios under a mean–variance framework. However, there has been little discussion of whether nonlinear models best suit this purpose. Moreover, when using DEA linear models, the portfolio efficiency obtained is not comparable to those on the efficient portfolio frontier. This is because a separable piecewise linear boundary, usually below the efficient frontier, is treated as the efficient frontier, so the model does not fully explore the possibility of portfolio benchmarks. In this paper, using the dual-Lagrangian function, we propose a linear model under a mean–variance framework to better evaluate the performance of portfolios relative to those on the efficient frontier.
    Keywords: Data Envelopment Analysis, Efficiency, Portfolio, Dual-Lagrangian
  • Sadegh Farzam, Farhad Hosseinzadeh Lotfi *, Hamid Mesgarani

    Using the right tools to support managers' decision-making is very important, and mathematical methods are now widely used for this purpose. Data envelopment analysis is one such mathematical method, and in recent years researchers have developed it extensively. Much of the data that organizations use for performance evaluation and other purposes is imprecise and uncertain. To address this problem, one should use methods whose results are reliable; among them are stochastic, fuzzy-stochastic, fuzzy, and robust models. Among non-deterministic methods, stochastic models are of particular importance. This article reviews the existing stochastic methods for dealing with non-deterministic data. We have divided stochastic methods into four categories to clarify the subject, and have also reviewed the fit between the models and the data in the surveyed articles. The process mining method is suggested for determining the appropriate model for the available data. Finally, the weaknesses of some previous models are identified and suggestions are made to fix them. The results provide researchers in this field with appropriate information regarding stochastic data envelopment analysis.

    Keywords: Data Envelopment Analysis, Network DEA, Stochastic DEA}
  • Somayeh Sadeghzadeh, Mohammad Reza Mozaffari *, Zohreh Iravani, Hadi Bagherzadehvalami, Ali Ebrahimnejad

    By considering intermediate data, two-stage networks eliminate the need to evaluate the performance of decision-making units in black-box mode. In this article, Central Resource Allocation (CRA) models with fuzzy data are proposed based on the structure of two-stage networks. Then, two-stage network models are proposed that combine data envelopment analysis and ratio analysis. In general, the models of this article, using the CRA structure, introduce targets for the decision-making units under the assumption of constant returns to scale. CRA models in the two-stage DEA-R network structure make it possible to find suitable targets for decision-making units by solving a single linear programming model instead of n linear programming problems (for n decision-making units). Finally, the proposed models are illustrated with a practical study of 16 Iranian airlines.

    Keywords: Data Envelopment Analysis, Fuzzy Data, Fuzzy Probability Function, DEA-RA
  • Omid Dalir*, Taghi Torabi, Mahnaz Rabiei, Yeganeh Mousavi Jahromi

    Data Envelopment Analysis (DEA) is one of the widely used methods for measuring the technical efficiency of organizations. It is a classical non-parametric technique based on mathematical programming, used to evaluate the relative efficiency of a set of similar decision-making units. Its significant advantage is that it does not need parametric specifications (such as a production function) to obtain efficiency scores. DEA is known as a valid and stable performance evaluation tool that provides a single measure of performance for each unit relative to its peers. Even though the number of DEA models is constantly increasing and each has a specialized aspect, all rest on a few main models, among which is the Charnes, Cooper and Rhodes (CCR) model (1978), in which the assumption of constant returns to scale (CRS) is used in the analysis. There are different approaches to determining the factors affecting efficiency and productivity and to measuring the productivity of production factors. Among the investigated companies in the years 2019 to 2019, it was found that the companies' technical efficiency decreased, their technological change increased, their managerial efficiency decreased, their scale efficiency decreased, and their overall productivity declined.

    Keywords: Data Envelopment Analysis, Efficiency, Management Efficiency, Performance Evaluation
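The CCR model with constant returns to scale mentioned in the abstract above amounts to one small linear program per unit. The sketch below is a minimal illustration with made-up data, not the paper's model or data; it solves the input-oriented CRS envelopment form with SciPy:

```python
# Minimal sketch of the input-oriented CCR (CRS) envelopment model,
# solved per DMU as a linear program.  Data are illustrative only.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of DMU o: min theta s.t. X@lam <= theta*X[:,o],
    Y@lam >= Y[:,o], lam >= 0.  X is (m inputs x n DMUs), Y is (s x n)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lam]
    A_in = np.hstack([-X[:, [o]], X])           # X@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y@lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

X = np.array([[2.0, 4.0, 3.0], [3.0, 1.0, 4.0]])  # 2 inputs, 3 DMUs
Y = np.array([[1.0, 1.0, 1.0]])                   # 1 output
scores = [ccr_efficiency(X, Y, o) for o in range(X.shape[1])]
```

A score of 1 marks a unit on the CRS frontier; lower values give the proportional input contraction needed to reach it.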
  • Nasim Ekram Nosratian, MohammadTaghi Taghavi Fard *

    Supply Chain Management (SCM) is an integrated system for planning and controlling materials and information, involving suppliers, manufacturers, distributors, retailers, and customers. Measuring chain performance is an important issue in SCM. Also, given that information plays a key role in improving supply chain performance, the kind and amount of information sharing should be investigated. In this paper, the effect of information sharing on supply chain performance is evaluated: 17 different information-sharing scenarios are defined and ranked using the cross-efficiency method. Finally, values for the different scenarios are reported using simulations in Rockwell Software Arena V5. The obtained results show that the proposed model is quite valid and efficient and can be easily applied to real-world cases.

    Keywords: Supply chain management, Information sharing, Data Envelopment Analysis, Cross-efficiency method, Ranking, Simulation
  • Mehrdad Navabakhsh *, Nasser Shahsavari Pour
    Until the 1980s, the system for assessing the performance of organizations with specific structures was based on economic and financial indexes. The methods previously used for performance assessment focused mainly on the economic-financial aspects of the organization. At present, however, owing to vast human needs, fundamental cognitive parameters in social organizations (parameters that are based on realities, are very effective, and meet scientific criteria) have come into vogue. These parameters rely on experience, observation, experiment, hypothesis, and theory. The Balanced Scorecard (BSC) seeks to strike a balance between financial and economic objectives as outcomes of past performance (past-oriented indexes) and three indexes of customer processes, learning and growth, and development of human and social forces (future-oriented indexes). Data Envelopment Analysis (DEA) is a non-parametric method for measuring the outputs or efficiency of homogeneous units with different inputs and outputs. However, in cases where there are numerous inputs and outputs with some similarities, their efficiency can be measured by two-level DEA, i.e., by classifying them and using common weights. In primitive social institutions, the inputs of social systems are mainly limited and clear; in modern, complex, standardized systems, the input is both expanded and diversified. Therefore, in this paper, we have tried to use BSC as an instrument for designing performance assessment indexes and two-level DEA as an instrument for measurement.
    Keywords: Balanced Scorecard, Data Envelopment Analysis, Performance assessment of social institutions, Development
  • Sajjad Tavakoli, Mahnaz Ahadzadeh Namin *, Navid Khabiri

    Evaluating the efficiency of low-fee loans to small businesses and comparing them with each other can support strategic planning in banks and help counter economic inflation. One method of measuring efficiency is data envelopment analysis. In this research, the indicators affecting the performance of low-fee loans were identified, and the studied samples were then estimated. In three parts, the public institution of Resalat (Qarz Al-Hasna Resalat Bank), which is active in paying microloans, was evaluated, and, to measure the impact of microloans on business creation, it was compared with 9 other banks. In fact, the purpose of this research is to measure the efficiency of micro bank facilities and their impact on the creation of micro and home-based businesses, and to identify indicators that affect micro-businesses. At the end, the steps of the research are described with a practical example.

    Keywords: Micro loans, Data Envelopment Analysis, Social entrepreneurship, Micro businesses
  • Sarah Navidi*, Mohsen Rostamy-Malkhalifeh

    One of the interesting subjects that occupies the minds of researchers is predicting the correct classification of a new sample using available data. Data Envelopment Analysis (DEA) and Discriminant Analysis (DA) can each classify data on their own: DEA classifies units into efficient and inefficient groups, while DA classifies using historical data. Merging these two methods yields a powerful tool for classifying data. Since in the real world we often lack exact data, imprecise data (e.g., fuzzy and interval data) are used in such cases. In this paper, we present a new DEA-DA method based on Mixed-Integer Nonlinear Programming (MINLP) that classifies imprecise data into more than two groups. We then present an empirical example of the proposed method on data from Iranian pharmaceutical stock companies, which we divided into four groups using imprecise (fuzzy and interval) data. Since most classical DA models handle only two groups, the advantage of the proposed model is evident. The results show that the model can predict and classify more than two groups (as many as desired) with imprecise data quite accurately.

    Keywords: Imprecise data, Data Envelopment Analysis, Classification, Mixed-Integer Nonlinear Programming, Discriminant analysis
  • Sarvar Kassaei, Alireza Amirteimoori *, Bijan Rahmani Parchikolaei

    Cross-efficiency is a frequently used method for ranking decision-making units in Data Envelopment Analysis (DEA). A fundamental weakness of this method is the presence of multiple optimal weights, along with the selection of zero values by many of these weights, in calculating cross-efficiency. In the current paper, we provide a method that, through a fair distribution of weights, resolves these problems and thereby lends more validity to the cross-efficiency method for ranking decision-making units.

    Keywords: Zero weights, Fair weights, Cross-efficiency, Data Envelopment Analysis
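For context, the baseline cross-efficiency procedure that the abstract above sets out to repair can be sketched as follows. The data are illustrative, and the solver simply returns one of possibly many optimal weight vectors, which is exactly the ambiguity the paper addresses; no fair-weight selection is attempted here.

```python
# Naive cross-efficiency: each DMU's optimal CCR multiplier weights
# score every DMU, and scores are averaged over raters.
import numpy as np
from scipy.optimize import linprog

def ccr_weights(X, Y, o):
    """Optimal (v, u) for DMU o from the CCR multiplier form:
    max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0,  u, v >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[np.zeros(m), -Y[:, o]]          # maximise u.y_o
    A_ub = np.hstack([-X.T, Y.T])             # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.r_[X[:, o], np.zeros(s)].reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s), method="highs")
    return res.x[:m], res.x[m:]

def cross_efficiency(X, Y):
    n = X.shape[1]
    E = np.empty((n, n))
    for d in range(n):                        # rater DMU d
        v, u = ccr_weights(X, Y, d)
        E[d] = (u @ Y) / (v @ X)              # score every DMU with d's weights
    return E.mean(axis=0)                     # column averages

X = np.array([[2.0, 4.0, 3.0], [3.0, 1.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0]])
ce = cross_efficiency(X, Y)
```

Because different optimal weight choices change the off-diagonal scores, rankings produced this way are not unique, motivating the fair-weight schemes discussed above.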
  • Fatemeh Komaki, Reza Fallahnejad *, Farhad Hosseinzadeh Lotfi, Mohsen Rostamy-Malkhalifeh

    As the calculated efficiencies for the DMUs are relative, each DMU attempts to improve its performance so as not to lose its efficient position in competition with other DMUs. Generally, the performance of DMUs can be evaluated from two perspectives: optimistic and pessimistic. A part of data envelopment analysis examines the sensitivity of the set of efficient DMUs to changes in input and output values. In the real world, DEA sometimes faces fuzzy and interval inputs and/or outputs. In this paper, we focus on one of the important subjects of sensitivity analysis and present an algorithm, based on classic fuzzy DEA models, that determines the relative efficiency security margin of DMUs with fuzzy and interval inputs and outputs under their simultaneous perturbation. In addition to the optimistic frontier, we also consider the pessimistic frontier for the observed DMUs and call the corresponding margin the inefficiency improvement margin. With this information, company managers can identify their closest threat and improve their performance in order to keep their position in the ranking. Some numerical examples are given for illustration.

    Keywords: Sensitivity analysis, Data envelopment analysis, Fuzzy efficiency security margin
  • Super-efficiency SBM-DEA method and neural network for efficiency evaluation under data uncertainty
    Mohsen Rostamy-Malkhalifeh *

    The classic models for performance assessment in Data Envelopment Analysis (DEA) have some inherent issues. For example, they can be affected by statistical noise in the data. Furthermore, if the decision maker (DM) adds new decision-making units (DMUs) to the evaluation, the performance of all the original units is affected and must be re-measured, which restricts efficiency evaluation in DEA. The main goal of this paper is to apply machine learning algorithms to overcome these shortcomings of DEA models. On the other hand, many real-world problems involve imprecise data due to incomplete or unattainable information, measurement errors, unquantifiable variables, or other sources of error. In the DEA literature, many studies have focused on developing methods that incorporate uncertainty into the input/output values; the uncertain data can be reported as fuzzy, stochastic, or interval data. This paper considers the situation where each input/output value is selected from a symmetric box. First, we use a super-efficiency SBM model in the presence of uncertain data to construct the relative efficient frontier, and then apply machine learning algorithms to construct a regression model and establish the absolute efficient frontier. The proposed method has some advantages compared to existing methods. In particular, it can better overcome the problems associated with DEA under uncertain data than the DEA and neural network fusions outlined in the literature, so it can improve fusion efficiency.

    Keywords: Neural Network, Efficiency Evaluation, Super-efficiency SBM, Data Envelopment Analysis
    Esmat Noroozi *, Hamid Sharafi

    The data envelopment analysis technique is used to evaluate the relative efficiency of a set of decision-making units and has been studied in different fields. One of the important issues in data envelopment analysis is sensitivity analysis, and many articles have been presented in this field. Managers sometimes face problems in which allocating a fixed cost to the decision-making units is critical. Since in real problems the primary data are not precise but rather interval, ordinal, or qualitative, this study discusses this issue and presents a model for assigning a fuzzy fixed cost to decision-making units. Moreover, the inputs and outputs of all units are assumed to be fuzzy, and the new cost should be allocated such that the highest number of inefficient units become efficient. Finally, the model is applied to two numerical examples and the results are presented.

    Keywords: Efficiency, Fuzzy Theory, Decision Making Unit, Data Envelopment Analysis, Allocation
  • M. Amiri, A. Ashrafi *
    Ranking decision making units (DMUs) is an important topic in data envelopment analysis (DEA). When efficient DMUs or inefficient DMUs have the same efficiency score, the traditional DEA model usually fails to rank all DMUs. For the sake of comparing and improving the discrimination power of DMUs, some proposed approaches use cooperative game theory for ranking DMUs. In this paper, communication game theory, which includes a transferable utility cooperative game and an undirected graph describing limited cooperation between players, is used to rank DMUs. The idea is that the ranking of DMUs can be done by measuring the effect of removing a subset of DMUs on the total share of the remaining DMUs obtained by the reference frontier share model. In the proposed approach, the players are the DMUs, and the characteristic function measures the increase and decrease in the total share of each DMU. The current paper considers the total share for efficient and inefficient DMUs to rank all DMUs. The proposed approach has been tested on several datasets and compared with the results of previous ranking methods, with which it sometimes coincides. In the empirical study, a complete ranking of DMUs is useful and reasonable.
    Keywords: Data Envelopment Analysis, Communication game, Myerson value, Reference frontier share model
  • Seyed Zaman Hosseini, Farhad Hosseinzadeh Lotfi *, Mahnaz Ahadzadeh Namin
    One of the important applications of data envelopment analysis is to determine the progress and regress of the units under evaluation at two different times, which has been addressed in many papers. Also, one of the features distinguishing the data envelopment analysis technique from other methods is the introduction of achievable and flexible benchmarks. In the present paper, we study the progress and regress of Iranian regional electricity companies during the two consecutive years 2015 and 2016. Since some of the evaluated indicators are semi-positive and semi-negative, we extend Emrooznejad et al. [7] to determine the Malmquist productivity index for semi-positive and semi-negative indicators. Finally, for further illustration, we have used the proposed models to determine the progress and regress of 16 regional electricity companies in Iran, with 3 semi-positive and semi-negative indices in the presence of a limitation on the benchmark, an undesirable index, and 11 completely positive input-natured indices, under constant returns to scale, treated as a black box.
    Keywords: Progress, Regression, Semi-oriented radial measure, Malmquist Productivity Index, Regional Electricity Companies, Data Envelopment Analysis
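The standard CRS Malmquist index that this line of work builds on is the geometric mean of two cross-period distance-function ratios. The sketch below uses illustrative, strictly positive data; the semi-positive/semi-negative extension studied in the paper is not reproduced here.

```python
# Standard (CRS) Malmquist productivity index between two periods:
# geometric mean of the unit's distances to the period-1 and period-2
# frontiers, before and after.  Illustrative data only.
import numpy as np
from scipy.optimize import linprog

def distance(Xref, Yref, x, y):
    """min theta s.t. Xref@lam <= theta*x, Yref@lam >= y, lam >= 0."""
    m, n = Xref.shape
    s = Yref.shape[0]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([np.hstack([-x.reshape(-1, 1), Xref]),
                      np.hstack([np.zeros((s, 1)), -Yref])])
    b_ub = np.r_[np.zeros(m), -y]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

def malmquist(X1, Y1, X2, Y2, o):
    """Malmquist index of DMU o between periods 1 and 2 (>1 = progress)."""
    d11 = distance(X1, Y1, X1[:, o], Y1[:, o])   # period-1 unit, period-1 frontier
    d12 = distance(X1, Y1, X2[:, o], Y2[:, o])   # period-2 unit, period-1 frontier
    d21 = distance(X2, Y2, X1[:, o], Y1[:, o])   # period-1 unit, period-2 frontier
    d22 = distance(X2, Y2, X2[:, o], Y2[:, o])   # period-2 unit, period-2 frontier
    return np.sqrt((d12 / d11) * (d22 / d21))

X1 = np.array([[2.0, 4.0, 3.0]]); Y1 = np.array([[1.0, 1.0, 1.0]])
X2 = np.array([[1.5, 4.0, 3.5]]); Y2 = np.array([[1.2, 1.0, 1.0]])
mi0 = malmquist(X1, Y1, X2, Y2, 0)   # DMU 0 used less input for more output
```

An index above 1 signals progress and below 1 regress, which is how the two-year comparison in the abstract is read.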
  • Reza Rasinojehdehi *, Seyyed Esmaeil Najafi

    Network security is paramount in safeguarding the integrity of computer networks and the data they host. The primary objective of network security is to protect data from cyber-attacks and ensure the overall reliability of the network. A robust network security strategy deploys various solutions to shield data within networks, safeguarding both users and organizations from potential threats. This paper introduces a novel approach to evaluating computer network security using Data Envelopment Analysis (DEA), a mathematical method designed to measure the performance of Decision-Making Units (DMUs) that consume the same types of inputs to yield the same types of outputs. We present a practical application of DEA to assess the security of 10 distinct networks, treating them as DMUs. The resulting performance measurements allow us to classify computer network security into four levels: "terribly insecure," "insecure," "safe," and "very safe." To optimize the discriminating power of DEA, we employ Principal Component Analysis (PCA) to reduce the number of inputs and outputs. This not only enhances the precision of our evaluation but also ensures that the number of DMUs remains well-suited to the analysis. As a rule of thumb, the number of DMUs should be at least three times larger than the sum of the numbers of inputs and outputs to maintain DEA's discriminating power. Through the combined application of DEA and PCA, this research contributes a comprehensive and efficient method for evaluating and classifying computer network security, providing valuable insights for enhancing overall network resilience against cyber threats.

    Keywords: Data envelopment analysis, Principal component analysis, Computer network security, Decision-making unit
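The PCA pre-processing step described above can be sketched as follows. The data are random placeholders, and shifting component scores to be strictly positive is one common convention for making them DEA-admissible, not necessarily the authors' choice.

```python
# PCA dimension reduction so that the rule of thumb
#   n_DMUs >= 3 * (n_inputs + n_outputs)
# holds before running DEA.  Illustrative random data.
import numpy as np

def pca_reduce(M, k):
    """Project rows of M (DMUs x variables) onto the top-k principal
    components, then shift each component to be strictly positive."""
    Z = M - M.mean(axis=0)                    # centre the variables
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:k].T                     # top-k component scores
    return scores - scores.min(axis=0) + 1.0  # make scores DEA-admissible

rng = np.random.default_rng(0)
inputs = rng.uniform(1, 10, size=(10, 6))     # 10 DMUs, 6 raw inputs
outputs = rng.uniform(1, 10, size=(10, 4))    # 10 DMUs, 4 raw outputs
X = pca_reduce(inputs, 2)                     # 6 inputs -> 2 components
Y = pca_reduce(outputs, 1)                    # 4 outputs -> 1 component
assert 10 >= 3 * (X.shape[1] + Y.shape[1])    # rule of thumb now holds
```

With 10 DMUs, the raw 6+4 variables violate the 3-to-1 rule of thumb; the reduced 2+1 variables satisfy it.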
  • Mahsa Nasiri, Mohsen Rostamy-Malkhalifeh, Hadi Bagherzadeh valami

    In recent years, the environmental issue has attracted widespread concern from the international community, as gas waste, water waste, and solid wastes generated in the production process of factories. Recent studies on environmental management have forced commercial organizations to re-evaluate their roles and responsibilities for protecting the natural environment. This study focuses on the DEA environmental assessment via the concept of congestion. Recognizing the congestion of units is one of the most attractive issues in the literature of Data Envelopment Analysis (DEA), because the decision maker (DM) can use this concept to decide whether to increase or decrease the size of a Decision Making Unit (DMU). In the DEA literature, congestion is classified into Undesirable Congestion (UC) and Desirable Congestion (DC). In many real-world situations, we cannot determine the exact value for all data; hence, some parameters are inevitably reported as uncertain data, e.g. stochastic data, fuzzy data, interval data and so on. This study focuses on considering Returns to Damage (RTD) under UC and Damages to Return (DTR) under DC in the situation that the input and desirable and undesirable outputs are reported as interval data. For this purpose, some uncertain models under the different production possibility sets (PPS) are formulated and then we use the robust optimization technique to formulate the equivalent certain models. The potential of the proposed methods is illustrated by a numerical example.

    Keywords: Data Envelopment Analysis, Returns to Damage, Damages to Return, Undesirable Congestion, Desirable Congestion
    Esmaeil Yousefi, Mohammad Maghasedi *, Mohsen Rostami Malkhalifeh, Mohammad Hassan Behzadi

    In the present article, after examining the concept of learning, we discuss education through the methods of work mathematics and understanding mathematics. First, four learning stages are introduced in the form of two pulses. Next, these stages are applied to the two teaching methods of work mathematics and understanding mathematics. We employed these two methods in different classes of general mathematics for engineering students at the Islamic Azad University of Iran. Then, the data acquired were analyzed using data envelopment analysis. The inputs are work mathematics and understanding mathematics, and the outputs are learning in the first and the second pulses. Finally, we obtained an optimal ratio for employing both methods in a course and derived an interpolating polynomial.

    Keywords: learning, unconscious incompetence, conscious incompetence, unconscious competence, work mathematics, understanding mathematics, optimal ratio, Data Envelopment Analysis
  • Mehdi Khazaeyan, Sevan Sohraiee, Amin Mostafaee

    One of the most well-known issues in the Data Envelopment Analysis (DEA) literature is identifying the anchor points of the production possibility set (PPS). Each extreme efficient unit located on the intersection of the strong and weak efficient frontiers of the PPS is called an anchor point. In other words, a decision making unit (DMU) is an anchor point if there is at least one supporting hyperplane at the unit under consideration such that some components of its gradient vector are equal to zero, so that some input or output factors play no role in the performance of that unit. This study presents a new method to identify the anchor points of the PPS under the variable returns to scale (VRS) assumption and in the presence of uncertain data. The proposed method is based on the robust optimization technique and on finding the weak and strong defining supporting hyperplanes passing through the unit under evaluation. The potential of the proposed method is illustrated by a data set that includes 20 banks in Iran.

    Keywords: Data Envelopment Analysis, Anchor point, Variable returns to scale, Defining supporting hyperplane, Robust optimization
Note
  • Results are sorted by publication date.
  • The keyword was searched only in the keywords field of the articles. To remove unrelated results, the search covered only journals on the same subject as the source journal.