Table of Contents

Journal of Industrial Engineering International
Volume: 18, Issue: 2, Spring 2022

  • Publication date: 1402/04/20 (Solar Hijri)
  • Number of articles: 8
  • Muktar Umar Danjuma, Ibrahim Yusuf, Nasir Ahmad Sufi Pages 1-18

    Computer network topologies are complex systems made up of large subsystems arranged in series-parallel configurations. This paper deals with the mathematical modeling of reliability metrics used to determine the strength, reliability, and performance of a computer distributed system deployed at two locations, A and B, and configured as a series-parallel system. The system consists of six series-parallel subsystems distributed between the two locations. Location A contains three subsystems: four clients running in parallel as subsystem 1, six directory servers running in parallel as subsystem 2, and two replica servers running in parallel as subsystem 3. Location B mirrors this arrangement: two replica servers running in parallel as subsystem 4, six directory servers running in parallel as subsystem 5, and four clients running in parallel as subsystem 6. Using a Markovian process, the goal is to build mathematical models of reliability, dependability, availability, and maintainability in order to assess the system's performance, strength, and effectiveness. The differential-difference equations for each subsystem are obtained from the schematic diagrams and solved iteratively. RAMD (reliability, availability, maintainability, and dependability) analysis is used to quantify the system's performance, and the impact of subsystem failure and repair rates on each of these measures is tabulated. Inspecting the network's essential subsystems and prioritizing their maintenance improves the system's stability, maintainability, availability, and dependability while also lowering maintenance costs.

    Keywords: Mirrored server, Replication, Directory servers, Distributed system, Repair rate, Failure rate
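
    Since the abstract leans on standard Markov steady-state results, a minimal sketch may help: a repairable unit with constant failure rate lam and repair rate mu has steady-state availability mu/(lam + mu), a parallel subsystem works while at least one unit works, and series subsystems multiply. The subsystem sizes below follow the abstract; the failure and repair rates are illustrative placeholders, not the paper's values.

      # Hedged sketch: steady-state availability of the series-parallel
      # system described above. Rates are illustrative, not from the paper.
      def unit_availability(lam, mu):
          # Steady-state availability of one repairable unit.
          return mu / (lam + mu)

      def parallel_availability(n, lam, mu):
          # A parallel subsystem fails only if all n units are down.
          a = unit_availability(lam, mu)
          return 1.0 - (1.0 - a) ** n

      # (units, failure rate, repair rate) per subsystem, locations A then B.
      subsystems = [
          (4, 0.02, 0.5),  # subsystem 1: clients, location A
          (6, 0.01, 0.4),  # subsystem 2: directory servers, location A
          (2, 0.03, 0.6),  # subsystem 3: replica servers, location A
          (2, 0.03, 0.6),  # subsystem 4: replica servers, location B
          (6, 0.01, 0.4),  # subsystem 5: directory servers, location B
          (4, 0.02, 0.5),  # subsystem 6: clients, location B
      ]

      # Subsystems are in series, so the system availability is the product.
      availability = 1.0
      for n, lam, mu in subsystems:
          availability *= parallel_availability(n, lam, mu)
      print(f"steady-state system availability ~ {availability:.6f}")
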
  • Mahtab Hajghasem, AmirReza Abtahi, Reza Yousefi Zenouz Pages 19-40

    Research shows that an increase in the level of automation (LoA) can affect production quality, cost efficiency, and performance on a large scale. Increasing the LoA is therefore essential, and automation can help overcome a variety of production problems. This paper develops a new taxonomy to measure and increase the LoA in the cosmetics industry. The proposed taxonomy is presented as a five-dimensional (5-D) matrix: the rows correspond to the LoA considering new technologies such as Blockchain, Cloud, and the Internet of Things (IoT), and the columns correspond to Information, Plan, Act, Control, and Decision. This taxonomy can help managers clearly define the LoA and, based on the main factors in the cosmetics industry, increase the current LoA using existing resources. The DYNAMO++ methodology was also employed to measure the current LoA of a cosmetics factory, and three sets of suggestions were developed to increase the LoA in the factory under study. These suggestion sets were then compared in terms of the key parameters of cost, productivity, quality, and processing time via simulation.

    Keywords: Levels of automation, Cosmetics industry, DYNAMO++, IoT, Cloud computing
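
    As a reading aid, here is one plausible way to encode cells of the paper's 5-D taxonomy in code. The technology and function names come from the abstract; the ordinal 1-7 scale and the sample assessments are assumptions, not the paper's data.

      from dataclasses import dataclass

      TECHNOLOGIES = ["Blockchain", "Cloud", "IoT"]  # row dimension (abstract)
      FUNCTIONS = ["Information", "Plan", "Act", "Control", "Decision"]  # columns

      @dataclass
      class LoACell:
          technology: str
          function: str
          level: int  # assumed ordinal LoA scale, e.g. 1 (manual) .. 7 (autonomous)

      # A current-state assessment fills one cell per (technology, function);
      # the values here are purely illustrative.
      current_loa = [LoACell("IoT", "Information", 3), LoACell("Cloud", "Plan", 2)]
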
  • Yousef Badraghi, Shokrollah Ziari, Naghi Shoja, Amir Gholam Abri Pages 42-55

    The main drawbacks of data envelopment analysis (DEA) are its lack of discrimination power among efficient decision-making units (DMUs) and the scattering of input-output weights. In DEA, a mismatch of the input or output weights across the DMUs under consideration sometimes assigns higher weights to variables of lesser significance and lower, or even zero, weights to variables of high significance. Accordingly, most DEA models identify more than one efficient DMU when evaluating the relative efficiency of decision-making units. The present paper aims to overcome these shortcomings. To this end, we present a novel DEA model based on minimizing the sum of absolute deviations of all input-output weights from each other. The proposed model enhances discrimination power and balances the dispersion of the input-output weights. Finally, well-known numerical experiments are used to demonstrate the efficiency and validity of the suggested model.

    Keywords: Data envelopment analysis, Discrimination power, Dispersion of weights, Scale transformation
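
    The core device the abstract describes, minimizing the sum of pairwise absolute deviations among the input-output weights, admits a standard LP linearization: introduce d_pq >= |w_p - w_q| via two inequalities per pair. The two-stage sketch below (a classic CCR multiplier model, then deviation minimization at the fixed efficiency score) is one plausible reading of the proposal on toy data; the paper's exact model may differ.

      import numpy as np
      from scipy.optimize import linprog

      # Toy data: 3 DMUs, 2 inputs, 1 output (illustrative values only).
      X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])  # one row per DMU
      Y = np.array([[1.0], [1.0], [1.0]])

      def ccr_efficiency(o):
          # Stage 1: CCR multiplier form for DMU o, variables z = [v1, v2, u].
          c = np.concatenate([np.zeros(2), -Y[o]])       # maximize u.y_o
          A_ub = np.hstack([-X, Y])                      # u.y_j - v.x_j <= 0
          A_eq = np.concatenate([X[o], [0.0]])[None, :]  # v.x_o = 1
          res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(X)),
                        A_eq=A_eq, b_eq=[1.0], bounds=[(1e-6, None)] * 3)
          return -res.fun

      def balanced_weights(o):
          # Stage 2: keep the CCR score, minimize pairwise weight deviations.
          theta = ccr_efficiency(o)
          pairs = [(0, 1), (0, 2), (1, 2)]               # over w = (v1, v2, u)
          n_w, n_d = 3, len(pairs)
          c = np.concatenate([np.zeros(n_w), np.ones(n_d)])
          rows = [np.concatenate([-X[j], Y[j], np.zeros(n_d)])
                  for j in range(len(X))]                # DEA feasibility
          for k, (p, q) in enumerate(pairs):             # |w_p - w_q| <= d_pq
              for sgn in (1.0, -1.0):
                  r = np.zeros(n_w + n_d)
                  r[p], r[q], r[n_w + k] = sgn, -sgn, -1.0
                  rows.append(r)
          A_eq = np.array([np.concatenate([X[o], [0.0], np.zeros(n_d)]),
                           np.concatenate([[0.0, 0.0], Y[o], np.zeros(n_d)])])
          res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                        A_eq=A_eq, b_eq=[1.0, theta],
                        bounds=[(1e-6, None)] * n_w + [(0.0, None)] * n_d)
          return res.x[:n_w]

      print(np.round(balanced_weights(0), 4))
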
  • Yaser Vahedi Geshniani, Bijan Rahmani, Reza Kamranrad Pages 56-71

    Many real-world problems involve multivariate processes with categorical characteristics that are represented by contingency tables. A contingency table is a tool for showing the simultaneous relationship of two or more categorical variables; it is modeled by a log-linear link function and monitored over time. In some statistical process monitoring (SPM) applications, we face a multiplicity of variables and, consequently, a large number of nominal classes of the response variable. To model them, a log-linear model based on large-scale contingency tables is used; such processes are called nominal large-scale categorical multivariate processes. In monitoring this type of process, the large dimensions of the contingency tables degrade the performance of control charts. For this reason, a new approach based on clustering in correspondence analysis has been developed to reduce the effect of large dimensions and improve the performance of the control charts in detecting out-of-control states. The performance of the control charts has been evaluated in simulation studies, and the results indicate that the proposed approach effectively reduces the impact of the contingency table dimensions on chart performance. In addition, to demonstrate the efficiency of the proposed methods, a real case study in the field of renewable energy is presented, the results of which confirm the proper performance of the proposed control charts in detecting out-of-control states.

    Keywords: Large-scale contingency table, Genetic algorithm, Log-linear model, Correspondence analysis, Statistical process monitoring
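
    A minimal sketch of the correspondence-analysis step the abstract builds on: CA coordinates are obtained from the SVD of the standardized residuals of the contingency table, and categories with nearby coordinates are candidates for merging, which shrinks the table before control charting. The table values are illustrative, and the paper's genetic-algorithm clustering is not reproduced.

      import numpy as np

      N = np.array([[20.0, 5.0, 10.0],
                    [8.0, 15.0, 7.0],
                    [4.0, 6.0, 25.0]])       # toy contingency table
      P = N / N.sum()                         # correspondence matrix
      r, c = P.sum(axis=1), P.sum(axis=0)     # row and column masses
      S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
      U, sig, Vt = np.linalg.svd(S, full_matrices=False)
      row_coords = (U * sig) / np.sqrt(r)[:, None]    # principal row coordinates
      col_coords = (Vt.T * sig) / np.sqrt(c)[:, None]
      # Categories that land close together in the leading dimensions behave
      # similarly and can be clustered to reduce the table's dimensions.
      print(np.round(row_coords[:, :2], 3))
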
  • Ibrahim Yusuf, Abdullahi Sanusi Pages 72-91

    This study investigates reliability measures used to determine the strength of a serial manufacturing system comprising three subsystems A, B, and C in the form of units, conveyors, and processors. Subsystem A has three parallel active units, whereas subsystems B and C each have two. The system is analyzed using linear differential-difference equations, the supplementary variable technique, and the Gumbel-Hougaard family of copulas to obtain expressions for the reliability measures that determine system strength, such as availability, reliability, mean time to failure (MTTF), and the profit function. Numerical examples are provided to illustrate the results and to analyze the effects of various system parameters. The study may assist manufacturing industries, reliability engineers, maintenance managers, system designers, and repairers in alleviating some of the challenges faced in manufacturing and industrial systems operating in harsh environments or under unfavorable weather conditions.

    Keywords: Reliability, Availability, Mean time to failure, Profit, Manufacturing system
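
    For readers unfamiliar with the coupling device the abstract names, the Gumbel-Hougaard family of copulas has the closed form C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)) with theta >= 1, where theta = 1 recovers independence. A direct sketch with illustrative arguments:

      import math

      def gumbel_hougaard(u, v, theta):
          # C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1.
          s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
          return math.exp(-s ** (1.0 / theta))

      # Joint probability of two coupled repair events whose marginal
      # probabilities are both 0.9, at a moderate (illustrative) dependence.
      print(gumbel_hougaard(0.9, 0.9, theta=2.0))
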
  • Malihe Ebrahimi Pages 92-101

    In recent years, reverse logistics has received growing research attention. Reverse logistics involves both backward and forward flows of products, so customers are not the end of the flow. It offers environmental and economic benefits, such as recovering the value of returned products and meeting environmental requirements. In this paper, a new multi-objective mixed-integer non-linear program is proposed to minimize total cost and air pollution. Decreasing carbon emissions addresses the environmental aspect, i.e., the second objective (minimizing air pollution). The new closed-loop supply chain (CLSC) model is a multi-period inventory-location problem. Demand in this model depends on the green technology and quality levels. Returned products are disassembled and sorted; usable raw materials are sent to the manufacturers, and the rest are disposed of. The LP-metric and utility function (weighted-sum) methods are applied to obtain Pareto-optimal solutions. Finally, a numerical example is used to validate the new model.

    Keywords: Closed-loop supply chain, Returned products, Quality, Green technology, Bi-objective
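
    The LP-metric method named in the abstract scalarizes the two objectives by their normalized distance from the ideal point, i.e. each objective's individual optimum. A minimal sketch, with illustrative cost and emission values:

      def lp_metric(f, f_star, w, p=1.0):
          # Weighted l_p distance of objective values f from ideal values f_star.
          return sum(wi * abs((fi - fsi) / fsi) ** p
                     for fi, fsi, wi in zip(f, f_star, w)) ** (1.0 / p)

      # Cost and emission values of a candidate solution vs. their ideal
      # (single-objective) optima; all numbers are illustrative.
      print(lp_metric(f=[120.0, 35.0], f_star=[100.0, 30.0], w=[0.6, 0.4]))
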
  • Saeed Sadeghi, Mohammad Fallah, Esmaeil Najafi Pages 102-122

    Many practical decision-making problems involve a significant level of data uncertainty. In such cases, modeling the uncertainty involved is critical to making informed decisions. The set-based robust optimization approach is one of the most efficient techniques for finding optimal decisions in problems involving uncertain data. The main concern with this technique is over-conservatism. This drawback has been widely investigated, and several robust formulations have been developed in the literature to deal with it. However, research is still ongoing to obtain effective formulations for handling uncertainty. In this study, we derive a robust counterpart formulation for an uncertain linear programming problem under a new uncertainty set defined by pairwise comparison of the perturbation variables. The performance of the proposed robust formulation is evaluated in numerical studies and in terms of several performance metrics. For this purpose, robust counterpart models corresponding to production-mix sample problems are solved at different protection levels. Then, for each solution obtained, the violation probability is calculated using a Monte Carlo simulation approach. The results reveal that the proposed method outperforms existing ones.

    Keywords: Perturbation variables, Robust counterpart optimization, Uncertain coefficients, Uncertainty set
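
    The violation-probability check the abstract describes can be sketched directly: fix a candidate robust solution, sample the perturbation variables, and count how often the uncertain constraint fails. The nominal coefficients, box perturbation model, and candidate solution below are all illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      a_bar = np.array([4.0, 3.0])   # nominal constraint coefficients
      a_hat = np.array([0.5, 0.4])   # maximum coefficient perturbations
      b = 10.0                       # right-hand side
      x = np.array([1.2, 1.5])       # candidate robust solution

      trials = 100_000
      zeta = rng.uniform(-1.0, 1.0, size=(trials, 2))  # perturbation variables
      lhs = (a_bar + zeta * a_hat) @ x                 # realized constraint values
      print(f"estimated violation probability: {np.mean(lhs > b):.4f}")
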
  • Hamid Amiri, Rasoul Shafaei Pages 123-140
