Table of Contents

Scientia Iranica
Volume 19, Issue 3, 2012

  • Transactions E: Industrial Engineering
  • Publication date: 1391/04/09 (Iranian calendar)
  • Number of articles: 9
  • S.S. Hashemin, S.M.T. Fatemi Ghomi, M. Modarres Page 841
    In this paper, we develop an approach to optimally allocate a limited nonrenewable resource among the activities of a project represented by a PERT-Type Network (PTN). The project needs to be completed within a specified due date, and the objective is to maximize the probability of completing the project on time. The duration of each activity is an arbitrary discrete random variable that also depends on the amount of consumable resource allocated to it. Networks are categorized, on the basis of their structure, as either reducible or irreducible, and an analytical algorithm is presented for each structure. The algorithms are illustrated through examples.
    Keywords: PERT, Resource allocation, Project management, Dynamic programming, Optimization
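    Illustrative sketch (not the paper's algorithms, which handle general reducible and irreducible structures): for a purely series network, the allocation problem can be attacked by dynamic programming over (activity, remaining resource, remaining time). The duration distributions, resource budget and due date below are hypothetical.

      from functools import lru_cache

      # Hypothetical data: for each activity, a duration pmf {duration: probability}
      # keyed by the number of nonrenewable resource units allocated to it.
      DISTS = [
          {0: {4: 0.5, 6: 0.5}, 1: {3: 0.6, 5: 0.4}, 2: {2: 0.8, 4: 0.2}},
          {0: {5: 0.7, 8: 0.3}, 1: {4: 0.7, 6: 0.3}, 2: {3: 0.9, 5: 0.1}},
          {0: {6: 0.6, 9: 0.4}, 1: {5: 0.6, 7: 0.4}, 2: {4: 0.7, 6: 0.3}},
      ]
      TOTAL_RESOURCE = 3   # nonrenewable units to split among the activities
      DUE_DATE = 16        # the project must finish within this time

      @lru_cache(maxsize=None)
      def best_prob(i, res_left, time_left):
          """Max probability of completing activities i..end within time_left."""
          if time_left < 0:
              return 0.0
          if i == len(DISTS):
              return 1.0
          best = 0.0
          for units in range(res_left + 1):        # units given to activity i
              pmf = DISTS[i].get(units)
              if pmf is None:
                  continue
              p = sum(prob * best_prob(i + 1, res_left - units, time_left - d)
                      for d, prob in pmf.items())
              best = max(best, p)
          return best

      print(best_prob(0, TOTAL_RESOURCE, DUE_DATE))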
  • C.J. Cheng, S.W. Chiu, C.B. Cheng, J.Y. Wu Page 849
    The present study attempts to establish a framework for computing customer lifetime values for a company in the auto repair and maintenance industry. The customer lifetime value defined in this study consists of the current and future values of a customer, which involve an estimation of lifetime length, future purchasing behavior and the profit associated with each behavior of the customer. The proposed framework contains three groups of techniques to obtain these estimates from historical customer transactions. The first group includes a logistic regression model and a decision tree model to estimate the churn probability of a customer and, further, to predict the lifetime length of the customer. The second group comprises a regression analysis to identify the critical variables that affect a customer’s purchasing behavior, and a Markov chain to model the transition probabilities of behavior change. Finally, the third group contains two neural networks to predict the profits contributed by a customer under various purchasing behaviors. The proposed framework is demonstrated with the historical customer transactions of an auto repair and maintenance company in Taiwan.
    Keywords: Customer lifetime value, Auto repair and maintenance industry, Data mining, Markov chain, Decision tree, Neural networks
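    To illustrate the Markov chain component of such a framework (this is a generic sketch, not the paper's model), the code below computes an expected discounted future value from a hypothetical behavior-state transition matrix and per-state profit estimates, with churn modeled as an absorbing state.

      import numpy as np

      # Hypothetical behavior states: 0 = heavy user, 1 = light user, 2 = churned.
      # One-period transition probabilities (rows sum to 1); churn is absorbing.
      P = np.array([
          [0.70, 0.20, 0.10],
          [0.25, 0.55, 0.20],
          [0.00, 0.00, 1.00],
      ])
      profit = np.array([300.0, 80.0, 0.0])  # expected profit per period in each state
      discount = 0.9                          # per-period discount factor
      horizon = 20                            # number of future periods considered

      def future_value(start_state):
          """Expected discounted profit over the horizon, starting from start_state."""
          state_dist = np.zeros(len(profit))
          state_dist[start_state] = 1.0
          value = 0.0
          for t in range(1, horizon + 1):
              state_dist = state_dist @ P               # evolve the state distribution
              value += (discount ** t) * (state_dist @ profit)
          return value

      print(f"Future value of a current heavy user: {future_value(0):.2f}")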
  • Amirhossein Amiri, Ramezan Khosravi Page 856
    In a high quality process, the fraction of nonconforming items is very low, and standard Shewhart control charts are no longer useful. The Cumulative Count of Conforming (CCC) control chart, which counts the number of conforming items between two successive nonconforming ones, has been shown to be effective in monitoring high quality processes. When the CCC control chart signals an out-of-control condition, process engineers should search for the source of the assignable causes. Knowing the exact time of the process change helps them reduce the time needed to identify the assignable causes. This paper provides a maximum likelihood estimator for the change point of the nonconforming level of a high quality process subject to a linear trend. A Monte Carlo simulation is then applied to evaluate the performance of the proposed estimator. In addition, the proposed estimator is compared with the MLE of the process fraction nonconforming derived under a single step change. The results show that the proposed estimator outperforms the MLE designed for step change when a linear trend disturbance is present in the process.
    Keywords: Drift, Change point, CCC control chart, Statistical process control, Maximum likelihood estimator
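    A simplified sketch of the estimation idea (not the authors' derivation): CCC counts between nonconforming items are geometric with parameter p; after an unknown change point the nonconforming level drifts linearly; the change point is estimated by maximizing the log-likelihood over candidate change points, with the drift slope handled here by a coarse grid. All parameter values are hypothetical.

      import math
      import random

      random.seed(1)
      p0, true_tau, true_beta, T = 0.001, 60, 0.00005, 100

      # Simulate CCC data: the count of items inspected until a nonconforming one.
      def geom(p):
          return int(math.log(random.random()) / math.log(1.0 - p)) + 1

      counts = [geom(p0 if i <= true_tau else p0 + true_beta * (i - true_tau))
                for i in range(1, T + 1)]

      def loglik(tau, beta):
          """Geometric log-likelihood: p = p0 up to tau, then a linear upward drift."""
          ll = 0.0
          for i, x in enumerate(counts, start=1):
              p = p0 if i <= tau else min(0.999, p0 + beta * (i - tau))
              ll += (x - 1) * math.log(1.0 - p) + math.log(p)
          return ll

      # Profile the likelihood over candidate change points and a coarse slope grid.
      betas = [k * 1e-5 for k in range(1, 11)]
      tau_hat = max(range(1, T), key=lambda tau: max(loglik(tau, b) for b in betas))
      print("Estimated change point:", tau_hat)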
  • S.T.A. Niaki, M. Khedmati Page 862
    In multi-attribute process monitoring, when a control chart signals an out-of-control condition indicating the existence of a special cause, knowing when the process really changed (the change point) accelerates the identification of the source of the special cause and allows corrective measures to be taken sooner. This, of course, results in a considerable amount of savings in time and money. Since many real-world multi-attribute processes are Poisson and most process changes are step changes, a new method is proposed in this paper to derive the maximum likelihood estimator of the time of a step change in the mean vector of multivariate Poisson processes. In this method, two transformations are first employed to nearly remove the inherent skewness involved in multi-attribute processes, making them approximately multivariate normal, and to reduce the correlations between the attributes. Then, a T² control chart is employed for out-of-control detection, and a maximum likelihood estimator is used to estimate the change point. The performance of the proposed methodology is illustrated using simulation experiments, which show that the proposed procedure is relatively accurate and reliable in detecting and estimating the change point.
    Keywords: Multi-attribute processes, Change point estimation, Root transformation, Symmetric square root transformation, Maximum likelihood estimator
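    A rough sketch of the overall flow under simplifying assumptions (independent Poisson attributes and a plain square-root transformation instead of the paper's two transformations): transform the counts toward normality, compute the Hotelling T² statistics a control chart would monitor, and estimate a step-change point by profiling a normal likelihood over candidate change times. All numbers are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)
      lam0, lam1, tau, T = np.array([4.0, 9.0]), np.array([7.0, 9.0]), 70, 100

      # Bivariate Poisson attribute data with a step change in the mean vector at tau.
      X = np.vstack([rng.poisson(lam0, size=(tau, 2)),
                     rng.poisson(lam1, size=(T - tau, 2))])
      Y = np.sqrt(X)                 # simple square-root transform toward normality

      # In-control mean and covariance on the transformed scale (hypothetical Phase I data).
      phase1 = np.sqrt(rng.poisson(lam0, size=(1000, 2)))
      mu0, Sinv = phase1.mean(axis=0), np.linalg.inv(np.cov(phase1, rowvar=False))

      # Hotelling T^2 statistic for each observation (what the chart would plot).
      d = Y - mu0
      T2 = np.einsum('ij,jk,ik->i', d, Sinv, d)

      def loglik(t):
          """Normal log-likelihood: in-control mean up to t, post-t sample mean afterwards."""
          before, after = Y[:t] - mu0, Y[t:] - Y[t:].mean(axis=0)
          return -0.5 * (np.einsum('ij,jk,ik->', before, Sinv, before)
                         + np.einsum('ij,jk,ik->', after, Sinv, after))

      tau_hat = max(range(1, T), key=loglik)
      print("Largest T^2:", round(float(T2.max()), 1), " estimated change point:", tau_hat)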
  • Z. Yue Page 872
    In this paper, we investigate the group decision making problem in which the attribute values are given as interval numbers and the attribute weights can be determined appropriately. The contribution of this paper is to determine the weights of Decision Makers (DMs) using an extended projection technique. We define the ideal decision as the average of all individual decisions. The weights of the DMs are then determined according to the extended projection of each individual decision onto the ideal decision. Finally, we give an example to illustrate the developed approach.
    Keywords: Multiple attributes group decision making, Weight of decision maker, Extended projection method, Interval number, Ideal decision
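    A compact sketch of the general idea (the exact extended projection formula in the paper may differ): each DM's interval-valued decision matrix is flattened into a vector, the ideal decision is the average of these vectors, and each DM's weight is taken proportional to the projection of their vector onto the ideal one. The matrices below are hypothetical.

      import numpy as np

      # Three hypothetical DMs rating 2 alternatives on 2 attributes with
      # interval numbers [lower, upper]; shape = (alternatives, attributes, 2).
      decisions = [
          np.array([[[0.6, 0.8], [0.5, 0.7]], [[0.3, 0.5], [0.7, 0.9]]]),
          np.array([[[0.5, 0.7], [0.6, 0.8]], [[0.4, 0.6], [0.6, 0.8]]]),
          np.array([[[0.7, 0.9], [0.4, 0.6]], [[0.2, 0.4], [0.8, 1.0]]]),
      ]

      vectors = [d.ravel() for d in decisions]      # flatten each interval matrix
      ideal = np.mean(vectors, axis=0)              # ideal decision = average of all DMs

      # Projection of each individual decision onto the ideal decision.
      projections = np.array([v @ ideal / np.linalg.norm(ideal) for v in vectors])
      weights = projections / projections.sum()     # normalize to obtain DM weights

      for k, w in enumerate(weights, start=1):
          print(f"Weight of DM{k}: {w:.3f}")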
  • M. Aslam, S.T.A. Niaki, M. Rasool Fallahnezhad Page 879
    In this research, Repetitive Group Sampling (RGS) plans are developed for the Weibull and generalized exponential distributions. To design the proposed plans, the median lifetime is first used as the quality parameter. Then, a decision-making framework is developed based on type I and type II errors. Next, based on acceptable and limiting quality level criteria, tables are provided for selecting the parameters of the proposed decision-making framework. The advantages of the proposed method over single sampling plans are discussed at the end.
    Keywords: Acceptance sampling plans, Repetitive group sampling, Weibull distribution, Generalized exponential distribution
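    The simulation below sketches how a generic repetitive group sampling plan operates under a truncated Weibull life test (the paper derives the plan parameters analytically from the median life; the parameters and quality levels here are hypothetical): draw n items, count failures before the test time, accept if at most c1, reject if more than c2, and otherwise repeat with a fresh sample.

      import random

      random.seed(0)
      n, c1, c2 = 20, 1, 3      # hypothetical RGS plan parameters
      test_time = 0.5           # truncated life-test duration
      shape = 2.0               # Weibull shape parameter (assumed known)

      def lot_acceptance_prob(scale, trials=10000):
          """Estimate the probability of accepting a lot with Weibull scale 'scale'."""
          accepted = 0
          for _ in range(trials):
              while True:                           # resample until a decision is reached
                  failures = sum(random.weibullvariate(scale, shape) <= test_time
                                 for _ in range(n))
                  if failures <= c1:
                      accepted += 1
                      break
                  if failures > c2:
                      break
          return accepted / trials

      # Points on the operating characteristic curve (hypothetical quality levels).
      print("P(accept | good lot):", lot_acceptance_prob(scale=2.0))
      print("P(accept | poor lot):", lot_acceptance_prob(scale=0.5))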
  • R. Noorossana, M. Heydari Page 885
    When a control chart signals an out-of-control condition, a search begins to identify and eliminate the cause of the disturbance. Identification of the time when a change manifests itself in the process, referred to as the change point, can help process engineers perform root cause analyses effectively. In this paper, a Maximum Likelihood Estimator (MLE) is proposed to estimate the time of a monotonic change in the variance of a normal quality characteristic. Using Monte Carlo simulation, the performance of the proposed estimator is studied and comprehensively compared to the existing maximum likelihood estimators for simple step and linear trend changes. This simulation is repeated for a number of monotonic change types, following a signal from a Shewhart S control chart. Numerical results reveal that the proposed estimator provides appropriate and robust estimation with regard to the magnitude and type of change.
    Keywords: Statistical process control, Shewhart control chart, Change point estimation, Maximum likelihood estimator, Monotonic change
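    An illustrative sketch of the estimation setting only (the paper's estimator additionally exploits the monotonic-change structure, which is not reproduced here): normal observations whose standard deviation drifts upward after an unknown time are simulated, and a simple step-change likelihood is profiled over candidate change points, assuming the in-control variance is known.

      import math
      import random

      random.seed(3)
      sigma0, tau, T = 1.0, 60, 100

      # Normal observations; after tau the standard deviation drifts upward linearly.
      data = [random.gauss(0.0, sigma0 if i <= tau else sigma0 + 0.05 * (i - tau))
              for i in range(1, T + 1)]

      def loglik(t):
          """Normal log-likelihood: known sigma0 up to t, one MLE variance after t."""
          before, after = data[:t], data[t:]
          ll = sum(-0.5 * (x / sigma0) ** 2 - math.log(sigma0) for x in before)
          var_hat = max(sum(x * x for x in after) / len(after), 1e-12)
          ll += sum(-0.5 * x * x / var_hat - 0.5 * math.log(var_hat) for x in after)
          return ll

      tau_hat = max(range(1, T), key=loglik)
      print("Estimated change point:", tau_hat)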
  • A. Mohammadi, I. Nakhaei Kamal Abadi Page 895
    In this paper, we propose a heuristic algorithm, named the Lotto–Meta heuristic, to solve small instances of the lottery problem using its set covering formulation. The algorithm uses a randomized method that allots a priority to each column to be included in the solution. A neighborhood search strategy is incorporated into the algorithm to enhance the search and to balance the exploration and exploitation procedures. Computational results show that our method improves upon the best known solutions for a number of hard instances.
    Keywords: Integer programming, Set covering problem, Lottery problem, Lotto–Meta heuristic, Neighborhood search
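    A toy sketch of the two ingredients the abstract mentions, in generic form (randomized, priority-based column selection for a set-covering instance plus a simple drop-redundant-column neighborhood search); the instance and the moves are hypothetical and this is not the Lotto–Meta heuristic itself.

      import random

      random.seed(7)
      # Hypothetical set-covering instance: a universe of elements and candidate columns.
      universe = set(range(12))
      columns = [set(random.sample(range(12), random.randint(3, 6))) for _ in range(30)]
      for e in universe:                 # ensure every element appears in some column
          random.choice(columns).add(e)

      def randomized_construct():
          """Pick columns with probability proportional to a coverage-based priority."""
          uncovered, chosen = set(universe), []
          while uncovered:
              priorities = [len(c & uncovered) for c in columns]
              idx = random.choices(range(len(columns)), weights=priorities)[0]
              chosen.append(idx)
              uncovered -= columns[idx]
          return chosen

      def local_search(solution):
          """Neighborhood move: drop any column whose removal keeps full coverage."""
          for idx in sorted(solution, key=lambda i: len(columns[i])):
              rest = [j for j in solution if j != idx]
              covered = set().union(*(columns[j] for j in rest)) if rest else set()
              if covered >= universe:
                  solution = rest
          return solution

      best = None
      for _ in range(200):               # multi-start: construct, then improve
          sol = local_search(randomized_construct())
          if best is None or len(sol) < len(best):
              best = sol
      print("Best cover uses", len(best), "columns:", sorted(best))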
  • M.H. Fazel Zarandi, S. Davari, S.A. Haddad Sisakht Page 902
    This paper addresses the multiple allocation hub set-covering problem, considering backup coverage and mandatory dispersion of hubs. In the context of this paper, it is assumed that a flow is covered if there are at least Q possible routes to satisfy its demand within a time bound. Moreover, a lower limit is imposed on the distance between hubs in order to provide a degree of dispersion in the solution. A mathematical formulation of this problem is given, which has O(n²) variables and constraints. Computational experiments carried out on the well-known CAB dataset give useful insights concerning model behavior and its sensitivity to parameters.
    Keywords: Hub location, Network design, Hub covering problem, Multiple allocation, Backup coverage
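    A brute-force sketch of the covering logic under strong simplifications (coverage of a flow here uses a single hub on the route rather than a hub pair, so this is not the paper's multiple allocation formulation): a hub set is feasible if every origin-destination flow can be routed through at least Q open hubs within the time bound and all open hubs respect the minimum inter-hub distance; the smallest feasible set is found by enumeration. Coordinates, Q, the time bound and the dispersion limit are hypothetical.

      import itertools
      import math
      import random

      random.seed(5)
      n, Q, time_bound, min_hub_dist = 8, 2, 85.0, 20.0

      # Hypothetical node coordinates; travel time taken as Euclidean distance.
      pts = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(n)]
      dist = [[math.dist(pts[i], pts[j]) for j in range(n)] for i in range(n)]

      def covers(i, j, k):
          """Hub k covers flow (i, j) if routing i -> k -> j meets the time bound."""
          return dist[i][k] + dist[k][j] <= time_bound

      def feasible(hubs):
          # Dispersion: every pair of open hubs must be at least min_hub_dist apart.
          if any(dist[a][b] < min_hub_dist for a, b in itertools.combinations(hubs, 2)):
              return False
          # Backup coverage: every flow needs at least Q open hubs able to cover it.
          return all(sum(covers(i, j, k) for k in hubs) >= Q
                     for i in range(n) for j in range(n) if i != j)

      best = None
      for size in range(1, n + 1):       # the smallest feasible hub set wins
          for hubs in itertools.combinations(range(n), size):
              if feasible(hubs):
                  best = hubs
                  break
          if best:
              break
      print("Minimum feasible hub set (node indices):", best)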