Table of Contents

Production and Operations Management - Volume 10, Issue 1 (Serial No. 18, Spring and Summer 2019)
  • Volume 10, Issue 1 (Serial No. 18, Spring and Summer 2019)
  • Publication date: 1398/02/01
  • Number of titles: 10
|
  • Parisima Pakravan, Javad Behnamian* Pages 1-23
    In the undesirable facility location problem, unlike the desirable case, facilities are placed as far as possible from the areas they serve. This study addresses the location of such facilities. It focuses on the term "not in my backyard" (NIMBY), which refers to the social phenomenon in which residents oppose siting undesirable facilities near their homes; examples of such facilities include power transmission lines and recycling centers. Because the degree of pollution produced by these facilities is uncertain, this study evaluates the problem, for the first time, under probabilistic uncertainty. The problem is considered in discrete space. Three possible cases are considered for the range of variation of the two pollution parameters; since the relationship between the main and marginal pollution degrees and the size of their difference are also unknown, the scenarios are defined by the degree of difference between these two parameters. Using these scenarios, the importance of this difference in the undesirable facility location problem is examined. The paper presents the mathematical model of the problem, approaches for dealing with uncertainty, stochastic programming formulations, and the method used for the problem under study. Because the problem is NP-hard, a simulated annealing metaheuristic is proposed for solving large instances. Numerical experiments evaluate and validate the mathematical model and the proposed algorithm, whose performance on various problems is compared with a genetic algorithm from the literature and shown to be superior.
    Keywords: undesirable facility location, uncertainty, simulated annealing, genetic algorithm
  • Mohammad Khanbabaei, Farzad Movahedi Sobhani*, Mahmood Alborzi, Reza Radfar Pages 25-45
    To improve their performance, organizations must attend to process improvement. The main difficulty is the large volume of processes, together with the wide variety of their features, which increases the complexity of process improvement methods. Earlier methods cannot improve processes at this scale. A data mining approach supports improvement methods by identifying valuable patterns hidden in large volumes of processes. This paper develops a framework for applying data mining techniques to extract such hidden patterns with the aim of generating improvement suggestions. To evaluate the proposed framework, a real set of processes and their features was collected. Classification, clustering, and feature selection algorithms were then used to identify valuable patterns in the large volume of processes. After these patterns were evaluated, improvement suggestions were derived from them. The results show that the identified patterns can support process improvement actions by providing improvement suggestions.
    Keywords: process improvement, data mining, business processes
  • Faranak Hosseinzadeh Saljooghi*, Amir Rahimi Pages 47-63
    Supply chain management (SCM) is a suitable tool for simultaneously improving economic, social, and environmental performance, and evaluating its efficiency is highly important. Data envelopment analysis (DEA) is one of the appropriate methods for evaluating SCM. By modeling the supply chain as a network DEA model, supply chain efficiency is computed over multiple stages. This study computes the efficiency and returns to scale (RTS) of supply chain management in resin manufacturing companies within the framework of network DEA models. Returns to scale are examined for both crisp and fuzzy data, and a model is proposed for evaluating supply chain management using network DEA with fuzzy data. The fuzzy DEA model is based on the α-cut approach to measure efficiency and determine the returns to scale of the supply chain. The proposed ideas are applied to evaluate the efficiency and RTS of the supply chains of 27 resin manufacturing companies. With crisp data, 6 companies are network-efficient, whereas with fuzzy data, 3 companies are. These companies manage and coordinate material flows within and across organizations in the most optimal manner while respecting environmental considerations.
    Keywords: supply chain management, data envelopment analysis, efficiency, returns to scale, fuzzy
  • Maryam Akhavan Kharazian*, Mohammad Mahdi Shahbazi, Mohammad Fatehi Pages 65-82
    The success or failure of any organization is directly related to the quality of its hiring, in other words, to how it sources, assesses, and recruits human resources. By examining data from a knowledge-based job, this paper aims to improve the hiring process for that job, thereby increasing the rate of suitable hires and reducing employee turnover. The research is applied in purpose and a case study in nature. Its approach combines two computational methods, DEA and CART. DEA is a non-parametric method that determines individuals' efficiency scores but provides no detail about the factors driving efficiency (especially non-numerical ones). In the present study, this gap is filled using CART, a data mining method. The result is a framework combining DEA and CART to discover rules for hiring knowledge workers, for a specific job in a specific organization (the Hedayat Farhikhtegan Javan Institute). The results show that, for the job in question, work experience, grade point average in the last degree, and age are related to individual performance and should therefore be considered in future hiring.
    Keywords: recruitment, data envelopment analysis, data mining, classification and regression tree, human resource management
  • Kamran Kianfar*, Mitra Pashootanizadeh Pages 83-110
    This study determines the optimal wholesale and retail prices of printed and electronic books, determines and compares the final profits of the members of the book supply chain, and examines optimal demand quantities in dual channels. A comprehensive sensitivity analysis is performed on supply chain parameters such as demand elasticity coefficients, customers' preference for printed books, and production costs of each book type, and on their effects on optimal prices, demands, and the final profit of the different channels. For a centralized supply chain under constant demand, the results show that if the customers' preference coefficient for printed books exceeds 0.5, the printed book price exceeds the optimal e-book price. In the decentralized supply chain, pricing is based on the Stackelberg equilibrium for leader-follower models. Under constant demand, the wholesale price of printed books in the decentralized models equals their retail price in the centralized case, while the retail price of printed books is higher in the decentralized model, and the e-book price does not differ between the centralized and decentralized models. According to the results, under constant demand the e-book should either not be offered at all or be released at the outset, simultaneously with the printed edition; under declining demand, the e-book can be released at any point in the product life cycle.
    Keywords: supply chain, pricing, electronic book, printed book, publisher
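To make the 0.5 threshold concrete, the following minimal sketch solves a symmetric centralized dual-channel model with linear demand in closed form. The demand functions, coefficient values, and zero production costs are all illustrative assumptions, not the paper's model.

```python
# Hypothetical centralized dual-channel pricing model (illustrative only):
# linear demand for the print book (price pp) and e-book (price pe), with
# theta the customers' preference coefficient for print, zero costs:
#   Dp = a*theta - b*pp + c*pe
#   De = a*(1 - theta) - b*pe + c*pp
# Maximizing pp*Dp + pe*De yields a 2x2 linear first-order system.

def optimal_prices(a, theta, b, c):
    """Closed-form optimum of the symmetric linear-demand model (b > c >= 0)."""
    denom = 2.0 * (b * b - c * c)
    pp = a * (theta * b + (1.0 - theta) * c) / denom
    pe = a * ((1.0 - theta) * b + theta * c) / denom
    return pp, pe

# With theta > 0.5, the optimal print price exceeds the e-book price,
# mirroring the threshold reported for the centralized constant-demand case.
pp, pe = optimal_prices(a=100.0, theta=0.7, b=2.0, c=0.5)
```

In this symmetric setting the two prices coincide exactly at theta = 0.5, which is why the preference coefficient acts as a switching point between the channels.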
  • Mahnaz Hosseinzadeh*, Mohammad Reza Mehregan, Mohammad Ghomi Pages 111-132
    Because of their long supply chains and the extensive network of relations among their components, automotive industries face many risks. In recent years, Saipa (one of the country's most important automotive hubs) has faced many challenges in identifying, prioritizing, and managing the risks in its supply chain, so much so that addressing this challenge has consistently been one of the company's main research priorities. This study aims to identify and evaluate the risks in Saipa's supply chain, determine the critical risks, and make an appropriate decision for each identified category. Existing risk assessment models examine each risk and its effect on company performance in isolation, ignoring the network of relations among the risks in the system; moreover, these methods are applicable only when the number of identified risks is small. This study embeds a social network analysis approach within the steps of the COSO risk assessment model to provide a new approach to risk evaluation. First, Saipa's supply chain risks were identified from company documents, expert interviews, and thematic analysis. Second, the social network analysis approach was applied within the COSO steps, and the most critical risks were identified with it. Finally, the results were analyzed using an importance-performance matrix and the necessary decisions were presented. According to the analysis, 48 percent of all risks fall into the financial-economic, supplier, information, and transportation categories.
    Keywords: supply chain, risk, supply chain risk management, COSO risk management model, social network analysis (SNA)
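As an illustration of how social network analysis can surface key risks, the sketch below ranks risks by degree centrality in a small risk-interdependence network. The risk names and links are hypothetical, and degree centrality is only one of several SNA measures such an approach might use.

```python
# Hypothetical risk-interdependence network: nodes are supply chain risks,
# edges indicate that one risk triggers or amplifies another.  Degree
# centrality is one of the simplest SNA measures for flagging the most
# connected (and hence potentially most critical) risks.

edges = [
    ("currency_shock", "supplier_failure"),
    ("currency_shock", "raw_material_cost"),
    ("supplier_failure", "production_stop"),
    ("raw_material_cost", "production_stop"),
    ("transport_delay", "production_stop"),
]

def degree_centrality(edges):
    deg = {}
    for u, v in edges:                      # undirected degree count
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    n = len(deg)
    return {risk: d / (n - 1) for risk, d in deg.items()}

centrality = degree_centrality(edges)
most_critical = max(centrality, key=centrality.get)
```

The most connected risk concentrates the network's interdependencies, which is exactly the kind of signal an isolated per-risk assessment would miss.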
  • Nasibeh Janatyan, Mostafa Zandieh*, Akbar Alem Tabriz, Masoud Rabieh Pages 133-153
    Iran's pharmaceutical industry suffers from problems such as poor drug distribution and scheduling, which cause drugs to reach patients late or, on the other hand, produce a huge volume of expired drugs. Attention to environmental and social issues alongside economic ones is also a serious approach to achieving sustainable development. In this paper, economic, environmental, and social objective functions are considered for distributing drugs nationwide according to demand. The aim is to design a new model for a drug distribution network suited to the country's conditions. The novelty of this paper relative to other studies is the model's social objective function, under which local warehouses are located in regions with higher unemployment, creating jobs in more deprived areas. For validation, the proposed model was implemented at the Darupakhsh company. The model is NP-hard; three-objective evolutionary algorithms were used to obtain the solution front, with parameters tuned via Taguchi design of experiments. Using this model, drug distribution companies can make strategic decisions (locating main and local distribution centers) and tactical decisions (drug flows in the network) with minimum cost and time, while creating jobs in more deprived regions and reducing emissions of hazardous gases.
    Keywords: drug distribution network, sustainability, multi-objective programming model, evolutionary algorithms, Taguchi design of experiments, algorithm efficiency
  • Mahboobeh Honarvar*, Hossein Rezaei Pages 155-174
    In the past, products were usually sold through the traditional channel, i.e., retailers; with the growth of e-commerce, however, companies have created additional sales channels such as websites. With two sales channels available, choosing an appropriate pricing strategy has become very important, and in pricing and production planning, risk is a critical factor. This paper uses an outsourcing policy to cope with risk and presents a new mathematical model for jointly deciding pricing and outsourcing in a three-level, dual-channel supply chain under uncertainty. A nonlinear model of chain profit is presented first; given the model's complexity, a simulated annealing metaheuristic and scenario-based stochastic modeling are then used to solve it. The algorithm's initial parameters are tuned with the Taguchi method. Computational results and sensitivity analysis demonstrate the effectiveness of the proposed solution method.
    Keywords: dual-channel supply chain, risk, disruption management, pricing
  • Mohammad Reza Sadeghi Moghadam, Hossein Safari, Reza Barani Beiranvand Pages 175-197
    Iran is among the countries where natural disasters occur more frequently than in many others, so managing the humanitarian supply chain before, during, and after a disaster is very important. Performance evaluation is one of the key activities in managing this chain: it establishes the state of performance and provides useful feedback for improving it. In this paper, drawing on the performance indicators and dimensions of the country's earthquake supply chain identified in a previous study, the importance of the indicators and dimensions is determined by normalizing factor loadings. A grey approach is then used to evaluate the humanitarian supply chain performance of the country's three recent catastrophic earthquakes. For a sound and reasonable evaluation, two scenarios are discussed: the first uses only the data of the three selected earthquakes, while the second, after setting a standard for every indicator, enters this information into the calculations as a hypothetical earthquake. The results are based on the second, more complete scenario: humanitarian supply chain performance in the Dashti and Ahar (Haris and Varzaghan) earthquakes was equal, and better than in the Bashagard earthquake.
    Keywords: humanitarian supply chain, performance evaluation, grey approach, scenario
  • Alireza Shahraki*, Omid Farasat Pages 199-219
    This paper presents a simulation-optimization approach for improving the performance of a cellular manufacturing system by optimizing resource allocation and job sequencing within each cell. Assumptions include stochastic model parameters, machine breakdowns, and multiple products in the production system. First, the controllable and response variables and their ranges are defined according to the research objective and the conditions of the production system under study. Experimental scenarios are then designed from combinations of the controllable variables using Taguchi design of experiments. The scenarios are evaluated by simulation, and the corresponding response variables are determined. Artificial neural networks are then used to extend the results to the entire solution space, and the optimal scenario is identified using data envelopment analysis. Finally, the performance of the identified optimal scenario is compared with the current state of the production system, and the improvement achievable by implementing it is quantified. The approach is implemented in an industrial turning shop that uses a cellular system.
    Keywords: simulation optimization, data envelopment analysis, operator assignment, job sequencing, cellular manufacturing system, artificial neural networks
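Several of the papers above tune parameters or design scenarios with Taguchi's method. The following sketch shows the core computation behind comparing factor levels: the smaller-the-better signal-to-noise ratio. The factor, its levels, and the response values are invented for illustration.

```python
import math

# Taguchi's smaller-the-better signal-to-noise ratio:
#   S/N = -10 * log10(mean(y^2)),  where a higher S/N marks the more
# robust factor level.  The makespan observations below are invented.

def sn_smaller_better(ys):
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

levels = {
    "A": [10.2, 9.8, 10.5],   # e.g. one hypothetical operator-allocation setting
    "B": [8.1, 8.4, 7.9],     # an alternative setting
}
best_level = max(levels, key=lambda lv: sn_smaller_better(levels[lv]))
```

In a full Taguchi study the same ratio would be averaged per level across the runs of an orthogonal array, and the level with the highest mean S/N chosen for each factor.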
|
  • Parisima Pakravan, Javad Behnamian * Pages 1-23
    In the undesirable facility location problem, contrary to the desirable case, facilities are located as far as possible from the areas that receive their services. This study discusses the location of such facilities. The research focuses on the 'not in my backyard' (NIMBY) phrase, which refers to the social phenomenon in which residents oppose locating undesirable facilities near their homes. Examples of such facilities include electric transmission lines and recycling centers. Because of the opposition typically encountered in constructing an undesirable facility, the facility planner should understand the nature of the NIMBY phenomenon and treat it as a key factor in determining the facility location. Because the degree of pollution from these facilities is uncertain in the real world, the problem is evaluated here taking probabilistic uncertainty into account, and it is considered in discrete space. The mathematical model, methods for dealing with uncertainty, stochastic programming formulations, and the methods used in the study are presented. Since the problem is NP-hard, simulated annealing is proposed for solving large-scale instances. Numerical experiments evaluate and validate the mathematical model and the proposed algorithm, and the performance of the proposed algorithm on various problems is compared with a genetic algorithm available in the literature on this problem.
    Introduction
    From a general point of view, location problems can be divided into two categories: desirable and undesirable facilities. Many location models exist for desirable facilities such as warehouses, service centers, and police stations; in such cases, customers are attracted to the facilities. Unlike the desirable case, the undesirable facility location problem attempts to place facilities as far as possible from the areas they serve. Undesirable facilities, while providing essential services, may at the same time have negative consequences for their neighborhoods. The proximity of undesirable facilities to residential areas, given their pollution and hazards, lowers the quality of life in those areas and increases health risks for residents. This type of problem is formulated to minimize the adverse effects of a new facility on existing ones. Because the degree of pollution from such a facility is uncertain in the real world, this paper presents a mathematical model for locating undesirable facilities with uncertainty in the pollution-degree parameters. The literature on facility location under stochastic conditions and on undesirable facilities is also reviewed. After presenting the mathematical model in deterministic and stochastic forms, the structure of the proposed algorithm is described.
    Materials and Methods
    The undesirable facility location problem discussed in this paper is based on the article centered on the term 'not in my backyard' (NIMBY), which refers to the social phenomenon in which residents oppose the placement of undesirable facilities around their homes. Given this opposition, the facility planner must understand the nature of the NIMBY phenomenon and treat it as a key factor in determining the facility location. In this research, the NIMBY phenomenon is addressed directly through the structure of the objective function, which captures the residents' complaint that the regions hosting these facilities absorb the environmental costs while other regions enjoy the benefits. The approach proceeds as follows:
    1. Uncertainty in the degree of pollution is introduced, different scenarios are defined for it, and the problem is solved.
    2. Using the stochastic form of the mathematical model, the problem is solved for each scenario separately; the three scenarios are then compared and the expected value of the objective function is obtained over all scenarios. The scenarios are based on the degree of difference between the two pollution parameters: in scenario 1, the main pollution degree (a) is 100 times the marginal pollution degree (b); in scenario 2, the two degrees vary close to each other; in scenario 3, the main pollution degree (a) is 10 times the marginal degree (b).
    3. Because of the NP-hardness of the problem, a simulated annealing algorithm is proposed to solve it in large dimensions, with a heuristic method used to generate the initial solution.
The method of moving from a solution to a neighboring solution is defined by a key component known as the neighborhood structure. Here, four operators for generating neighboring solutions are used in the simulated annealing algorithm.
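As a rough illustration of such a scheme (not the authors' implementation), the sketch below applies simulated annealing with a single swap-based neighborhood operator to a small discrete undesirable facility location instance. The data, the distance-decay pollution rule, and the cooling schedule are all assumptions made for the example.

```python
import math, random

# Toy instance: choose p sites from a candidate set so that the total
# pollution exposure of the demand points is minimized; exposure from a
# facility is assumed to decay as 1/(1 + distance).  All data are invented.

random.seed(1)
demand = [(random.random() * 10, random.random() * 10) for _ in range(30)]
candidates = [(random.random() * 10, random.random() * 10) for _ in range(12)]
p = 3

def exposure(sites):
    # total pollution: every demand point is affected by every open facility
    tot = 0.0
    for dx, dy in demand:
        for s in sites:
            cx, cy = candidates[s]
            tot += 1.0 / (1.0 + math.hypot(dx - cx, dy - cy))
    return tot

def anneal(iters=2000, t0=1.0, alpha=0.995):
    cur = random.sample(range(len(candidates)), p)     # random initial solution
    cur_cost, t = exposure(cur), t0
    best, best_cost = cur[:], cur_cost
    for _ in range(iters):
        nb = cur[:]                                    # swap neighborhood:
        nb[random.randrange(p)] = random.choice(       # replace one open site
            [c for c in range(len(candidates)) if c not in cur])
        nb_cost = exposure(nb)
        # accept improvements always, worse moves with Boltzmann probability
        if nb_cost < cur_cost or random.random() < math.exp((cur_cost - nb_cost) / t):
            cur, cur_cost = nb, nb_cost
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        t *= alpha                                     # geometric cooling
    return best, best_cost
```

The paper's algorithm differs in using four neighborhood operators and a heuristic (rather than random) initial solution, but the acceptance rule and cooling loop follow the same pattern.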
    Results and Discussion
    For small and medium sizes, the metaheuristic reached the optimal solution found by the CPLEX exact solver in all test instances. Because CPLEX could not solve test instances with 1000 nodes or more, the numerical tests at these sizes considered the 3 scenarios in each instance: sixteen large problem types were solved with the genetic and simulated annealing metaheuristics, with a stopping time of 1 hour for both algorithms, and the results were compared. The genetic algorithm was written following Song et al. (2013); since it could not produce feasible solutions on large instances, the heuristic used in the simulated annealing algorithm to generate the initial solution was used in this algorithm as well. In all test instances, simulated annealing achieved a lower objective value than the genetic algorithm.
    Conclusion
    In this paper, the problem of locating undesirable facilities was studied with a focus on reducing the degree of pollution from these facilities. Because the exact degree of pollution is uncertain in the real world, it was investigated under uncertainty. After reviewing methods for dealing with uncertainty and for modeling stochastic programming problems, the scenario method was adopted, and scenarios were designed to capture the uncertainty in the facilities' degree of pollution for the surrounding points. Small- and medium-sized test instances were solved for the various scenarios with the CPLEX exact solver, and optimal solutions were obtained. Given the complexity of the problem, a simulated annealing algorithm was proposed for large instances; because the solution space is large, a heuristic method was used to generate an initial feasible solution instead of a random one. The results of the exact solver and the simulated annealing algorithm were compared on small and medium instances: both reached the optimal solution, and the difference lay in the solution time. On large instances, simulated annealing was compared with the genetic algorithm available in the literature, and the efficiency of simulated annealing in converging to the optimal solution in less time was demonstrated across the different instances and scenarios.
    References
    Song, B. D., Morrison, J. R., Ko, Y. D. (2013). Efficient location and allocation strategies for undesirable facilities considering their fundamental properties. Computers & Industrial Engineering, 65, 475-484.
    Mirhasani, S. A. (2014). Stochastic Programming. First Edition. Amir Kabir University Publishers.
    Keywords: Undesirable Facility Location, Uncertainty, Simulated Annealing, Genetic Algorithm
  • Mohammad Khanbabaei, Farzad Movahedi Sobhani *, Mahmood Alborzi, Reza Radfar Pages 25-45
    Organizations should pursue process improvement to promote their performance. The main problem is the high volume of processes, together with the wide variety of their features, which increases the complexity of process improvement methods. Previous methods cannot improve processes at this volume. A data mining approach can support improvement methods by identifying valuable hidden patterns in a high volume of processes. In this paper, a framework for using data mining techniques to extract valuable hidden patterns from a high volume of processes is developed to generate improvement suggestions. To evaluate the proposed framework, a real set of processes and their features was gathered; classification, clustering, and feature selection algorithms were then used to identify valuable patterns. After evaluating these patterns, improvement suggestions were derived from them. The results show that the identified patterns can support process improvement activities by providing improvement suggestions.
    Introduction
    Processes are among the most important resources in organizations, and improving them can enhance organizational performance. Several methodologies exist for process improvement, but they do not address the problem of the high volume of processes and process features in organizations. Organizations run many processes, which increases the complexity of interactions between processes and creates a high-dimensionality problem (Jeong et al., 2008). In addition, improvement actions typically take a single view of processes (Houy et al., 2011), and Huang et al. (2012) noted that little attention is paid to the internal aspects of processes. In this situation, data mining techniques can discover valuable patterns hidden in the large number of processes, and these patterns can be used to recommend improvement suggestions for enhancing process performance. A link between data mining and process improvement has emerged recently, but the few studies that consider their interaction neither use a genuinely large number of real processes in their computations nor take a comprehensive, practical view of applying data mining to process improvement. This paper presents a framework for using data mining techniques to identify and extract valuable and suitable patterns hidden in a large number of processes; these patterns can then be used to recommend process improvement suggestions.
    Materials and Methods
    This paper employs three data mining techniques, clustering, classification, and feature selection, to extract valuable patterns hidden in the large number of processes. The CRISP-DM (cross-industry standard process for data mining) standard is used to organize the data mining activities. In the proposed framework, all processes are first gathered from a variety of sources in the organization. Process features are then identified from the literature and defined through interaction between the data miner and the process improvement expert. A process dataset is then prepared, with a variety of data preparation and preprocessing methods applied to improve the results of the data mining techniques. Next, the three techniques are applied. In classification, the C5 decision tree algorithm uses a target feature to classify processes; cross-validation is applied to train and test the constructed tree, and the output is a set of if-then rules for classifying processes by the target feature. In clustering, the K-means algorithm segments processes into several clusters such that processes within a cluster behave similarly and differ from processes in other clusters; the Euclidean distance function measures the distance between processes, and the output is a cluster profile describing the behavior of processes in each cluster.
In feature selection, the most important features are selected with respect to a target process feature; these features are the most correlated with the target and the most important to consider for process improvement purposes. The output is a set of the more important process features that can be considered when recommending improvement suggestions. After the valuable patterns hidden in the large number of processes are extracted, their accuracy and quality are evaluated through interaction between the data miner, the process owner, and the process improvement expert. The evaluated patterns are then used to recommend improvement suggestions, which must be aligned with the organization's process improvement concepts. Processes are improved based on these suggestions, and finally the performance of the improved processes is evaluated. The proposed framework follows an iterative, continuous method for using data mining in process improvement.
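A minimal pure-Python sketch of the clustering step described above: processes, represented by hypothetical numeric feature vectors, are segmented with K-means using Euclidean distance. The data and the deterministic initialization are illustrative; a real study would use a richer feature set and a standard implementation.

```python
import math

# Lloyd's K-means on "process" feature vectors (all values invented).
# Each process is a tuple of numeric features, e.g. (duration, cost).

def kmeans(points, k, iters=20):
    # deterministic, evenly spaced initialization for reproducibility
    centroids = [list(points[i * len(points) // k]) for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, pt in enumerate(points):        # assignment step
            assign[i] = min(range(k), key=lambda c: math.dist(pt, centroids[c]))
        for c in range(k):                     # centroid update step
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centroids

# two well-separated groups of hypothetical processes
procs = [(1.0, 2.0), (1.2, 1.8), (0.9, 2.1), (8.0, 9.0), (8.3, 9.2), (7.8, 8.7)]
labels, cents = kmeans(procs, 2)
```

The resulting cluster assignments play the role of the cluster profiles in the framework: processes that land in the same cluster share behavior and can receive the same improvement suggestions.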
    Results and Discussion
    The proposed framework was evaluated on a real process dataset of 1318 processes and 80 process features. Several preprocessing methods were employed to prepare the dataset, and the classification, clustering, and feature selection techniques were applied to extract valuable hidden patterns. Using the C5 algorithm, processes were classified by if-then rules based on a target process feature; as an example, key processes were classified with the C5 decision tree, with a classification accuracy of 92.31% on the test dataset. The output was a set of if-then rules that can identify key processes, and the process features used to construct these rules can inform improvement suggestions for key processes. In clustering, the K-means algorithm segmented the processes into 10 clusters based on selected features, and the output was a cluster profile describing process behavior; for example, the behavior of processes in cluster 1 was described through the selected features, and several suggestions for improving the processes in this cluster were derived from them. In feature selection, as an example, "cost of process" was taken as the target process feature, and the 10 process features most related to it were selected; these features describe process cost better than the others, and improvement suggestions for reducing process cost were based on them.
    Conclusion
    This paper presented a framework for using data mining techniques to identify valuable patterns hidden in a large number of processes for process improvement purposes. A real process dataset was employed to evaluate the applicability and effectiveness of the proposed framework, in which classification, clustering, and feature selection techniques extracted valuable patterns from a high volume of processes. Process improvement methodologies cannot recommend improvement suggestions quickly and accurately when there are many processes with a variety of features, and they are restricted to a limited number of suggestions; conversely, the few existing studies on applying data mining to process improvement have notable weaknesses. The proposed framework applied data mining to a large set of real processes to discover valuable patterns for improvement, drew a variety of process features from the literature to describe process behavior, and finally recommended a wide variety of improvement suggestions based on the extracted patterns. Organizations can use the proposed framework to improve their processes, and it can help organizations with large numbers of processes employ improvement methodologies productively. Future studies can apply other data mining techniques within the framework, develop it for knowledge-intensive processes, integrate it with knowledge management methodologies for improving knowledge-based processes, or extend it into a decision support system for process improvement.
    References

    Houy, C., Fettke, P., Loos, P., van der Aalst, W.M.P., Krogstie, J. (2011). Business Process Management in the Large. Business and Information Systems Engineering, 3, 385–388.
    Huang, Z., Lu, X., Duan, H. (2012). Resource behavior measure and application in business process management. Expert Systems with Applications, 39 (7), 6458–6468.
    Jeong, H., Song, S., Shin, S., Rae Cho, B. (2008). Integrating data mining to a process design using the robust bayesian approach. International Journal of Reliability, Quality and Safety Engineering, 15 (05), 441–464.
    Keywords: Process Improvement, Data Mining, Business Processes
  • Faranak Hosseinzadeh Saljooghi *, Amir Rahimi Pages 47-63
    Supply chain management (SCM) is a suitable tool for improving economic, social, and environmental performance, and its assessment is an important task for all types of organizations. The DEA method has been widely used to evaluate SCM. By treating the supply chain as a network, data envelopment analysis (DEA) can calculate the efficiency of a supply chain with multiple stages. This study examines the efficiency and returns to scale (RTS) of the supply chain management of resin manufacturing companies based on network DEA models. We determine the returns to scale of resin manufacturing companies as a two-stage process, with both crisp and fuzzy data. The fuzzy DEA model is based on the α-cut approach to measure the efficiency and RTS of the supply chain. The proposed models are used to evaluate the efficiency and RTS of the supply chains of 27 resin production companies: six companies were network-efficient in the investigation with crisp data, while three companies were network-efficient with fuzzy data.
    Introduction
    The supply chain includes all stages that directly or indirectly supply demands and resolve customer needs. Planning, purchasing, production, transportation, storage, distribution, and customer service are parts of the supply chain that play a major role in running any business. Supply Chain Management (SCM) is an appropriate tool for improving the economic, social, and environmental performance of any organization simultaneously; therefore, its evaluation is very important. Data Envelopment Analysis (DEA) is one of the suitable methods for evaluating SCM. The supply chain cannot be evaluated by traditional DEA models due to its network, or multi-stage, nature, so in this paper network DEA is chosen to evaluate SCM. Kao and Hwang (2008) modified the standard DEA model by considering the two-stage series relationship within the overall process, and expressed the efficiency of the whole two-stage process as the product of the efficiencies of its stages. Chen et al. (2009) then proposed another viewpoint. Also, Tavana et al. (2018) measured and analyzed the efficiency of two-stage fuzzy DEA models using an interval method.
    The review of research in this area indicates that the consideration of technical efficiency in a two-stage network with environmental impacts is less common in the existing literature. Moreover, returns to scale, an important economic concept in data envelopment analysis concerning the maximum output increase per input increase, has been ignored in two-stage network studies of the supply chain that discuss two-stage technical efficiency. The present research provides models for determining the efficiency and returns to scale of the stages and of the overall network production process in a supply chain, in two states: deterministic and fuzzy data.
    Materials and Methods
    Firstly, we measure the technical efficiency of a two-level supply chain using the network data envelopment analysis model. The efficiency of each stage is calculated as the ratio of its weighted output to its weighted input, and the overall efficiency is obtained as the weighted average of the stage efficiencies (Cooper et al., 2007). In this study, we examined the most complete of the two-stage models, which allows direct inputs and outputs for each stage in addition to the intermediate data between the stages. The DEA method can also determine the returns to scale of decision-making units (DMUs). If a DMU exhibits constant returns to scale, each multiple of its inputs produces the same multiple of its outputs; under this assumption, large and small DMUs can be compared directly. Under variable returns to scale, a change in inputs may produce a proportionally smaller or larger change in outputs. In this research, we measure the efficiency and returns to scale (RTS) of the supply chains of resin manufacturing companies within the framework of network DEA models; efficiency and RTS are determined in two states, deterministic and fuzzy data, and a model is proposed for evaluating supply chain management using network DEA with fuzzy data. In the evaluation with both deterministic and fuzzy data, the supply chain is efficient only if all the stages in the process are efficient. Fuzzy LR numbers are widely used because they yield tractable models and good computational performance. In this paper, fuzzy LR numbers are considered in the fuzzy DEA model, based on the α-cuts approach, to measure the efficiency and determine the returns to scale of the supply chains of resin companies in Iran. Efficiency and returns to scale are calculated on the lower and upper bounds; a unit is efficient if both bounds are efficient.
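    The stage-level efficiency ratio described above is computed in practice by solving a linear program per DMU. As a hedged, single-stage illustration (the paper's model is two-stage and fuzzy; this is only the classic CCR multiplier model on hypothetical figures), the LP can be sketched with SciPy:

```python
# Minimal sketch of the CCR (constant returns to scale) DEA model in
# multiplier form, solved as a linear program. Data are illustrative,
# not the resin-company dataset from the paper.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs, columns = inputs / outputs (hypothetical figures)
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 4.0]])   # inputs
Y = np.array([[5.0], [4.0], [6.0]])                   # outputs

def ccr_efficiency(k):
    """Efficiency of DMU k: max u.y_k  s.t.  v.x_k = 1,  u.Y_j - v.X_j <= 0."""
    n, m = X.shape          # number of DMUs, number of inputs
    s = Y.shape[1]          # number of outputs
    # decision vector z = [v (m input weights), u (s output weights)];
    # linprog minimizes, so we minimize -u.y_k
    c = np.concatenate([np.zeros(m), -Y[k]])
    A_ub = np.hstack([-X, Y])                 # u.Y_j - v.X_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[k], np.zeros(s)]).reshape(1, -1)   # v.x_k = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + s), method="highs")
    return -res.fun  # optimal u.y_k is the efficiency score

scores = [ccr_efficiency(k) for k in range(len(X))]
print(scores)  # each score lies in (0, 1]; a score of 1 means CCR-efficient
```

    A two-stage network model would link two such programs through the intermediate measures, and the fuzzy version would repeat the computation on the lower and upper bounds of each α-cut.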
    Results and Discussion
    The proposed models have been used to assess the efficiency and RTS of the supply chains of 27 resin production companies in Iran. Each supply chain is considered as two levels: the supply sector as stage one and the manufacturing sector as stage two.
    Inputs of the supply stage are annual cost, annual turnover of personnel, and environmental costs. The number of products from supplier to manufacturer and the partnership cost in green production plans are considered as the intermediate data (inputs for the second stage and outputs of the first stage), and the outputs of the production stage are the number of personnel trained in job, safety, and health, the number of green products, and revenue. In the evaluation with deterministic data, six companies (Alborz Chalk, Fajr Petrochemicals, etc.) are network efficient, with different kinds of RTS: constant, increasing, and decreasing. With fuzzy data, only three companies are network efficient.
    Conclusion
    The proposed models in this paper determine the efficiency and returns to scale of the supply chain under two approaches: deterministic and fuzzy data. We consider the supply chains of 27 resin companies as two-stage networks and, using network DEA techniques and theories, in addition to identifying efficient and inefficient companies, we determine the increasing, constant, or decreasing RTS of the supply chain stages with both kinds of data. Because in many settings we encounter data that are inaccurate or ambiguous, or our knowledge about the production process is imprecise, the proposed model provides more useful information to organizations and industrial activities. Six companies with deterministic data and three companies with fuzzy data are network efficient. These companies have managed and coordinated the flow of materials between and within organizations in the best possible way and with environmental concerns.
    References

    Chen, Y., Cook, W. D., Li, N., & Zhu, J. (2009). "Additive efficiency decomposition in two-stage DEA". European Journal of Operational Research, 196(3), 1170–1176.
    Cooper, W.W., L.M. Seiford & Tone, K. (2007). Data envelopment analysis: A comprehensive text with models, applications, references and DEA-solver software. Boston: Kluwer Academic Publishers.
    Kao, C., & Hwang, S. N. (2008). "Efficiency decomposition in two-stage data envelopment analysis: An application to non-life insurance companies in Taiwan". European Journal of Operational Research, 185(1), 418–429.
    Tavana, M., Khalili-Damghani, K., Arteaga, F., Mahmoudi, R., & Hafezalkotob, A. (2018). "Efficiency Decomposition and Measurement in Two-Stage Fuzzy DEA Models Using a Bargaining Game Approach". Computers & Industrial Engineering, 1-36.
    Keywords: Supply Chain Management, Data Envelopment Analysis, Efficiency, Return to Scale, Fuzzy Data
  • Maryam Akhavan Kharazian *, Mohammad Mahdi Shahbazi, Mohammad Fatehi Pages 65-82
    The success or failure of any organization is directly linked to the quality of its human resources selection (recruitment, measurement, and selection). By reviewing the data of a knowledge job, this paper aims to help improve the selection process for that job, so that the rate of selecting appropriate employees increases and the rate of human resource turnover decreases. This paper is applied research and follows a case-study strategy. It combines two computational techniques, DEA and CART. Data Envelopment Analysis (DEA) is a non-parametric technique that determines the efficiency of individuals, but it does not provide information on the details of the factors affecting performance (especially non-numerical factors). In the present study, this deficiency has been resolved using the Classification and Regression Tree (CART), a data mining technique. The result of this study is a framework combining DEA and CART in order to discover rules on the recruitment of knowledge workers in a specific job (a knowledge job) and in a specific organization (HFJ Institute). The results indicate that ‘work experience’, ‘average score in the last degree’, and ‘age’ are related to employee performance, and therefore should be considered in future recruitment for that job.
    Introduction
    By reviewing the data of a knowledge job, this paper aims to help improve the selection process for that job, so that the rate of selecting appropriate employees increases and the rate of human resource turnover decreases. This paper is applied research and follows a case-study strategy.
    In the literature review section, the definition of recruitment and selection (Azar et al. 2013), the definition of knowledge workers (Drucker 1994; Horwitz et al. 2006; Li et al. 2015), tasks of human resource management (Osman et al. 2011) and a background of the use of data mining in the field of human resources (Hajiheydari et al. 2017) have been reviewed.
    Materials and Methods
    This paper combines two computational techniques (DEA and CART). Data Envelopment Analysis (DEA) is a non-parametric technique that determines the efficiency of individuals, but it does not provide information on the details of factors affecting performance (especially non-numerical factors). In the present study, this deficiency has been resolved using the Classification and Regression Tree (CART) (as a data mining technique).
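    The DEA-then-CART idea can be sketched as follows; here the DEA efficiency scores are merely simulated from hypothetical applicant attributes (experience, GPA, age), and the regression tree's printed rules stand in for the recruitment rules the study extracts:

```python
# Hedged sketch of the DEA -> CART pipeline: DEA-style efficiency scores
# (simulated here) become the target of a regression tree whose splits
# expose which applicant attributes relate to performance.
# Feature names and data are illustrative only, not the HFJ dataset.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 200
experience = rng.uniform(0, 15, n)    # years of work experience (hypothetical)
gpa        = rng.uniform(12, 20, n)   # average score in the last degree
age        = rng.uniform(22, 50, n)

# simulated efficiency score standing in for the DEA output (0..1);
# by construction, experience matters most and age is irrelevant
efficiency = np.clip(0.04 * experience + 0.02 * (gpa - 12)
                     + rng.normal(0, 0.05, n), 0, 1)

X = np.column_stack([experience, gpa, age])
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20).fit(X, efficiency)

# the printed split rules play the role of the study's recruitment rules
print(export_text(tree, feature_names=["experience", "gpa", "age"]))
```

    In the actual framework the target would be the DEA efficiency of existing employees rather than a simulated score, and the extracted rules would feed the recruitment stage.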
     
    Results and Discussion
    In this research, we tried to develop the previous models and present a new model.
    The result of this study has provided a framework for combining DEA and CART in order to discover rules on the recruitment of knowledge workers in a specific job (knowledge job) and in a specific organization (Hedayat-e Farhighteghgan-e Javan (HFJ) Institute). The combination of data envelopment analysis and data mining approaches (and considering qualitative and implicit variables in the estimation of efficiency) is one of the most important innovations in this research.
    With the proposed framework, organizations can identify and recruit talented and appropriate individuals in a short time based on data mining and the discovery of success patterns resulting from their past experience. This avoids the costs of repeated recruitment, decreases the turnover rate, and improves performance.
    By analyzing the outputs of the designed model for the stage of recruitment and selection in this specific job (a knowledge job) and specific organization (HFJ Institute), six rules were extracted, and based on that, suggestions were given.
    At the recruitment stage, it is better for this organization to take these rules (the status of jobseekers in ‘work experience’, ‘average score in the last degree’, and ‘age’) into consideration and thereby decrease the costs and failure rate of the recruitment process.
    Conclusion
    The results are somewhat consistent with the results of previous studies. The proposed approach can be planned and implemented in various jobs and organizations to extract specific rules for these jobs and organizations in order to increase productivity in the process of human resource selection and recruitment.
    References
    Zhu, X., Seaver, W., Sawhney, R., Ji, S., Holt, B., Sanil, G. B., & Upreti, G. (2017). Employee turnover forecasting for human resource management based on time series analysis. Journal of Applied Statistics, 44 (8), 1421-1440.
    Lukovac, V., Pamučar, D., Popović, M., & Đorović, B. (2017). Portfolio model for analyzing human resources: An approach based on neuro-fuzzy modeling and the simulated annealing algorithm. Expert Systems with Applications, 90, 318-331.
    Osman, I. H., Berbary, L. N., Sidani, Y., Al-Ayoubi, B., & Emrouznejad, A. (2011). Data envelopment analysis model for the appraisal and relative performance evaluation of nurses at an intensive care unit. Journal of medical systems, 35 (5), 1039-1062.
    Keywords: Employee Selection (Personnel Selection), Data Envelopment Analysis, Data Mining, Classification and Regression Tree (CART), Human Resource Management
  • Kamran Kianfar *, Mitra Pashootanizadeh Pages 83-110
    The purpose of this research is to determine the optimal wholesale and final prices of printed and electronic books in a dual-channel supply chain. A comprehensive sensitivity analysis is conducted on different supply chain parameters, such as demand elasticity coefficients, customers' tendency towards printed books, and book production costs, and the effect of these variables on optimal prices, demands, and final profit is analyzed. Results of the centralized, fixed-demand mode reveal that if the coefficient of customers' tendency towards printed books is bigger than 0.5, then the final price of printed books will be higher than the e-book price. Pricing in the decentralized model is done using the Stackelberg equilibrium. Results show that the wholesale prices of p-books in the decentralized, fixed-demand system are equal to the selling prices in the centralized system, but the final selling price of p-books in the decentralized system is higher than in the centralized one; in both systems, the e-book selling price is the same. Under fixed demand, either e-books are not published or they are published simultaneously with their printed versions; under decreasing demand, e-books may be introduced to the market at any time during the books' life period.
    Introduction
    In this paper, a three-level supply chain including a publisher, bookstores, and final customers is considered. The publisher provides books to the market in printed and electronic versions, with different production costs per copy of the e-book and the p-book. The publisher provides p-books to bookstores at a wholesale price, customers buy them at a final selling price, and e-books are provided to customers directly at their own price. The problem is defined over two time periods: in the first period, the publisher only delivers p-books at a wholesale price and bookstores sell them at a retail price; in the second period of the books' lifetime, a new wholesale price is set and e-books are simultaneously provided by the publisher to the market, while the bookstores decide the final selling price of p-books in this period. Fig. 1 outlines the considered dual-channel supply chain and its different distribution channels (publisher, bookstores, p-book and e-book channels). The demand function is defined in two modes, fixed and decreasing. Assuming that prices remain unchanged during the lifetime of the books, demands are unchanged in the fixed mode, while the demands for p-books and e-books diminish in the decreasing mode.
    Materials and Methods
    Let the demands for p-books in the first and second periods and the demand for e-books be defined by linear functions. A base-demand parameter denotes the demand for books when they are free. A loyalty parameter captures customers' attachment to p-books, i.e., the share of customers who potentially prefer e-books to p-books. Two further parameters respectively denote the price elasticity of demand and the cross-price sensitivity coefficients.
    Results and Discussion
    Table 1 summarizes the results of the analysis of the supply chain with fixed demand in the centralized and decentralized modes. In the centralized mode, the publisher and bookstores decide on the prices simultaneously and try to maximize the total profit of the supply chain. In the decentralized mode, the publisher tries to maximize its own profit by adjusting the wholesale and e-book prices, while the bookstores maximize their profit by controlling the retail prices. Most of the results are derived from Stackelberg games and by solving systems of linear equations.
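    The Stackelberg reasoning behind these results can be checked symbolically in a stripped-down, single-channel version of the model (linear demand, generic symbols; not the paper's full dual-channel formulation):

```python
# Symbolic check, with SymPy, of the Stackelberg logic for one (printed-book)
# channel with linear demand a - b*p, unit cost c. It reproduces the reported
# pattern: the decentralized wholesale price equals the centralized selling
# price, and the decentralized retail price is higher. Symbols are generic.
import sympy as sp

a, b, c, w, p = sp.symbols("a b c w p", positive=True)

# follower (bookstore): choose retail price p given wholesale price w
p_star = sp.solve(sp.diff((p - w) * (a - b * p), p), p)[0]       # (a + b*w)/(2*b)

# leader (publisher): choose w anticipating the follower's reaction p_star
w_star = sp.solve(sp.diff((w - c) * (a - b * p_star), w), w)[0]  # (a + b*c)/(2*b)

# centralized chain: choose the retail price directly
p_central = sp.solve(sp.diff((p - c) * (a - b * p), p), p)[0]

print(sp.simplify(w_star - p_central))                  # 0: wholesale = centralized price
# gap between decentralized retail and centralized price:
# (a - b*c)/(4*b), positive whenever demand at cost is positive
print(sp.simplify(p_star.subs(w, w_star) - p_central))
```

    This double-marginalization gap is exactly the qualitative pattern the table reports for p-books; the paper's model adds the e-book channel and cross-price terms on top of it.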
    Conclusion
    In this paper, pricing policies in a dual-channel supply chain for printed and electronic books were examined. The demand function was defined in two forms, fixed and decreasing, and the supply chain was considered in centralized and decentralized modes. The results show that if the coefficient of customers' tendency towards p-books is bigger than 0.5, then the optimal price for p-books is greater than the price of e-books. Increasing the price elasticity of demand reduces the optimal selling prices and demands, and consequently reduces the final profit of the publisher and bookstores. In the decentralized supply chain with fixed demand, pricing is done using the Stackelberg equilibrium, where the publisher is the leader and the bookstores are followers. The wholesale price of p-books in the decentralized mode is equal to their final selling price in the centralized mode, while the final price of p-books in the decentralized mode is higher than in the centralized mode. The price of e-books is the same in both modes. In the supply chain with decreasing demand, accelerating the demand reduction postpones the introduction of e-books to the market in the centralized mode, while it has the reverse effect in the decentralized mode. Many ideas have been proposed to encourage people to read e-books, such as advertising electronic books in TV programs, developing websites and portals and supporting their admins (IRNA, 2015), laying down suitable laws (Fahimifar & Heidary, 2014), and providing high-speed internet and e-book reader tools to all people (Tavakol & BaniJamal, 2014).
    References
    Fahimifar, S., & Heidary, E. (2014). Economic dimensions of e-book publishing: Views from within the publishing industry. National Studies on Librarianship and Information Organization, 25(2), 1-175. (In Persian).
    IRNA (2015). Ebook or printed book are not important: we do not have book reader and book user. Chap o Nashr, 11(129), 117-119. (In Persian).
    Tavakol, M., & BaniJamal (2014). Investigating e-book publishing in Iran and how Iranian users consume Ebooks. Strategic Studies on Youth and Sports, 13(24), 127-140. (In Persian).
    Keywords: Electronic Book, Pricing, Printed Book, Publisher, Supply Chain
  • Mahnaz Hosseinzadeh *, Mohammad Reza Mehregan, Mohammad Ghomi Pages 111-132
    This paper aimed at identifying and analyzing the supply chain risks of the Saipa automotive company, determining those that seem critical, and making the appropriate decision for each category. To this end, first, based on company documents and interviews with experts, and using the theme analysis method, the identification and categorization of supply chain risks are addressed. In the second step, using the SNA approach, the most important risks, in terms of their effect on triggering other risks in the risk relationship network, are determined. The results are analyzed using the IPM matrix and the necessary decisions are made accordingly. According to the results, 48% of the total risks fall into the financial-economic, suppliers, information, and transportation categories. Therefore, it seems that paying particular attention to these areas can result in significant improvement in the system's status.
    Introduction
    The automotive industry is the second largest industry in Iran, and its survival is of great importance for the country. Today, various factors, such as fluctuations in the foreign exchange market, have led to uncertainties in this industry. In addition, other factors, including the increasing variety of products and services, shortening product life cycles, demand fluctuations, rising costs, technological changes, political issues, financial instability, and natural disasters, have increased the uncertainty and risk in the industry's supply chain. Moreover, the automotive industry faces many risks owing to its long supply chain, in which diverse companies interact with each other. Hence, supply chain risk management in this industry, to identify and evaluate the risks and reduce their adverse effects, is a critical issue on which many researchers have embarked. So far, various models, such as Fault Tree Analysis (FTA) (Zhang et al., 2016) and Failure Modes and Effects Analysis (FMEA) (Liu & Zhou, 2014), have been developed as risk analysis tools. However, in these models each risk, its significance, and its impact on the performance of the company or supply chain is treated as a single concept, regardless of the various possible relationships among different kinds of risks in the system. Considering the possible relationships among different types of risks, some studies have used the Analytic Network Process (ANP) method (Talebi & Iron, 2015) to evaluate and prioritize the risks. The main problem is that this method is only applicable when the number of risks is low; otherwise, the pairwise comparison of risks becomes difficult and inconsistency rises. Yet when the number of identified risks is high (as is usual for long supply chains such as the automotive supply chain), a risk may become critical through its co-occurrence with other risks.
    Thus, in addition to detecting the risks, identifying their communication network and the features of such a network is also important in analyzing and presenting solutions. Social Network Analysis (SNA) is an approach that emphasizes pairs of points and their relationships instead of focusing on single points and their features. On the other hand, COSO provides a comprehensive framework for managing and evaluating risks. Therefore, to overcome the deficiencies of the existing models, this study exploits the capabilities of SNA and innovatively conducts it within the framework of the steps of the COSO risk management model, hoping to develop a new approach to risk evaluation and to apply it to assessing Saipa automotive supply chain risks.
    Materials and Methods
    The current study has been carried out in two stages. First, an initial framework was proposed to categorize the existing risks in the supply chain of Saipa automotive with regard to the sources of risks. After two stages of refinement, a list of risks was obtained using expert opinions. The process of identifying the risks based on the COSO model framework and the theme analysis method is as follows:
    1. Investigating the Saipa control environment (internal environment) by reviewing company documents and interviewing experts (Step 1 of the COSO model);
    2. Setting targets for risk identification after conducting interviews with experts (Step 2 of the COSO model);
    3. Detecting risk sources in the organization by interviewing experts and using the theme analysis method (Step 3 of the COSO model).
    The second phase of the research is quantitative, in which the following steps have been followed:
    4. Risk evaluation (Step 4 of the COSO model). Risk assessment has been conducted from two perspectives: the importance of the risks in their relationship network, and the impact of the risks on the performance of the Saipa supply chain. First, using the SNA approach, the communication network among the identified risks was depicted; then the key risks associated with other risks in the network were identified using the concepts of degree and betweenness centrality. Afterwards, a survey of experts was conducted to determine the impact of the risks on supply chain performance.
    5. At the last step, the identified risks were classified in the IPM matrix. Risks were categorized on the importance dimension using the SNA output and on the performance dimension using the survey output.
    Results and Discussion
    Upon the results of the qualitative part of the study, 93 kinds of risks were identified in the Saipa automotive supply chain. Afterwards, the identified risks were grouped into four categories, as presented in Figure 1 (Fig. 1: Pareto diagram of risk categorization based on the COSO model). In the quantitative stage, the causal relationship network of the identified risks was depicted, and the degree centrality and betweenness centrality measures were calculated in UCINET software (Fig. 2: Network of causal relationships among Saipa supply chain risks). A higher in-degree centrality marks the risks most influenced by other risks, a higher out-degree centrality marks the risks that most often trigger other risks, and betweenness centrality indicates the risks through which many risks lead to creating other risks. Finally, the IPM matrix was used to label the identified risks. In this matrix, the importance of the risks was determined based on the centrality measures, so that risks with degree centrality higher than their mid-range (115) and those with betweenness centrality higher than their mid-range (31.9) were determined, and their common points were taken as important risks. Besides, the risks were divided into two categories of high and low performance upon the cut-point of 0.5 for the scores in this dimension. Thus, the identified risks were assigned to each cell of the IPM matrix as presented in Figure 3, where the distribution of the risks was 36.6% in "keeping the current situation", 16.1% in "critical", 28% in "resource waste", and 19.4% in "indifference" cells.
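    The two centrality measures used here can be illustrated on a toy directed risk network with NetworkX (the edges below are invented for illustration and are not Saipa data):

```python
# Small illustration of the SNA measures applied in the study: degree and
# betweenness centrality on a directed "risk causes risk" network.
# The toy edges are hypothetical, not the 93-risk Saipa network.
import networkx as nx

# hypothetical causal links: risk X -> risk Y means X contributes to Y
edges = [("currency", "supplier"), ("currency", "cost"),
         ("supplier", "delay"), ("cost", "delay"),
         ("delay", "customer"), ("sanctions", "currency")]
G = nx.DiGraph(edges)

out_deg = nx.out_degree_centrality(G)   # risks that trigger many others
in_deg  = nx.in_degree_centrality(G)    # risks triggered by many others
betw    = nx.betweenness_centrality(G)  # risks that many causal paths pass through

print(max(out_deg, key=out_deg.get))    # -> "currency": the main transmitting risk
print(sorted(betw.items(), key=lambda kv: -kv[1])[:2])  # main bottleneck risks
```

    In the study these scores, computed in UCINET on the full 93-risk network, supply the importance axis of the IPM matrix, with the mid-range values (115 and 31.9) as cut-offs.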
    Conclusion
    In this paper, the SNA approach is conducted within the framework of the steps of the COSO risk management model to extend current risk evaluation models and to identify critical risks in the Saipa automotive supply chain by applying the developed framework in practice. According to the results, 48% of the total critical risks were categorized in the financial-economic, suppliers, information, and transportation categories. Therefore, it seems that paying particular attention to these areas can result in significant improvement in the system's status.
    References
    Zhang, M., Song, W., Chen, Z., & Wang, J. (2016). Risk assessment for fire and explosion accidents of steel oil tanks using improved AHP based on FTA. Process Safety Progress, 35(3), 260-269.
    Liu, J., & Zhou, Y. (2014). Improved FMEA application to evaluation of supply chain vulnerability. 7th International Joint Conference on Computational Sciences and Optimization (CSO), Beijing, China.
    Talebi, D., & Iron, F. (2015). Identification of risk factors of supply chain and supplier selection with Analytical Network Process (Case: Automobile Industry). Industrial Management Perspective, 17, 31-43.
    Keywords: Supply Chain, Supply Chain Risk Management, Coso Risk Management Model, Social Network Analysis (SNA)
  • Nassibeh Janatyan, Mostafa Zandieh *, Akbar Alem Tabriz, Masood Rabieh Pages 133-153
    The pharmaceutical industry in Iran faces difficulties such as inappropriate distribution, expired drugs, and late drug delivery to patients. To overcome these issues and reach sustainable development, the environmental and social aspects must be considered alongside the economic aspects. In this paper, we address the pharmaceutical distribution network and design a novel model that helps distributing companies make strategic decisions (determining the main and local distribution centers) and tactical decisions (determining the flows) according to the three aspects of sustainability. The solution through the evolutionary algorithms presents the best selection of local distribution centers and the optimal flows of medicine through the network with respect to the three aspects of sustainability.
     
    Introduction
    In order to compete in the global market, pharmaceutical companies must improve their competitiveness. Competing in this market requires committing to the sustainability aspects in order to reach sustainable supply chains. For this purpose, this study considers three aspects of sustainability (the economic, environmental, and social pillars) in designing the pharmaceutical distribution network, so the model has three objective functions: the first minimizes the cost of transportation through the network, the second minimizes the CO2 emissions through the network, and the third maximizes job creation through the network.
    Carter and Rogers (2008) used the concept of ‘true sustainability’. They suggested that the consideration of environmental and social issues should be “coupled with economic objectives” and assist the company’s strategic long-term planning. Mota et al. (2014) considered this research question: “How can sustainability be integrated into supply chains' design and planning decisions?” Several methods and frameworks have been proposed to evaluate environmental effects.
    Materials and Methods
    The model is a multi-objective mixed-integer linear program (MILP) and is NP-hard. We have used the evolutionary algorithms NSGA-II and MOPSO to solve the problem and obtain the Pareto frontier. The Taguchi experimental design has been used to tune the parameters of the algorithms. To compare the two evolutionary algorithms, the performance indicators spacing (S), mean ideal distance (MID), computation time, and number of solutions (NOS) have been calculated.
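    For reference, the spacing (S) and MID indicators can be computed for a small Pareto front as follows; the definitions used below follow common forms in the metaheuristics literature and may differ slightly from the paper's exact normalization:

```python
# Hedged sketch of two of the comparison indicators named above: spacing (S)
# and mean ideal distance (MID), computed for a toy bi-objective Pareto front.
import numpy as np

front = np.array([[1.0, 9.0], [2.0, 6.0], [4.0, 4.0], [7.0, 2.0]])  # minimize both

def mid(front, ideal=None):
    """Mean Euclidean distance of the front members from the ideal point."""
    ideal = np.zeros(front.shape[1]) if ideal is None else ideal
    return np.linalg.norm(front - ideal, axis=1).mean()

def spacing(front):
    """Std-dev-like measure of how evenly solutions are spread along the front."""
    # d_i = Manhattan distance from solution i to its nearest neighbour
    diff = np.abs(front[:, None, :] - front[None, :, :]).sum(axis=2)
    np.fill_diagonal(diff, np.inf)
    d = diff.min(axis=1)
    return np.sqrt(((d - d.mean()) ** 2).sum() / (len(front) - 1))

print(round(mid(front), 3))      # lower MID = front closer to the ideal point
print(round(spacing(front), 3))  # lower S = more evenly spaced front
```

    NOS is simply the number of non-dominated solutions found, and computation time is measured directly; together the four indicators support the NSGA-II versus MOPSO comparison reported next.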
    Results and Discussion
    As a real case study, the proposed model is applied to Darupakhsh, one of the biggest pharmaceutical companies in Iran. The model has been customized with Darupakhsh's features for a selected medicine. This company has one main distribution center and twenty local distribution centers, and twenty important customers have been assumed in this research. Solving the model with the evolutionary algorithms NSGA-II and MOPSO and comparing them to the traditional weighted-sum method highlighted the superiority of MOPSO in the economic and environmental aspects, while the weighted-sum method gave a better result in the social aspect. Overall, the strength of the evolutionary algorithms was demonstrated by the Pareto frontier they present to the decision-maker.
    Conclusion
    This research focused on designing a sustainable pharmaceutical distribution network. The proposed model is a multi-objective mixed-integer linear program (MILP) and was tested on the Darupakhsh Company; in this case, MOPSO performed better than NSGA-II. The solution presents the best selection of local distribution centers and the optimal flows of medicine through the network with respect to the three aspects of sustainability. For further research, studying the uncertain parameters in this model and a suitable approach to deal with uncertainty could be proposed.
    References
    Ahmadi, A., Mousazadeh, M., Torabi, S. A., & Pishvaee, M. S. (2018). “OR Applications in Pharmaceutical Supply Chain Management”. International Series in Operations Research & Management Science. 262,461–491.
    Mota, B., Gomes, M. I., Carvalho, A., & Barbosa-Povoa, A. (2014). “Towards supply chain sustainability: economic, environmental and social design and planning”. Journal of Cleaner Production, 107, 14–27.
    Mousazadeh, M., Torabi, S.A., & Zahiri, B. (2015). “A robust Possibilistic Programming approach for pharmaceutical supply chain network design”, Computers and Chemical Engineering, 82, 115- 128.
    Keywords: Pharmaceutical Distribution Network, Sustainability, Multi-Objective Model, Evolutionary Algorithms, Taguchi Design
  • Mahboobeh Honarvar*, Hossein Rezaee Pages 155-174
    In the past, only the traditional retail channel was used for selling products, but with the development of e-commerce, large companies around the world have added another sales channel, such as websites. With two sales channels in place, choosing the right pricing strategy has become important, and in pricing and production planning, risk is a very important factor. In this paper, outsourcing policies are used to deal with risks, and a new mathematical model is presented for simultaneous decision-making on pricing and outsourcing in a three-level, dual-channel supply chain under uncertainty. A nonlinear model is presented for the supply chain profit function; given its complexity, a metaheuristic algorithm based on simulated annealing and a scenario-based stochastic model are used to solve the proposed model. The initial parameters of the algorithm are tuned by the Taguchi method. The computational results and sensitivity analysis indicate the effectiveness of the proposed solution method.
    Introduction
    The rapidly expanding Internet provides an opportunity for organizations to distribute their products via a direct channel while continuing to sell through the traditional retail channel. Although a hybrid channel strategy provides firms with many benefits and enables them to capture a larger share of the market, combining the retail distribution channel with a direct channel may pose some challenges (Chiang et al., 2003). A comprehensive review of multi-channel models can be found in Cattani et al. (2004) and Tsay and Agrawal (2004). On the other hand, disruption in supply networks is an important supply chain risk. Natural or man-made disasters such as equipment breakdowns, labor strikes, traffic interruptions, earthquakes, floods, and hurricanes may cause supply disruptions (Chen & Xiao, 2015). In this paper, we focus on supply disruption caused by production downtime. One of the most common risk-mitigation policies is flexible multiple sourcing. We use both a regular production run and an outsourcing mode because of the production disruption risk and the uncertainty of capacity allocation. One applied study on pricing and disruption management is by Huang et al. (2013), in which production costs are disrupted. Yu et al. (2009) focus on evaluating the impact of supply disruption risks on the choice between the well-known single and dual sourcing methods in a two-stage supply chain with non-stationary, price-sensitive demand. Chen and Xiao (2015) developed supply chain game models with multiple uncertainties and an outsourcing mode driven by the manufacturer's production disruption risk and uncertainty of capacity allocation. In the literature examined, the effect of outsourcing on pricing and production planning in a dual-channel supply chain under disruption risk has not been taken into account.
    Materials and Methods
    We consider a dual-channel supply chain in which a manufacturer sells to retailers as well as directly to end customers. The manufacturer sells the products to the retailer at wholesale price w. The retailer sells the products to end customers at a retail price, and the manufacturer also sells directly to end customers at a direct-channel price. We assume that the demand functions in the two channels are random and linear in self-price and cross-price effects. The manufacturer's regular production capacity is denoted by Y. We assume production is subject to a random disruption risk; under disruption, regular production drops to zero. The probability of a production disruption is denoted by P. When a supply disruption occurs, the manufacturer cannot fulfill the retailer's order. Therefore, we assume that in addition to the regular production run, the manufacturer has access to an outsourcing option with a higher procurement cost whose production is perfectly reliable. The expected total profit of the integrated dual-channel supply chain comprises the total revenue and the production, holding, and shortage costs of both the manufacturer and the retailer under both the disruption and non-disruption situations.
    Then, to simplify the model, the problem is reformulated based on scenarios.
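    The scenario structure described above can be sketched as follows. This is an illustrative toy model, not the paper's actual formulation: the linear demand form, all parameter values, and the function names are assumptions for illustration only.

```python
# Illustrative sketch of the expected profit of an integrated dual-channel
# supply chain under a production-disruption scenario. All parameter values
# (base demands, price sensitivities, costs) are hypothetical.

def demand(a, b, c, own_price, cross_price):
    """Linear demand: base market a, own-price sensitivity b, cross-price effect c."""
    return max(0.0, a - b * own_price + c * cross_price)

def expected_profit(p_retail, p_direct,
                    a_r=100, a_d=80, b=2.0, c=0.5,
                    unit_cost=10.0, outsource_cost=18.0, p_disrupt=0.2):
    d_r = demand(a_r, b, c, p_retail, p_direct)   # retail-channel demand
    d_d = demand(a_d, b, c, p_direct, p_retail)   # direct-channel demand
    total = d_r + d_d
    # Scenario 1: regular production available (probability 1 - p_disrupt).
    profit_normal = p_retail * d_r + p_direct * d_d - unit_cost * total
    # Scenario 2: regular production disrupted; demand is met by the perfectly
    # reliable but more expensive outsourcing option.
    profit_disrupted = p_retail * d_r + p_direct * d_d - outsource_cost * total
    return (1 - p_disrupt) * profit_normal + p_disrupt * profit_disrupted
```

    As expected, a higher disruption probability lowers the expected profit, since the costlier outsourcing scenario receives more weight.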
    Results and Discussion
    In this study, in order to achieve optimal pricing and outsourcing decisions, a simulated annealing (SA) algorithm is developed. To obtain better output from SA, the initial solution is taken from the scenario-based model, which is solved in GAMS, and then fed into the SA algorithm. This shows that combining SA with the scenario-based model in this specific way can exploit the advantages of both methods. The sensitivity analysis shows that as the price sensitivity of either the direct-channel or the retail-channel demand increases, the prices in both channels decrease; as the potential market demand increases, prices rise; and as the outsourcing cost increases, prices in both channels are reduced.
    Conclusion
    In this paper, a nonlinear stochastic model for pricing and determining the amount of outsourcing in a dual-channel supply chain under disruption was presented. Given the nonlinearity and complexity of the model, the simulated annealing algorithm was used to solve it. To improve the algorithm and bring its solution closer to the optimum, the algorithm's initial solution was obtained from the scenario-based model.
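    The warm-started simulated annealing idea can be sketched as below. This is a generic SA skeleton, not the paper's implementation: the objective function, neighbourhood move, cooling schedule, and the warm-start point (standing in for the scenario-based model's solution) are all assumed for illustration.

```python
# Minimal simulated annealing with a warm start: the initial solution plays
# the role of the scenario-based model's output. Objective is a hypothetical
# negative supply-chain profit over (retail price, direct price).
import math
import random

def neighbour(x, rng, step=1.0):
    """Random perturbation of each decision variable."""
    return [xi + rng.uniform(-step, step) for xi in x]

def simulated_annealing(objective, x0, t0=100.0, cooling=0.95,
                        iters_per_temp=20, t_min=1e-3, seed=0):
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = neighbour(x, rng)
            fc = objective(cand)
            # Accept improvements always; worse moves with Boltzmann probability.
            if fc < fx or rng.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x, fx
        t *= cooling
    return best, fbest

def neg_profit(p):
    """Hypothetical concave profit -> minimize its negative."""
    pr, pd = p
    return -(pr * (100 - 2 * pr + 0.5 * pd) + pd * (80 - 2 * pd + 0.5 * pr)
             - 10 * (180 - 1.5 * pr - 1.5 * pd))

warm_start = [25.0, 20.0]  # stands in for the scenario-based model's solution
best, fbest = simulated_annealing(neg_profit, warm_start)
```

    Because SA only ever keeps the best solution seen, the final objective value can never be worse than the warm start's, which is the rationale for seeding the algorithm this way.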
    Keywords: Dual-channel Supply Chain, Risk, Disruption Management, Pricing
  • Mohammad Reza Sadeghi Moghadam*, Hossein Safari, Reza Barani Beyranvand Pages 175-197
    Iran is a country where the rate of natural disasters is high compared with other countries. Therefore, humanitarian supply chain management before, during, and after disasters is of great importance. Performance assessment of the humanitarian supply chain is one of the key activities in humanitarian supply chain management; it specifies the current performance and how to improve it.
    First, the performance dimensions and indicators of the earthquake supply chain in the country were identified. For this purpose, the output of a master's thesis at Tehran University was used, and the weights of the performance dimensions and indicators were obtained by normalizing factor loadings. Then, the selected earthquakes were assessed with the grey approach. Two scenarios were used to make the assessment reasonable and appropriate. In the first scenario, only the earthquakes' own information was used; in the second, after setting standards for the qualitative and quantitative indicators, this information entered the calculations in the form of a hypothetical earthquake. The results are based on the second scenario, which is more complete than the first: the humanitarian supply chain performances in the Dashti and Ahar earthquakes were equal and better than that in Bashagard.
    Introduction
    Humanitarian supply chain is a special type of supply chain with unique characteristics that distinguish it from commercial supply chains. Its performance plays a major role in confronting, controlling, and reducing the impacts of disasters (Schulz and Heigh, 2009). Vast losses and casualties caused by natural disasters in cities around the world have led to extensive applied research on humanitarian supply chain and relief logistics management aimed at minimizing losses in a crisis. Among all areas of the humanitarian supply chain, performance assessment is highly significant and yet has been relatively neglected compared with other concepts: among organizations active in the humanitarian field, only 45% address performance assessment, and of those, only 20% constantly monitor and measure their performance across all activities (Scholten, 2013). By assessing the performance of the humanitarian supply chain in past disasters, countries can gain a proper understanding of this supply chain and use the resulting information as feedback to improve its performance in future disasters that will inevitably affect them. Consequently, this study aims to develop a quantitative approach for assessing the performance of the humanitarian supply chain in Iran's earthquake disasters.
    Materials and Methods
    In this paper, we used the indicators and performance dimensions of an M.A. thesis titled "Assessing performance of humanitarian supply chain through benchmarking from performance assessment models (a case study: some of the last earthquakes in Iran)", carried out in the management department of Tehran University by the current authors, covering the three phases before, during, and after an earthquake with 13 performance dimensions and 44 performance indicators. We also used the output of that study (factor loadings) to determine the significance and weights of those dimensions and indicators: after normalization, the factor loadings were taken as the weights. Then, we applied the grey approach with two scenarios to assess the performance of three recent catastrophic disasters in the country: the first scenario calculates the performance of each earthquake separately, and the second calculates performance relative to an ideal earthquake with the best indicator values. The decision-making procedure of the grey approach consists of the following steps:
    - First step: creating the decision-making matrix
    - Second step: normalizing the decision-making matrix
    - Third step: defining the reference option
    - Fourth step: calculating the Minkowski distance
    - Fifth step: calculating the grey relational coefficient
    - Sixth step: calculating the grey relational grade
    The reasons for applying the grey approach are the uncertainty in the qualitative indicators and its ability to consider quantitative indicators and uncertain numbers simultaneously, integrating the assessment procedure into a single overall conclusion. After reviewing the earthquakes of the last two decades, and given the constraints on access to their information and data, three earthquakes were selected: Ahar, Heris, and Varzaghan in East Azerbaijan Province (2012), Dashti in Bushehr Province (2013), and Bashagard in Hormozgan Province (2013). They were chosen because of the rough equivalence of their conditions (comparable magnitude, number of people affected, and required relief aid) and the availability of data and authorities to compare them on those characteristics.
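    The steps listed above can be sketched compactly. This is a generic grey relational analysis sketch, not the thesis's exact procedure: the data matrix, the weights, the benefit-type (larger-is-better) normalization, and the distinguishing coefficient value of 0.5 are all assumptions for illustration.

```python
# Grey relational analysis sketch: rows of `matrix` are alternatives
# (e.g. earthquakes), columns are benefit-type performance indicators.

def grey_relational_scores(matrix, weights, zeta=0.5):
    # Step 2: min-max normalize each indicator to [0, 1] (larger is better).
    cols = list(zip(*matrix))
    norm = [[(x - min(c)) / (max(c) - min(c)) if max(c) > min(c) else 1.0
             for x, c in zip(row, cols)] for row in matrix]
    # Step 3: reference option = best normalized value per indicator.
    ref = [max(c) for c in zip(*norm)]
    # Step 4: element-wise distances to the reference option.
    deltas = [[abs(r - x) for r, x in zip(ref, row)] for row in norm]
    dmin = min(min(row) for row in deltas)
    dmax = max(max(row) for row in deltas)
    # Step 5: grey relational coefficients (zeta = distinguishing coefficient).
    coeffs = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
              for row in deltas]
    # Step 6: weighted grey relational grade per alternative.
    return [sum(w * c for w, c in zip(weights, row)) for row in coeffs]

# Hypothetical data: three alternatives, three weighted indicators.
scores = grey_relational_scores(
    [[0.9, 0.8, 0.7], [0.6, 0.9, 0.8], [0.5, 0.4, 0.6]],
    [0.5, 0.3, 0.2])
```

    The grades lie in (0, 1], and an alternative that is dominated on every indicator necessarily receives a lower grade, which is what makes the grade usable as a ranking score.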
    Results and Discussion
    This study was expected to determine the performance of the humanitarian supply chain in the selected earthquakes in terms of the performance dimensions, the three phases considered, and, finally, the whole supply chain.
    In the first scenario, the information of the three earthquakes was used in the different steps, and their performance status was determined on this basis. It can be said that the Bashagard earthquake's condition was better than the other two, while Ahar and Dashti differed only slightly. In the reaction phase, Bashagard showed lower performance than the other two earthquakes. In the phase after the earthquake, the order was Ahar > Bashagard > Dashti. Regarding the final results of the first scenario, the East Azerbaijan earthquake (0.806) performed better than the Bushehr (0.750) and Hormozgan (0.684) earthquakes. The minor differences among these numbers indicate that the performance in the three earthquakes was very close, with no great distinction among them.
    A second scenario was considered to make the comparison more rational. According to its results, in the phase before the earthquake the Bashagard earthquake had the better condition, while the performances of Ahar and Dashti were 0.403 and 0.400, respectively, a small difference. In the reaction phase, the Dashti, Ahar, and Bashagard earthquakes, in that order, were closest to the standard performance. Performance in the phase after the earthquake was 0.460, 0.447, and 0.418 for Ahar, Dashti, and Bashagard, respectively. From the overall results we can conclude that the Ahar and Dashti earthquakes, both at 0.467, show similar performances, and, as in the previous scenario, the Bashagard earthquake (0.437) has the largest distance from the standard and the least desirable performance.
    The difference in performance scores between the two scenarios is noteworthy: scores in the second scenario are lower than in the first. This is expected, because the grey approach is based on the distance from an ideal state. In the first scenario, the ideal state is taken from among the available earthquakes, whereas in the standard-performance scenario the ideal state is defined by existing standards with larger values; the earthquakes' distances to the ideal are therefore larger, and lower performance scores are obtained.
    Conclusion
    Based on the second scenario, which is more complete than the first, the performances of the humanitarian supply chain in the Dashti and Ahar (Heris and Varzaghan) earthquakes are equal and better than that of the Bashagard earthquake. Finally, it is noteworthy that this study provides a previously unavailable framework for assessing the performance of the country's humanitarian supply chain in earthquakes, demonstrated on three sample earthquakes.
    Keywords: Humanitarian Supply Chain, Performance Assessment, Grey Approach, Scenario
  • Alireza Shahraki*, Omid Farasat Pages 199-219
    The purpose of this article is to present a simulation-optimization approach for improving the performance of cellular manufacturing systems by optimizing operator allocation and the job dispatching rule in each cell. To bring the problem as close as possible to real-world conditions, we consider stochastic parameters, machine breakdowns, and multiple products. The approach combines Taguchi design of experiments, discrete-event simulation, artificial neural networks, and data envelopment analysis. First, the controllable and response variables are determined based on the objective of the study and expert judgment. Then, design of experiments is used to develop experimental scenarios based on the controllable variables, and simulation is used to evaluate these scenarios and their response variables. Next, to extend the experimental results to the whole feasible solution space, artificial neural networks are used. Finally, the optimum scenario is determined using data envelopment analysis, compared with the present condition of the case, and the resulting improvements are determined. To evaluate the performance of the presented approach, a lathing factory that uses a cellular manufacturing system is considered as the case study.
    Introduction
    Because a high proportion of manufacturing systems around the world are organized as cellular manufacturing systems, optimizing these lines is of great importance, and many researchers have studied it. Most have considered the problem in simplified terms, ignoring many realistic assumptions, and have optimized cellular manufacturing problems using mathematical models and meta-heuristic algorithms. It should be noted, however, that assumptions such as uncertain problem parameters, machine breakdowns, and variable demand reflect the dominant conditions in cellular manufacturing; incorporating them brings the problem as close as possible to real-world conditions and makes the research results more practical. Because of the complicated nature of such problems, mathematical modelling alone is not efficient, and simulation is one of the best approaches at hand: a simulation model can treat all problem parameters as stochastic, bringing the model much closer to reality. The purpose of this study is to present an approach for optimizing operator allocation and job dispatching rules on machines in a cellular manufacturing environment, in order to minimize delay cost per piece and maximize the average efficiency of the machines. Since the study pursues multiple objectives, the simulation model includes several responses. In the end, the optimal number of operators to allocate to each cell and the optimal job dispatching rule in each cell are determined with the aim of achieving the objectives of the problem. Azadeh et al. used fuzzy data envelopment analysis (FDEA) and computer simulation to optimize operator allocation in a cellular manufacturing system.
They demonstrated the effectiveness and superiority of the method through a practical case study (Azadeh et al., 2010). In addition, an approach for multi-response optimization using an artificial neural network (ANN) and data envelopment analysis (DEA) was studied by Bashiri et al. (2013). The studies done so far show that optimal operator allocation together with optimal job dispatching rules in a cellular manufacturing system has not been addressed under stochastic conditions, and from this point of view the present study is unique.
    Materials and Methods
    This section describes the proposed methodology, which is illustrated in Figure 1.
    Results and Discussion
    In the present study, the cellular manufacturing system was first examined and the data needed to simulate it were collected. After the initial simulation of the manufacturing system in the ARENA simulation software, the controllable variables were determined according to the features of the system. Then, using Taguchi's experimental design method in the Minitab software, experimental scenarios were designed from various combinations of the controllable variables. The simulation model was modified and run for each experimental scenario, and the response variables, which are the problem's objective functions, were extracted. Because the optimum scenario cannot be identified without evaluating the untested scenarios, artificial neural networks were then used to extend the experimental results to the entire space of possible settings: the experimental scenarios and their results were used as training data for the neural network. After parameter tuning, the optimal neural network was identified; Table 1 shows that network number 7, with a 6.8% error, was chosen as the optimal structure.
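    The surrogate-modelling step, fitting a neural network to the experimental scenarios so that untested scenarios can be evaluated, can be sketched as below. This is a minimal single-hidden-layer network trained by plain stochastic gradient descent, not the network structure used in the study; the toy data, layer size, and learning rate are all illustrative assumptions.

```python
# Minimal ANN surrogate: maps scaled controllable variables to a scaled
# response (e.g. average machine efficiency). Pure-Python, one hidden layer.
import math
import random

rng = random.Random(42)  # fixed seed for reproducibility

def init(n_in, n_hidden):
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    return w1, b1, w2, 0.0

def forward(params, x):
    w1, b1, w2, b2 = params
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    y = sum(wi * hi for wi, hi in zip(w2, h)) + b2
    return h, y

def train(data, n_hidden=4, lr=0.05, epochs=2000):
    w1, b1, w2, b2 = init(len(data[0][0]), n_hidden)
    for _ in range(epochs):
        for x, t in data:
            h, y = forward((w1, b1, w2, b2), x)
            err = y - t
            # Backpropagation for squared error.
            for j in range(n_hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)
                for i in range(len(x)):
                    w1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
                w2[j] -= lr * err * h[j]
            b2 -= lr * err
    return w1, b1, w2, b2

# Toy "experimental scenarios": (controllable variables, simulated response).
data = [([0.1, 0.2], 0.30), ([0.4, 0.1], 0.45), ([0.7, 0.8], 0.85),
        ([0.2, 0.9], 0.60), ([0.9, 0.3], 0.70)]
model = train(data)
mse = sum((forward(model, x)[1] - t) ** 2 for x, t in data) / len(data)
```

    Once trained, the surrogate can score any untested combination of controllable variables cheaply, which is the role the ANN plays before the DEA step selects the optimum scenario.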
    Keywords: Simulation Optimization, Cellular Manufacturing System, Operator Allocation, Job Dispatching Rules, Artificial Neural Network, Data Envelopment Analysis