Table of Contents

Journal of the Earth and Space Physics (Quarterly)
Volume 38, Issue 4 (Winter 1391 / 2012-2013)

  • Publication date: 1392/01/22 (Solar Hijri; 11 April 2013)
  • Number of articles: 18
  • Seyed Hadi Dehghan Manshadi, Noorbakhsh Mirzaei, Morteza Eskandari Ghadi Page 1
    In this work, the region bounded by 49.5-54° E and 31-34.1° N was selected for studying the seismotectonics and seismicity of the Esfahan region. Geological maps of the Esfahan area at scales of 1:250,000 and 1:100,000 were used to extract the faults of the region more precisely, and the result was used in the seismotectonic study. Examination of the focal mechanisms of 18 earthquakes with Mw ≥ 4.3 in this region shows a dominant reverse component for most of them (11 events). Using historical and instrumental earthquake data from global (USGS, ISC) and local sources, a complete and uniform catalog comprising 170 instrumental and 10 historical earthquakes was compiled for the region, and the seismicity of the area was studied on this basis. To make the catalog uniform, a relation for converting the mb magnitude scale to Ms was derived and applied for each seismotectonic province. For a more accurate calculation of the seismicity parameters, the magnitude uncertainty of the earthquakes and the completeness threshold of the data were examined and taken into account. Within the study area, the b-value is estimated at 0.95 ± 0.05 for the Zagros seismotectonic province and 0.81 ± 0.12 for Central-East Iran, and the earthquake occurrence rate, λ, for events with Ms ≥ 3 at 4.976 ± 0.413 and 0.48 ± 0.129, respectively. The maximum expected earthquake magnitude, Mmax, is estimated at 7.4 ± 0.67 for the Zagros province within the study area and 7.8 ± 1.74 for Central-East Iran. The effect on the seismicity parameters of not subdividing the instrumental data into classes (each with its own completeness threshold and magnitude uncertainty) has also been examined; the outcome is that its effect on the b-value and the occurrence rate is more pronounced than on Mmax, while subdividing the data markedly reduces the uncertainty of the seismicity parameters.
    Keywords: Magnitude uncertainty, Esfahan region, Seismicity, Data completeness threshold, Seismotectonics
  • Vahid Maleki, Zaher Hossein Shomali, Mohammad Reza Hatami Page 23
    Nonlinear location, using a probabilistic solution and the construction of its probability density function, gives a more acceptable and more accurate answer than the methods commonly used to locate earthquakes. In this study, synthetic tests are carried out to examine the effect of limited station coverage on the nonlinear method; the effects of an increasing azimuthal gap and of removing the nearby stations on earthquake location are also simulated. Sixteen earthquakes that occurred in the Central Alborz region are then located with the nonlinear method, and the results are compared with the routine linearized locations produced by the Iranian Seismological Center of the Institute of Geophysics, University of Tehran (IGUT) and by the International Institute of Earthquake Engineering and Seismology (IIEES). Depth sections of the earthquakes located in this study are also compared with the results of those centers as a qualitative check of the results.
    Keywords: Probability density function, Central Alborz, Azimuthal gap, Earthquake location, Nonlinear method
  • Mahdi Alipour Kallehbasti, Bahman Bohloli Page 39
    In some oil fields, the low permeability of the reservoir rock leads to negligible well production. One of the methods that can be used to increase permeability and, ultimately, well production is thermal fracturing of the rock. The thermal stress induced by injecting a cold fluid into the reservoir is in effect part of the stress required in the hydraulic fracturing process, and it reduces the rock breakdown pressure. If this stress reaches a critical level, it fractures the rock; as fractures multiply, the permeability of the reservoir rock mass increases and well production rises. In this laboratory study, the effect of the temperature difference and of the stress induced in the rock has been investigated. The experimental results show that microfractures can be created with a temperature difference of 90 °C; on this basis, two of the four samples showed traces of microfracturing. These fractures occurred mainly in the middle part of the samples, which is attributed to the way the thermal (cooling) profile develops and the stress front forms in that part of the samples.
    Keywords: Thermal stress, Hydraulic fracturing, Oil fields, Injection, Thermal fracturing, Fracture, Breakdown pressure
  • Keyvan Najafzadeh, Mohammad Ali Riahi, Mohsen Seyedali Page 49
    Permeability is a key parameter in reservoir characterization. In fact, there are no accurate solutions to reservoir engineering problems without access to accurate permeability information. So far, the oil industry has tried to obtain permeability values through laboratory measurements or well testing. Although these have been offered as solutions, they are not sufficient and reliable methods for describing a reservoir.
    It has often been assumed that, because of the limited resolution of seismic data and the hydraulic nature of the permeability parameter, direct estimation of reservoir permeability from seismic data is not feasible either. In many cases, estimation of reservoir permeability is limited to the core scale and the vicinity of well logs.
    In this paper, the sequential Gaussian simulation method is used as a reliable tool for estimating permeability values from seismic data and porosity values. First, a reliable structural model of the study area (one of the hydrocarbon fields of southern Iran) was built with the help of the seismic data and the logs of the wells within the reservoir. Then, reservoir porosity was estimated from the seismic impedance data using the collocated cokriging method, and permeability at different levels of the reservoir was computed using the simulation method and the estimated porosity values. Finally, the results were validated by cross-validation. Given the complex geological structure of the studied reservoir and the numerous faults in the area, the cross-validation shows that the permeability estimates obtained by the simulation method have acceptable validity.
    Keywords: Permeability estimation, Simulation, Porosity estimation, Collocated cokriging, Deterministic and probabilistic methods
  • Hamidreza Soleymani, Mohammadreza Sokooti Diarjan, Mohammad Ali Riahi Page 57
    Pore pressure, the pressure of the fluid within the pore space of a rock, is one of the important quantities in the exploration and drilling of oil and gas resources. High pore pressure causes numerous problems, including well blowout, wellbore instability, stuck drill pipe, and loss of mud circulation. Numerical prediction of pore pressure in highly pressured formations before drilling therefore appears essential for safe and economical drilling. In exploration, the concept of pore pressure can also be used in analyzing oil migration pathways and in assessing cap rock quality.
    In this study, to predict pore pressure quantitatively, a velocity to pore pressure transform calibrated against the wells of the area has been used; producing a suitable seismic velocity cube is therefore of particular importance here. It should be noted that velocities obtained from seismic data processing usually lack sufficient accuracy for pressure analysis, owing to their low resolution and to the assumptions of a layered earth and hyperbolic moveout. In the present study, an attempt has been made to build a velocity model suitable for pore pressure analysis by using stacking velocities and the inversion of the seismic data, together with sonic log data from the wells of the area. To convert the derived velocities to effective pressure, the Bowers (1995) relation, calibrated against the wells of the area, has been used. Finally, the validation results indicate that the pore pressure model obtained by this method is adequate.
    Keywords: Overburden pressure, Pore pressure, Inversion, Hydrostatic pressure, Geostatistics
  • Asghar Rastbood, Behzad Voosoghi Page 71
    In this research, the role of the various faulting parameters in geodetic data is investigated; the aim is to examine the effect of the geometrical and physical input parameters in viscoelastic models of crustal deformation. The viscoelastic model of Wang et al. (2006) is used to carry out this analysis. High-quality data on the time-dependent deformation of the Earth's surface can be used to extract more detail on the temporal and spatial variations of earthquake-related crustal deformation. These processes involve numerous variables, some of which are determined more accurately than others. In this research, surface deformation is modeled for dip-slip and strike-slip faulting in a medium consisting of an elastic layer over a viscoelastic half-space, and a sensitivity analysis is performed on all of their geometrical and physical parameters. A review of the various methods for determining the geometrical parameters of faulting and the physical parameters of the fault zone suggests restricting the analysis to the parameters that are determined less accurately from field data, namely the viscosity of the half-space, the thickness of the elastic layer, and the dip angle of the fault plane. According to the results of this research, the sensitivity of the viscoelastic model to the input parameters does not depend on the type of faulting. Sensitivity analysis can be used to determine where the displacement data produced by the model are most suitable for recovering the values of the parameters under study. For example, according to the results, the best data for obtaining the viscosity are the postseismic deformation over the area where the rupture occurred; nevertheless, any area with large postseismic displacements can provide useful information. For the thickness of the elastic layer, the use of postseismic data from the area above the fault plane is recommended, whereas the dip angle is better determined from coseismic data.
    Keywords: Postseismic deformation, Coseismic deformation, Sensitivity analysis, Viscoelastic half-space, Reference fault, Modeling, Elastic layer
  • Mahrouz Khademi, Mahdi Najafi Alamdari, Mohammad Ali Sharifi Page 89
    Recovery of the Earth's global gravity field in terms of spherical harmonic coefficients from the observations of dedicated gravimetry satellites can be carried out by various methods. In this paper, the energy integral method is used for the recovery. In this method, the satellite velocity vector, required for recovering the gravity field, must be computed by numerical differentiation. The Newton, spline, and Taylor differentiation methods were employed to compute the derivatives (the velocity vector). Simulated dynamic observations of the GRACE satellite in a field generated by the EGM96 geopotential model were used, and the geopotential coefficients were recovered up to degree and order 70 with the aforementioned methods. The effect of observations contaminated by gross errors on the differentiation methods was also examined. Finally, the results are displayed as geoid height differences and degree variances relative to the EGM96 coefficients. In gravity field recovery by the energy integral method, the Newton differentiation showed the highest accuracy.
    Keywords: Gravity field recovery, Numerical differentiation, GRACE satellite, Energy integral method
  • Mojtaba Rashvand, Behrooz Oskooi Page 105
    This study introduces and compares the horizontal gradient of the analytic signal method and the combined analytic signal-Euler deconvolution method, both based on the analytic signal, for the interpretation of magnetic data. Without needing prior information about the source of the magnetic anomaly, both methods give a good estimate of the position and shape of the source under study. The horizontal gradient of the analytic signal method is an automatic method for the simultaneous estimation of the depth, horizontal location, and shape of the source of a magnetic anomaly; by combining the equations of the analytic signal, Euler's equation, the enhanced analytic signal, and the horizontal gradient of the analytic signal, it provides a linear equation for determining the parameters of a two-dimensional source. The most important characteristics of this method are its use of low-order magnetic derivatives and its efficiency in interpreting data of low quality. The basic relations of the combined analytic signal-Euler deconvolution method are obtained by combining the analytic signal and Euler deconvolution equations and substituting the appropriate derivatives of the homogeneous Euler equation into the analytic signal relation. In this study, the accuracy and efficiency of these two (similar) methods in interpreting magnetic anomalies caused by magnetized geological structures are examined. To compare the accuracy and efficiency of each method, the results of applying them to noise-contaminated synthetic data, produced by forward modeling for different structures at different depths, are presented. The results show a smaller error for the horizontal gradient of the analytic signal method in estimating the shape and horizontal location, and a greater efficiency for the combined analytic signal-Euler deconvolution method in determining the depth of the source of the magnetic anomaly.
    Keywords: Low-order magnetic derivatives, Euler deconvolution, Magnetic anomaly, Analytic signal
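As background to the two methods compared in this abstract, here is a minimal sketch (not the paper's algorithms) of the building block they share: the analytic signal amplitude |A(x)| = sqrt((dT/dx)^2 + (dT/dz)^2) of a magnetic profile, where the vertical derivative is obtained as the Hilbert transform of the horizontal one. The anomaly model is an arbitrary synthetic.

```python
# Minimal sketch: analytic signal amplitude of a 2-D magnetic profile.
# The synthetic anomaly, grid, and source position are all invented.
import numpy as np
from scipy.signal import hilbert

x = np.linspace(-200.0, 200.0, 801)               # profile coordinate (m)
T = 100.0 / (1.0 + ((x - 20.0) / 30.0) ** 2)      # synthetic total-field anomaly (nT)

dTdx = np.gradient(T, x)                          # horizontal derivative
dTdz = np.imag(hilbert(dTdx))                     # Hilbert pair ~ vertical derivative
A = np.hypot(dTdx, dTdz)                          # analytic signal amplitude

print("analytic-signal maximum at x =", x[A.argmax()], "m (source at x = 20 m)")
```

The maximum of |A| sits over the source edge regardless of magnetization direction, which is why both methods in the paper build on this quantity.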
  • Majid Jami, Behrooz Oskooi Page 117
    The aim of this study is to improve the resolution of the results of least-squares inversion of two-dimensional magnetotelluric data by using a new smoothing method. This method, called active constraint balancing, tries to improve the resolving power and the stability of the inversion results by balancing the constraints in the least-squares inversion according to the sensitivity of the particular problem. For this purpose, a pre-existing program written in the Fortran and MATLAB programming environments was used; with complementary modifications to it, modeling of magnetotelluric data and evaluation of the active constraint balancing algorithm became possible. The necessary assessments were made by applying the inversion algorithm to synthetic magnetotelluric data for simple earth models as well as to field data, and by comparing the results with those of existing inversion programs. The small rms error and misfit between the data and the model obtained by this method, compared with the results of the conventional inversion approach that uses a fixed Lagrange multiplier throughout the inversion process, show that this method is a useful approach for reaching results of greater stability and resolution.
    Keywords: Least squares, Magnetotellurics, Lagrange multiplier, Stability, Resolution
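For orientation, a minimal sketch (not the Fortran/MATLAB code adapted in the paper) of one Gauss-Newton step of a smoothness-constrained least-squares inversion, m_new = m + (J^T J + lam L^T L)^-1 J^T r. Active constraint balancing, as described above, would replace the single scalar multiplier lam by sensitivity-dependent values; here everything is a toy linear problem with invented sizes.

```python
# Toy smoothness-constrained least squares with a varying Lagrange multiplier.
import numpy as np

rng = np.random.default_rng(3)
n_data, n_model = 40, 25
J = rng.normal(size=(n_data, n_model))        # toy Jacobian (sensitivities)
m_true = np.sin(np.linspace(0, np.pi, n_model))
d = J @ m_true + 0.05 * rng.normal(size=n_data)

L = np.diff(np.eye(n_model), axis=0)          # first-difference roughening matrix
m = np.zeros(n_model)
for lam in (10.0, 1.0, 0.1):                  # crude cooling of the multiplier
    r = d - J @ m                             # residual at current model
    m = m + np.linalg.solve(J.T @ J + lam * (L.T @ L), J.T @ r)

rms = np.sqrt(np.mean((d - J @ m) ** 2))
print(f"rms misfit after 3 steps: {rms:.3f}")
```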
  • Sanaz Esmaili, Mohammad Kazem Hafizi, Seyed Sajjad Jazayeri Jooneghani Page 131
    GPR is one of the high-resolution geophysical methods, with a very great capability for locating lost objects and buried installations and for rapidly mapping areas of archaeological interest. The method uses high-frequency electromagnetic waves (12.5 to 2300 MHz) to detect subsurface structures. One of its best applications is in archaeological investigations: it is a non-destructive method that helps archaeologists find buried antiquities without excavating or damaging them. In this paper, a site in the city of Kerman was surveyed with GPR in three dimensions using a 500 MHz antenna. The data were acquired in 3-D and then processed and depth-converted. Depending on the target of interest, the processing applied to the data can differ. With different choices of the processing sequence and of the parameters required at each processing step, one may arrive at different answers from the same set of raw data. The skill of the processor in choosing suitable parameters is usually as important as the effect of a suitable processing algorithm for reaching a good-quality image; in some circumstances, applying certain processing steps may even introduce unwanted problems into the data.
    Keywords: Static correction, GPR, Processing, Gain, Envelope filter
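To illustrate the parameter-choice point made above, a minimal sketch (not the processing flow actually used at the Kerman site) of a time-power gain applied to a synthetic GPR trace; the gain exponent is exactly the kind of per-step parameter whose choice changes the final image. The sampling, decay constant, and reflector positions are all invented.

```python
# Effect of a t^n power gain on a synthetic, attenuated GPR trace.
import numpy as np

dt_ns = 0.5                                   # sample interval (ns), assumed
t = np.arange(0, 100, dt_ns)                  # two-way time (ns)
rng = np.random.default_rng(7)

# Two reflections at 30 ns and 70 ns, with exponential amplitude decay.
trace = np.exp(-0.05 * t) * (np.sin(2 * np.pi * 0.5 * t) *
        ((np.abs(t - 30) < 2) | (np.abs(t - 70) < 2))) + 0.002 * rng.normal(size=t.size)

gained = trace * (t / t.max()) ** 1.5          # t^1.5 gain; the exponent is a choice

late = np.abs(t - 70) < 2                      # window around the deeper reflector
print("late-reflection amplitude before gain:", np.abs(trace[late]).max().round(4))
print("late-reflection amplitude after gain :", np.abs(gained[late]).max().round(4))
```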
  • Ali Nejati Kalateh, Amin Roshandel Kahoo Page 145
    Various methods have so far been introduced for detecting the edges of anomalies, including the analytic signal, the tilt angle filter, and total horizontal gradients. In many cases, investigating low-amplitude anomalies is geologically more important than investigating large-amplitude ones, and it can be said that the ability of the existing methods to delineate the extent of geological structures depends strongly on the depth of the structure. In this paper, the two-dimensional Hilbert transform of the potential field data image is employed for balancing. To demonstrate its efficiency, the balancing method was applied to the images produced by two common methods that depend on the anomaly depth, profile curvature and the tilt angle. The results on synthetic data made clear that this method can achieve a desirable balance between the amplitudes of anomalies at different depths. One of the real data sets used relates to the deep Trompsburg granitic intrusion anomaly, and the other to part of the Central Iran sedimentary basin containing a salt structure. The balancing method has produced a considerable improvement in the quality of the images of the conventional methods.
    Keywords: Potential field, Data balancing, Anomaly edge, Hilbert transform
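As a point of reference, a minimal profile-only sketch (not the authors' 2-D implementation) of the tilt angle TA = arctan((dT/dz)/|dT/dx|), the depth-dependent edge detector that the balancing above is designed to improve. The shallow and deep synthetic sources are invented; note how the tilt already equalizes amplitudes that differ by a factor of ten in the raw field.

```python
# Tilt angle of a synthetic profile containing a shallow and a deep source.
import numpy as np
from scipy.signal import hilbert

x = np.linspace(-500.0, 500.0, 2001)
# Shallow (sharp, strong) anomaly at x = -150; deep (broad, weak) at x = +150.
T = 50.0 / (1 + ((x + 150) / 20.0) ** 2) + 5.0 / (1 + ((x - 150) / 120.0) ** 2)

dTdx = np.gradient(T, x)
dTdz = np.imag(hilbert(dTdx))                 # Hilbert pair of the x-derivative
tilt = np.arctan2(dTdz, np.abs(dTdx))         # bounded to [-pi/2, pi/2]

print("raw amplitude ratio  :", T.max() / 5.0)
print("tilt amplitude ratio ~",
      np.abs(tilt[x < 0]).max() / np.abs(tilt[x > 0]).max())
```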
  • Abolghasem Kamkar Rouhani, Esmaeil Eshaghi, Alireza Arab Amiri Page 155
    Ground-penetrating radar (GPR) is used as a non-destructive method for detecting shallow subsurface targets, based on transmitting electromagnetic waves into the ground and recording the reflections received from the transmitted waves. The grain size and the clay content of subsurface materials can be detected and studied because of the changes they produce in electrical conductivity and permittivity and, consequently, the relative change in the penetration depth of the GPR waves. Shallow subsurface structures such as subsurface cavities, with their different materials and thicknesses, can also be detected with GPR owing to the contrast in their electrical properties (conductivity and permittivity). In this research, GPR surveys were carried out along 5 profiles in the Darkhaniab area of the Mojen district of Shahrood, using a 250 MHz shielded antenna. Because of the small separation between the GPR transmitter and receiver, the conductivity of the surveyed ground, and the presence of unwanted low-frequency reflections superimposed on and distorting the high-frequency reflections, a dewow filter was applied to all the acquired data. Other processing steps applied to the data include various gains, used to compensate for the attenuation of the waves with increasing penetration depth. Envelope processing was also applied to the surveyed profiles so that the subsurface layering could be determined with high accuracy. The results of this research showed that, using the wave signatures in the GPR sections, targets such as subsurface cavities could be identified, coarse-grained subsurface sediments distinguished from fine-grained ones, and the moisture content of the different subsurface layers determined, with considerable success. The resolution of the GPR in this research was such that, in profiles with low subsurface conductivity, qanat boundaries such as the air-water and water-concrete interfaces could be delineated, as could the soil layering of the area. The study area is covered at shallow depth by horizontal soil layers a few centimeters thick, and its relatively high conductivity means that the penetration depth of the GPR waves rarely reaches even 2 m.
    Keywords: Clay content, Subsurface targets, Dielectric constant, Ground-penetrating radar
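A minimal sketch (not the software used in this survey) of two of the processing steps named above: dewow, here implemented as running-mean removal of the slow induction "wow", and the instantaneous-amplitude envelope used to emphasize layering. The trace, window length, and wow model are all invented.

```python
# Dewow (running-mean removal) and Hilbert envelope on a synthetic GPR trace.
import numpy as np
from scipy.signal import hilbert

dt = 0.4                                       # sample interval (ns), assumed
t = np.arange(0, 120, dt)
rng = np.random.default_rng(11)

wow = 0.5 * np.exp(-t / 60.0)                  # slow induction decay ("wow")
reflection = np.sin(2 * np.pi * 0.25 * t) * (np.abs(t - 60) < 4)
trace = wow + reflection + 0.01 * rng.normal(size=t.size)

win = int(8 / dt)                              # ~8 ns running-mean dewow window
kernel = np.ones(win) / win
dewowed = trace - np.convolve(trace, kernel, mode="same")

envelope = np.abs(hilbert(dewowed))            # instantaneous amplitude
print("envelope peaks at t =", t[envelope.argmax()], "ns (reflector at 60 ns)")
```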
  • Farnaz Hosseinpour, Ali Reza Mohebalhojeh, Farhang Ahmadi-Givi Page 175
    To study the climatology of the storm tracks in the winter of 2007-2008, in addition to the global GFS data, the global NCEP-NCAR reanalysis data from 1948 to 2005 were used for the climatological means. In the winter of 2007-2008, along with the decrease in eddy kinetic energy upstream of the Atlantic storm track, the eddy available potential energy in these regions also decreased relative to its climatological mean, owing to reduced baroclinic generation. Besides the baroclinic conversion term which, as the baroclinic agent, played the most important role in the strengthening and elongation of the jet upstream of the Atlantic storm track in the winter of 2007-2008, the increase in the positive values of the barotropic conversion, as the barotropic agent, also played an effective role.
    In the examination of the role of the energy terms in the Middle East, particularly in January 2008, baroclinic generation was the most important term in the energy supply downstream of the Mediterranean and was accompanied by strong downstream development with intense divergence of the ageostrophic flux. In this month, the positioning of the core of the Southwest Asian jet over the area from northern Saudi Arabia to southwestern Iran was the agent of the increase in the kinetic energy of the mean flow in these areas. The baroclinic conversion, baroclinic generation, and divergence of the ageostrophic flux reached their winter 2007-2008 maxima over northern Saudi Arabia and western Iran, strengthening the eddy energy and its strong radiation toward the central parts of Iran. The strong maximum of ageostrophic flux convergence over Iran shows that, on average, the Iran region was a receiver of energy in this month, which led to the deepening of the thermal trough and cold over Iran that was unprecedented in recent decades. In February 2008, the maximum positive anomaly of barotropic conversion in the Northern Hemisphere lay over Iran, and the downstream displacement of the Southwest Asian jet increased the mean kinetic energy in these areas. In this month, the sharp increase in barotropic conversion over the northern and southern parts of Iran and the local maximum of baroclinic generation over central Iran were accompanied, respectively, by increases in eddy kinetic energy and eddy available potential energy; on average, the Iran region in this month was a sender of energy to the neighboring regions to the east.
    Keywords: Eddy available potential energy, Eddy kinetic energy, Baroclinic conversion, Baroclinic generation, Ageostrophic flux, Barotropic conversion
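For reference, a sketch of the standard Lorenz-type eddy energetics definitions that budgets of this kind typically use; the paper's exact notation and pressure-weighting conventions are not reproduced here, and primes denote deviations from the time mean.

```latex
% Assumed standard notation: K_e eddy kinetic energy, A_e eddy available
% potential energy, C(A_e, K_e) the baroclinic conversion from A_e to K_e;
% omega' is the eddy pressure vertical velocity, alpha' the eddy specific
% volume, and sigma_s a static-stability parameter.
K_e = \tfrac{1}{2}\,\overline{u'^2 + v'^2}, \qquad
A_e = \frac{\overline{T'^2}}{2\,\sigma_s}, \qquad
C(A_e, K_e) = -\,\overline{\omega'\,\alpha'}
```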
  • Nafiseh Pegahfar, Abbas Ali Aliakbari Bidokhti, Peyman Zawar-Reza Page 189
    The urban boundary layer, with its higher pollution levels, has always been of interest to meteorologists, and many efforts have therefore been made to predict and diagnose the effective parameters in this layer, including field experiments, the development of various algorithms through parameterization, and numerical modeling, most of which apply to rural areas. Tehran, owing to its particular topography, heavy traffic, and concentration of industrial centers, is counted among the most polluted cities. In addition, the seasonal variations and diurnal cycle of some meteorological fields and boundary layer parameters also affect pollutant concentrations there. The influence of the boundary layer parameters is more evident at night, when the effect of traffic and industrial centers diminishes. The reason can be attributed to the formation of an inversion layer and of the stable nocturnal layer, which reduce the mixing depth; at night, therefore, pollutant concentrations increase because of the reduced volume available for dispersion, together with transport and persistence.
    Despite great advances in this field, estimating the depth of the nocturnal stable layer remains a major research challenge in urban settings, because at night the direction of the heat flux in an urban area differs from that in a non-urban area, which makes the application of algorithms developed for non-urban areas problematic in such regions.
    Given the importance of the subject, this study first examines the relationship between the depth of the nocturnal stable layer and the concentration of the particulate pollutant PM10. It then evaluates the validity of the various parameterizations used to estimate the nocturnal stable layer depth in the boundary layer algorithms of dispersion models, and identifies the best relation for this region. The results show that, of the 10 relations examined, those that account for the meteorological parameters and field variables separately produce larger errors. Among these relations, the one proposed by Zilitinkevich and Mironov (1996) produced the smallest error; it was constructed using 5 length scales and thus incorporates more boundary layer parameters. The results of this study show that this parameterization yields the largest correlation coefficient and the smallest error for estimating the stable layer depth over the city of Tehran.
    Keywords: Urban boundary layer, PM10 concentration, Nocturnal stable layer parameterization
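As background to the parameterizations being compared, a sketch of the classical limiting cases that multi-limit stable-boundary-layer depth formulas of the Zilitinkevich-Mironov type interpolate between; the empirical constants and the exact five-scale interpolation of Zilitinkevich and Mironov (1996) are not reproduced here.

```latex
% u_* : surface friction velocity, f : Coriolis parameter, L : Obukhov length,
% N : free-flow buoyancy frequency; C_n, C_s, C_i are empirical constants.
h \sim C_n \frac{u_*}{|f|} \ \text{(truly neutral)}, \qquad
h \sim C_s \sqrt{\frac{u_* L}{|f|}} \ \text{(surface-flux dominated)}, \qquad
h \sim C_i \frac{u_*}{N} \ \text{(imposed stratification)}
```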
  • Iman Soltanzadeh, Abbas Ali Aliakbari Bidokhti, Peyman Zawar-Reza Page 207
    Since every region has its own physical characteristics and physiographic conditions, the flow pattern in each region is entirely specific and unique to that region. What fully justifies more comprehensive and accurate studies of the wind and circulation pattern over the city of Tehran is the shortcomings of previous research on urban models and the uniqueness of the flow pattern in such areas. In this paper, considering idealized synoptic conditions and using a single-layer urban model coupled to the WRF model, an attempt is made to answer questions such as why the local nocturnal downslope flows have weakened over the past 5 decades. Observational data show that the increase in the 2 m temperature over the past 50 years correlates very well with the population growth of Tehran, which doubles the importance of including anthropogenic heating when calibrating mesoscale models for this region. The results show that including anthropogenic heating increases the nocturnal temperature contrast between the city and the non-urban environment, which leads to the formation of a plain-to-city flow (a more realistic simulation of the heat island); this southerly flow, together with the more realistic mixing produced by the anthropogenic heating, weakens the nocturnal downslope flow.
    Keywords: Urban model, Interaction of topographic and urban flows, Local flows, WRF mesoscale model
  • Sarmad Ghader, Ali Reza Mohebalhojeh, Marzieh Sharifidoust Page 223
    In the numerical solution of the Eulerian form of the shallow water equations, nonlinear instability must be controlled and prevented. A common remedy is to add a hyperdiffusion term to the vorticity equation. In the present study, three methods of high resolving power, namely the sixth-order supercompact and fourth-order compact finite-difference methods and a spectral-like method, as well as the second-order centered method, are used to solve the shallow water equations numerically in a plane. The effect of using different exponents in the hyperdiffusion term on the accuracy of the solution is then tested for the different schemes. The use of a compact filter as an alternative means of controlling nonlinear instability in the compact methods is also examined. Finally, the results of simulating the shallow water equations with the studied methods, and the effect of the amount of numerical diffusion on the accuracy of the solutions, are analyzed from the two standpoints of mass error and imbalance. The results show that for the second-order centered method, which is a method of low resolving power, the choice of the hyperdiffusion exponent is constrained by the grid resolution. From the standpoint of both mass error and imbalance, using a second-order hyperdiffusion term has an adverse effect on the accuracy of the solution. Moreover, using a sixth-order hyperdiffusion term and an eighth-order compact filter with the sixth-order supercompact method and the spectral-like method gives nearly identical results.
    Keywords: Shallow water equations, Compact filter, Supercompact method, Spectral-like method, Hyperdiffusion
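The generic form of the hyperdiffusion regularization discussed in this abstract, written in assumed notation (the sign convention is the usual one that makes the term dissipative):

```latex
% Vorticity equation with a hyperdiffusion term of order 2p; N(zeta, ...)
% stands for the advective and stretching terms. p = 1 gives ordinary
% diffusion; larger p confines the damping to near-grid scales.
\frac{\partial \zeta}{\partial t} + \mathcal{N}(\zeta, \ldots)
  = (-1)^{p+1}\, \nu_{2p}\, \nabla^{2p} \zeta
```

Higher exponents damp the smallest resolved scales more selectively, which is why the choice of p interacts with each scheme's resolving power, as the abstract describes.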
  • Hamid Nazaripour, Seyed Abolfazl Masoodian, Zahra Karimi Page 241
    Iran lies in the dry belt of the mid-latitudes; its mean precipitation is 250 mm, with strong temporal and spatial variability. Short-duration precipitation events are a characteristic of arid regions and are evident in Iran as well. Although precipitation in Iran has durations of 1 to 45 days and also exhibits strong spatiotemporal variability, the bulk of Iran's precipitation and of its wet days is supplied by short-duration events; the occurrence of long-duration precipitation is therefore regarded as an extreme event with strong variability. Precipitation lasting one to seven days has occurred over almost the whole of Iran, while longer-duration events have occurred only in some parts. As the duration of the events increases, their contribution to Iran's wet days decreases sharply; by contrast, increasing duration does not reduce their contribution to the total precipitation over the whole country. One-day precipitation events account for the maximum number of wet days over the whole of Iran and for the maximum precipitation over a large part of it, whereas longer-duration events are of little importance in Iran. The contribution of one-day events to Iran's wet days and precipitation amount has changed over time, and the Iran precipitation database was used to assess these changes. The assessment of the annual changes in their contribution shows that over one quarter of Iran's area their contribution to the wet days has decreased, while over only 3 percent of the area has it increased. In addition, over one fifth of Iran's area their contribution to the precipitation amount has decreased, while over only 6 percent of the area has it increased. Moreover, the assessment of the monthly changes in the contribution of one-day events to Iran's precipitation shows that the area of their negative trend in the rainy months of the year is larger than the area of their positive trend.
    Keywords: Duration, Iran, Trend analysis, Precipitation
  • Jamshid Ghanbari, Maryam Karimian*, Iman Babaeian, Mahdi Motiei Page 259
    From ancient times to the present, much research has been devoted to providing criteria for sighting the lunar crescent; yet despite the progress of these criteria, it has repeatedly happened that, although the astronomical conditions for sighting the crescent were satisfied, observers were unable to see it. One factor that strongly affects crescent sighting but has not been studied so far is atmospheric parameters. To this end, using the genetic algorithm technique, the observations of the years 1423 to 1430 AH for the cities of Mashhad, Bojnourd, Birjand, Esfahan, Shiraz, and Kerman were analyzed, in two groups with horizontal visibility of less than and more than ten kilometers, by both linear and nonlinear methods. The results show that the nonlinear models are more biased than the linear ones in both visibility groups, and that the skill of the linear modeling and of the greater-than-ten-kilometer visibility group exceeds that of the nonlinear method and of the less-than-ten-kilometer group, respectively. In general, atmospheric pressure and horizontal visibility affect the crescent illumination percentage more than the other meteorological parameters do.
    Keywords: Genetic algorithm, Lunar crescent sighting, Crescent illumination percentage, Atmospheric parameters
  • Seyed Hadi Dehghan Manshadi, Noorbakhsh Mirzaei, Morteza Eskandari Ghadi Page 1
    This paper presents the seismicity and seismotectonic characteristics of Esfahan and adjoining regions, an area bounded by 49.5-54° E and 31-34.1° N. According to Mirzaei et al. (1998), this region straddles two major seismotectonic provinces, Zagros and Central-East Iran, whose boundary is known to be the Main Zagros Reverse Fault. Geological maps at scales of 1:100,000 and 1:250,000 were used to prepare the fault map of this region. We identified several probable faults in the region that help us delineate the potential seismic sources more precisely. Among a total of eighteen focal mechanism solutions of earthquakes with Mw ≥ 4.3 in the region of interest, one event is normal, six events show dominant strike-slip components, and eleven events have dominant reverse components, which confirms the dominance of reverse/thrust faulting. Using the global earthquake databases provided by USGS/NEIC and ISC, as well as the catalog of historical (pre-1900) and early-instrumental (1900-1963) earthquakes provided by Ambraseys and Melville (1982), a uniform catalog comprising 170 instrumental and 10 historical earthquakes has been compiled for the region of interest. To unify the earthquake magnitude scale for each seismotectonic province, we established empirical relations to convert mb to Ms. Because historical earthquakes in Iran have been defined in Ms, and because of the lack of events with magnitudes above the saturation level of Ms in the region of interest, the surface-wave magnitude is the most appropriate magnitude scale for the region and, on a broader scale, for Iran. To establish the empirical relations between mb and Ms, we applied the orthogonal regression (OR) method, which takes into account the errors of measurement in both variables. Based on this uniform catalog, the seismicity of the region of interest was studied, and the seismicity parameters were calculated using the method proposed by Kijko and Sellevoll (1992), in which magnitude uncertainty and completeness of the data can be taken into account. To achieve more reliable results, the completeness of the catalog and the uncertainty of the magnitudes were estimated and considered in our calculations. In this method, the catalog is divided into an extreme part and a complete part, and the complete part can be subdivided into several parts, each with its own completeness threshold. In this work, the whole data set was separated into six complete parts with threshold magnitudes ranging from Mc=3.2 (for events after 1996) to Mc=5.0 (for events after 1939). To estimate the completeness magnitude (Mc), a combination of two methods was used: maximum curvature (MAXC) and the goodness-of-fit test (GFT); in cases where lack of data does not allow using GFT, only the MAXC method is used. The magnitude uncertainty of each event is taken as 0.2 for modern-instrumental and 0.3 for early-instrumental earthquakes when Ms has been assigned directly from seismogram readings. For events whose surface-wave magnitudes were obtained by empirical conversion relations, the magnitude uncertainty is taken as 0.4. For historical events, uncertainties are taken to be on the order of 0.4 to 0.8 (Mirzaei et al., 1997a), based on the quality (a, b, c, d) assigned by Ambraseys and Melville (1982).
For the Central-East Iran part of the region of interest, the results show that the b-value is 0.81 ± 0.12, λ is 0.48 ± 0.129, and Mmax = 7.8 ± 1.74. For the Zagros part, the b-value is 0.95 ± 0.05, λ is 4.976 ± 0.413, and Mmax = 7.4 ± 0.67. The impact of classifying the data on the seismicity parameters has been investigated; the results show that it has a significant impact on the b-value (or β) and on the recurrence rate of seismic events (λ), while its effect on Mmax is negligible. Furthermore, classification of the complete part of the catalog significantly decreases the uncertainty in the evaluated parameters Mmax, λ and b (or β); the minimum impact on parameter uncertainty appears in b, and the maximum in Mmax.
    Keywords: Esfahan, Magnitude uncertainty, Completeness of the catalog, Seismotectonics, Seismicity
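As a concrete point of reference, a minimal sketch (not the Kijko-Sellevoll procedure used in the paper, which additionally handles magnitude uncertainty and multiple completeness thresholds) of the classical maximum-likelihood Gutenberg-Richter b-value and occurrence rate from a hypothetical, already-complete catalog:

```python
# Aki-Utsu maximum-likelihood b-value and mean annual rate, on synthetic data.
import numpy as np

def aki_utsu_b(mags, mc, dm=0.1):
    """b-value by maximum likelihood (Aki, 1965; Utsu, 1966), with the
    standard correction dm/2 for magnitudes binned to width dm."""
    m = np.asarray(mags)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    b_err = b / np.sqrt(len(m))          # Aki (1965) standard error
    return b, b_err

# Hypothetical catalog: synthetic Ms values over an assumed observation window.
rng = np.random.default_rng(0)
mags = 3.0 + rng.exponential(scale=0.45, size=170)
years = 34.0

b, b_err = aki_utsu_b(mags, mc=3.0)
lam = (mags >= 3.0).sum() / years        # mean annual rate of Ms >= 3 events
print(f"b = {b:.2f} +/- {b_err:.2f}, lambda(Ms>=3) = {lam:.2f} events/yr")
```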
  • Vahid Maleki, Zaher Hossein Shomali, Mohammad Reza Hatami Page 23
    The earthquake location process plays an important role in seismological applications, including seismic tomography. The relationship between the travel times of seismic phases and the earthquake hypocenter (latitude, longitude, depth) and origin time is nonlinear, and many different linearized methods have been implemented over the years. The principle underlying linearized methods was given by Geiger (1912), based on the Taylor series, and attempts have been made to incorporate higher terms of the Taylor series in the location process (e.g. Thurber, 1985); using more higher-order terms provides a better constrained solution at the expense of computational cost. Fully nonlinear earthquake location algorithms were also developed and discussed earlier, e.g. by Tarantola and Valette (1982) and Tarantola (1987). In such algorithms, the location of the earthquake is defined using probability density functions (PDFs) over all possible points around the hypocenter. In this study we use a nonlinear probabilistic method based on global search. In this method the calculation of partial derivatives is not required, and there is a higher probability of converging to the global minimum of the nonlinear problem (e.g. Lomax et al., 2008). The optimal solution can be found using different algorithms such as Metropolis-Gibbs sampling (Metropolis et al., 1953), grid search (e.g. Sambridge and Mosegaard, 2002), and oct-tree importance sampling (Lomax and Curtis, 2001). The oct-tree importance sampling algorithm is much faster, more complete, and simpler than the other methods, e.g. the grid-search and Metropolis-Gibbs algorithms, and it produces an accurate, efficient and complete mapping of the PDF (e.g. Lomax and Curtis, 2001); we use it here to find the optimal solution. We apply the nonlinear earthquake location method developed by Lomax et al. (2000) to local earthquakes with magnitude Mn ≥ 4 that occurred in the Central Alborz region during 2006-2010. The nonlinear location method is based on a posterior probability density function (PDF) determined for the model parameters. This function represents a complete probabilistic solution to the earthquake location problem, including information on the uncertainties due to phase-picking errors, calculated travel times, and the network geometry. We perform different synthetic tests to evaluate the performance of the nonlinear method where the location problem is ill-conditioned owing to station geometry and phase-picking errors; in particular, we test the effect of the azimuthal gap and of the distance to the nearest station. In the synthetic tests, we consider 4 events in different situations located outside an assumed seismic network of 8 stations with a station spacing of the order of 15 km; the azimuthal gap of the events varies in the range of 225-317 degrees. In these tests we also add Gaussian noise to the arrival times, in order to investigate the performance of the nonlinear location method in the presence of a large azimuthal gap and noisy data simultaneously.
In other synthetic tests, to assess the performance of the nonlinear location method as a function of the distance of the nearest station to the earthquake, we consider different situations with different noise levels in the arrival times: in the first case an event inside a dense network with a station spacing of the order of 15 km, and in the second case the same network expanded to a 150 km station spacing. To relocate the earthquakes with magnitude Mn ≥ 4 that occurred in the Central Alborz region during 2006-2010, we used the data set of the Iranian Seismological Center (IRSC), including the arrival times of P and S phases, from three sub-networks of the IRSC network: Tehran, Sari and Semnan. To improve the station coverage, especially in the southern part of the Alborz region, we added the data of two stations of the Isfahan sub-network, namely KLH and ZEF. Finally, we show the robustness of the nonlinear location algorithm in the presence of outliers by analyzing the shape, size and position of the 68% confidence ellipsoid that can be calculated from the PDF, to track the changes in the distribution of the PDF with changing station geometry. We find that the nonlinear method is robust in the presence of a large azimuthal gap (e.g. 300 degrees) and large Gaussian errors of up to 1.0 s, and is able to locate earthquakes with errors of less than 5 km. We relocated 16 earthquakes with Mn ≥ 4.0 that occurred in the Central Alborz region between 2006 and 2010; despite the large azimuthal gaps and station spacings in the data set used in this study, 10 earthquakes were located with horizontal errors of less than 3 km. To verify the quality of the results, we compared the nonlinear locations with those reported by IGUT and IIEES; the comparison shows that the nonlinear solutions are, in most cases, closer to the IGUT solutions.
    Keywords: Central Alborz, Nonlinear, Earthquake location, Probability density functions
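To make the probabilistic idea concrete, a minimal sketch (not the NonLinLoc implementation) of a brute-force grid-search version of probabilistic location: each trial epicenter gets an unnormalized posterior value from the Gaussian misfit of P arrival times. The oct-tree sampler explores this same function far more efficiently; the station layout, velocity, and picks below are invented, and depth and origin time are ignored for brevity.

```python
# Grid-search posterior PDF for a 2-D epicenter (toy homogeneous medium).
import numpy as np

stations = np.array([[0, 40], [30, 55], [60, 35], [20, 10]])  # km, hypothetical
v_p = 6.0             # homogeneous P velocity (km/s), a gross simplification
sigma = 0.2           # assumed pick plus travel-time error (s)
true_src = np.array([35.0, 30.0])

def tt(src):          # straight-ray travel times to all stations
    return np.hypot(*(stations - src).T) / v_p

obs = tt(true_src) + np.random.default_rng(1).normal(0, sigma, len(stations))

xs = ys = np.linspace(0, 70, 141)
X, Y = np.meshgrid(xs, ys)
pdf = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        r = obs - tt(np.array([X[i, j], Y[i, j]]))
        r -= r.mean()                     # crude removal of the origin-time shift
        pdf[i, j] = np.exp(-0.5 * np.sum(r**2) / sigma**2)

best = np.unravel_index(pdf.argmax(), pdf.shape)
print("maximum-likelihood epicenter ~", X[best], Y[best])
```

Widening the azimuthal gap or removing the nearest station stretches and skews this PDF, which is exactly what the paper's confidence-ellipsoid analysis quantifies.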
  • Mahdi Alipour Kallehbasti, Bahman Bohloli Page 39
    The productivity of an oil well is an important parameter in any oil field development. A low production rate imposes the drilling of more production wells, which directly jeopardizes the economic feasibility of the project. In some wells the oil production rate is very low owing to the low permeability of the reservoir rock. Practices like hydraulic fracturing and acid fracturing are very common in the oil industry: a high-pressure fluid (or acid) is injected into the reservoir, and when the pressure exceeds the rock strength the rock starts to break down. Thermal cracking is one of the methods used indirectly to increase rock mass permeability and, consequently, oil production. The thermal stress induced by injecting cold fluid into the reservoir is actually part of the stress needed to break down the reservoir rock during the hydraulic fracturing process. If this stress exceeds the critical magnitude, the rock fails, leading to permeability and productivity enhancement. The effect of thermal stress on injection well enhancement during water flooding and on wellbore stability has been investigated and reported in several papers; however, thermal stressing as a stimulation method in its own right has not been addressed elsewhere. In this experimental study, the effect of the temperature gradient and the induced thermal stress was investigated on one of the Iranian carbonate oil reservoirs. The field is a carbonate reservoir with a low permeability of less than a few millidarcies; well production is very small, as low as 300 stb/day in some wells, which makes the field a good candidate for a stimulation job. Several samples were taken from the reservoir and prepared for the experiments. The carbonate rock of the reservoir is very heterogeneous and contains layers of clay, and an initial thin-section study shows that the heterogeneous texture of the reservoir rock has the potential for thermal stress concentration. Three categories of fractures were distinguished in the reservoir rock, which could act as weak surfaces in the process. A numerical simulation was performed with the FLAC simulator to understand how heat propagates with time in the rock sample, and an experimental setup was designed for cooling the rock samples. Each rock sample was heated to the reservoir temperature of almost 90 °C and capped with a glass cell on top; hot water was circulated through the glass cap to keep the sample at reservoir temperature during the experiment. The sample was then cooled from one side to study the effect of temperature on the rock. The samples were CT-scanned along and perpendicular to their axes before and after the experiment, and each section was carefully analyzed for possible fractures. The experimental results showed that thermal cracking is feasible with an imposed temperature variation of 90 °C: of the four samples tested, two showed induced tiny fractures. The mini-fractures were mostly in the middle part of the sample, which is related to the temperature profile and the induced thermal stress front inside the sample. Our experiments show that the thermal stress effect can cause fracture initiation in the sample and may be used as a driving force for fracturing in conjunction with the hydraulic fracturing process for well stimulation. The results also indicate that thermal stress is important and should be considered in processes like wellbore stability analysis.
    Keywords: Low permeability, CT scan, Thermal stress, Hot water, Thermal cracking, Well stimulation, Oil well, Hydraulic fracturing
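A back-of-the-envelope sketch (not taken from the paper) of why a 90 °C contrast matters: the thermoelastic stress induced by cooling a laterally constrained rock is sigma = E*alpha*dT/(1-nu). The property values below are generic assumptions for a carbonate, not the paper's measured values.

```python
# Thermoelastic stress from constrained cooling, sigma = E*alpha*dT/(1-nu).
E = 40e9        # Young's modulus (Pa), assumed generic carbonate value
alpha = 8e-6    # linear thermal expansion coefficient (1/K), assumed
nu = 0.25       # Poisson's ratio, assumed
dT = 90.0       # imposed temperature drop (K), as in the experiments

sigma_thermal = E * alpha * dT / (1.0 - nu)
print(f"induced tensile stress ~ {sigma_thermal/1e6:.0f} MPa")  # ~38 MPa here
```

A stress of this order readily exceeds typical tensile strengths of carbonate rock, which is consistent with the abstract's observation of microfracturing.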
  • Keyvan Najafzadeh, Mohammad Ali Riahi, Mohsen Seyedali Page 49
    Permeability is a key parameter in reservoir characterization. In fact, without accurate information about reservoir permeability, there is no good solution to reservoir engineering problems. Up to now, permeability values of a reservoir have been calculated through laboratory measurements or well-testing methods. These methods are useful, but they cannot describe a reservoir reliably and precisely. Well logs and core analysis have also been used for permeability estimation, but correlation studies show that estimation through these methods is not reliable either. Direct prediction of reservoir permeability from seismic data is often supposed impossible owing to the resolution limitations of seismic data and the hydraulic nature of permeability, and in many cases reservoir permeability estimation is restricted to the core scale and the wellbore proximity. Despite the existing limitations, recent studies have shown that there are experimental relationships between some petrophysical parameters and permeability. In recent years, geostatistics and stochastic modeling have made a tremendous impact on scientific investigations, and geostatistics has been an increasingly important factor in the development of petroleum resources. A stochastic geostatistical model is thus a reliable tool for estimating reservoir parameters, including porosity and permeability. In geostatistics we suppose that variables have an effect on each other, so the first concept of geostatistics is the variogram of the values we want to estimate, which describes the spatial correlation that exists between values. The distance over which variables affect each other is called the range of the variogram; it is an important concept because beyond that distance the variables have no effect on each other. Geostatistical methods are meaningful within the range of the variables; outside the range, geostatistics is not applicable. The investigated field is located in the Persian Gulf between Iran and Qatar, 80 km southwest of Lavan Island and 40 km southeast of the South Pars gas field. The study area contains two reservoir structures, Sarvak and Surmeh. In order to estimate the permeability values of the reservoir, all the data required for this approach should first be collected: porosity logs and 3-D seismic data of the study area. The 3-D seismic acquisition of this field was carried out by TOTAL FINA ELF in 2000 over a total area of about 96 km². The interpretation of the data was done by TOTAL in 2003, and the interpretations of this article have been validated against it. There are twelve wells in the study area, seven of which have porosity logs. First, a reliable structural model of the study area (one of the hydrocarbon fields in the south of Iran) was built from the seismic data and well information. The seismic data were used in interpreting the horizons and characterizing the faults of the reservoir. After determining the horizon surfaces in the reservoir, horizons not visible in the seismic data were determined from the well information by means of well tops. Using all the horizons and fault planes, we constructed a structural model. The intervals between horizons are divided into zones, and layers are defined in the model according to the importance of each zone; naturally, the zones lying within the reservoir intervals are given more layers in order to increase the reliability of the estimation.
After collecting the required data and building the model, the reservoir porosity values were estimated geostatistically with the aid of the seismic impedance values. The collocated cokriging method was selected for this geostatistical estimation; in this method a secondary variable, in addition to the main data, improves the accuracy of the estimation. To start the geostatistical estimation, the data were first analyzed, the analysis being based on calculating variograms. Then, using the variogram values and taking the acoustic impedance values as the secondary parameter in the collocated cokriging method, the porosity of the reservoir was calculated. There are some anhydrite layers in the reservoir, to which we assigned zero porosity in order to have more control over the results and a more reliable porosity estimate for the reservoir. The reservoir porosity thus becomes known in all parts of the model; throughout the reservoir, the porosity values vary between 0 and 30 percent. In the next step, the permeability values were estimated from the porosity values, between which there is a good correlation. First, variography of the permeability values was carried out. Then, using the sequential Gaussian simulation (SGS) method as a reliable estimation tool, the reservoir permeability values were calculated from the porosity, the estimation being done separately in every zone of the reservoir. Again, because of the anhydrite layers in the reservoir structure, zero permeability was assigned to those layers for better control over the permeability estimation, giving a better estimate of permeability over the total volume of the reservoir. The permeability values of the reservoir vary between 0 and 100 mD. Finally, the validity of the permeability estimation should be investigated, which was done by cross-validation: the data of one well were removed from the model, the permeability was estimated using the other wells, and the estimated permeability at the well location was compared with the well permeability to give a correlation coefficient. This process was carried out for wells 3W-01 and 3W-03; the correlation coefficient is 0.86 for well 3W-01 and 0.81 for well 3W-03. These correlation coefficients show that the results of the permeability estimation are acceptable and that sequential Gaussian simulation combined with collocated cokriging is a capable method in reservoir characterization. Using sequential Gaussian simulation alone is not reliable enough for porosity and permeability estimation, especially where the geological structure is complicated, because it relies on mathematical calculations only and its results may not agree well with the geological reality. This problem has been solved by combining the acoustic impedance data with the permeability values: importing the acoustic impedance values guides the SGS algorithm and makes it more compatible with the real earth model.
    Keywords: Permeability estimation, Porosity estimation, Simulation, Deterministic and stochastic methods of geostatistics, Collocated cokriging
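A minimal 1-D sketch (not the software used in the paper) of sequential Gaussian simulation with simple kriging and an exponential covariance. Real workflows, like the porosity-to-permeability step described above, work in 3-D, use a normal-score transform, and bring in seismic impedance via collocated cokriging; all parameters here are invented for illustration.

```python
# 1-D sequential Gaussian simulation conditioned on three "well" values.
import numpy as np

rng = np.random.default_rng(42)
xs = np.arange(0.0, 50.0) + 0.5      # grid nodes, offset so none coincide with data
range_, sill = 10.0, 1.0

def cov(h):                          # exponential covariance model
    return sill * np.exp(-3.0 * np.abs(h) / range_)

known_x = np.array([5.0, 25.0, 40.0])     # conditioning ("well") locations
known_v = np.array([0.8, -0.5, 1.2])      # conditioning values (normal scores)

sim_x, sim_v = list(known_x), list(known_v)
for x in rng.permutation(xs):             # random path over the grid
    px, pv = np.array(sim_x), np.array(sim_v)
    C = cov(px[:, None] - px[None, :])    # data-to-data covariances
    c = cov(px - x)                       # data-to-target covariances
    w = np.linalg.solve(C + 1e-10 * np.eye(len(px)), c)
    mean = w @ pv                         # simple kriging mean (zero prior mean)
    var = max(sill - w @ c, 0.0)          # simple kriging variance
    sim_x.append(x)
    sim_v.append(mean + np.sqrt(var) * rng.normal())  # draw, then condition on it

print("simulated values at the first 5 visited nodes:", np.round(sim_v[3:8], 2))
```

Each node is drawn from its local kriging distribution and then treated as data for later nodes, which is what lets SGS reproduce both the conditioning values and the variogram.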
  • Hamidreza Soleymani, Mohammadreza Sokooti, Mohammad Ali Riahi Page 57
    Pore pressure can be estimated prior to drilling using seismic velocities. In this approach, the final calculated pore pressure is strongly affected by the accuracy of the velocity data. To generate an accurate velocity cube with proper resolution, velocities from well logs and post-stack 3-D seismic inversion are used along with stacking velocities from the velocity analysis of 3-D seismic data. Using Bowers's effective pressure method (1995), together with the known velocity and density values at the well locations, the coefficients of Bowers's equation can be calculated. Through Gardner's equation, one can also relate velocity to density at each well location and, finally, determine the overburden stress for the entire survey. Pore pressure prediction is one of the most important stages in the drilling of new wells. According to Huffman (2002), “pressure prediction at the basin scale can be very powerful in (1) determining where the source rock is actively maturing (2) determining where large scale fluid migration is occurring in a basin (3) predicting the behavior of large regional faults and structures (4) identify the presence of secondary pressure area (5) constraining the porosity model and (6) evaluating the integrity of vertical seal in the basin”. Moreover, pressure information prior to drilling can secure the economic advantages of safe drilling. Overpressure refers to the state in which the pore fluid pressure exceeds the normal hydrostatic pressure at a particular depth. This phenomenon usually results from a restriction of fluid flow due to rapid sedimentation, or from fluid expansion, which in turn increases the pore pressure. Since the pioneering work of Pennebaker (1968), many papers have dealt with pressure prediction prior to drilling. A paper by Dutta (2002) provides good insight into the use of seismic-based methods and the possible pitfalls of using different velocities in the estimation of pore fluid pressure. Bowers (1995) suggested an outstanding empirical equation, based on data from the Gulf of Mexico, that accounts for the compaction and fluid-expansion mechanisms. Since these two mechanisms are the most frequent causes of overpressure, Bowers's equation is an appropriate approach in most cases. It is important to note that, to obtain the desired accuracy, the constants A and B (i.e. the virgin-curve parameters) in Bowers's equation should be determined using the available data from offset wells in each survey. In the pressure estimation process, the overburden pressure needs to be estimated; hence, the Gardner equation (Gardner et al., 1974) is calibrated by calculating the related parameters from the available density and velocity logs. Knowing the velocity and having the calibrated Gardner equation in hand, one can calculate the overburden pressure. Moreover, the effective pressure can be obtained with acceptable accuracy using the calibrated Bowers equation and, eventually, the pore pressure can be estimated using Terzaghi's equation (Terzaghi, 1943). It should be noted again that generating velocities from the processing of 3-D seismic data, the rock densities, and the calibrated Bowers equation are crucial steps in our workflow. The final goal of this paper is to present a proper method for generating and validating a 3-D pore pressure model using conventional data for a large survey at the Mansouri oil field (one of the oil fields of southwest Iran).
It is shown that the accuracy and resolution of the velocity model can be greatly enhanced by using the available well logs and a proper geostatistical method. This high-resolution velocity model yields a high-resolution pore pressure model, which reveals much information concerning reservoir integrity and characterization. The effective pressure is calculated using Bowers's method, calibrated with measured pressure data at some well locations, and the pore pressure is then calculated using Terzaghi's equation. Study of the final pore pressure map reveals no major pressure anomaly within the Asmari reservoir. In addition, an odd-looking low-pressure anomaly at the beginning of the second segment is not due to a real drop in pressure (based on the measured pressure in the well); in fact, this error is introduced by changes in formation petrology at the reservoir seal. It should be noted that a geological study prior to pressure interpretation is an imperative stage in pressure estimation. Comparison of the predicted pore pressure with the measured pressure at a well location that was not included in the calibration phase reveals that the final pressure estimate is reliable and in good agreement with the measured pressure data. The high-resolution pressure data are suitable for well planning and provide detailed information on pressure variations along future well trajectories. The 3-D pore pressure cube, along with the effective pressure, overburden pressure and fracture pressure, is obtained and used for regional geomechanical and pressure assessments.
    Keywords: Overburden stress, Geostatistics, Effective stress, Velocity model, Pore pressure
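A minimal sketch (not the authors' calibration) of the velocity-to-pressure chain described above: invert Bowers's virgin curve v = v0 + A*sigma_eff^B for the effective stress, then apply Terzaghi, P = S - sigma_eff. The constants A and B, the velocities, and the overburden gradient below are generic assumptions; in the paper they are calibrated against well data from the field.

```python
# Pore pressure from velocity via Bowers (1995) and Terzaghi (1943), toy data.
import numpy as np

A, B = 4.4567, 0.8168       # assumed virgin-curve constants (psi, ft/s units)
v0 = 5000.0                 # assumed mudline velocity (ft/s)

def effective_stress_psi(v_ftps):
    """Invert Bowers's virgin curve v = v0 + A * sigma_eff**B."""
    return ((v_ftps - v0) / A) ** (1.0 / B)

depth_ft = np.array([5000.0, 7000.0, 9000.0])
v = np.array([7500.0, 8000.0, 8500.0])            # interval velocities (ft/s)
overburden = 1.0 * depth_ft                        # ~1 psi/ft lithostatic, assumed
pore = overburden - effective_stress_psi(v)        # Terzaghi: P = S - sigma_eff
hydrostatic = 0.465 * depth_ft                     # ~0.465 psi/ft brine gradient

for z, p, h in zip(depth_ft, pore, hydrostatic):
    print(f"z = {z:.0f} ft: pore ~ {p:.0f} psi (hydrostatic {h:.0f} psi)")
```

With these invented numbers the deeper, slower-than-normal intervals come out overpressured, the signature that velocity-based prediction is designed to flag.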
  • Asghar Rastbood, Behzad Voosoghi Page 71
    The scope of this research is to investigate the effect of geometrical and physical input parameters in crustal viscoelastic deformation models. For this analysis, the viscoelastic model of Wang et al. (2006) is used. The increasing quality of data on the time-dependent deformation of the Earth's surface can be used to extract more detail on the spatial and temporal development of earthquake-related crustal deformation. Different variables are involved in these processes, some of which are determined more accurately than others. In this research, surface deformation is modeled for dip-slip and strike-slip faults in a medium composed of an elastic layer over a viscoelastic half-space, and a sensitivity analysis is performed on all geometrical and physical parameters. Among the physical and geometrical parameters, performing this analysis is recommended for the parameters that are determined less accurately from displacement field data: the viscosity of the half-space, the thickness of the elastic layer, and the dip angle of the fault plane. According to the results of this research, the sensitivity of the viscoelastic model to the input parameters is independent of the type of faulting. From the variability analysis, the location of the most appropriate displacement data for obtaining values of the studied parameters is determined. For example, according to the results, the coseismic and postseismic displacements show a strong dependence on fault slip above the rupture surface of the fault, but less sensitivity to fault length. When analyzing the coseismic displacement, a strong dependence on the dip angle of the fault plane is found: points with large displacements also show a large variability when the dip angle varies, and the area over the rupture plane is the one where the largest displacements take place. Therefore, surface measurements in this area are the most suitable for ascertaining the most likely value of the dip angle. By contrast, varying the thickness of the elastic layer leads to small differences in general, especially small in the area immediately beyond the surface projection of the lower end of the rupture plane; this indicates that coseismic displacement measurements, especially around that area, are not recommended for trying to ascertain an accurate value of this parameter. In the analysis of the postseismic deformation it is found that, on average, deviations from a reference model are large above the rupture plane when the viscosity and the thickness of the elastic layer are varied; in this area, the dip angle does not influence the results as much as the other two parameters. Further away from the rupture surface along the dip direction, in the area immediately beyond the surface projection of the lower end of the rupture plane, the dip angle becomes the most influential parameter. Still further away, where the magnitude of the horizontal displacement reaches another maximum, the viscosity and the thickness of the elastic layer again have a greater effect than the dip angle of the rupture surface. Accordingly, measurements in areas of large postseismic displacements are more suitable for deriving a value of the viscosity; in particular, above the rupture area, the values depend heavily on this parameter. The same region can also provide data useful for ascertaining the thickness of the elastic layer, although for this parameter the area where the minimum displacement occurs is not as appropriate as it is for the viscosity.
The dip angle, in general, cannot be accurately derived from postseismic deformation data: the magnitude of the variability associated with this parameter is very small, and the best place to find a value for it is the area where the postseismic displacement changes direction. Thus the best data for deriving the value of the viscosity are the postseismic deformation over the area where the rupture takes place, although any area with large-magnitude postseismic displacements can provide useful data. For the thickness of the elastic layer it is also advisable to use postseismic data from the area above the fault plane, whereas the dip is better determined by means of coseismic data. Both the coseismic and postseismic displacements show a strong dependence on the locking depth of the fault, especially above the rupture surface; the sensitivity is especially high for the coseismic displacements. The coseismic and postseismic displacements show only a small sensitivity to the Lamé coefficients of the elastic layer above the rupture surface, and still less to the Lamé coefficients of the viscoelastic half-space. The coseismic displacements show no sensitivity to the density of the elastic layer or of the viscoelastic half-space, and the postseismic displacements only a small one, so this modeling is not an appropriate tool for determining the density of the elastic layer or of the viscoelastic half-space.
    Keywords: Coseismic deformation, Sensitivity analysis, Viscoelastic, Postseismic deformation, Reference fault, Elastic, Modeling
  • Mahrouz Khademi, Mahdi Najafi Alamdari, Mohammad Ali Sharifi Page 89
    The recent dedicated satellite gravimetry missions have provided a huge amount of high-quality gravity data with global coverage. From a computational point of view, estimation of the unknown gravity field parameters is a highly demanding task due to the sheer number of observations and unknown coefficients, and different computational schemes have been proposed to tackle the problem. Since the early days of satellite geodesy, energy-balance methods for gravity field determination have been considered. If the non-conservative forces are known, the Hamiltonian along the orbit is a constant of the motion; thus the gravity field can be determined if the position and velocity of the satellite are known and accelerometer measurements are available to model the non-conservative part. A satellite mission dedicated to improving our knowledge of the Earth's gravitational field with a direct (in situ) measurement system had been in the proposal stages for a long time and at several agencies. Gravitational field knowledge also comes from tracking satellites from ground stations, and many long-wavelength models of the field have been deduced from such data; but those models derive from the observations of a large collection of satellites tracked over various periods during the long history of Earth-orbiting satellites, none of which was launched for the express purpose of providing a global and detailed model of the gravitational field. The method has been applied in a closed-loop simulation to Gravity Recovery and Climate Experiment (GRACE) data, and the results show the high performance of the proposed method. This article focuses on the development of new techniques for global gravity field recovery from high-low (hl) and low-low (ll) satellite-to-satellite tracking (SST) data. A number of approaches to global gravity field recovery are known from the literature, including the variational equations approach, the short-arc approach, the energy balance approach and the acceleration approach. The focus of the article is the energy balance approach, with the aim of producing high-quality global gravity field models using simulated data from the GRACE satellite mission. The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept; the LL-SST configuration can be combined with the high-low SST concept previously realized in the CHAMP mission to provide much higher accuracy. A new, rigorous model is developed for the difference of gravitational potential between two close Earth-orbiting satellites in terms of measured range-rates, velocities and velocity differences, and specific forces; it is particularly suited to regional geopotential determination from a satellite-to-satellite tracking mission. Based on energy considerations, the model specifically accounts for the time variability of the potential in inertial space, principally due to the Earth's rotation. Analysis shows the latter to be a significant (~1 m²/s²) effect that overshadows by many orders of magnitude other time dependencies caused by the solar and lunar tidal potentials. Variations in Earth rotation with respect to the terrestrial and celestial coordinate frames are inconsequential. Results of simulations contrast the new model with the simplified linear model (relating potential difference to range-rate) and delineate the accuracy requirements in velocity vector measurements needed to supplement the range-rate measurements.
The numerical analysis is oriented toward the Gravity Recovery and Climate Experiment (GRACE) mission and shows that an accuracy of 2×10⁻⁵ m/s in the velocity difference vector would be commensurate within the model with the anticipated accuracy of 10⁻⁶ m/s in range-rate. A fast iterative method for gravity field determination from low Earth orbiter coordinates has been developed and implemented successfully. As the method is based on energy conservation, it avoids problems related to orbit dynamics and the initial state. In addition, the particular geometry of a repeating orbit is exploited using a very efficient iterative estimation scheme, in which a set of normal equations is approximated by a sparse block-diagonal equivalent. Recovery experiments for spherical harmonic gravity field models up to degree and order 70 were conducted based on a 29-day simulated data set of orbit coordinates. The method was found to be very flexible and could easily be adapted to include observations of non-conservative accelerations, such as those provided (or to be provided) by satellites like CHAMP, GRACE and GOCE. Hence, the calculation of velocity and acceleration vectors is a necessary stage in Earth gravity field recovery using GRACE observations, and different numerical differentiation methods have been proposed to compute the acceleration vector. In this paper, the Newton, spline and Taylor methods have been implemented, and the effect of outliers on the different differentiation techniques has also been investigated. The numerical analysis of the recovered solutions shows that the Newton method yields the optimal solution. The comparison is based on the differences between the simulated and recovered gravity anomalies and geoidal heights.
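To illustrate the differentiation step discussed above, the hedged sketch below contrasts a cubic-spline differentiator with a central-difference (Taylor-type) one on a synthetic orbit component; the sampling interval, orbital period and use of SciPy are assumptions for the example, not details taken from the paper:

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Synthetic 1-D orbit coordinate, sampled at an assumed 5 s interval.
    t = np.arange(0.0, 600.0, 5.0)
    x = 6.8e6 * np.cos(2 * np.pi * t / 5400.0)   # toy circular-orbit component (m)

    # Spline differentiation: fit a cubic spline, evaluate its 2nd derivative.
    acc_spline = CubicSpline(t, x)(t, 2)

    # Central-difference (2nd-order Taylor) differentiation for comparison.
    acc_central = np.gradient(np.gradient(x, t), t)

    true_acc = -(2 * np.pi / 5400.0) ** 2 * x    # analytic acceleration of the toy orbit
    print("spline  rms error:", np.sqrt(np.mean((acc_spline - true_acc) ** 2)))
    print("central rms error:", np.sqrt(np.mean((acc_central - true_acc) ** 2)))

Comparing the rms errors against a known analytic acceleration is one simple way to rank differentiators, in the spirit of the comparison reported in the abstract.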
    Keywords: Gravity, GRACE, Numerical differentiation, Outlier
  • Mojtaba Rashvand, Behrooz Oskooi Page 105
    The analytic signal method is a semiautomatic method for estimating the location of causative bodies in magnetic and gravity surveys. The application of the analytic signal to the interpretation of two-dimensional (2D) structures was introduced by Nabighian (1972). The analytic signal is defined as a complex function in which the real and imaginary parts are a pair of Hilbert transforms; in other words, it is a combination of the horizontal and vertical gradients of the potential field. The analytic signal is a symmetric function whose amplitude is sensitive to the parameters of the source. For 2D structures, the amplitude of the analytic signal is independent of directional parameters such as inclination, declination and strike angle (Nabighian, 1972; Atchuta et al., 1981; Roest et al., 1992). The depth of 2D structures can be estimated using the width of the analytic signal or the ratio of the analytic signal to its higher derivatives (Hsu et al., 1996; Roest et al., 1992). Source parameters of a dyke, such as width, dip, strike, magnetization and depth, can be estimated by the analytic signal method (Bastani & Pedersen, 2001). The nth-order enhanced analytic signal is defined as the analytic signal of the nth-order vertical derivative of the potential field. The horizontal gradient of the analytic signal is an automated method for estimating the depth, horizontal location and shape of 2D magnetic structures; it is capable of interpreting low-quality data because only the first- and second-order derivatives of the potential field enter the main equations. The analytic signal method estimates the horizontal location of the source by locating the maximum amplitude of the signal, so noise can affect the estimates; by contrast, using the horizontal gradient of the analytic signal, all of the source parameters can be approximated simultaneously. In this method, the equations of the analytic signal, the Euler enhanced analytic signal and the horizontal gradient of the analytic signal are combined to derive a linear equation. Using the first-order analytic signal, the horizontal gradient of the analytic signal and linear inversion, the depth and horizontal location of 2D magnetic bodies are obtained; the location estimate is independent of the shape of the causative bodies. The geometry of the causative body is estimated as a structural index by applying the least squares method. The selection of data for solving the equations, i.e. the window width, is based on data quality: the optimum size is chosen so that a window captures a single distinct anomaly together with its variations. In this study, in order to solve the equations of the horizontal gradient of the analytic signal method, only data exceeding twenty percent of the maximum amplitude of the analytic signal were used. The combined analytic signal-Euler deconvolution method is an automated method for estimating the depth and shape of the sources, and is used to interpret 2D and 3D magnetic and gravity data. After substituting the appropriate derivatives of Euler's homogeneous equation into the equation of the analytic signal, the main independent equations used to estimate the depth and shape of causative bodies are derived. The horizontal location of the causative bodies is estimated by the Euler method or by locating the maximum amplitude of the analytic signal. In this study, the accuracy and efficiency of each of the mentioned methods in the interpretation of magnetic anomalies are evaluated.
The methods were tested on different synthetic datasets produced by forward modeling: 2D magnetic models were placed at different depths, and random noise was added to some of them. Derivatives were calculated in the frequency domain using Fourier transform techniques. In this technique, a bell-shaped edge effect appears at the ends of the profiles; this effect can be corrected by linearly extending the profiles. An upward continuation filter was applied to some of the synthetic data to decrease the noise level. In this paper, the applicability of the horizontal gradient of the analytic signal method and of the combined analytic signal-Euler method was tested. Both methods estimate the parameters of the causative bodies without any prior information, and in neither is there any explicit dependence on directional parameters (e.g. magnetization) in the main equations; hence, as the results show, the estimates were not affected by remanent magnetization. The results also show that the horizontal gradient of the analytic signal method gives accurate estimates of shape and horizontal location, while the combined analytic signal-Euler deconvolution method gives efficient estimates of depth.
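The following sketch shows one common way to compute the amplitude of the 2D analytic signal for a magnetic profile, with the vertical derivative formed in the wavenumber domain; the toy anomaly and grid are illustrative and do not reproduce the synthetic models of the paper:

    import numpy as np

    def analytic_signal_amplitude(m, dx):
        """Amplitude of the 2D analytic signal of a profile m(x):
        |A| = sqrt((dM/dx)^2 + (dM/dz)^2). The vertical derivative is
        formed in the wavenumber domain by multiplication with |k|
        (the sign convention does not affect the amplitude)."""
        n = m.size
        k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
        dmdx = np.fft.ifft(1j * k * np.fft.fft(m)).real
        dmdz = np.fft.ifft(np.abs(k) * np.fft.fft(m)).real
        return np.hypot(dmdx, dmdz)

    # Toy anomaly over a buried 2D line source at x0 = 0, depth z0 = 50 m.
    x = np.linspace(-500.0, 500.0, 1001)
    z0 = 50.0
    m = z0 / (x**2 + z0**2)                    # symmetric toy profile
    amp = analytic_signal_amplitude(m, dx=x[1] - x[0])
    print("estimated horizontal location:", x[np.argmax(amp)])  # peak above the source

Locating the maximum of the amplitude, as done in the last line, is exactly the step that the abstract notes is sensitive to noise, motivating the horizontal-gradient variant.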
    Keywords: Magnetic field derivative, Euler deconvolution, Analytic signal, Magnetic anomaly
  • Majid Jamie, Behrooz Oskooi Page 117
    This paper presents the results of applying a new approach to the 2D inversion of magnetotelluric (MT) data in order to enhance the resolution and stability of the inversion results. Due to the non-linearity and the limited coverage of data acquisition in an MT field campaign, minimizing the error by linearization of the problem in least-squares inversion usually leads to an ill-posed problem. In general, an inverse problem is unstable, ill-posed and characterized by non-uniqueness (Tikhonov et al., 1998). The concept of ill-posed problems goes back to Hadamard (1923), who defined a problem to be ill-posed if the solution is not unique or if it is not a continuous function of the data, i.e., if arbitrarily small perturbations of the input data can cause great changes in the solution. Hence, in order to stabilize the problem and arrive at a stable solution, further information must be incorporated. To obtain reasonable geoelectrical models, regularization of the problem by imposing definite constraints is necessary. Determining a suitable Lagrangian multiplier that balances the minimization of the error against the model roughness is a useful way to achieve both resolution and stability in the inversion. To achieve both in the least-squares inverse modelling of our study, an intermediate value of the Lagrangian multiplier must be chosen: too large or too small a Lagrangian multiplier yields undesirable effects on resolution or stability. In this paper, the regularization parameter is set by log-linear interpolation (Yi et al., 2003), λ_i = λ_min (λ_max/λ_min)^((SP_i − SP_min)/(SP_max − SP_min)), where λ_min and λ_max are the minimum and maximum Lagrangian multipliers and SP_min and SP_max are the minimum and maximum of the spread function. The regularization parameter can thus be set optimally through the spread function of the ith model parameter, which is defined from the parameter resolution matrix R. The spread function measures how poorly the ith model parameter is resolved and is written as SP_i = Σ_j c_j w_ij (R_ij − δ_ij)², where the sum runs over the M inversion parameters, w_ij is a weighting factor defined by the spatial distance between the ith and jth model parameters, and c_j is a factor indicating whether the constraint or regularization is imposed on the ith parameter and its neighboring parameters. An alternative to varying the Lagrangian multiplier as the iterations proceed is to use a spatially varying Lagrangian multiplier (Sasaki, 1989); such spatial variation is preferable to trial-and-error tuning for obtaining both resolution and stability. Small regularization parameters correspond to better-resolved inversion blocks in the sense of parameter resolution analysis (Menke, 1989). We tested the capability of the Active Constraint Balancing (ACB) approach (Yi et al., 2003) to enhance the resolving power of least-squares inversion by applying it to 2D synthetic MT data generated with the forward modeling code of Geotools-MT for simple earth models, and then to field data. Using the ACB approach, the rms error and data misfit are much smaller than with the conventional approach using a fixed Lagrangian multiplier, which leads to higher resolving power and stability of the inversion results. The inversion code used in this paper (Lee et al., 2009) consists of a finite element scheme for computing 2D MT model responses and a smoothness-constrained least-squares inversion. Comparing the resistivity sections, the anomalous object can be seen much more clearly and distinctly with the ACB approach.
This enhancement in resolution can be attributed to the use of spatially varying Lagrangian multipliers in the smoothness-constrained least-squares inversion with the ACB approach.
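A minimal sketch of the ACB mapping described above follows, assuming the log-linear interpolation form given earlier; the λ bounds and the spread-function values are placeholders, not values from the paper:

    import numpy as np

    def acb_lagrangian(sp, lam_min=0.01, lam_max=1.0):
        """Active Constraint Balancing: map each parameter's spread-function
        value sp_i onto a Lagrangian multiplier by log-linear interpolation
        between lam_min and lam_max (after Yi et al., 2003)."""
        sp_min, sp_max = sp.min(), sp.max()
        frac = (sp - sp_min) / (sp_max - sp_min)
        return lam_min * (lam_max / lam_min) ** frac

    # Illustrative spread-function values for five inversion blocks:
    sp = np.array([0.1, 0.4, 0.5, 0.9, 1.0])
    print(acb_lagrangian(sp))   # poorly resolved blocks get heavier regularization

The design intent is visible in the output: well-resolved blocks receive multipliers near lam_min (little smoothing), while poorly resolved blocks are pushed toward lam_max.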
    Keywords: Lagrangian multiplier, ACB, Resolution, Least-squares inversion, Magnetotelluric, Stability
  • Sanaz Smaeili, Mohammad Kazem Hafizi, Sajad Jazayeri Jonaghani Page 131
    Ground Penetrating Radar (GPR) is a high-resolution geophysical method for locating lost objects and buried equipment. GPR is a nondestructive testing method useful in various kinds of exploration, such as the mapping of archaeological sites; since the safety of such sites is one of the main concerns of archaeologists, archaeology is one of the best applications of GPR as a non-destructive exploration method. The method uses high-frequency electromagnetic waves in the radio-frequency range (12.5-2300 MHz) to investigate the subsurface and detect underground objects. It is used to measure features such as the depth of buried anomalies, layer thicknesses, moisture content, horizontal cracks, voids and hidden objects. The electromagnetic waves reflected from interfaces in the subsurface are used to analyze the structure of the scanned area. The waves are emitted from an antenna that is not necessarily in direct contact with the earth's surface, and the same antenna can be used to record the energy reflected from the dielectric interfaces. The penetration depth of the GPR technique varies with acquisition and subsurface conditions. The data acquisition can be designed in two or three dimensions: if the target is known in advance, such as a pipe, 2D acquisition is adequate, but if the target is completely unknown, especially regarding its location, a three-dimensional data grid is more useful and accurate. GPR signals diffract when they encounter anomalies, which therefore appear as hyperbolas in the sections; with these hyperbolas present, accurate determination of an anomaly's depth and other properties is impossible, so a processing method must be applied. The processing methods vary depending on the goal, and various methods with different parameters can be applied for different purposes. The operator's ability to choose suitable parameters is as important as choosing an appropriate processing algorithm for obtaining an acceptable image, although in some special conditions certain processing steps may introduce unwanted artifacts into the data. This paper reports investigations at an archaeological site in Kerman Province, Iran, located in the middle of the city of Kerman, the provincial capital. The antenna used was a 500 MHz GPR antenna made by MALA. The target was the remains of one of the most important and oldest schools of Iran, which had been buried under the soil for hundreds of years; houses and buildings had been constructed on top of the data-collection areas specified by the archaeologists. The data acquisition in this area was three-dimensional, and the data were collected in 2009. The Reflex 3D software was used for the processing and depth estimation of the acquired data. Depending on the properties of the target in the study area, the processing algorithms can differ from case to case: the data filtering usually used to window out environmental noise is in some cases sufficient to enhance the subsurface reflections, while in other cases more complicated levels of processing are needed. The whole processing algorithm depends on the properties of the site, the GPR instruments, the software and, mainly, the data acquisition methods, so not all of the processing levels are necessarily needed in every case.
In some situations, certain processing levels are not useful and may cause the operator unwanted difficulties in the data sections. In this paper, the processing levels needed in archaeological cases were studied, and the results for the Kerman area are presented. The gain filter was the most effective step in the processing algorithms: changing its parameters produced the clearest improvement in the results and in identifying the anomaly's source. Thus, in cases where the depth range of the anomaly's source has been estimated by experts, by other methods or from the archaeological setting, a properly tuned gain filter is recommended. Another method that is very useful for making anomalies clearer in GPR sections is amplitude limitation; the results of applying this method are shown in this paper. Unwanted amplitudes are troublesome in GPR anomaly interpretation, and amplitude limitation helps the operator and interpreter distinguish the main anomalies from environmental noise. The Dewow processing step also reduces the near-surface or air signals between the transmitter and the receiver. In conclusion, the two processing methods mentioned above, the gain filter and amplitude limitation, were used as the two major parts of the processing algorithms and helped the exploration team pinpoint the source's exact coordinates.
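As a rough illustration of two of the processing steps named above (dewow and gain), the sketch below applies them to a synthetic trace; the window length, gain exponent and trace model are assumptions for the example, not the settings used in Reflex 3D:

    import numpy as np

    def dewow(trace, window=25):
        """Dewow: subtract a running mean to remove the low-frequency
        drift ('wow') caused by saturation near the transmitter."""
        kernel = np.ones(window) / window
        return trace - np.convolve(trace, kernel, mode="same")

    def power_gain(trace, dt, alpha=2.0):
        """Simple t^alpha gain to compensate geometric spreading and
        attenuation with two-way travel time."""
        t = np.arange(trace.size) * dt
        return trace * t ** alpha

    # Toy trace: decaying reflections plus a slow drift ("wow").
    dt = 0.5e-9
    t = np.arange(512) * dt
    trace = (np.exp(-t / 40e-9) * np.sin(2 * np.pi * 500e6 * t)
             + 0.3 * np.exp(-t / 200e-9))
    processed = power_gain(dewow(trace), dt)
    print("late-time amplitude boosted:", np.abs(processed[-50:]).max() > np.abs(trace[-50:]).max())

Amplitude limitation, the other step highlighted above, would simply clip processed to a chosen range (e.g. np.clip) before display.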
    Keywords: Static correction, Gain, Dewow processing, Envelope, GPR
  • Abolghasem Kamkar Rouhani, Esmaeil Eshaghi, Alireza Arab Amiri Page 145
    Potential field images obtained from potential field measurements are suitable tools for mineral and hydrocarbon resource exploration. These images consist of different anomalies which, in many cases, are contaminated with noise. The horizontal location of the boundaries of potential field anomaly sources is of interest in potential field interpretation; however, the edges of potential field sources are not clear, because the anomaly shape loses resolution with respect to the shape of the source. Edge enhancement is a technique applied to potential field data to produce regions of constant field amplitude separated by sharp boundaries, as an aid to interpretation. Various methods have been introduced for anomaly edge detection, such as the analytic signal, the tilt angle, the total horizontal gradient and the profile curvature. The tilt angle is the ratio of the first vertical derivative to the horizontal gradient. Curvature is one of the most important attributes of geophysical data, with many applications in data processing and interpretation; the profile curvature at a point shows the change in slope in the direction of maximum gradient. The tilt angle and the profile curvature can be computed by Eq. (1) and Eq. (2), respectively: T = arctan(M_z / √(M_x² + M_y²)) (1) and k_p = −(M_x² M_xx + 2 M_x M_y M_xy + M_y² M_yy) / ((M_x² + M_y²)(1 + M_x² + M_y²)^(3/2)) (2), where M denotes the measured potential field and subscripts denote partial derivatives with respect to x, y and z.
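A small sketch of Eq. (1) on gridded data follows; the wavenumber-domain vertical derivative and the toy source model are illustrative assumptions rather than the data or code of the paper:

    import numpy as np

    def tilt_angle(m, dx, dy):
        """Tilt angle (Eq. 1): arctan of the first vertical derivative over
        the total horizontal gradient. The vertical derivative is formed in
        the wavenumber domain by multiplication with |k|."""
        my, mx = np.gradient(m, dy, dx)
        ky = 2 * np.pi * np.fft.fftfreq(m.shape[0], dy)
        kx = 2 * np.pi * np.fft.fftfreq(m.shape[1], dx)
        kk = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
        mz = np.fft.ifft2(kk * np.fft.fft2(m)).real
        return np.arctan2(mz, np.hypot(mx, my))

    # Toy grid: anomaly of a point-like source at the centre, depth 20 units.
    y, x = np.mgrid[-50:51, -50:51]
    m = 20.0 / (x**2 + y**2 + 20.0**2) ** 1.5
    t = tilt_angle(m, dx=1.0, dy=1.0)
    print("tilt range (rad):", t.min(), t.max())  # zero crossings outline the edges

In edge-detection practice the zero contour of the tilt angle is tracked, since it tends to sit over the source boundaries.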
    Keywords: Potential field, Data balancing, Hilbert transform, Anomaly edge
  • Abolghasem Kamkar Rouhani, Esmaeil Eshaghi, Alireza Arab Amiri Page 155
    The ground penetrating radar (GPR) method is a high-resolution, non-destructive geophysical method used for the detection of shallow subsurface targets. It is based on the transmission of electromagnetic waves into the earth and the recording of the waves reflected from the subsurface. As the method uses high-frequency electromagnetic waves in the frequency range of 12.5 to 2500 MHz (so-called GPR waves), it can only be used for shallow subsurface investigations. With this method, continuous images are obtained of the reflections of GPR waves from the interfaces of subsurface media with different electrical properties. Shallow cavities, whose electrical characteristics differ from those of the background, are among the targets that can be detected. Since the depth of penetration of GPR waves in an area is controlled by the electrical conductivity and permittivity of the ground, the penetration depth where fine-grained sediments are present is relatively low, owing to the higher electrical conductivity of fine sedimentary grains compared with coarse ones; thus the relative grain sizes and the clay content of the subsurface sediments can be investigated by the GPR method. In general, shallow subsurface structures with different materials and thicknesses can be detected, as the structures and their host media normally have different electrical properties (namely conductivity and permittivity). In this research, GPR data were acquired with a 250 MHz shielded antenna along 5 lines in the Darkhanyab area near the town of Mojen, located approximately 25 km northwest of Shahrood City. The purpose of this GPR survey is to detect shallow subsurface structures such as the water qanat in the area. Because of the short distance between the GPR transmitter and receiver, the electrical properties (especially the conductivity) of the ground, and the need to remove unwanted low-frequency signals or reflections while preserving the high-frequency signals, the Dewow filter was applied to all GPR data sets before any other processing. The short time interval between the transmitted GPR pulses and the pulses received directly through the air-ground surface, together with reflections from shallow subsurface targets, causes signal saturation in the receiver; the Dewow filter is applied to correct for this signal saturation, or 'wow', in the data. Different types of gains are also among the processing methods applied to the data, to compensate for the attenuation of the GPR waves with increasing depth; to demonstrate the effects of different gains and to select the optimum one, we applied several gains to the GPR data. To convert each trace from a wavelet with both positive and negative components (i.e. of sine or cosine nature) to a mono-pulse wavelet with positive components, we used the envelope filter; this process removes the oscillatory nature of the radar wavelet and shows the data at its true resolution, making it easier to interpret. In this research, the Win_Ekko_Pro software was used for processing the two-dimensional (2-D) GPR data or sections; for three-dimensional (3-D) processing and modelling of the GPR data, the EKKO-Mapper and EKKO-3D software programs were used; and to display the output of Win_Ekko_Pro and EKKO-3D, the T3D software was used.
These software programs have been developed by the Canadian company Sensors & Software. The results of this research indicate that, using the characteristics of GPR waves in the 2-D GPR sections, we can detect subsurface targets such as cavities, discriminate coarse-grained from fine-grained sediments, and determine the electrical properties of subsurface layers with high success. The high resolution of the GPR data in this research enabled us to determine the interfaces of the water qanat with its surroundings, such as the soil-concrete, concrete-air, air-water and water-concrete interfaces, in the subsurface. Furthermore, the highly conductive clayey soils above the water qanat canal in some places cause strong attenuation of the GPR waves and thus severely limit their depth of penetration; this phenomenon is also seen in the zone surrounding the qanat canal, mainly due to the seepage of water into the ground. The soil bedding can also be easily observed in the obtained GPR sections: horizontal soil layers several centimeters thick cover the ground surface in the survey area. A highly resistive subsurface zone in the GPR sections, characterized by the ringing phenomenon, is interpreted as a cavity. In general, the relatively high conductivity of the ground in the area limits the depth of penetration of the GPR waves, which rarely exceeds 2 meters. The location of the water qanat in the shallow subsurface of the area was evident from the 2-D and 3-D GPR modeled sections; however, detection of the qanat at depths greater than 1 or 2 meters was difficult or even impossible from the GPR results because of the limited penetration depth in the area. Overall, it was possible to discriminate coarse-grained from fine-grained sediments and, to some extent, to determine the clay content and moisture in the subsurface from the processing, modeling and interpretation of the GPR data.
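The envelope step described above can be sketched as follows, using the Hilbert transform to form the instantaneous amplitude; the wavelet parameters are illustrative and unrelated to the Win_Ekko_Pro implementation:

    import numpy as np
    from scipy.signal import hilbert

    def envelope(trace):
        """Envelope (instantaneous amplitude): magnitude of the analytic
        signal, turning the oscillatory radar wavelet into a mono-pulse."""
        return np.abs(hilbert(trace))

    # Toy 250 MHz wavelet reflected at 30 ns two-way time.
    dt = 0.2e-9
    t = np.arange(1024) * dt
    trace = np.exp(-((t - 30e-9) / 4e-9) ** 2) * np.cos(2 * np.pi * 250e6 * t)
    env = envelope(trace)
    print("picked reflection time (ns):", t[np.argmax(env)] * 1e9)  # ~30 ns

Picking the envelope maximum, as in the last line, is why the filter makes reflector timing (and hence depth, given a velocity) easier to read off the sections.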
    Keywords: Dielectric constant, Subsurface targets, Clay content, Ground penetrating radar (GPR)
  • Farnaz Hosseinpour, Alireza Mohebalhojeh, Farhang Ahmadi-Givi Page 175
    The climatology of the mid-latitude storm tracks was investigated using the daily Global Forecast System (GFS) and National Centers for Environmental Prediction (NCEP) reanalysis data sets for the winter seasons from 1948 to 2008. In the anomalous winter of 2007-2008, the decrease of eddy kinetic energy, along with the reduction of eddy available potential energy upstream of the Atlantic storm track, is related to a weakening of baroclinic generation with respect to the long-term means. In addition to baroclinic conversion, the most important term in strengthening and stretching the jet stream upstream of the Atlantic storm track, the increase of positive values of barotropic conversion plays a controlling role. The study of the energetics over the Middle East shows that in January 2008 the dominance of the subtropical jet stream core over south-western Iran was responsible for the increase of the mean kinetic energy. Baroclinic generation had a remarkable role in feeding energy from downstream of the Mediterranean, producing strong convergence of the ageostrophic flux due to downstream development over the Mediterranean storm track. Significant increases of baroclinic conversion, baroclinic generation and divergence of the ageostrophic flux have their maxima over the north of the Arabian Peninsula, causing intense eastward radiation of eddy energy toward the central part of Iran. In this month, the maximum convergence of the ageostrophic flux shows that, on average, Iran was a strong sink of energy, associated with the deepening of a thermal trough and the dominance over northern Iran of cold anomalies unprecedented in recent decades. In February 2008, in addition to the increase of the mean kinetic energy due to the downstream displacement of the subtropical jet stream over south-western Iran, the maximum positive barotropic conversion anomaly of the Northern Hemisphere was located over this region. A local core of baroclinic generation over central Iran, elongated between significant positive bands of barotropic conversion, was accompanied by increases of the eddy kinetic energy and the eddy available potential energy. This indicates that, on average, Iran was a transmitter of energy toward the adjacent eastern regions during this month.
    Keywords: Eddy kinetic energy, Barotropic conversion, Eddy available potential energy, Barotropic generation, Baroclinic conversion, Ageostrophic flux
  • Nafiseh Pegahfar, Abbas Ali Aliakbari Bidokhti, Peyman Zawar-Reza Page 189
    The mixing height of the atmospheric boundary layer depends on the vertical variation of temperature in the atmosphere, including temperature inversions (both surface and elevated), which have a significant effect on air quality. Like some other meteorological variables, the mixing height shows diurnal variations, because it depends on basic meteorological parameters such as the surface turbulent fluxes (momentum and heat fluxes), wind speed and temperature stratification. In urban areas, surface roughness and topography also affect the mixing height.
    Keywords: Urban boundary layer, Tehran, Stable boundary layer parameterization, PM10 concentration
  • Iman Soltanzadeh, Abbas Ali Aliakbari Bidokhti, Peyman Zawar-Reza Page 207
    Wind is the carrier of pollutants and other gaseous or particulate matter in the atmosphere. A stable atmosphere with low wind provides favorable conditions for high concentrations of pollutants in urban areas. The importance of mesoscale atmospheric flows in air pollution dispersion has been recognized in the past three decades and has been the focus of intensive research, both observational and numerical. Mesoscale or local-scale circulations are more prominent when the synoptic pressure gradients are weak, allowing horizontal temperature contrasts to develop, which in turn lead to mesoscale pressure perturbations. Tehran, a city situated at the southern foothills of the Alborz mountain chain, has an average elevation of 1500 m and covers an area of 864 km2. The Alborz Mountains have a significant influence on the dynamic and thermodynamic modification of air movement over the city. At the same time, the urban heat island (UHI) effect can cause its own mesoscale flow, complicating an already complex local-scale flow. The topography and the urban fabric can cause slope flows, mountain flows and valley flows, among many other features. This paper focuses on the use of a state-of-the-art atmospheric numerical model, the Weather Research and Forecasting (WRF) model, in an idealized setting to study the characteristics of the mesoscale flow systems that prevail over Tehran when air quality is unfavourable. The average of the radiosonde soundings at Mehrabad station for almost all fair days of the cold seasons of 2005 to 2008 was therefore selected as the idealized initial and boundary condition, with 10 × 10 km spatial and 12-hour temporal resolution. The simulations were carried out for a 3-day period in December 2005 during which an aircraft, because of low visibility caused by a high concentration of air pollution, crashed into an inhabited area about 2 miles beyond the end of the runway. Three simulations were prepared. For the first experiment, called the control run, we used the default urban setting of Tehran. In the second simulation, called the NoURB simulation, the urban properties of Tehran were removed completely from the land use fed to the model, to investigate the effect of the urban area on the thermally induced circulation over Tehran. To investigate the role of urbanization, a third simulation, referred to as the Urban simulation, was prepared using three urban categories: class 31 of the USGS land use/land cover, "Low Intensity Residential", which includes areas with a mixture of constructed materials and vegetation, most commonly single-family housing units where the population density is lower than in high intensity residential areas; class 32, "High Intensity Residential", which includes highly developed areas where people reside in high numbers; and class 33, "Commercial/Industrial", which includes infrastructure (e.g. roads, railroads, etc.) and all highly developed areas not classified as High Intensity Residential. The Noah LSM provides surface sensible and latent heat fluxes and surface skin temperature as lower boundary conditions. It has a single vegetation canopy layer and the following prognostic variables: soil moisture and temperature in the soil layers, water stored on the canopy, and snow stored on the ground.
To represent the urban surface, the bulk parameterization includes: 1) increasing the roughness length from 0.5 m to 0.8 m to represent turbulence generated by roughness elements and drag due to buildings; 2) reducing the surface albedo from 0.18 to 0.15 to represent shortwave radiation trapping in urban canyons; 3) using a larger volumetric heat capacity of 3.0 MJ m-3 K-1 for the urban surface (walls, roofs and roads), which usually consists of concrete or asphalt; 4) increasing the soil thermal conductivity to 3.24 W m-1 K-1 to parameterize the large heat storage in the urban surface and underlying layers; and 5) reducing the green vegetation fraction over the urban area to decrease evaporation. In order to better represent the physical processes involved in the exchange of heat, momentum and water vapor in the urban environment within the mesoscale model, a UCM is coupled to the WRF model. The main purpose of the coupled model is to improve the description of the lower boundary conditions and to provide more accurate forecasts for urban regions. The UCM is a single-layer model with a simplified urban geometry. Its features include shadowing from buildings, reflection of shortwave and longwave radiation, the wind profile in the canopy layer and a multi-layer heat transfer equation for roof, wall and road surfaces. The basic function of a UCM is to take the urban geometry into account in its surface energy budgets and wind shear calculations. The urban model is based on an urban canopy model which includes: 1) 2-D street canyons parameterized to represent the effects of urban geometry on the heat distribution in urban canyons; 2) shadowing from buildings and reflection of radiation in the canopy layer; 3) the canyon orientation and the diurnal cycle of the solar azimuth angle; 4) a man-made surface consisting of eight canyons with different orientations; 5) Inoue's model for canopy flows; 6) the multi-layer heat equation for the roof, wall and road interior temperatures; 7) anthropogenic heating associated with energy consumption by human activities; and 8) a very thin bucket model for evaporation and runoff from the road surface. The main limitations of this kind of study over the Tehran metropolitan area are the lack of observational data and the lack of documentation. Previous studies over Tehran indicate a significant increase in minimum temperatures over a 50-year trend, especially in the cold seasons, which suggests that the artificial and anthropogenic heating leading to the urban heat island of Tehran has been significant over this period. These studies also indicate that the nocturnal drainage flows at the Mehrabad International Airport synoptic station have weakened over the same 50-year period. The results indicate that urban areas near complex topography can increase the transfer of material (pollution) and energy within the boundary layer and from this layer to the free atmosphere. The results also show that the UHI induces rural-urban flows which significantly reduce the drainage wind speed.
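For reference, the five bulk urban modifications listed above can be collected as a small configuration summary; this is only an illustrative Python rendering of the values quoted in the text, not a WRF namelist or source excerpt:

    # Bulk urban modifications to the land surface, as quoted in the text.
    # "None" marks defaults the text does not state.
    bulk_urban_changes = {
        # parameter: (default, urban value, purpose)
        "roughness_length [m]": (0.5, 0.8, "drag and turbulence from buildings"),
        "surface_albedo": (0.18, 0.15, "shortwave trapping in street canyons"),
        "heat_capacity [MJ m-3 K-1]": (None, 3.0, "concrete/asphalt heat storage"),
        "thermal_conductivity [W m-1 K-1]": (None, 3.24, "urban substrate heat storage"),
        "green_vegetation_fraction": (None, "reduced", "suppress evaporation"),
    }
    for name, (old, new, why) in bulk_urban_changes.items():
        print(f"{name}: {old} -> {new} ({why})")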
    Keywords: Urban model, Local flows, WRF model, Complex topography
  • Sarmad Ghader, Alireza Mohebalhojeh, Marziyeh Sharifidoost Page 223
    To control nonlinear numerical instability throughout the time evolution of the Eulerian form of the nonlinear rotating shallow water equations, it is necessary to add numerical diffusion to the solution. Clearly, this extra numerical diffusion degrades the accuracy of the numerical solution and should be kept as small as possible. In the conventional approach, a hyper-diffusion is used to maintain numerical stability. In the present work, the influence of using different orders of hyper-diffusion on the accuracy of the numerical solution of the shallow water equations generated by several high-order numerical methods is examined. Furthermore, the application of an eighth-order compact spatial filter as an alternative way to provide the numerical diffusion is considered. The vorticity-divergence-mass representation of the shallow water equations is used for the numerical simulation. To advance the governing equations in time, the semi-implicit approach combined with the three-level leapfrog method is used for the discretization of the temporal derivatives. The second-order central, fourth-order compact and sixth-order super compact finite difference methods, as well as the pseudo-spectral method, are applied for the spatial differencing of the shallow water equations. For a quantitative assessment of accuracy, a global measure of the distribution of mass between potential vorticity iso-levels, called the mass error, is used. In addition, based on the results of some recent investigations, decomposing a flow into a balanced part representing the vortical flow and an unbalanced part representing freely propagating inertia-gravity waves has proved very useful in analyzing the accuracy of numerical solutions of the primitive equations; therefore, representations of balance and imbalance are also used here for the quantitative assessment of numerical accuracy. It is found that the numerical diffusion plays a crucial role in the accuracy of the numerical solution. Results show that using lower-order hyper-diffusion terms degrades the numerical accuracy, and that using higher orders of hyper-diffusion is essential for the sixth-order super compact and pseudo-spectral methods. In addition, based on the quantitative measures of the mass error, using a sixth-order hyper-diffusion term or the eighth-order compact spatial filter has a nearly identical effect on the numerical accuracy of the shallow water equations generated by the sixth-order super compact finite difference and pseudo-spectral methods. The same conclusion holds for the quantitative measures of imbalance, in particular for hyper-diffusion terms of order greater than two.
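The scale selectivity of hyper-diffusion discussed above can be illustrated with a short sketch; the coefficients and grid are arbitrary, and the exact spectral integration below is a generic one-dimensional implementation, not the semi-implicit scheme of the paper:

    import numpy as np

    def hyperdiffuse(field, dt, nu, p, dx):
        """Apply one time step of hyper-diffusion
        d(field)/dt = -nu * (-laplacian)^p field, integrated exactly in
        spectral space; larger p confines the damping to smaller scales."""
        k = 2 * np.pi * np.fft.fftfreq(field.size, d=dx)
        return np.fft.ifft(np.exp(-nu * np.abs(k) ** (2 * p) * dt)
                           * np.fft.fft(field)).real

    dx, dt = 1.0, 0.1
    k_max = np.pi / dx                        # grid-scale (2*dx) wavenumber
    for p in (1, 2, 3):                       # nabla^2, nabla^4, nabla^6
        nu = 1.0 / (dt * k_max ** (2 * p))    # e-fold the grid-scale wave per step
        retained = lambda k: np.exp(-nu * k ** (2 * p) * dt)
        print(f"order {2 * p}: retained at k_max/4 = {retained(k_max / 4):.4f}, "
              f"at k_max = {retained(k_max):.4f}")

With the grid-scale damping held fixed, the printout shows that higher orders leave the quarter-Nyquist wave almost untouched, which is why the low-order terms degrade accuracy while the high-order terms (and, analogously, the eighth-order compact filter) do not.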
    Keywords: Compact filter, Hyper-diffusion, Pseudo-spectral method, Shallow water equations, Super compact method
  • Hamid Nazari Pour, Seyed Abolfazl Masoodian, Zahra Karimi Page 241
    In different regions, precipitation occurs with different persistences, and each persistence supplies a share of the rainfall days and of the precipitation amount; the importance of rainfall persistence can therefore be evaluated everywhere. Iran is located in the mid-latitude arid belt, with a mean annual rainfall of about 250 mm and dramatic spatio-temporal variations. Short-persistence rainfall is characteristic of arid regions, and this is clearly the case in Iran. Although the persistence of Iran's rainfall ranges from 1 to 45 days, with dramatic spatio-temporal variations, the maximum rainfall amounts and rainfall days are supplied by short-persistence events; long-persistence rainfall is therefore considered an extreme phenomenon with extreme variability. As the persistence of precipitation increases, its role in generating Iran's rainfall days decreases sharply, so that the maximum number of rainfall days in Iran is supplied by one-day rainfalls; the share of one-day rainfalls in supplying the precipitation days of Iran's western half is particularly pronounced. In contrast, increasing persistence does not play a uniform role in decreasing the supply of Iran's precipitation amount: as persistence increases, the share of precipitation in central and southwestern Iran decreases sharply, whereas in western and northern Iran the opposite holds. In some heavy-precipitation regions of Iran's western half, a decrease of precipitation persistence is associated with a decrease of its share of the precipitation supply, while in other regions the decrease of that share is gradual. Therefore, in every location some persistences supply the major share of precipitation days and precipitation amount and are considered important there, while the same persistences may be unimportant elsewhere; any variability or change in the role of precipitation persistence at a given location is thus noteworthy. The spatial changes of the share of one-day precipitation in supplying Iran's precipitation days and precipitation amount can be evaluated from this perspective. To evaluate these changes, daily precipitation observations at 1437 stations throughout Iran were used. Using the Kriging method, the station observations were interpolated onto a regular grid of 15 × 15 km cells, and digital precipitation maps of Iran were produced for the period 1961/03/21 to 2004/12/30. These digital maps comprise daily time series (15991 days) of precipitation amount for 7187 cells. The precipitation persistence in the time series of every cell was evaluated, and the share of each persistence in supplying the precipitation days and precipitation amount of each cell was calculated. The most important persistence of Iran's precipitation (the one-day persistence) was then identified and its importance investigated. The yearly and monthly time series of the share of one-day precipitation in supplying precipitation days and precipitation amount were subjected to a trend analysis in order to evaluate and understand its changes. In spatial analyses, including the identification of trends in climatological variables, the more reliable approach is to perform the spatial interpolation first and then apply an appropriate trend test to the data at the grid nodes.
The results of such analyses not only have a higher degree of spatial attribution but, based on the proximity principle, the spatial ordering of the nodes themselves provides an intuitive basis for accepting or rejecting the trend analysis. One-day precipitation supplies a larger share of Iran's precipitation days than any other persistence; in all regions of Iran, the frequency of one-day precipitation is the maximum among all persistences. In contrast, Iran's precipitation amount is supplied by different persistences, and the share of one-day precipitation in the precipitation supply is maximal only in the western half (central and southeastern parts). However, although one-day precipitation is not of great importance throughout Iran, the degree of its importance in the eastern half is maximal compared with the western half. The share of one-day precipitation in supplying Iran's precipitation days and precipitation amount has changed with time. The results on the yearly changes indicate that its share in supplying precipitation days decreases over one quarter of Iran's area and increases over only 3% of the area; given that the maximum precipitation days of western and central Iran are provided by one-day precipitation, the precipitation days of eastern Iran have decreased. In addition, its share in supplying the precipitation amount decreases over one fifth of Iran's area and increases over only 6%; given that the maximum precipitation days of central Iran are provided by one-day precipitation, its share in the precipitation supply has decreased there, and only in scattered regions and along the Zagros and Alborz ranges does its share increase. The results on the monthly changes of the share of one-day precipitation in supplying Iran's precipitation indicate that the negative trends in all rainy months exceed the positive trends. Looking more generally at the monthly changes, the pattern of Iran's precipitation concentration becomes evident, especially in eastern and central Iran.
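The node-wise trend testing recommended above might look like the sketch below; the Mann-Kendall test is assumed here as a representative choice (the abstract does not name the test used), and the yearly series is synthetic:

    import numpy as np
    from scipy.stats import norm

    def mann_kendall(y):
        """Mann-Kendall trend test: returns the S statistic and a two-sided
        p-value (normal approximation, no tie correction)."""
        n = y.size
        s = np.sum([np.sign(y[j] - y[i])
                    for i in range(n - 1) for j in range(i + 1, n)])
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        p = 2 * (1 - norm.cdf(abs(z)))
        return s, p

    # Yearly share (%) of one-day precipitation at one hypothetical grid node.
    rng = np.random.default_rng(0)
    years = np.arange(1961, 2005)
    share = 40.0 - 0.1 * (years - 1961) + rng.normal(0, 1.5, years.size)  # weak decline
    s, p = mann_kendall(share)
    print(f"S = {s:.0f}, p = {p:.3f}")   # small p flags a (here decreasing) trend

Running such a test at each of the 7187 cells and mapping the significant nodes yields trend maps of the kind summarized in the abstract.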
    Keywords: Iran, One-day precipitations, Persistency, Trend analysis
  • Jamshid Ghanbari, Maryam Karimian, Iman Babaeian, Mahdi Motiei Page 259

    The Islamic calendar is based on lunar months, which begin when the thin crescent Moon is actually sighted in the western sky after sunset within a day or so after the New Moon. The Islamic dates begin at sunset on the previous day. The visibility of the lunar crescent as a function of the Moon's age, the time counted from the New Moon, is obviously of great importance to Muslims. The date and time of each New Moon can be computed exactly, but the time at which the Moon first becomes visible after the New Moon depends on many factors and cannot be predicted with certainty. Sighting the lunar crescent within one day of the New Moon is usually difficult: the crescent at this time is quite thin, has a low surface brightness and can easily be lost in the twilight. Generally, the lunar crescent becomes visible to suitably located, experienced observers with good sky conditions about one day after the New Moon; however, the time at which the crescent actually becomes visible varies considerably from one month to another. The record for an early sighting of a lunar crescent with a telescope is 12.1 hours after the New Moon; for naked-eye sightings, the record is 15.5 hours. For Islamic calendar purposes, the sighting must be made with the unaided eye. Obviously, the visibility of the young lunar crescent depends on atmospheric conditions and on the location and preparation of the observer. Predicting the first sighting of the early crescent Moon is an interesting problem because it simultaneously involves a number of highly nonlinear effects: the geometry of the Sun and Moon, the width and surface brightness of the crescent, the absorption of the Moon's light and the scattering of the Sun's light in the Earth's atmosphere, the physiology of human vision and the natural horizon. The effects of meteorological conditions such as mean sea level pressure, visibility, mean temperature and humidity on crescent visibility are studied in this paper. The study sites are located in the southern, central and eastern parts of Iran and include the cities of Mashad, Bojnord, Birjand, Isfahan, Shiraz and Kerman. Two series of data are used: the first consists of the sightings and visibility of the lunar crescents recorded by Moon-sighting groups in the above-mentioned cities, and the second consists of the meteorological observations of mean sea level pressure, mean temperature, horizontal visibility and relative humidity at the same dates and locations. Horizontal visibility is divided into two categories, below and above 10 km. The study period was 8 years, from 1423 to 1430 of the Islamic calendar. A genetic algorithm is used to formulate the relations between Moon visibility and the meteorological parameters. Genetic algorithms are one of the best ways to solve a problem for which little is known; they are very general and so may work well in any search space, and they use the principles of selection and evolution to produce several solutions to a given problem. Two approaches, linear and nonlinear, are used to model the statistical relations between the lunar visibility and the meteorological parameters: in the linear approach, the modeled visibility is expressed as a linear combination of the meteorological parameters, R_mod = a_0 + a_1 P + a_2 RH + a_3 T + a_4 V, while the nonlinear approach uses a nonlinear combination of the same parameters. The goodness of fit is measured by the mean square error, MSE = Σ (R_obs − R_mod)² / (n − p), where R_obs and R_mod represent the actual and modeled visibility of the Moon.
P, RH, T and V are the mean sea level pressure, relative humidity, temperature and horizontal visibility, respectively; (n − p) is the number of degrees of freedom, and the a_i are constants. Meteorological parameters are among the important factors affecting crescent visibility, but they have not been considered thoroughly up to now. In this paper, a genetic algorithm has been used to find the relationship between the percentage of crescent lighting and meteorological parameters such as sea level pressure, mean temperature, relative humidity and horizontal visibility. To this end, observations during the period 1423-1430 of the lunar Hejri (Islamic) calendar were considered for Mashad, Kerman, Shiraz, Esfahan, Birjand and Boujnourd, for the two categories of horizontal visibility below and above 10 km. The error, bias and weighting factors of the meteorological influences on crescent visibility were calculated by comparing the modeled and observed crescent visibility. The results generally show that the nonlinear parameterization equations have more bias than the linear equations. The maximum bias, 3.24, occurred in the nonlinear model for horizontal visibility less than 10 km over the Birjand and Bojnourd sites; the minimum bias of crescent visibility, 0.01 percent, occurred in Shiraz. The minimum and maximum relative errors, 1.96% and 99%, were found in Shiraz and Birjand, respectively. We also found that, in the linear model with horizontal visibility above 10 km, the weighted effect of pressure increases with decreasing altitude above mean sea level, while the effect of humidity decreases with increasing altitude. Our results confirm that crescent visibility is most sensitive to pressure and horizontal visibility. Overall, the linear and nonlinear equations give acceptable results for modeling crescent visibility. The results of this paper reveal that a meteorological parameterization of crescent visibility can be used to predict crescent visibility from a meteorological point of view.
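A hedged sketch of the fitting procedure follows: a minimal genetic algorithm minimizing the MSE of the linear model form given above. The synthetic data, population settings and genetic operators are illustrative assumptions, not the configuration used in the paper:

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "observations": crescent visibility vs. meteorology.
    n = 120
    P = rng.normal(1015, 5, n)         # mean sea level pressure (hPa)
    RH = rng.uniform(20, 80, n)        # relative humidity (%)
    T = rng.normal(15, 8, n)           # temperature (C)
    V = rng.uniform(5, 30, n)          # horizontal visibility (km)
    R_obs = 0.02 * P - 0.05 * RH + 0.1 * T + 0.3 * V + rng.normal(0, 1, n)

    def mse(a):
        """MSE = sum((R_obs - R_mod)^2) / (n - p) for the linear model."""
        r_mod = a[0] + a[1] * P + a[2] * RH + a[3] * T + a[4] * V
        return np.sum((R_obs - r_mod) ** 2) / (n - a.size)

    # Minimal GA: truncation selection, blend crossover, Gaussian mutation.
    pop = rng.normal(0, 1, (200, 5))
    for gen in range(300):
        fitness = np.array([mse(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[:50]]             # keep the best quarter
        mates = parents[rng.integers(0, 50, (150, 2))]      # random parent pairs
        children = mates.mean(axis=1) + rng.normal(0, 0.05, (150, 5))
        pop = np.vstack([parents, children])
    best = pop[np.argmin([mse(ind) for ind in pop])]
    print("best coefficients:", np.round(best, 3))  # should approach the generators

The nonlinear variant would keep the same GA loop and simply swap a nonlinear expression into mse; dividing by (n − p) rather than n penalizes models with more fitted constants.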

    Keywords: Crescent lighting, Crescent moon visibility, Meteorological parameters, Genetic Algorithm