Table of Contents

Journal of Earth and Space Physics
Volume 42, Issue 2 (Summer 2016 / 1395)

  • Publication date: 1395/07/04 (Iranian calendar)
  • Number of titles: 18
  • Habib Rahimi*, Hoshmand Zandi Pages 221-232
    The aim of this study is to obtain two-dimensional tomographic maps of Rayleigh-wave group velocity for the northwestern part of the Iranian plateau. To this end, local earthquake data from the period 2006 to 2013, recorded at 10 broadband stations of the International Institute of Earthquake Engineering and Seismology, were used. First, fundamental-mode Rayleigh-wave dispersion curves were estimated using the single-station processing method: after applying preliminary corrections, fundamental-mode group-velocity dispersion curves were estimated for the different source-station paths using frequency-time analysis (FTAN; a brief sketch follows this entry). After estimating the fundamental-mode group-velocity dispersion curves, group-velocity tomographic maps for periods of 2 to 50 s were obtained using a two-dimensional linear inversion method. The results for the 5 s period show a low-velocity anomaly beneath the Sabalan volcano, whereas a high-velocity anomaly is observed beneath the Sahand volcano. For the 10 s period, different results are observed: a high-velocity anomaly appears beneath Sabalan, whereas low-velocity anomalies are observed beneath Sahand and along the Urumieh-Dokhtar magmatic arc. For the 20 s period, the results show a low-velocity anomaly in most areas. The results for the 40 s period differ again, with a low-velocity anomaly visible in the southern parts and a high-velocity anomaly in the northern parts.
    Keywords: tomography, group velocity, dispersion curves, fundamental mode, single-station method, linear inversion
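    A minimal sketch of the FTAN-style group-velocity picking described above, assuming a single-trace seismogram and a known epicentral distance; the Gaussian filter width `alpha` and all names are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.signal import hilbert

def ftan_group_velocity(trace, dt, distance_km, periods, alpha=20.0):
    """FTAN-style estimate: narrowband Gaussian filtering around each
    period, envelope via the Hilbert transform, and the envelope peak
    time converted to group velocity (distance / group time)."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    spec = np.fft.rfft(trace)
    times = np.arange(n) * dt
    velocities = []
    for T in periods:
        fc = 1.0 / T                                   # centre frequency
        gauss = np.exp(-alpha * ((freqs - fc) / fc) ** 2)
        narrow = np.fft.irfft(spec * gauss, n=n)       # narrowband trace
        envelope = np.abs(hilbert(narrow))             # group-arrival energy
        t_group = times[np.argmax(envelope)]           # envelope maximum
        velocities.append(distance_km / t_group)
    return np.array(velocities)                        # U(T) in km/s

# Example: dispersion curve for periods 2-50 s on a recorded trace
# U = ftan_group_velocity(trace, dt=0.1, distance_km=350.0,
#                         periods=np.arange(2, 51, 2.0))
```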
  • Seyed Reza Mirashrafi, Naser Khaji* Pages 233-245
    In the present research, a theoretical approach is proposed in which, using the split-node technique based on the finite element method, two- and three-dimensional models of several fault types are presented to account for the discontinuity and to find the slip field of the faults. The solution procedure, up to obtaining the fault slip field, consists of two parts: a forward solution and an inverse solution. In the forward solution, prescribed uniform slips are applied on the fault, the ground-surface displacements are obtained, and their accuracy is verified against analytical results. In the inverse part, the Green's operator matrices are obtained using the split-node technique. To account for the discontinuities with this method, only the loading vector is modified; no additional degrees of freedom are introduced, so the global stiffness matrix of the system remains unchanged. Finally, to find the slip regime of the fault and determine the locked zone, the inverse solution is obtained after removing the singularity of the Green's operator matrices (a sketch of this step follows this entry). Verification of the inverse models also shows acceptable results, indicating that the locked zones of the faults are determined with acceptable accuracy.
    Keywords: fault, discontinuity, finite element method, split node, Green's function, locked zone
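    A minimal sketch of the inverse step described above, assuming the Green's operator matrix G (surface displacements per unit slip at each fault node) has already been assembled from FEM forward solves; the truncation level is an illustrative assumption, and truncated SVD stands in for whatever singularity treatment the paper actually uses.

```python
import numpy as np

def invert_slip(G, d_surface, rel_cutoff=1e-3):
    """Recover fault slip from surface displacements d = G @ s by
    truncated SVD, discarding small singular values that make the
    Green's operator matrix effectively singular."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = s > rel_cutoff * s[0]            # drop near-null directions
    s_inv = np.where(keep, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ d_surface))

# Columns of G come from the forward FEM: apply a unit slip at one
# interface DOF (split-node loading) and record surface displacements.
```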
  • Forough Mohammad Amini, Elham Shabani*, Noorbakhsh Mirzaei Pages 247-262
    Estimating the size of the largest earthquake that may be generated by a given fault or earthquake source is among the important parameters affecting seismic hazard analysis. One way to estimate this magnitude is to use empirical relationships between magnitude and the parameters of the causative fault. The main goal of this study is to provide the best possible empirical relationships between magnitude and fault parameters using several suitable regression methods, based on a compiled and updated catalog of earthquakes of the region. For this purpose, rupture parameters, including surface rupture length and maximum horizontal and vertical displacements, together with magnitudes M_w and M_s greater than 5.5, reported in the instrumental period up to the end of 2014 for earthquakes of Iran and adjacent regions, were carefully compiled. Three regression models were employed: standard regression (SR), inverted standard regression (ISR), and orthogonal regression (OR); a sketch of the three fits follows this entry. It is shown that the coefficients between magnitudes and fault parameters in the relations obtained by the SR and OR methods are estimated with smaller errors than by the ISR method. Conversely, in the inverse relations, i.e. surface rupture length and maximum vertical and horizontal displacements as functions of M_w and M_s, the coefficients obtained by the ISR method have the lowest error relative to the other methods. The results of this study are compared with similar worldwide relationships. For example, in the relation between M_s and surface rupture length, the commonly used global average relation overestimates up to a magnitude of about 7.2 relative to the OR regression of this study, and underestimates beyond it.
    Keywords: empirical relationships, fault parameters, maximum magnitude, surface rupture length, maximum surface displacement
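    A minimal sketch of the three regression fits named above (SR, ISR, OR) for a magnitude versus log rupture-length pair; the orthogonal fit uses a total-least-squares solution via SVD, which is one standard way to realize OR but is an assumption about the paper's exact formulation.

```python
import numpy as np

def fit_sr(x, y):
    """Standard regression: minimize vertical offsets of y on x."""
    a, b = np.polyfit(x, y, 1)
    return a, b

def fit_isr(x, y):
    """Inverted standard regression: fit x on y, then invert the line."""
    c, d = np.polyfit(y, x, 1)
    return 1.0 / c, -d / c

def fit_or(x, y):
    """Orthogonal regression: minimize perpendicular offsets
    (total least squares via SVD of the centred data cloud)."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    vx, vy = Vt[0]                  # principal direction of the cloud
    a = vy / vx
    return a, y.mean() - a * x.mean()

# Example: M_s against log10(surface rupture length in km)
# a, b = fit_or(np.log10(L_km), Ms); Ms_pred = a * np.log10(L_km) + b
```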
  • Mehdi Maheri Peyrov, Abdolreza Ghods*, Majid Abbasi Pages 263-279
    In this study, to investigate lateral variations of crustal thickness in Iran, the variations of the group velocity of the maximum shear-wave displacement amplitude (ML) in the Iranian crust were obtained using seismic tomography. The data used in this study comprise 56152 ML velocity readings extracted from waveforms recorded by the permanent and temporary seismic networks of Iran. The selected earthquakes include 2943 events grouped into 63 seismic clusters with location errors of 5 km or less. Using a constrained least-squares method, a direct inversion was performed to obtain a two-dimensional ML velocity map together with station and source correction terms (a damped least-squares sketch follows this entry). The shear-wave velocity map obtained in this study is very similar to the Pn velocity map, indicating a strong dependence of the Lg velocity on variations of crustal thickness and upper-mantle velocity. Velocities higher than 4 km/s for the South Caspian basin and the Zagros indicate the blockage of Lg propagation in these regions. The velocity boundary between the Zagros and Central Iran deviates considerably from the Main Zagros Fault, which may indicate partial underthrusting of the Arabian plate beneath Central Iran. The Lut region has low Lg velocity, which may indicate a continental nature for the Lut crust. The Alborz, most of Central Iran, and especially northwestern Iran show low Lg velocities, which may be related to a warm crust in these regions.
    Keywords: Lg wave, shear-wave velocity, seismic tomography, constrained least squares, South Caspian basin, Zagros
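    A minimal sketch of a damped (constrained) least-squares inversion of ray-averaged velocities for a 2D slowness grid with station and cluster correction columns appended; the ray-path matrix construction and damping value are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def damped_lsq_tomography(A, t_obs, damping=1.0):
    """Solve (A^T A + lambda^2 I) m = A^T t for a slowness-and-correction
    model m, where row i of A holds the path lengths of summary ray i
    through each cell plus 0/1 entries for its station and cluster terms."""
    n = A.shape[1]
    lhs = A.T @ A + damping**2 * np.eye(n)
    return np.linalg.solve(lhs, A.T @ t_obs)

# t_obs are ML travel times (ray length / measured average velocity);
# the cell slownesses invert to the 2D velocity map, and the trailing
# entries of m are the station and cluster correction terms.
```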
  • Zinat Mehdidoust Jalali, Asadollah Shahbahrami* Pages 281-292
    Expert systems are commonly used in earthquake prediction. These expert systems use various parameters such as fault behavior, radon concentration, energy, pulses, and the number of shocks. By examining these parameters, the likelihood of an earthquake occurrence can be estimated. The prediction accuracy of such expert systems is relatively higher than that of non-intelligent methods. In this paper, the accuracy and the types of data used in different kinds of expert systems for earthquake prediction were first studied. In addition, earthquake prediction with an expert system based on several algorithms, namely support vector machines, decision trees, artificial neural networks, support vector machines optimized by particle swarm optimization, Bayesian classifiers, and multilayer perceptron networks, was implemented in the RapidMiner environment (a comparison sketch follows this entry). The results showed that the expert system based on a support vector machine optimized with the particle swarm algorithm has better prediction accuracy than expert systems based on neural networks, support vector machines, Bayesian classifiers, decision trees, and multilayer perceptron networks.
    Keywords: prediction, expert system, earthquake
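    A bare-bones sketch of the winning combination named above, a particle swarm tuning the (C, gamma) hyperparameters of an RBF support vector machine; the feature matrix X of precursor parameters and labels y are assumptions, and the paper's actual RapidMiner workflow is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pso_tune_svm(X, y, n_particles=10, n_iter=15, seed=0):
    """Tune (log10 C, log10 gamma) of an RBF SVM with a minimal particle
    swarm that maximizes 5-fold cross-validated accuracy."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform([-2, -4], [3, 1], size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def score(p):
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(clf, X, y, cv=5).mean()

    pbest = pos.copy()
    pbest_val = np.array([score(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([score(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return 10 ** gbest[0], 10 ** gbest[1], pbest_val.max()
```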
  • Mehrdad Soleimani Monfared*, Ali Khalilzadeh Pages 293-308
    Conventional methods of reflection seismic data processing usually do not give satisfactory results when facing complex structures or strong lateral velocity variations. Accordingly, in modern seismic imaging methods the algorithms are designed so that seismic images can be produced independently of the velocity model. The common reflection surface (CRS) method, as a velocity-model-independent method, yields an acceptable seismic image in structures with complex geometry. In this research, we attempted to combine the advantages of the CRS method with the low sensitivity of Kirchhoff pre-stack time migration to the velocity model, so that by integrating the two methods some of the imaging problems in complex structures could be overcome (a Kirchhoff summation sketch follows this entry). After establishing the integration strategy and the required equations, seismic data from an area in western Iran were processed. The results were also compared with the conventional pre-stack time migration method. The comparison showed that the seismic image obtained by integrating the CRS method and pre-stack time migration revealed the target structures more completely and more clearly. Another advantage of integrating the two methods was the use of a very simple velocity model to produce the final section, an issue of great importance in conventional migration methods. It can therefore be stated that in areas with complex structures, a seismic image of acceptable quality can be produced by integrating these two methods.
    Keywords: seismic imaging, common reflection surface, common diffraction surface, pre-stack time migration, conflicting dips, complex geological structure
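    A minimal sketch of the Kirchhoff time-migration summation underlying the integration described above: energy is summed along the diffraction hyperbola t(x) = sqrt(t0^2 + 4 (x - x0)^2 / v^2) of every image point. The zero-offset setting, grid sizes, and constant migration velocity are illustrative; the paper's CRS operator would replace the plain hyperbola.

```python
import numpy as np

def kirchhoff_time_migration(section, dt, dx, v):
    """Migrate a zero-offset section (nt x nx) by summing amplitudes
    along the diffraction hyperbola of every image point (x0, t0)."""
    nt, nx = section.shape
    xs = np.arange(nx) * dx
    image = np.zeros_like(section)
    for ix0 in range(nx):
        offsets = xs - xs[ix0]
        for it0 in range(nt):
            t0 = it0 * dt
            # two-way diffraction traveltime at each surface position
            t = np.sqrt(t0**2 + (2.0 * offsets / v) ** 2)
            it = np.rint(t / dt).astype(int)
            ok = it < nt
            image[it0, ix0] = section[it[ok], np.nonzero(ok)[0]].sum()
    return image
```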
  • Rashed Poormirzaee*, Ahmad Zarean, Rasoul Hamidzadeh Moghadam Pages 309-323
    Studying and modeling the velocity properties of near-surface zones is of particular importance in geotechnical and earthquake engineering because of their direct relationship to the urban facilities built on them. In recent years, refraction microtremor (ReMi) has become popular for estimating dispersion curves and ultimately modeling shear-wave velocity, owing to its low cost and fast data acquisition. The main difficulty in processing these data, however, is the inversion of the dispersion curve to estimate shear-wave velocities. In the present paper, a joint inversion of ReMi data (Rayleigh waves) and refraction data (travel times) is proposed, using multi-objective genetic algorithm optimization and the Pareto-front concept, to estimate the shear-wave velocity structure (a Pareto-front sketch follows this entry). The algorithm was programmed in the MATLAB environment. The proposed method was first evaluated on synthetic models and then, for further evaluation, applied to experimental data. For this purpose, a station in the south of Tabriz, an area mostly covered by Miocene-Pliocene rock units and pyroclastic deposits, was surveyed. The results obtained with the proposed algorithm were compared with a single inversion of the ReMi data using particle swarm optimization. The inversion results, for both the synthetic models and the experimental data, demonstrate the acceptable performance of the proposed algorithm as an effective method for joint inversion of geophysical data compared with other conventional methods. The algorithm is a suitable approach for reducing the non-uniqueness of inversion results.
    Keywords: joint inversion, microtremor, refraction waves, multi-objective optimization, shear-wave velocity, genetic algorithm
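    A minimal sketch of the Pareto-front idea used in the joint inversion above: each candidate layered model gets two misfits (dispersion curve and refraction travel times), and the non-dominated set is kept. The misfit values are placeholders; a real run would wrap forward modeling of Rayleigh-wave dispersion and refraction travel times.

```python
import numpy as np

def pareto_front(f1, f2):
    """Return indices of non-dominated models for two misfits to be
    minimized: no other model is strictly better in both objectives."""
    idx = []
    for i in range(len(f1)):
        if not np.any((f1 < f1[i]) & (f2 < f2[i])):
            idx.append(i)
    return np.array(idx)

# Inside a multi-objective GA generation:
# f1[i] = dispersion misfit of model i (ReMi Rayleigh-wave curve)
# f2[i] = refraction travel-time misfit of model i
# parents for the next generation are drawn from pareto_front(f1, f2)
```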
  • Morteza Erfanian Norozzadeh*, Vahid Ebrahimzadeh Ardestani Pages 325-335
    In this research, new computational relations are used to calculate the geometric parameters of an anomaly source based on an inclined cylinder model. Earlier methods mainly treated the horizontal and vertical orientations of the cylinder and, moreover, did not take its strike angle into account. The method used in this paper removes the limitations of previous methods and allows the cylinder to rotate freely in the subsurface. With this method, more variables can ultimately be examined, including the dip angle, strike, radius, depth to the top of the source, and density contrast of the cylinder. The new inversion relations of this paper are relations previously studied by Su and colleagues. Here, using 38 different models and the inversion algorithm used by Su and colleagues, the new relations are studied on further models. As in the previous study, multiple linear regression and the least-squares method are used for the inversion (a sketch follows this entry). In this algorithm, a linear relationship is established between the source parameters as independent variables and a set of attributes as dependent variables. The attributes are relations extracted from the coordinates of specific parts of the residual anomaly; these are the parts of the anomaly curve that change considerably as the parameters change. In this paper, a shaft in the Siah Bisheh area is used as a field study: the collapse zone in this shaft was detected by microgravity data and modeled by the method introduced in this paper using the inclined cylinder model.
    Keywords: linear regression, gravity anomaly, cylinder model, mine shaft, collapse zone
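    A minimal sketch of the multiple-linear-regression step: stack the anomaly attributes of the 38 forward models into a design matrix and solve by least squares for the coefficients linking attributes and source parameters. The attribute extraction itself is paper-specific and only assumed here as precomputed inputs.

```python
import numpy as np

def fit_parameter_regressions(attributes, parameters):
    """Least-squares fit of a linear map between anomaly attributes and
    cylinder parameters (dip, strike, radius, depth, density contrast).

    attributes : (n_models, n_attributes) matrix from forward models
    parameters : (n_models, n_parameters) known source parameters
    """
    A = np.column_stack([attributes, np.ones(len(attributes))])  # + intercept
    coeffs, *_ = np.linalg.lstsq(A, parameters, rcond=None)
    return coeffs              # shape (n_attributes + 1, n_parameters)

def predict_parameters(coeffs, attr_row):
    """Apply the fitted regression to attributes of an observed anomaly."""
    return np.append(attr_row, 1.0) @ coeffs
```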
  • Kilan Rastbin, Saeed Vatankhah*, Vahid Ebrahimzadeh Ardestani Pages 337-348
    In this paper, a method for the inversion of gravity data using an L1-norm regularization function is presented. Using this type of stabilizer drives the inverse problem toward compact solutions with sharp boundaries, and it is therefore suitable for reconstructing geological structures with discrete interfaces. The close relationship between the L1-norm regularizer and the compactness constraint is examined. To compute a solution that minimizes the L1-norm objective function, the iteratively reweighted least squares (IRLS) algorithm is used: at each iteration, the weighting function of the model parameters is updated using the model obtained in the previous iteration (a sketch follows this entry). The numerical solution of the inverse problem is carried out using the generalized singular value decomposition. The regularization parameter balancing the two terms of the objective function is computed using the unbiased predictive risk estimator (UPRE) method. To examine the performance of the method, synthetic data generated by a dipping dike are used. The model obtained from the inversion has relatively high resolution; the reconstructed boundaries, dip, and density contrast are close to the original model. The results indicate that using the L1-norm regularizer, together with the other required constraints, can be an effective method for delineating the boundaries of subsurface bodies. To demonstrate the practical performance of the method, gravity data acquired over the Gotvand Dam in southwestern Iran are used for modeling. The inversion results show relatively good agreement with the results of drilling carried out in the area.
    Keywords: gravimetry, inversion, regularization, L1 norm, regularization parameter
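    A minimal sketch of the IRLS loop described above: the L1-norm stabilizer is approximated at each pass by a reweighted L2 term with weights 1/sqrt(m^2 + eps^2), and each subproblem is a damped least-squares solve. The fixed regularization parameter stands in for the paper's UPRE choice, and depth weighting is omitted.

```python
import numpy as np

def irls_l1_inversion(G, d, reg=1e-2, eps=1e-6, n_iter=10):
    """Approximate min ||G m - d||^2 + reg * ||m||_1 by iteratively
    reweighted least squares (L1 replaced by a weighted L2 each pass)."""
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(m**2 + eps**2)        # IRLS weights for |m_i|
        lhs = G.T @ G + reg * np.diag(w)
        m = np.linalg.solve(lhs, G.T @ d)
    return m
```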
  • Vahid Entezar-Saadat, Seyed Hani Motavalli-Anbaran* Pages 349-356
    Because determining anomaly boundaries is very important in the interpretation of gravity and magnetic maps, various methods exist for this purpose. Some of these are the tilt angle filter (T), the total horizontal derivative of the tilt angle (THDT), the theta filter, the TDX filter, the hyperbolic tilt angle filter (HTA), and others. Local phase filters are very capable but suffer from one major weakness: their accuracy decreases for deep sources. This research investigates the ability of the normalized total horizontal derivative (NTHD) method to determine anomaly boundaries and compares it with other conventional methods (a sketch follows this entry).
    To examine the performance of the NTHD filter, it was applied to a synthetic cube model; for further evaluation, Gaussian noise was added to the synthetic cube model and the filter was applied to the noisy data. The results in both cases show the high performance of this filter. In addition, the NTHD filter and the other filters mentioned were applied to composite synthetic cube models at different depths and adjacent to one another. Nearly all methods succeeded in delineating the boundaries of the shallow anomalies, but the NTHD filter provides more detail than the other filters, showing a clear improvement in delineating the boundaries of deep anomalies located adjacent to shallow ones. Finally, the NTHD filter was applied to real data of the Mobrun ore body in Canada, where the results agree well with previous studies.
    Keywords: anomaly boundary detection, NTHD filter, local phase filters, gravity anomalies, total horizontal derivative, Mobrun ore body, Canada
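    A minimal sketch of the total horizontal derivative and one plausible windowed normalization of it, offered as an illustration of the NTHD idea named above; the paper's exact normalization may differ. Finite differences on a gridded field; the window size is an assumption.

```python
import numpy as np

def thd(field, dx, dy):
    """Total horizontal derivative of a gridded potential field."""
    gy, gx = np.gradient(field, dy, dx)
    return np.hypot(gx, gy)

def nthd(field, dx, dy, win=5):
    """Normalize the THD by its local maximum in a sliding window,
    which boosts the response of deep (low-amplitude) edges."""
    t = thd(field, dx, dy)
    pad = win // 2
    tp = np.pad(t, pad, mode="edge")
    out = np.empty_like(t)
    ny, nx = t.shape
    for i in range(ny):
        for j in range(nx):
            out[i, j] = t[i, j] / tp[i:i + win, j:j + win].max()
    return out
```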
  • Behzad Sarlak, Hamid Aghajani*, Ali Nejati Kalateh Pages 357-368
    The use of geophysical methods in archaeological exploration to determine anomaly boundaries before excavation can be useful and effective. Among these, the gravity method is one of the most widely used because it has no destructive environmental effects. In gravimetry, the density contrast between walls and chambers is studied. In this paper, gravity data and phase filters are used to investigate the walls of subsurface structures at the ancient site of Tepe Hissar, Damghan. In addition to the tilt angle, theta map, Laplacian, and hyperbolic tangent filters, a new filter is used to separate and enhance subsurface structures in synthetic data and in the real data of the Tepe Hissar site (a tilt-angle sketch follows this entry). The new filter, called the normalized derivative ratio, is obtained by normalizing the sum of the orthogonal derivatives along the horizontal axes. The results of examining the Tepe Hissar gravity data show that the separation and enhancement of the walls and chambers within the survey area agree closely with the results of the excavations carried out by the archaeology team.
    Keywords: gravimetry, Tepe Hissar, archaeology, edge detection, phase filters, normalized derivative ratio
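    A minimal sketch of the standard tilt-angle phase filter used alongside the paper's new filter (the normalized derivative ratio itself is paper-specific and not reproduced); the vertical derivative is computed in the wavenumber domain, a common choice.

```python
import numpy as np

def tilt_angle(field, dx, dy):
    """Tilt angle: arctan(vertical derivative / total horizontal
    derivative); its zero crossings track source edges."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX**2 + KY**2)
    dz = np.real(np.fft.ifft2(np.fft.fft2(field) * k))  # dT/dz via FFT
    gy, gx = np.gradient(field, dy, dx)
    return np.arctan2(dz, np.hypot(gx, gy))
```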
  • Jamaleddin Baniamerian, Behrooz Oskooi*, Maurizio Fedi Pages 369-391
    In this paper, some of the most common methods for estimating the depth and structural type of potential-field sources (magnetic and gravity) are discussed, including the analytic signal or total gradient, source parameter imaging (SPI), improved source parameter imaging (iSPI), Euler deconvolution, the local wavenumber, and the combined analytic-signal and Euler-deconvolution method (ANEUL). These methods are mostly based on derivatives of the field or various combinations of them (an Euler-deconvolution sketch follows this entry). A further group of methods, the multiscale methods, proposed in recent years for determining the depth and shape of sources, is then examined. As their name suggests, these methods analyze the field at different scales and are generally more stable against noise. For a better understanding of the methods introduced, they are applied to synthetic data generated by a model simulating a magnetic basement, and the results are compared with one another. It is also shown how the results of different methods can be used to validate the obtained solutions. Only two-dimensional structures are studied in this paper. Finally, these methods are used to analyze real data from a two-dimensional magnetic structure.
    Keywords: depth estimation, field derivatives, gravity field, magnetic field, multiscale analysis of potential fields, upward continuation
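    A minimal sketch of window-based 2D (profile) Euler deconvolution, one of the methods listed above: in each window, Euler's homogeneity equation is solved for the source position. The structural index N and the window length are user choices; the observation level is assumed to be z = 0.

```python
import numpy as np

def euler_2d(x, T, dTdx, dTdz, N, win=11):
    """Window-based Euler deconvolution on a profile. At z = 0 the
    homogeneity equation reads x0*Tx + z0*Tz + N*B = x*Tx + N*T, solved
    per window for source position (x0, z0) and background field B."""
    half = win // 2
    sols = []
    for c in range(half, len(x) - half):
        s = slice(c - half, c + half + 1)
        A = np.column_stack([dTdx[s], dTdz[s], N * np.ones(win)])
        b = x[s] * dTdx[s] + N * T[s]
        (x0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
        sols.append((x0, z0))
    # in practice, keep only solutions with small residual / depth scatter
    return np.array(sols)
```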
  • Mostafa Mousapour Yasouri, Safa Khazaei* Pages 393-403
    Most of the numerous methods offered for increasing edge resolution are based on horizontal or vertical derivatives of the potential field. A filter such as the total horizontal derivative (THD) is less effective at detecting the edges of deep sources, while filters such as TDX and theta are less sensitive to anomaly depth and detect edges more accurately than the THD filter. In this research, the balanced horizontal derivative (BHD) method and the normalized second-order total horizontal derivative (TDX2) are used to determine the boundaries of potential-field anomalies precisely (a sketch follows this entry). The performance of the filters was first examined on the data of a noise-free synthetic model, and the filters were then compared on the model data contaminated with 2% random noise. The results show that the horizontal edge coordinates obtained from the BHD and TDX2 filters are closer to the edge coordinates of the model than those from the TDX and theta filters. The BHD and TDX2 filters also separate the edges of anomalies produced by the interference of separate bodies with different amplitudes better than the other filters. In this research, the above filters were also applied to the gravity data of the qanat of the Institute of Geophysics. Based on the results, the TDX2 and BHD filters clearly show the general trend of the qanat by separating the qanat anomaly edges from the other anomalies present. The edge width obtained from these filters is about 1.27 m, whereas for the TDX and theta filters it is 2.7 m; given the geological information of the area, a qanat of the latter width is unlikely.
    Keywords: balanced horizontal derivative filter, edge detection, normalized second-order total horizontal derivative filter, potential field
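    A minimal sketch of the standard TDX filter that the paper uses as a baseline (the BHD and TDX2 variants are paper-specific and not reproduced here); derivatives computed as in the tilt-angle sketch above.

```python
import numpy as np

def tdx(field, dx, dy):
    """TDX edge filter: arctan(total horizontal derivative / |dT/dz|).
    Its maxima sit over source edges and are weakly depth dependent."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    dz = np.real(np.fft.ifft2(np.fft.fft2(field) * np.sqrt(KX**2 + KY**2)))
    gy, gx = np.gradient(field, dy, dx)
    return np.arctan2(np.hypot(gx, gy), np.abs(dz))
```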
  • Teymour Alizadeh*, Ghasem Azizi, Alireza Mohebalhojeh, Faramarz Khoshakhlagh Pages 405-417
    In this research, the spatiotemporal variations of cyclones over the Mediterranean were studied with a numerical algorithm for the period 1980 to 2013. For this purpose, ERA-Interim reanalysis data with a 0.5 × 0.5 degree grid resolution at the 1000 hPa level were used; the time step of the data is 6-hourly, and the domain from -10 to 64 degrees east longitude and 23 to 50 degrees north latitude was selected for this research. To identify cyclones, a 1000 hPa pressure-minimum algorithm was used, comparing each point with its 8 surrounding points; to remove shallow and thermal cyclones, the mean relative vorticity at the 800 hPa level was computed within a 300 km radius of the minimum point (a sketch follows this entry). The results showed that the cyclones identified under these conditions do not reach Iran from the Mediterranean Sea, and their maximum concentration in the Middle East is over the center of Iraq. Examination of the annual trend showed that the frequency of Mediterranean cyclones is increasing with a small positive coefficient, and this trend is stronger in the western Mediterranean than in the eastern Mediterranean. A notable point of this research is the spatial displacement of cyclones in the Mediterranean, which have shifted westward and to higher latitudes.
    Keywords: Mediterranean Sea, cyclone detection algorithm, center of gravity, temporal variations, spatial variations
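    A minimal sketch of the detection rule described above on regular latitude/longitude grids; the vorticity threshold and variable names are illustrative assumptions.

```python
import numpy as np

R_EARTH = 6371.0  # km

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dl = np.radians(lon2 - lon1)
    a = np.sin((p2 - p1) / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * R_EARTH * np.arcsin(np.sqrt(a))

def detect_cyclones(slp, vort800, lats, lons, vort_thresh=1e-5):
    """Flag 1000 hPa grid minima (lower than all 8 neighbours), then keep
    centres whose mean 800 hPa relative vorticity within 300 km is
    cyclonic, which removes shallow and thermal lows."""
    ny, nx = slp.shape
    lat2d, lon2d = np.meshgrid(lats, lons, indexing="ij")
    centers = []
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            patch = slp[i-1:i+2, j-1:j+2].copy()
            patch[1, 1] = np.inf                 # exclude the centre itself
            if slp[i, j] < patch.min():
                dist = haversine(lats[i], lons[j], lat2d, lon2d)
                if vort800[dist < 300.0].mean() > vort_thresh:
                    centers.append((lats[i], lons[j]))
    return centers
```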
  • Mir Reza Ghaffari Razin*, Behzad Vosoughi Pages 419-437
    Global Positioning System (GPS) signals provide valuable information about the physical structure of the ionosphere. With these observations, the total electron content (TEC) can be obtained for each line of sight between a receiver and a satellite. In this paper, measurements from 22 stations in northwestern Iran (44 < λ < 48, 36 < φ < 40) are used to determine the vertical total electron content (VTEC). Because of the sparsity of observations and the unsuitable spatial distribution of the stations, two models, a three-layer artificial neural network (MLP-ANN) and a radial basis function neural network (RBFNN), trained with the error back-propagation algorithm (BPA), are used for the spatiotemporal estimation of VTEC at other points (an MLP sketch follows this entry). Three well-distributed test stations were selected to assess the accuracy of the results. The minimum relative error at the test points is 1.40% for the three-layer artificial neural network and 1.88% for the radial basis function neural network. The low relative errors, together with the analysis performed, indicate the high capability of the GPS+MLP-ANN and GPS+RBFNN methods, compared with conventional interpolation methods, in representing the spatiotemporal variations of the ionosphere.
    Keywords: ionosphere, multilayer neural network, radial basis function neural network, IGS, GPS
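    A minimal sketch of an MLP regressor for spatiotemporal VTEC estimation in the spirit of the model above; the input features, hidden-layer size, and all hyperparameters are illustrative guesses, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X rows: (latitude, longitude, day-of-year, hour-of-day); y: VTEC (TECU).
# A single hidden layer echoes the paper's 3-layer MLP; training uses
# gradient back-propagation under the hood.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                 solver="adam", max_iter=2000, random_state=0),
)
# model.fit(X_train, y_train)
# rel_err = np.abs(model.predict(X_test) - y_test) / y_test * 100
```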
  • Sedigheh Omidi, Mohammad Hossein Memarian*, Saeed Farzaneh Pages 439-448
    To explore whether a relationship exists between variations of ionospheric parameters and the meteorological phenomenon of precipitation, the present paper analyzes and compares the variations of the daily maxima of the ionospheric vertical total electron content (VTEC), obtained by processing the raw data of two GPS stations, with the precipitation recorded at two synoptic stations co-located with the GPS stations, which differed considerably in the amount of precipitation during the studied interval. The analyses were carried out under geomagnetically quiet conditions. It was observed that a systematic relationship exists between variations of the ionospheric VTEC maxima and the occurrence of precipitation, such that precipitation peaks occurred after fluctuations of a particular amplitude in the VTEC maxima. Moreover, at the station where the variations of the VTEC maxima were relatively mild, stable tropospheric conditions were observed, while over the same interval tropospheric instability prevailed at the station where these variations were relatively strong.
    Keywords: GPS, VTEC, ionosphere, troposphere, precipitation
  • Seifollah Rasouli*, Elaheh Hariri, Siamak Khademi Pages 449-458
    In this paper, a new method is presented for determining atmospheric visibility by imaging a reflective linear grating with a sinusoidal reflectance function whose period varies. Visibility measurements were made at a fixed hour on different days of Azar and Dey 1391 (November 2012 to January 2013) having different visibilities. A sinusoidal linear grating with period varying between 7 and 110 mm was printed over an area of 1 m by 3 m on matte white cardboard; the grating was mounted on a suitable wooden frame and installed vertically at a height of 3 m above the ground. The grating was imaged from a distance of 475 m by a telescope with a CCD camera placed at its focus. The recorded images were analyzed using MATLAB and the local visibility (contrast) of the grating images was determined (a contrast sketch follows this entry). Dust in the atmosphere destroys the resolution of the smaller grating periods. By determining the smallest period observable in the images, the angular resolution limit at 475 m was determined. By comparing the smallest resolvable angle obtained with the smallest angle relevant to visual-range measurement by the unaided eye, and assuming the aerosol load is linear with path length, the visual range was obtained. The visibility values obtained by this method were compared with the visibility reported by the meteorological station of Zanjan airport, located near the measurement site, and good agreement between them is observed.
    Keywords: atmospheric visibility, linear grating with sinusoidal amplitude and variable period, imaging and light propagation through the atmosphere, fine particulates, dust
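    A minimal sketch of the local contrast measurement described above, assuming one row of the grating image as input; the window length is an illustrative assumption tied to the local grating period.

```python
import numpy as np

def local_visibility(image_row, win=50):
    """Michelson contrast V = (Imax - Imin) / (Imax + Imin) in sliding
    windows along one row of the grating image; where V falls below a
    detection threshold, that grating period is no longer resolved."""
    v = []
    for c in range(0, len(image_row) - win, win):
        seg = image_row[c:c + win].astype(float)
        v.append((seg.max() - seg.min()) / (seg.max() + seg.min()))
    return np.array(v)
```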
  • Seyedeh Samaneh Sabetghadam*, Masoud Khoshsima, Farhang Ahmadi-Givi Pages 459-467
    Visibility, besides being one of the most important optical properties of the atmosphere, is important from various aspects, including ground and air traffic. In the present paper, a historical database is examined to show visibility changes at four busy airports of the country, namely Tehran Mehrabad, Mashhad, Shiraz, and Isfahan, over a common 40-year period. The extinction coefficient is estimated using the Koschmieder relation (a sketch follows this entry). Long-term variations of extinction on sunny and relatively dry days are studied by filtering out conditions with relative humidity above 70% and cloud cover above 50%.
    The results show that the annual mean atmospheric extinction coefficient had a relatively mild increasing trend at all four airport stations over the period 1970 to 2010. A comparison of the lower and upper visibility limits shows no considerable change among the different stations; the lowest extreme visibility values belong to the Tehran Mehrabad station. Examination of the monthly variations of mean horizontal visibility indicates that the lowest values occur in the winter months, about 2 km lower than in the other months. The direct relationship between relative humidity and the extinction coefficient is probably due to the conversion of primary pollutants to secondary ones in the presence of moisture. The sensitivity study of the long-term extinction trend to relative humidity and cloudiness shows that applying the filter changes the absolute values of the data slightly, but the overall trend of extinction variations, especially at the Tehran and Isfahan stations, does not change, indicating a long-term decrease in visibility due to the increasing concentration of atmospheric pollutants.
    Keywords: atmospheric extinction coefficient, horizontal visibility, busy airports, humidity, long-term variations
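    A minimal sketch of the Koschmieder relation used above, together with the humidity/cloud filter; the 2% contrast threshold is the conventional choice (giving the familiar 3.912/V), an assumption about the paper's exact constant.

```python
import numpy as np

def koschmieder_extinction(visibility_km, contrast_threshold=0.02):
    """Koschmieder relation: extinction = -ln(eps) / V, i.e. about
    3.912 / V (1/km) for the usual 2% contrast threshold."""
    return -np.log(contrast_threshold) / np.asarray(visibility_km)

def dry_sunny_mask(rh_percent, cloud_percent):
    """Filter used above: keep records with RH <= 70% and cloud <= 50%."""
    return (np.asarray(rh_percent) <= 70) & (np.asarray(cloud_percent) <= 50)
```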
  • Hoshmand Zandi, Habib Rahimi* Pages 221-232
    The explanation of the elastic, or velocity, structure of the Earth has long been a goal of the world's seismologists. For the first few decades of seismological research, the investigation of velocity structure was restricted to the determination of one-dimensional models of the solid Earth and of various regions within it. Seismologists are currently obtaining three-dimensional velocity models and are working to resolve finer and finer features in the Earth. The knowledge of the seismic velocity structure of the crust and the upper mantle is important for several reasons: these include accurate location of earthquakes, determination of the composition and origin of the outer layers of the Earth, improvement of our ability to discriminate nuclear explosions from earthquakes, interpretation of large-scale tectonics, and reliable assessment of earthquake hazard. The Iranian part of the Alpine-Himalayan collision zone consists of an assemblage of lithospheric blocks and features a complex tectonic setting, which results from the collision and convergence of the Arabian plate towards Eurasia; thus, investigation of the structure of the lithosphere and the asthenosphere of the Iranian plateau is of great interest.
    The northwest of Iran is affected by important seismic activity concentrated along the North Tabriz Fault. In recent centuries, more than five successive and destructive seismic events have occurred along the North Tabriz Fault. The northwest of Iran is particularly rich in geological features, and most of NW Iran is located in a volcanic arc zone of Cenozoic age, including the Quaternary. Some of the main geological features of the northwestern Iranian plateau include the Sahand volcano, Urmia Lake, salt deposits, travertine deposits, springs, limestone caves, tectonic structures, and Cenozoic vertebrate fossils. The Sahand and Sabalan peaks are the most prominent geological as well as topographic features in the region.
    The aim of this study is to obtain two-dimensional tomographic maps of Rayleigh wave group velocity of the northwestern part of the Iranian plateau. To do this, local earthquake data from the period 2006 to 2013, recorded at 10 broadband stations of the International Institute of Earthquake Engineering and Seismology (IIEES), were used. First, Rayleigh wave fundamental mode dispersion curves were estimated using the single-station method. In the single-station method, after the preliminary corrections, Rayleigh wave group velocities for each source-station path were estimated using time-frequency analysis (FTAN). After estimating the group velocity dispersion curves, the group velocity tomographic maps for the period range 2-50 s were obtained using a 2D linear inversion procedure. Each tomographic map has been discretized with a grid of 0.5° in latitude by 0.5° in longitude. The results at a period of 5 s show a low velocity anomaly beneath the Sabalan volcano, whereas beneath the Sahand volcano a high velocity anomaly is observed. At a period of 10 s the results are different: beneath the Sabalan volcano a high velocity anomaly is observed, whereas beneath the Sahand volcano and also along the Urumieh-Dokhtar Magmatic Arc low velocity anomalies are observed. At a period of 20 s, a low velocity anomaly is observed in most of the study area. The results at a period of 40 s are different, such that a low velocity anomaly in the southern part and a high velocity anomaly in the northern part are observed.
    Keywords: tomography, group velocity, dispersion curves, fundamental mode, single-station method, linear inversion
  • Reza Mirashrafi, Naser Khaji* Pages 233-245
    In this paper, a theoretical approach is proposed, in which spatial distribution of the strength of interplate coupling between two faces of strike-slip faults is investigated in detail through the inverse analysis of synthesized geodetic data. Synthesized (or available) geodetic data representing free surface movements is implemented to determine the solution of undertaken inverse problem that computes slippage vectors’ rates. Analytical approaches for treatment of faults in crustal deformation analysis involve some limitations. One important limitation of these methods is idealization of uniform dislocation on a rectangular fault plane in a uniform medium or half space. In fact, the real source is more complex than that supposed in these models and thus only the first-order aspects of the source characteristics can be evaluated from a uniform dislocation model. Isotropic and homogeneous material properties are the main assumptions of these methods. The Finite Element Method (FEM) on the other hand, allows easy treatment of complex boundary shape (interface zone) and internal variations of material properties. The FEM can simulate source geometry flexibly, and is also able to regard geological regimes and various layered structures. The standard equations of inverse problems offer a straightforward way for finding slippage vectors at two faces of the considered fault. One of the new aspects of the current study is evaluation of Green’s Operator Matrix (GOM) by means of FEM. This concept enables us to overcome all limitations of traditional inverse methods. In other words, the Green’s functions are not only functions of interface’s geometry, but are also functions of some other parameters like far-field boundary conditions, and geological structures (various material properties) which are not regarded in the traditional analytical inversion analysis. To implement fault sliding in a continuum-based FEM program, Split Node Technique (SNT) as a simple and efficient method is applied. This method does not increase the number of Degree Of Freedom (DOF) and the global stiffness matrix of system remains unchanged, which is the major advantage of this method. Furthermore, no net forces or moments are induced on the finite element mesh. This method is a direct approach and does not need any iteration, which is a common feature of other methods (e.g., contact problem techniques, or interface/joint elements). The initial idea of SNT for simple one-dimensional element is developed to 2D and 3D domains in the present research. How to find the Green’s functions by the FEM? By applying unit slippage vectors in each DOF of the interface nodes, we can determine corresponding component of the GOM. As other common inversion problems, singularity of coefficient matrix is the main problem. This problem particularly emerges if the number of DOFs is too large. The numerical procedure does not fail algorithmically, however it returns a set of slippage vectors that are wrong, even though direct substitution back into the original equations results in acceptable free-surface deformations. Singular Value Decomposition (SVD) diagnoses precisely what the problem is. In some cases, SVD will not only diagnose the problem, it will also solve it. The approach in current research is based on kinematic modeling of seismological problem. In other words, we only investigate fault movement, not causes of the occurred movement. In this research, both forward and inverse steps are considered to completely solve the problem. 
The forward step is performed by applying the slip along the fault's faces and determining the displacement at the ground surface. This step is done using the FEM, whose results are compared with the analytical ones to verify the forward step. In the inverse solution, on the other hand, our goal is to reach the fault slip field using the surface displacements obtained from the first step as input data. Here, using this technique, 2D and 3D models of different types of strike-slip faults are presented in the elastic mode for splitting purposes. The final step is to verify the inverse solution obtained for all models, from which the coupled zones of the considered faults are determined with acceptable accuracy.
    Keywords: Fault, Discontinuity, Finite Element Method, Split node technique, Green's operator, Coupled zone
  • Forough Mohammad Amini, Elham Shabani *, Noorbakhsh Mirzaei Pages 247-262
    To assess seismic hazard, it is essential to estimate the potential seismicity, particularly the size of the largest earthquake that can be generated by a distinct fault or earthquake source. One of the methods applied to estimate earthquake magnitude is the use of empirical relationships between magnitude and fault parameters. Fault parameters for the earthquakes from the early instrumental period to 2014 are compiled to develop a series of empirical relationships among M_w and M_s and surface rupture length and horizontal and vertical displacements. The objective of this study is the provision of an accurate piece of information about previous earthquakes through studies and investigations, as well as a comprehensive and thorough catalog of the recent earthquakes, so as to represent precise empirical relationships between the magnitude and earthquake fault parameters. Earthquakes with magnitudes M_w and M_s ≥ 5.5 were selected. In the beginning, the relationships between magnitudes and fault parameters were acquired; then the database was separated on the basis of three slip types, consisting of strike-slip, normal and thrust, and the corresponding relations were obtained. One also should not overlook the fact that three regression models, including SR, ISR and OR, were employed in this study. In the SR and ISR methods, no errors are considered for independent variables, whereas such errors are taken into account in the OR method. The OR method is obtained by minimization of the squares of the orthogonal distances to the best-fit line, whereas SR is derived by minimizing the squares of the vertical offsets, and inverted standard least-squares regression (ISR) is derived by minimizing the squares of the horizontal offsets. Roughly equal uncertainties for the two variables are regarded in the SR and OR methods. According to the obtained results, the best relationships between M_w and M_s and surface rupture length were established with correlation coefficients of 0.87 and 0.86, respectively. Also, the relationships between M_s and maximum horizontal and vertical displacements, with correlation coefficients of 0.63 and 0.62, respectively, are far better than the relationships between M_w and maximum horizontal and vertical displacements, with correlation coefficients of 0.59 and 0.59. In addition, the relationships between magnitudes (M_w or M_s) and maximum horizontal displacements indicate a better fit than the relationships between the magnitudes and maximum vertical displacements. It is also worth mentioning that the best fit between M_w or M_s and surface rupture length was acquired, with correlation coefficients of 0.87 and 0.86, respectively, by separating the database based on slip type. Whatever was mentioned about relationships between magnitudes and maximum horizontal and vertical displacements remains valid after separating the database on the basis of slip type. According to the results of the direct relations, the SR and OR regression methods estimate the coefficients with less error than the ISR regression method. What is more, in the inverse relations, the ISR regression method estimates the coefficients with the lowest error rate in comparison with the other methods. As an outcome, the relationships established in the current study are better suited than the global relationships for Iran and its adjacent regions.
For example, regarding the relationship between M_s and surface rupture length, the utilized global relation overestimates up to about M = 7.2 with respect to the OR regression, and underestimates beyond that. The obtained relations have also been compared to two global relations for the relationship between M_s and surface rupture length. The results would seem to suggest that both global relations overestimate up to about M = 6.5 and underestimate beyond, in comparison with the regression methods of this study.
    Keywords: Empirical relationships, Source parameters, Maximum magnitude, Surface rupture length, Maximum surface displacement
  • Mehdi Maheri Peyrov, Madjid Abbasi, Abdolreza Ghods* Pages 263-279
    We use ML shear wave velocity to derive a high resolution 2D ML shear wave velocity map for the Iranian Plateau. The ML amplitudes and arrival times are routinely measured for the calculation of local magnitude. ML shear wave velocity is very sensitive to the lateral change of crustal thickness and switches between the velocities of Lg and Sn waves. An Lg wave will die out as soon as it encounters a sudden crustal change in favor of the formation of a mantle Sn wave. The collected database consists of 56152 ML velocity readings belonging to 2943 precisely relocated events that happened during 1996 to 2012. The arrival times of ML amplitudes were read from waveforms of permanent and temporary networks in Iran. Using the arrival time of an ML amplitude and its ray length, we calculate the average shear wave velocity for each ray. The selected events consist of 63 clusters with epicentral location uncertainty of 5 km or less. The cluster approach adopted in this work allows us to easily calculate an empirical velocity error for each summary ray connecting a given observing station to the corresponding cluster. This also reduces drastically the number of initial 56152 rays to just 3107 summary rays and thus significantly reduces the required computation time for the seismic tomography. Except for the Makran region, the summary rays provide a good coverage for most of Iran. Using a constrained direct damped weighted least squares inversion scheme, we inverted the ML velocities for a 2D ML shear wave velocity map of Iran along with its cluster and station correction terms. In our tomography, we constrained the velocity of each cell based on the azimuthal coverage of the hitting rays. The input average velocity for each ML ray was also weighted based on its empirical reading spread. The computed ML shear velocity varies mostly between 2.9 and 3.6 km/s, suggesting that the majority of the rays are indeed Lg rays. The map shows a general similarity with previous maps of Pn velocity, indicating that ML shear wave velocity is strongly affected by lateral changes of crustal thickness and upper mantle velocity. Our results show that the Caspian Basin and Zagros regions are Lg blocking regions. We speculate that the blockage of the Lg wave in Zagros is related to strong lateral crustal thickness changes caused by the orogenic processes. We also noted that the shear wave velocity border between the Zagros and Central Iran deviates considerably from the Zagros suture line, indicating a partial underthrusting of the cold Arabian plate beneath Central Iran. The Lg blockage in the South Caspian basin is either related to its postulated oceanic-type crust and/or a strong lateral change in its crustal thickness. The east of the Caspian Sea shows high velocities like its interior, implying that the low plain is underlain by either an oceanic-type crust or a transitional crust with large lateral variations of crustal thickness. The ML velocity map also shows a velocity in the range of Lg velocity for the Lut block, implying a continental nature for the little-known Lut block. The Alborz, most of Central Iran and especially northwestern Iran show rather low Lg velocities, suggesting a warm continental crust.
    Keywords: seismic tomography, constrained least squares, Zagros, South Caspian Basin, Lg wave, shear wave velocity
  • Zinat Mehdidoust Jalali, Asadollah Shahbahrami* Pages 281-292
    A crisis caused by an incident or accident occurs suddenly and unexpectedly, so that urgent attention is needed for proper decision making. Despite technological advances, the suffering caused by natural disasters such as earthquakes, floods, avalanches, hurricanes, volcanoes and fires remains severe. Mining activities have always been associated with hazards, the so-called mine risks. Seismic hazard in underground mines is one of the threats to human life, and seismic hazard identification is much more difficult than the identification of natural earthquake hazards. Advanced seismic and seismoacoustic systems are used for prediction, but the accuracy of the information they generate is not optimal. The complexity of seismic processes and the disproportion between the many low-energy seismic events and the few high-energy phenomena (e.g., greater than 10^4 J) mean that statistical techniques are not sufficient to predict seismic hazard. The search for better approaches to risk prediction has therefore become imperative. Clustering techniques for seismic hazard data and artificial neural networks have been used for prediction. In most applications, the results obtained by the listed methods are reported as "dangerous" or "safe" states. The unbalanced distribution of positive samples (dangerous states) and negative samples (safe states) is a serious problem in seismic hazard forecasting; the methods currently used cannot be expected to achieve adequately high sensitivity. A number of factors related to the risk of earthquakes have been proposed, among them shocks with energy greater than 10^4 J. Earthquake prediction can be defined with different methods, but seismic hazard assessment is the main goal of all of them. In some cases, using data on time and date, an increase in seismic activity that could lead to the failure of rocks under pressure (rockburst) is predicted.
    One cannot rely on a single parameter to judge the occurrence or non-occurrence of earthquakes in a given area or within a specified time window. Several parameters should be studied simultaneously to obtain an acceptable technique for predicting earthquakes, and one way to do this is to use an expert system. Expert systems make it possible to use these parameters together in a better and smarter way. For example, hydrological analyses consider only changes in water ions; other parameters such as fault behavior, changes in sea level, etc., are not considered for predicting earthquakes. On the other hand, not all expert systems use precursors to predict earthquakes: some expert systems predict future earthquakes from the time, location and depth of previous earthquakes. Studies have shown that decision trees and artificial neural networks predict large earthquakes with low accuracy. In this article, a support vector machine based on particle swarm optimization predicted large earthquakes with higher accuracy than some other expert systems.
    Expert systems are commonly used in earthquake prediction. In these expert systems, various parameters are used such as fault behavior, the concentration of radon, energy, pulses and the number of shocks. The occurrence of earthquakes can be estimated by checking these parameters. The accuracy of earthquake prediction by expert systems is relatively higher than that of non-expert-system methods. In this paper, the accuracy and the kinds of data used in different expert systems in the field of earthquake prediction were studied. In addition, earthquake prediction with an expert system based on different algorithms, namely support vector machines, decision trees, neural networks, support vector machines based on particle swarm optimization, Bayesian classifiers and multilayer perceptron networks, was implemented in the RapidMiner environment. The results show that the expert system based on a support vector machine optimized by the particle swarm algorithm has better prediction accuracy than expert systems based on neural networks, support vector machines, Bayesian classifiers, decision trees and MLP networks.
    Keywords: Earthquake, expert system, Prediction
  • Mehrdad Soleimani Monfared*, Ali Khalilzadeh Pages 293-308
    The conventional approach to seismic data analysis consists of two main steps: estimating seismic velocities (the subsurface macro-model) and seismic imaging (mapping of the reflected seismic energy to the reflector positions). The aim and the major challenge in seismic data analysis is the construction of the best undistorted image. This challenge becomes more problematic when geometrical complexity and lateral heterogeneity increase. It is obvious that conventional reflection seismic data processing methods cannot solve the problem of seismic imaging in complex geological structures, because most of those processing methods depend strongly on the seismic velocity model, and obtaining a velocity model as accurate as possible is always a challenging task. For this purpose, imaging methods are employed that do not rely on explicit knowledge of the subsurface velocity model. Therefore, in most research on new seismic imaging methods, efforts are oriented toward developing velocity-independent imaging algorithms. The first idea of velocity-independent time-domain seismic imaging belongs to authors who considered decomposing seismic data into a range of local slopes. Then methods that consider inversion of the full waveform from the data were introduced. However, these methods are not the ultimate solution, because the velocity model is still needed for final depth imaging. Thus, some methods have been introduced to use the advantage of pre-stack migration with iterative velocity model updating while using seismic imaging methods that do not fully rely on the velocity model. These integration methods aim to combine a new updating formula for the first part of the processing chain (estimating seismic velocities) with new velocity-independent methods for the second part (seismic imaging).
    Time migration is a common, fast and robust process for obtaining a seismic image. This process is considered adequate for areas with mild lateral velocity variation. Moreover, time migration produces images in very specific time migration coordinates (x0, t0). However, even mild lateral velocity variations can significantly distort subsurface structures on time migrated images. The main reason such velocity variations distort the section is that reflected and diffracted energies are placed in wrong positions. If this displacement could be partly removed by some operator, e.g. a Kirchhoff migration operator, in each offset section, another surface operator could be used to stack the offset sections and enhance the final migrated section. In this study, we selected pre-stack time imaging with the Kirchhoff migration algorithm and the common reflection surface (CRS) stack method for integration. The CRS stack method is among the velocity-independent methods used for imaging in complex structures. The CRS operator gathers any reflected and/or diffracted energy that cannot be gathered by the conventional Kirchhoff summation operator. Thus, if the geometrical distortions are corrected by the Kirchhoff operator and the reflected energies are placed in their true locations, the CRS operator collects all the related diffracted energy from a depth point and coherently stacks these energies to image that point. The Kirchhoff operator defines the wave-field parameter ΔP at each (x, t) point. To integrate the two methods, the CRS operator is created for each (x, t) point and inserted into the Kirchhoff summation. In the CRS equations, the angles are the take-off and emergence angles of the central ray, the Ki are wave-front curvatures, and aCO and bCO are related to x and t, respectively. The diffraction curve is obtained for each point and the CRS surface is created for that point.
    To investigate the efficiency of this method, the algorithm was applied to 2D seismic data. These data are from the west of Iran, an area with complex geometry and mild to strong lateral velocity change. After the pre-processing steps, a smooth initial velocity model was derived for performing ray tracing. Kinematic ray tracing was used to define the common reflection surface operator. Afterwards, the data were processed by Kirchhoff migration. In the next step, the velocity model was corrected for residual moveout. Finally, the pre-stack data were migrated again with the new corrected velocity model; this section should be compared with the result of the PSTM and CRS integration method. The new migrated section better shows faulting and bending of the reflectors. The high thickness of the Gachsaran formation in the region and the strong lateral velocity change in different parts of the section result in poor illumination of the structures beneath the Gachsaran. However, the new algorithm gathers as much reflected and/or diffracted energy from those structures as possible in the data. Therefore, clearer structures and reflectors are observed in the section, and the general quality of the data is enhanced. Finally, it can be concluded that applying the proposed integration method gives a high-quality image by increasing the signal-to-noise ratio and solving the problem of conflicting dips.
    Keywords: Seismic imaging, Common reflection surface (CRS), Common diffraction surface (CDS), Pre stack time migration (PSTM), Conflicting dips, Geological complex structure
  • Rashed Poormirzaee*, Ahmad Zarean, Rasoul Hamidzade Moghadam Pages 309-323
    It is very important to study and simulate the S-wave structure of the near surface (alluvial parts, in particular) owing to its direct relationship to urban facilities in geotechnical and earthquake engineering studies. So, in seismic microzonation, the first step is to study and identify the S-wave pattern in alluvium in order to categorize different parts of cities according to S-wave velocity. Current techniques of estimating shallow shear velocities for the assessment of earthquake site response are too costly for use at most construction sites: they require large sources to be effective in noisy urban settings, or specialized independent recorders laid out in an extensive array. Recently, refraction microtremor (ReMi) data have been frequently used for estimating dispersion curves and simulating the velocity of S-waves, because the ReMi method is fast and cheap. However, inversion is the main problem in processing ReMi data for estimating the velocity of S-waves. With the development of computer science, the emergence of single- and multi-objective optimization techniques and the inspiration of science from nature, an opportunity has been provided to decrease the non-uniqueness of inversion and find the best possible solution. In this study, the joint inversion of microtremor and seismic refraction data is proposed using multi-objective genetic algorithm optimization and the Pareto-front concept for estimating S-wave velocity. After programming the multi-objective genetic algorithm in Matlab, its efficiency was investigated with synthetic models and real datasets. The real datasets were obtained from one station in the southern part of Tabriz (near Elgoli Road), underlain by Miocene-Pliocene and pyroclastic bedrock. For the actual dataset, we used refraction microtremor (ReMi) as a passive method to obtain Rayleigh waves and seismic refraction as an active method to obtain travel times. For ReMi and seismic refraction data acquisition, the same layout can easily provide both P-wave travel times and surface wave dispersion curves if the sampling parameters are properly designed to satisfy the requirements of the two techniques. In the current study, the ReMi and seismic refraction surveys were performed using an OYO 24-channel seismograph and 4.5 Hz and 28 Hz geophones with a receiver spacing of 5 m. For ReMi, unfiltered 17-second records were collected at the study site. Also in this study, resistivity data are used as auxiliary information; the resistivity method can provide information about the bedrock and water table in the study area. For this goal, resistivity measurements were carried out using a high-resolution RESECS resistivity meter system, with a Wenner array of 32 electrodes (2 m unit electrode spacing) for the measurement of a 1-D resistivity imaging profile. For the evaluation of the proposed joint inversion algorithm, the results were compared with a single inversion of the ReMi data by the particle swarm optimization (PSO) algorithm. Using the joint inversion algorithm, a three-layer subsurface model was found, in which the first-layer velocity is 321 m/s with a thickness of 5.8 m, the second-layer velocity is 365 m/s with a thickness of 4.6 m, and the last-layer velocity is 547 m/s. The results of inversion for both the synthetic and real datasets proved the reliability of the proposed method, as a powerful technique for joint inversion, in comparison with current methods. Also, with the Pareto concept, the quality of the inversion procedure can easily be assessed, because the symmetry of the Pareto front depends strongly on the accuracy of the estimates.
By using the joint inversion algorithm, we can achieve a more accurate Vs structure and decrease the non-uniqueness of Rayleigh wave inversion.
    Keywords: joint inversion, microtremor, seismic refraction, multi-objective optimization, shear wave velocity, genetic algorithm
  • Morteza Erfanian Norozzadeh*, Vahid Ebrahim Zadeh Ardestani Pages 325-335
    In this paper, a new formula is applied for the calculation of gravity anomalies from a cylinder model representing a geological body. Compared to conventional methods, this new development allows the cylinder to be freely oriented in space. 38 gravity forward models are produced each of which anomaly attributes are calculated. Then a linear relationship is established between attributes and source parameters as a new formula. These attributes are relations which calculated from coordinates of special parts of residual gravity anomaly curve. Using this linear relationship, source parameters can be then estimated by gravity anomaly attributes. Linear relationship is obtained by least-square method and minimizing differences. Consequently, Compared to previous methods, this new development considers multiple factors that have impact on geophysical observations (some neglected in previous studies) and more variables are considered such as dip angle, strike direction, size, depth to the top, and density of the cylinder. These parameters are important for determination of the geometric of subsurface geological body. Based on a series of forward modeling using the new formula, a multiple linear regression system has been developed. The multiple linear regression method relates the variations of residual gravity anomaly which is changed by variations source parameters. Based on previous studies, we suppose that the shape as well the amplitude of the gravity anomaly will change with the changes of cylinder occurrence. We use the multiple linear regression to examine if there is a linear relationship between each parameter of the cylinder and a series of attributes from the gravity anomaly. Actually, in this algorithm, we assessed the variability of a residual gravity anomaly as a function of the source occurrence. In previous study, seven most significant parameters of a cylinder-like body were identified, which influence the shape of the gravity anomaly as well its intensity. We build up a linear regression system that contains six linear relationships between the six most significant parameters of a cylinder-like body and seven attributes from the gravity anomaly. Those equations allow the user to estimate the multiple parameters of an elongated geological body simultaneously under the constraint of gravity observations. Integrating those calculations into gravity surveys helps in making a drilling decision. In this article, a shaft situated in Siah bisheh dam has been considered as a case study to verify proposed formula. This case study was a part of The Siah Bishe Pumped Storage project. The project was located in the Alborz mountain range, 125 km north of Tehran. The site can be reached on the main Chalus road, connecting Tehran with the Caspian Sea. The project area lies in the southern part of the Paleozoic- Mesozoic Central Range of the alpine Alborz mountain chain. The rock sequences in the project area consist of massive limestones, detrital series (sandstones, shales) and volcanic rocks of Permian formations, Triassic dolomites and Jurassic formations with black shales and sandstones. Several tectonic faults are crossing the project alignment. In this area, main purpose of gravity surveying was exploration of collapse zones. The gravity data used in this study come from gravity department of institute of geophysics in IRAN. The spatial resolution of the original gravity observations was 15 m between stations. 
The Bouguer anomaly grid was then interpolated to a spacing of 3 m using a minimum-curvature gridding algorithm. The residual Bouguer anomaly was obtained by removing a first-order polynomial trend from the Bouguer anomaly. Moreover, the engineering and geological parameters of the shaft were calculated. The collapsed zone in the shaft was delineated using the microgravity data, and the new method was applied to the residual gravity anomaly curve of the collapsed zone. Finally, this zone was also modeled mathematically as an inclined cylinder.
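    As a minimal illustration of the regression step described above, the following Python sketch fits one linear relationship between seven hypothetical anomaly attributes and a single source parameter (depth to the top) by least squares; the attribute values are synthetic placeholders, not the authors' data.

```python
import numpy as np

# Hypothetical design matrix: each row holds attributes extracted from one
# forward-modeled residual anomaly curve (e.g. peak amplitude, half-width,
# asymmetry). 38 forward models -> 38 rows, as in the study.
rng = np.random.default_rng(0)
n_models, n_attributes = 38, 7
A = rng.normal(size=(n_models, n_attributes))

# Hypothetical source parameter (depth to the top) for each forward model.
true_coeffs = rng.normal(size=n_attributes)
depth = A @ true_coeffs + rng.normal(scale=0.05, size=n_models)

# Least-squares fit with an intercept column, minimizing the residual norm.
A1 = np.column_stack([A, np.ones(n_models)])
coeffs, *_ = np.linalg.lstsq(A1, depth, rcond=None)

# Estimate the parameter for a new observed anomaly's attributes.
new_attributes = rng.normal(size=n_attributes)
estimate = np.append(new_attributes, 1.0) @ coeffs
print(f"estimated depth to top: {estimate:.3f}")
```

    One such relationship would be fit for each of the six source parameters, giving the six-equation regression system described above.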
    Keywords: multiple linear regression, gravity anomalies, cylinder model, shaft
  • Saeed Vatankhah* Pages 337-348
    In this paper, the inversion of gravity data using an L1-norm stabilizer is considered. Inversion is an important step in the interpretation of data. In gravity data inversion, the goal is to estimate the density and geometry of the unknown subsurface model from a set of observations measured on the surface. Commonly, rectangular prisms are used to model the subsurface under the survey area; the unknown density contrast within each prism is the parameter to be estimated. The inversion of gravity data is an underdetermined and ill-posed problem, i.e. the solution can be non-unique and unstable, so regularization must be imposed to find an acceptable solution. The solution is usually obtained by minimizing a global objective function consisting of two terms, the data misfit and the regularization term. The data misfit measures how well the obtained model reproduces the observed data; since the noise in gravity data is usually assumed to be Gaussian, an L2-norm measure of the error between observed and predicted data is well suited. There are several choices for the stabilizer, depending on the type of features one wants to recover in the inverted model. A typical choice is the L2-norm of a low-order differential operator applied to the model, into which a priori information and depth weighting can also be incorporated (Li and Oldenburg, 1996). In this case the objective function is quadratic, and its minimization leads to a linear system of equations. However, the models recovered in this way are characterized by smooth features, which are not always consistent with real geological structures; there are situations in which the sources are localized and separated by sharp, distinct interfaces. To deal with this problem, researchers have proposed several other stabilizers over the last decades. Last and Kubik (1983) presented a compactness criterion for gravity inversion that seeks to minimize the area (or, in 3-D, the volume) of the causative body. Portniaguine and Zhdanov (1999), building on this stabilizer, which they named the minimum support (MS), developed the minimum gradient support (MGS) stabilizer. For both constraints, the regularization term can be written as a weighted L2-type norm of the model, so the minimization of the objective function can be treated in the same way as the conventional Tikhonov functional; the only difference is that an a priori variable weighting matrix for the model parameters is incorporated in the regularization term, and an Iteratively Reweighted Least Squares (IRLS) algorithm is required to solve the problem. Another possibility for the stabilizer is minimization of the L1-norm of the model or of its gradient, the latter corresponding to total variation regularization. The L1-norm stabilizer allows the occurrence of a few large elements in the inverted model among mostly small values; it can therefore be used to obtain sharp boundaries and blocky features. Although the L1-norm stabilizer has favorable properties for reconstructing sparse models, its numerical implementation in a minimization problem can be difficult, because its derivative with respect to an element is not defined at zero. To overcome this difficulty, in this paper the L1-norm stabilizer is approximated by a reweighted L2-norm term. The algorithm is extended to the gravity inverse problem, in which depth weighting and other a priori information need to be included in the objective function.
To estimate the regularization parameter, which balances the two terms of the objective function, the Unbiased Predictive Risk Estimator (UPRE) method is used. The solution of the resulting objective function is found using the Generalized Singular Value Decomposition (GSVD), which also provides an efficient determination of the regularization parameter at each iteration. A simulation using synthetic data of a dipping dike demonstrates that the method is capable of reconstructing a focused image; the boundaries and slope of the reconstructed model are close to those of the original model. The method is applied to gravity data acquired over the Gotvand dam site, in the southwest of Iran. The results show rather good agreement with those obtained from boreholes.
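    A minimal sketch of the reweighted-L2 approximation of the L1 stabilizer is given below, assuming a generic linear forward operator G, a fixed regularization parameter, and no depth weighting; it illustrates the IRLS idea only and omits the paper's GSVD and UPRE machinery.

```python
import numpy as np

def irls_l1(G, d, alpha=1e-2, eps=1e-8, n_iter=20):
    """Approximate min ||G m - d||_2^2 + alpha * ||m||_1 by IRLS:
    at each step the L1 term is replaced by the weighted L2 term
    sum_i m_i^2 / sqrt(m_i^2 + eps), and the quadratic subproblem
    is solved exactly."""
    n = G.shape[1]
    m = np.zeros(n)
    for _ in range(n_iter):
        # Diagonal reweighting; eps keeps the weight finite at m_i = 0,
        # which is exactly the non-differentiability the trick avoids.
        w = 1.0 / np.sqrt(m**2 + eps)
        # Normal equations of the weighted quadratic subproblem.
        m = np.linalg.solve(G.T @ G + alpha * np.diag(w), G.T @ d)
    return m

# Tiny underdetermined test: a sparse model recovered from few data.
rng = np.random.default_rng(1)
G = rng.normal(size=(20, 50))
m_true = np.zeros(50); m_true[[5, 30]] = [1.0, -2.0]
d = G @ m_true + rng.normal(scale=0.01, size=20)
print(np.round(irls_l1(G, d), 2))
```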
    Keywords: Gravimetry, Inversion, Regularization, L1-norm, Regularization Parameter
  • Vahid Entezar Saadat, Seyed Hani Motavalli Anbaran* Pages 349-356
    There are several varieties of edge-detection methods. Edge detection and edge enhancement techniques play an essential role in interpreting potential field data. Some common methods are the tilt angle filter (TA), the total horizontal derivative of the tilt angle (THDT), the theta filter, the TDX filter, and the hyperbolic tilt angle (HTA). These filters attain their maximum values over the edges of an anomalous mass and their minimum values above the mass itself, except for the tilt angle, which is positive over the source and passes through zero over or near the edge. Local phase filters (edge enhancement methods) are based on the phase variation of the derivative quantities.
    The mentioned filters have different advantages, such as flexibility in constructing new filters, but a universal disadvantage of these methods is that they cannot display large- and small-amplitude edges simultaneously. In this paper the ability of the normalized total horizontal derivative (NTHD) method is demonstrated and compared with the other methods. The NTHD method is based on the ratio of the horizontal derivative to the maximum of the nearby values within an arbitrary window; the maxima of the NTHD are located on the edges of the causative sources.
    To assess the ability of the NTHD method, it was applied to a synthetic rectangular prism created in Matlab. To test the stability of the method on noisy data, Gaussian noise generated with Matlab's randn command was added to the synthetic prism, and the NTHD method was then applied. To compare this method with prevalent edge-detection methods, a Matlab code was written and the aforementioned edge-detection filters were applied to several synthetic rectangular bodies at shallow and great depths. The results show that almost all filters delineate the edges of shallow anomalies successfully, but their performance degrades for deeper anomalies, whose edges they cannot detect precisely. The superiority of the NTHD method in recognizing source edges comes from the fact that it makes strong- and weak-amplitude edges visible simultaneously and brings out more detail. The NTHD technique was further applied to the Mobrun gravity anomaly digitized from Grant and West (1965). The gravity data of the Mobrun ore body consist of thirteen profiles; the distance between profiles is 60 m, and within each profile the station spacing is 30 m. To reduce the noise in these data, the data were upward continued by 10 m and the NTHD filter was then applied with a 1×1 window. The result of applying the NTHD method to the Mobrun ore body is consistent with the exploration borehole data and precisely detects the edges of the anomaly. To examine the edge-detection results, the data of boreholes drilled along profile AB were used; among them, borehole BH2 is near the edge of the Mobrun ore body and agrees with the edge-detection results of the NTHD method.
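    A compact sketch of the NTHD idea as described above (the window size and the exact normalization used by the authors may differ) could look like this:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def nthd(f, dx=1.0, dy=1.0, window=5):
    """Normalized total horizontal derivative: the THD at each point
    divided by the maximum THD inside a moving window, so that weak
    and strong edges are balanced."""
    dfdy, dfdx = np.gradient(f, dy, dx)          # grid derivatives
    thd = np.hypot(dfdx, dfdy)                   # total horizontal derivative
    local_max = maximum_filter(thd, size=window) # max of nearby values
    return thd / (local_max + 1e-12)             # ratio in [0, 1]

# Synthetic example: a gravity-like anomaly over a buried rectangular
# prism, approximated by a smooth bump; NTHD maxima outline its edges.
x = np.linspace(-50, 50, 101)
X, Y = np.meshgrid(x, x)
f = np.exp(-((np.abs(X) / 15)**4 + (np.abs(Y) / 10)**4))
edges = nthd(f, dx=1.0, dy=1.0, window=7)
```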
    Keywords: Edge detection of anomalies, NTHD filter, Local phase filters, gravity anomalies, Total horizontal derivative, Mobrun ore body of Canada
  • Behzad Sarlak, Ali Nejati Kalateh, Hamid Aghajani* Pages 357-368
    The use of geophysical methods before excavation can be effective in archaeological exploration. Among them, gravimetry is one of the most widely used methods, owing to its lack of harmful environmental effects. The gravity method is based on the density contrast between the anomalous body and the surrounding country rocks. In archaeological studies, the target is the detection of subsurface structures built in antiquity, possibly covered by overburden such as alluvium. A variety of density contrasts can be detected using gravity data; in particular, the density contrast between walls and chambers can be studied by gravimetry. In this article, gravity data and fuzzy filters were used to investigate the subsurface wall structures of the ancient area of Tepe-Hissar in Damghan. Geophysical prospecting data contain noise that must be removed, so first all the required corrections, such as instrument drift, free-air and Bouguer slab, latitude, and terrain corrections, were applied to the gravity data.
    In this way the Bouguer gravity anomaly was obtained. Images of the Earth's gravity field are used worldwide as part of exploration programs for mineral, hydrocarbon, archaeological, and other resources. When the data quality permits, a range of high-pass filters, such as downward continuation or vertical derivatives, can be applied to bring out fine detail. To separate the residual anomaly from the regional gravity, the trend-surface method was used. Local phase filters provide an alternative approach, but conventional phase functions need to be unwrapped to remove phase ambiguity (Fitzgerald et al., 1997). The boundaries of chambers or walls and the horizontal locations of sources can be obtained from derivative-based filters such as the horizontal gradient magnitude, tilt angle, theta map, Laplacian, and hyperbolic tangent; however, these methods typically fail for archaeological purposes because of the high noise content of such datasets. In this paper, a synthetic model similar to the prospecting area was first prepared, combining several chambers and walls, with the chambers or rooms filled with alluvium and soil. Using these filters, edges can be detected where the density changes sharply and the density contrast is high or very low.
    One of the conventional phase filters used for edge detection is the tilt angle (Miller and Singh, 1994). The tilt angle has some interesting properties: as a dimensionless ratio, it responds equally well to shallow and deep sources and to a large dynamic range of amplitudes for sources at the same level. Because the tilt angle is based on a ratio of derivatives, it enhances both large- and small-amplitude anomalies. The results show the tilt angle of the synthetic and real data; the tilt angle is effective in balancing the amplitudes of different anomalies, but it is not primarily an edge-detection filter. The theta map uses the analytic signal amplitude to normalize the total horizontal derivative (Wijns et al., 2005); the amplitude of its response is similar for deep and shallow source bodies, although the response from deeper bodies is rather diffuse. The hyperbolic tilt angle (HTA) filter, which uses the real part of the hyperbolic tangent function in the tilt-angle calculation, achieved better delineation of the edges of the anomalous body than the other filters used here; the maximum value of the HTA gives the location of the body edges (Cooper and Cowan, 2006).
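    For reference, a minimal sketch of the standard tilt angle, the ratio-of-derivatives filter discussed above, is given below; the vertical derivative is computed with a wavenumber-domain |k| filter, which is one common choice rather than necessarily the one used in this paper.

```python
import numpy as np

def tilt_angle(f, dx=1.0, dy=1.0):
    """Tilt angle: arctan(vertical derivative / total horizontal derivative).
    The vertical derivative is obtained in the wavenumber domain, where
    d/dz of an upward-continued potential field is multiplication by |k|."""
    ny, nx = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky)
    dfdz = np.real(np.fft.ifft2(np.fft.fft2(f) * np.hypot(KX, KY)))
    dfdy, dfdx = np.gradient(f, dy, dx)
    return np.arctan2(dfdz, np.hypot(dfdx, dfdy))  # bounded to (-pi/2, pi/2)
```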
    Edge enhancement in potential-field data helps geological and archaeological interpretation. There are many methods for enhancing edges, most of which are high-pass filters based on the horizontal or vertical derivatives of the field. The Normalized Derivatives Ratio (NDR), a new edge-detection filter, is based on ratios of the derivatives orthogonal to the horizontal derivatives of the field. The NDR is demonstrated using synthetic data and real gravity data from the Tepe-Hissar archaeological site. Compared with other filters, the NDR produces more detailed results: the separation and detection of walls and chambers agree closely with the results of the excavations carried out.
    Keywords: Gravity, Tepe-Hissar, Archaeology, edge detection, Fuzzy filters, Normalized Derivatives Ratio
  • Jamaledin Baniamerian, Behrooz Oskooi*, Maurizio Fedi Pages 369-391
    The main goal of interpreting potential field (gravity and magnetic) data is the determination of the buried source parameters, including the depth, horizontal position, structural index, and physical properties, the latter being magnetic susceptibility in the magnetic case and density in the gravity case. For large datasets, modeling the data with inversion algorithms is computationally expensive, so over the last decades a variety of algorithms have been developed for estimating source parameters (Nabighian et al., 2005a, b). The depth is often the most important parameter to be determined. Early depth-to-source methods were mainly graphical; automatic methods appeared with the availability of computers and digital data in 2-D and 3-D, most of them based on field derivatives. Many of these approaches have restricted use because of the limiting assumptions in their theory.
    Since no single method is the best, it is wise to use a variety of methods. In this paper, some of the most widely used methods are first described: Euler deconvolution (Thompson, 1982), the analytic signal (Nabighian, 1972, 1974), source parameter imaging (SPI) (Thurston and Smith, 1997), improved source parameter imaging (iSPI) (Smith et al., 1998), the enhanced local wavenumber (ELW) (Salem et al., 2005), and ANEUL (Salem and Ravat, 2003). These are applied to synthetic data produced by a fairly complex model simulating a magnetic basement, in order to estimate the position and structural index of its structures.
    The SPI method requires second-order derivatives of the field and uses the local wavenumber concept. SPI is a fast and automatic method that assumes as source model either a 2-D sloping contact or a 2-D dipping thin sheet; it provides estimates of depth, dip, edge location, and physical property contrast. iSPI uses the same concept as SPI and is applied to 2-D data. ELW combines the local wavenumber with the Euler equation to determine the source location and structural index: a window is selected around the local-wavenumber peaks, the source position is estimated by solving an overdetermined problem in each window, and once the source location is estimated, the structural index is obtained from the main equations of the method. ANEUL is a fully automatic method whose main equation is obtained by using the analytic signal up to order 2 in the Euler homogeneity equation.
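    To make the Euler deconvolution step concrete, the sketch below solves the homogeneity equation by least squares in a single data window; it is an illustration of the classical scheme, not the exact implementation compared in the paper.

```python
import numpy as np

def euler_window(x, y, z, f, fx, fy, fz, N):
    """Least-squares solution of Euler's homogeneity equation
    (x - x0)*fx + (y - y0)*fy + (z - z0)*fz = N*(B - f)
    inside one data window, for a trial structural index N.
    Returns the source position (x0, y0, z0) and background B."""
    A = np.column_stack([fx, fy, fz, N * np.ones_like(f)])
    b = x * fx + y * fy + z * fz + N * f
    (x0, y0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, z0, B

# Check on a point (pole) source at (0, 0, -5) observed on a profile at
# z = 0: f = (z - z0)/r**3 is homogeneous of degree -2 in the offset,
# so its structural index is N = 2. On a profile with y = 0 the y0
# column vanishes; lstsq then returns the minimum-norm value y0 = 0.
x = np.linspace(-20, 20, 41); y = np.zeros_like(x); z = np.zeros_like(x)
dx, dy, dz = x - 0.0, y - 0.0, z + 5.0
r = np.sqrt(dx**2 + dy**2 + dz**2)
f = dz / r**3
fx = -3 * dz * dx / r**5
fy = -3 * dz * dy / r**5
fz = 1 / r**3 - 3 * dz**2 / r**5
print(euler_window(x, y, z, f, fx, fy, fz, N=2))  # ~ (0, 0, -5, 0)
```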
    Since all the above-mentioned methods make use of field derivatives of different orders, high-quality datasets are needed; otherwise, the final results may be affected by severe errors.
    Multiscale methods (Fedi and Pilkington, 2012) are a different class of interpretative methods, based on the behavior of potential fields at different altitudes. They allow determination of the depth, horizontal location, and source type. Taking advantage of a combined use of upward continuation and field differentiation, these methods are very stable and not as sensitive to noise as the other methods.
    In this paper, among the several multiscale methods, we use the DEXP transformation (Fedi, 2007), the automatic DEXP (Abbas et al., 2014), and the geometric method (Fedi et al., 2009, 2012). In the DEXP transformation, the field is computed at a set of altitudes and scaled by a power law of the altitude; the depth can then be obtained by finding the extreme points of the DEXP field. The automatic DEXP is based on computing the local wavenumber (of any order) at several altitudes and then scaling it with a proper function to estimate the structural index and the source position; this version of the DEXP transform is fully automatic and does not need any a priori information. In the geometric approach, the maxima of the field at various scales lie along lines called ridges; extrapolating the true ridges below the measurement surface is enough to find the source position at the ridge intersection.
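    A one-dimensional toy version of the DEXP transformation, assuming a point (pole-type) gravity source with structural index N = 2 and the power-law scaling z^(N/2), is sketched below; the extremum of the scaled field along the altitude axis falls at the source depth.

```python
import numpy as np

# Toy DEXP along the vertical above a point mass at depth z0:
# on the source axis, gravity falls off as (z + z0)**(-2), i.e. N = 2.
z0, N = 10.0, 2
z = np.linspace(0.1, 50, 500)          # continuation altitudes
g = 1.0 / (z + z0) ** 2                # upward-continued field on the axis
W = g * z ** (N / 2)                   # DEXP scaling by z**(N/2)
print("depth estimate:", z[np.argmax(W)])  # extremum at z ~ z0 = 10
```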
    Even though a noise-free dataset is used in the synthetic case, it is shown that the classical methods do not provide results as accurate as those of the multiscale methods. In real cases, comparison among several methods and evaluation of their consistency is important and practical for assessing the final results and deciding which are the best.
    We conclude the paper by applying all the methods to a magnetic profile over a 2-D structure, in order to estimate its parameters. None of these methods is restricted to magnetic fields; they can be applied to gravity fields or their derivatives as well.
    Keywords: Depth estimate, Field derivative, Gravity field, Magnetic field, Multiscale analysis of potential fields, Upward continuation
  • Safa Khazaei* Pages 393-403
    Multiple methods are available to increase the resolution of the edges; these are usually functions composed of the first-order horizontal and vertical derivatives of potential field data. Cooper and Cowan (2006) presented the total horizontal derivative (THD), which performs poorly in determining the edges of deep sources. Cooper and Cowan (2006) and Wijns et al. (2005) presented the TDX and theta filters, respectively, which are less sensitive to the depth of the anomaly and detect edges more accurately than the THD filter. Edge-detection filters are all functions based on the horizontal and vertical derivatives of the data. The maxima of the total horizontal derivative and the zeros of the vertical derivative correspond to the edges of the source.
    We find that the horizontal coordinates of the maxima of the total horizontal derivative and of the zeros of the vertical derivative are both larger than the true edge coordinates, with the maxima of the total horizontal derivative being closer to the true edges. We show that the ratio of the total horizontal derivative to the second-order vertical derivative yields more accurate edge-detection results.
    We use the balanced horizontal derivative (BHD) edge-detection filter, which uses the ratio of the first-order horizontal derivatives to the second-order derivatives to recognize the edges of the source; the edges recognized by the BHD filter are more accurate and less sensitive to noise. We also use the second-order normalized total horizontal derivative (TDX2) to detect anomaly edges, which uses the ratio of the total horizontal derivative of the first-order vertical derivative to the second-order vertical derivative and gives the same results as the BHD. Ma et al. (2014) presented the BHD and TDX2 filters as follows:

    \mathrm{BHD}=\tan^{-1}\left(\frac{\sqrt{(\partial f/\partial x)^{2}+(\partial f/\partial y)^{2}}}{k\left(-\left(\partial^{2}f/\partial x^{2}+\partial^{2}f/\partial y^{2}\right)\right)}\right) \quad (1)

    where \partial f/\partial x and \partial f/\partial y are the horizontal derivatives of the data f, k=\operatorname{mean}(\partial f/\partial z)/\operatorname{mean}(\partial^{2}f/\partial z^{2}), and the maxima of the absolute value of the BHD correspond to the edges of the source;

    \mathrm{TDX}_{2}=\tan^{-1}\left(\frac{\sqrt{(\partial f_{z}/\partial x)^{2}+(\partial f_{z}/\partial y)^{2}}}{\left|\partial^{2}f/\partial z^{2}\right|}\right) \quad (2)

    where f_{z} is the first-order vertical derivative of the potential field f.
    The results of modeling on synthetic and real data show that the edges recognized by the TDX2 and BHD filters are more precise and clearer, are consistent with the true values for the shallow anomaly, and that the BHD and TDX2 filters can display the edges of shallow and deep sources simultaneously. For an anomaly of three separate interfering bodies with different amplitudes, these filters resolve the anomaly boundaries better than previous filters. In this study, the mentioned filters were also applied to gravity data over the aqueduct at the Institute of Geophysics. Based on the results, the TDX2 and BHD filters show the general trend of the aqueduct well, separating the anomaly edge of the aqueduct from the other anomalies on the residual map. The width of the edge obtained by these filters is about 1.27 m, whereas it is 2.7 m for the TDX and theta filters, which is impossible according to the geological information of the study area. The study shows that the width of the aqueduct is determined more accurately by the BHD and TDX2 filters than by the others.
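    As an illustration of equations (1) and (2), a minimal numpy sketch follows; the vertical derivatives are taken in the wavenumber domain, which is one common choice and not necessarily the implementation used by the authors.

```python
import numpy as np

def vertical_derivative(f, dx, dy, order=1):
    """n-th vertical derivative via the wavenumber domain (|k|^n filter)."""
    ny, nx = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    K = np.hypot(*np.meshgrid(kx, ky))
    return np.real(np.fft.ifft2(np.fft.fft2(f) * K ** order))

def bhd(f, dx=1.0, dy=1.0):
    """Balanced horizontal derivative, eq. (1): THD over k times the
    negative horizontal Laplacian (= d2f/dz2 for a harmonic field)."""
    dfdy, dfdx = np.gradient(f, dy, dx)
    thd = np.hypot(dfdx, dfdy)
    fz = vertical_derivative(f, dx, dy, 1)
    fzz = vertical_derivative(f, dx, dy, 2)
    k = np.mean(fz) / np.mean(fzz)        # balancing constant, as defined above
    d2x = np.gradient(np.gradient(f, dx, axis=1), dx, axis=1)
    d2y = np.gradient(np.gradient(f, dy, axis=0), dy, axis=0)
    return np.arctan2(thd, k * -(d2x + d2y))

def tdx2(f, dx=1.0, dy=1.0):
    """Second-order normalized total horizontal derivative, eq. (2)."""
    fz = vertical_derivative(f, dx, dy, 1)
    fzz = vertical_derivative(f, dx, dy, 2)
    dfzdy, dfzdx = np.gradient(fz, dy, dx)
    return np.arctan2(np.hypot(dfzdx, dfzdy), np.abs(fzz))
```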
    Keywords: balanced horizontal derivative filter, edge detection, second-order normalized total horizontal derivative filter, potential field
  • Teimour Alizadeh* Pages 405-417
    Today, extratropical cyclones are recognized not only for the important influence they exert on midlatitude weather conditions but also for their integral role in the Earth's climate system (Lackmann, 2012). Extratropical cyclones are fundamental meteorological features and play a key role in a broad range of weather phenomena. They are a central component in maintaining the global atmospheric energy, moisture, and momentum budgets. On the one hand they are responsible for an important part of our water supply, and on the other they are intimately linked with many natural hazards affecting the middle and high latitudes (wind damage, precipitation-related flooding, storm surges, and marine storminess). Thus, it is important to provide society with an accurate diagnosis of cyclone activity, including a baseline climatology of extratropical storms (e.g., Hoskins and Hodges, 2002). Identifying and tracking extratropical cyclones might seem, superficially, to be a straightforward activity, but in reality it is very challenging (Neu et al., 2013). One of the regions of cyclogenesis in the Northern Hemisphere is the Mediterranean Sea, where positive vorticity and cyclogenesis occur in both the west and the east of the sea.
    There are many studies using numerical methods on cyclogenesis in the Mediterranean Sea. Petterssen (1957) first identified the cyclone frequency of the Mediterranean in a Northern Hemisphere study. Alpert et al. (1990) studied cyclogenesis in the west and east of the Mediterranean Sea and found that vorticity plays a main role in cyclogenesis in the Mediterranean region, and that lee cyclogenesis plays a main role in the Gulf of Genoa and near Cyprus. One of the first studies on the impact of Mediterranean cyclones on the climate of Iran is that of Alijani (1364), who found that these features have a main impact on the climate of Iran. Sedaghat et al. (1378) studied cyclone tracks in the Middle East and found that most cyclones occur in the west and northwest of Iran. One of the main motivations of the present study is identifying the spatiotemporal changes of Mediterranean cyclones, because climate change and global warming cause changes in the atmospheric general circulation and in atmospheric phenomena.
    For numerical cyclone detection, the ERA-Interim database is used, the latest reanalysis of the global atmosphere by ECMWF (Dee et al., 2011), available at six-hour intervals with a resolution of 0.5°×0.5° in longitude and latitude for the period 1980 to 2013. Cyclone positions are defined by local minima of the geopotential height of the 1000-hPa surface (z1000), considering the neighborhood of eight grid points. Additionally, in order to locate intense vortices, a minimum mean vorticity within a 600-km radius of the minimum point is required (Blender et al., 1999). The threshold for the mean vorticity is 5×10^-5 s^-1, because shallow thermal lows occur in this region and this threshold is the best for excluding them. After detecting the cyclogenesis events, their spatial-temporal changes are identified using frequency sums and mean centers for three eleven-year periods.
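    A minimal sketch of the local-minimum test at the core of this detection scheme (without the vorticity screening and tracking steps) might look like this:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def detect_cyclone_centers(z1000):
    """Flag grid points that are local minima of 1000-hPa geopotential
    height with respect to their eight neighbors, as in the detection
    scheme described above (vorticity screening would follow)."""
    # A point is a candidate center if it equals the minimum of its
    # 3x3 neighborhood (the eight surrounding grid points plus itself).
    local_min = z1000 == minimum_filter(z1000, size=3, mode="nearest")
    return np.argwhere(local_min)

# Toy field: one synthetic depression on a 0.5-degree grid.
lat = np.arange(25, 50, 0.5)
lon = np.arange(-10, 40, 0.5)
LON, LAT = np.meshgrid(lon, lat)
z = 120.0 - 80.0 * np.exp(-((LON - 10)**2 + (LAT - 40)**2) / 20.0)
print(detect_cyclone_centers(z))  # grid index of the minimum near 40N, 10E
```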
    The spatial distribution of cyclones (Figure 2) comprises 12180 cyclones in this region. Most cyclogenesis occurs in the western Mediterranean, with the highest concentrations in the west and east of Italy; moving towards the eastern Mediterranean, the cyclone concentration decreases, and the main centers of cyclogenesis in the eastern Mediterranean are along the coast of Cyprus and the western coast of Turkey. It is notable that cyclones with a frequency of more than one also occur over Iran in every period.
    About 14 percent of Mediterranean cyclones occur in January, mostly forming in the west and east of Italy; in this month cyclone concentrations also appear over Turkey and Syria. The highest frequency of cyclogenesis occurs in February and the lowest in July. The trend of cyclogenesis in the Mediterranean is positive, with cyclones increasing by a coefficient of 1.04 per year; however, the coefficient of determination of the regression equation is very low, near zero, showing that the regression cannot establish a significant correlation between time and cyclone frequency. The monthly trends of Mediterranean cyclones are negative in May, July, August, and October, and positive in the other months; but, as with the annual trend, the coefficients of determination are low. The results show that the temporal trend of cyclogenesis in the Mediterranean does not change over the study area. The mean centers of cyclogenesis show a shift westward and to higher latitudes across the three periods.
    Keywords: Mediterranean Sea, Cyclone detection Algorithm, Mean center, Temporal-Spatial Distribution
  • Mir Reza Ghaffari Razin*, Behzad Voosoghi Pages 419-437
    Global Positioning System (GPS) signals provide valuable information about the physical structure of the ionosphere. Using these signals, the total electron content (TEC) can be derived for each line of sight between a receiver and a satellite. For historic and other sparse datasets, the reconstruction of TEC images is often performed using multivariate interpolation techniques. Recently it has become clear that techniques derived from artificial intelligence research and modern computer science provide a number of system aids to analyze and predict the behavior of complex solar-terrestrial dynamic systems. Artificial intelligence methods have provided tools which potentially make the task of ionospheric modeling possible. An artificial neural network (ANN) provides an implicit non-linear model that learns relations between inputs and outputs from training data.
    A neural network is an information-processing system formed by a large number of simple processing elements known as artificial neurons. The input data are multiplied by the corresponding weights and the summation is fed into each neuron. Each neuron has an activation function; the inputs pass through the activation function, which determines the neuron's output. The behavior of the neural network depends on the connections between nodes. Using training data, the designed ANN is adjusted in an iterative procedure to determine its optimal parameters; then, for an unknown input, the corresponding output can be computed using the trained ANN. The numbers of neurons in the input and output layers are determined by the number of input and output parameters. The number of neurons in the hidden layer can be determined by trial and error, minimizing the total error of the ANN; for this minimization, each parameter's share of the total error must be computed, which is achieved by the back-propagation algorithm.
    The radial basis function neural network (RBFNN) is known from approximation theory, where it is applied to the real multivariate interpolation problem. The RBFNN was popularized by Moody and Darken (1989), and many researchers have suggested it as an alternative ANN structure to the MLP. The RBFNN is very useful for function approximation and classification problems because of its compact topology and fast learning speed. It is configured with three layers: an input layer of source neurons distributes the input vectors to the neurons in the hidden layer without any multiplicative factors; the single hidden layer consists of receptive-field units (hidden neurons), each representing a nonlinear transfer function called a basis function; and the output layer produces a linear weighted sum of the hidden-neuron outputs, supplying the response of the RBFNN.
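    A minimal sketch of such an RBF network, with Gaussian basis functions, centers picked from the training inputs, and the output weights fit by least squares, is shown below; the two inputs standing in for position/time and the target function are illustrative placeholders.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian basis functions: one column per hidden neuron."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Training data: a smooth function of two inputs, standing in for
# VTEC as a function of, e.g., latitude and local time (illustrative).
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 2))
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

centers = X[rng.choice(len(X), 25, replace=False)]  # hidden-layer centers
H = rbf_design(X, centers, width=0.15)
# Output layer: linear weighted sum of hidden outputs, fit by least squares.
w, *_ = np.linalg.lstsq(H, y, rcond=None)

X_new = np.array([[0.3, 0.7]])
print(rbf_design(X_new, centers, 0.15) @ w)  # predicted output
```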
    Due to the nonlinearity of the physical properties of the ionosphere, in this paper multi-layer perceptron artificial neural networks (MLP-ANN) and the RBFNN are used to model and predict the spatial and temporal variations of the vertical TEC (VTEC) over Iran. The model is able to estimate and predict the VTEC within and also near the network. For this work, observations of 22 GPS stations in the northwest of Iran were used (360
    Keywords: Ionosphere, Multi-Layer Artificial neural network, Radial Basis Function neural network, VTEC, GPS
  • Sedighe Omidi, Mohammad Hossein Memarian* Pages 439-448
    Reduced rainfall in many parts of the world has prompted scientists to try to manipulate nature in different ways to cause precipitation from the water accumulated in the atmosphere. Atmospheric ionization is a new method for producing precipitation and has attracted much attention. In this regard, in recent years, the relation between meteorological phenomena, especially tropical cyclones, and the ionosphere has been studied in various ways in some regions of the world. Because of the importance of water and precipitation, the present paper uses three kinds of data (geomagnetic, ionospheric, and meteorological) to detect the presence or absence of a correlation between the ionosphere and precipitation.
    The meteorological and geomagnetic data are direct measurements, whereas the ionospheric data are derived from raw GPS data. In the first step of the analysis, the geomagnetic conditions were analyzed using two geomagnetic indices, Kp and Dst, for a given period. In the second step, the ionospheric vertical total electron content (VTEC) data were analyzed and geomagnetically disturbed days were omitted from the same period. The daily maximum of VTEC was selected as the factor differentiating the various days. In the third step, the time series of the maximum ionospheric VTEC, obtained by processing data from two GPS stations under quiet geomagnetic conditions, were analyzed and compared with the precipitation recorded at two synoptic stations co-located with the GPS stations and differing considerably in precipitation: Shiraz as the high-rainfall station and Isfahan as the low-rainfall station, over a 41-day period from 1 January to 10 February 2011. The time series of the variations of the VTEC maxima, meaning the difference between the maximum VTEC on a given day and that of the previous day, were also compared with the precipitation time series at both stations. These analyses show a systematic relationship between the variations of the maximum ionospheric VTEC and the occurrence of precipitation: precipitation peaks occur after intense fluctuations of a particular amplitude in the VTEC maxima. Thus, mild changes of the ionospheric VTEC maxima over a period accompany stable tropospheric conditions, while intense changes over the same period accompany unstable tropospheric conditions and precipitation. The time series of VTEC maxima for the same period were also obtained by running the global ionospheric model IRI2012 and compared with the time series derived from processing the raw GPS data. It appears that the best kind of data for studying the troposphere-ionosphere correlation are ionospheric data obtained by processing raw GPS data; ionospheric data from running global ionospheric models, being unable to show details, are not suitable. This may be because the VTEC maxima from GPS processing in this paper are point measurements, whereas the VTEC maxima from the global ionospheric model IRI2012 have a global scale and simulate large-scale variations of ionospheric parameters.
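    The daily-maximum and day-to-day-variation series described above can be illustrated with a short pandas sketch; the synthetic VTEC series below is a placeholder for the values actually derived from raw GPS processing.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for VTEC estimates from one GPS station at a
# 15-minute cadence over the 41-day window (the real series comes from
# processing raw GPS observations, which is outside this sketch).
rng = np.random.default_rng(3)
t = pd.date_range("2011-01-01", "2011-02-10 23:45", freq="15min")
hours = t.hour + t.minute / 60
vtec = pd.Series(20 + 10 * np.sin(np.pi * hours / 24) ** 2
                 + rng.normal(0, 1, len(t)), index=t)

daily_max = vtec.resample("D").max()  # one differentiating value per day
delta_max = daily_max.diff()          # change relative to the previous day
print(delta_max.head())
```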
    Keywords: VTEC, GPS, Ionosphere, Troposphere, Precipitation
  • Saifollah Rasouli*, Elaheh Hariri, Siamak Khademi Pages 449-458
    In meteorology, the daytime atmospheric visibility distance (or visibility) is defined as the greatest distance at which a large dark object against the light sky at the horizon can be seen and clearly recognized by an unaided eye. For determination of the maximum visible distance at nighttime, a known, preferably unfocused, moderately intense light source is usually used, and the maximum distance at which it can be recognized is found. These definitions of atmospheric visibility distance were developed based on human observation through the atmosphere. The presence of dust, fog, haze, pollution, or smoke in the atmosphere reduces the meteorological visibility. Visibility degradation is one of the manifestations of atmospheric pollution and airborne particles, mainly due to the absorption and scattering effects of aerosols in the atmosphere; in this regard, the local air quality is also reflected by the atmospheric visibility distance. Measurement of the atmospheric visibility distance is an important issue in transportation: low atmospheric visibility is mainly a traffic-safety problem, so reliable knowledge of the atmospheric visibility, on roads for driving, at airports for takeoff and landing of airplanes, at ports for the movement of ships, and so on, is necessary.
    Based on the above definitions, the measurement of the atmospheric visibility distance is affected by many factors, such as the size and shape of the target, the ambient light intensity of the observing area, the observer's angle to the target and its height, the light intensity for night targets, and so on. In addition, human factors affect the measurements, because the visibility targets must be both detected and recognized by the naked eye. Many instrumental approaches for measuring the atmospheric visibility distance have been developed; transmissometers and scatter meters are two types of instruments used for this purpose. A transmissometer operates by sending a narrow collimated laser beam through the atmosphere and extrapolates the attenuation of the beam over a known path length to estimate the distance at which the emitted light is attenuated by 95%. A scatter meter assesses the dispersion of a light beam at a particular scattering angle. In this work, we introduce a new, fundamentally optical, method based on measuring the optical visibility (contrast) of the image of a periodic pattern captured through the atmosphere by a telescope equipped with a digital camera. Compared to other methods of measuring the atmospheric visibility distance, this method is less affected by setup and instrument factors.
    In this paper we present a new method for measuring the atmospheric visibility distance by imaging a reflective sinusoidal linear grating with a variable spatial period. In the experiment, a sinusoidal grating with a period varying from 7 mm to 11 cm was printed on an area of 1 m × 3 m and pasted on a suitable wooden frame. The frame was installed at a height of 3 m from the ground surface and at a distance of 475 m from an imaging system consisting of a Newtonian telescope and a CCD camera, with the CCD installed at the focal plane of the telescope. Images of the grating were recorded through the atmosphere by the CCD on different days and at different times of day. The local visibilities (contrasts) of the grating images were measured and the cut-off frequency of the patterns was determined; from this cut-off frequency, the atmospheric visibility distance was determined for the recording time. Our results for the visibility distance at different times are comparable with the weather reports from the Zanjan airport station, which uses a different method.
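    A minimal sketch of the contrast measurement underlying the method, the Michelson fringe visibility of one intensity profile across the grating image, is given below; the full analysis locates the spatial period at which this contrast drops below threshold, i.e. the cut-off frequency.

```python
import numpy as np

def michelson_contrast(profile):
    """Fringe visibility V = (Imax - Imin) / (Imax + Imin) of one
    intensity profile taken across the grating image."""
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

# Illustrative profile: a sinusoidal fringe whose modulation has been
# reduced by atmospheric scattering (contrast 0.3 on a bright background).
x = np.linspace(0, 10 * 2 * np.pi, 1000)
profile = 100 * (1 + 0.3 * np.sin(x))
print(michelson_contrast(profile))  # ~0.3; contrast drops as haze increases
```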
    Keywords: Atmospheric visibility distance, sinusoidal linear grating with variable period, imaging, propagation through the atmosphere, aerosols, dust
  • Masoud Khoshsima, Farhang Ahmadi-Givi, Samane Sabetghadam* Pages 459-467
    Atmospheric visibility is a key factor in everyday life, mainly in aviation, industry, and surface traffic. It has been defined as the greatest distance at which an observer can just see a black object viewed against the horizon sky, known as the visual range. Furthermore, the light extinction, which is mostly due to the absorption and scattering effects of aerosols in the atmosphere, can be calculated from the visual range using the Koschmieder formula; visibility and extinction are inversely related, extinction having units of inverse distance. In a non-polluted atmosphere, visibility ranges from 145 to 225 km, and in normal atmospheric conditions from 10 to 100 km; in polluted areas it can be remarkably low. Visibility is regularly measured at synoptic meteorological stations all over the world as a standard meteorological parameter: skilled observers measure the visual range using individual markers at known distances from the meteorological station against the horizon sky.
    In this paper, airport visibility data for the period 1970 to 2010 are examined for the four busiest airports in Iran: Tehran-Mehrabad, Mashhad, Shiraz, and Isfahan. All data from the four airport stations have been used to analyze the temporal variations. The analyses are based on daily average measurements, i.e. the average of the 9, 12, and 15 UTC data. Midday values are usually used in studies of this kind, as they are more representative of regional visibility levels: early-morning radiation fog and high relative humidity, which may reflect only the local environment, have usually dispersed by midday. The historical trends of the extinction coefficient based on the visual range are computed for the four airports; the trend is determined by a least-squares regression analysis of the midday average extinction. In general, an upward atmospheric extinction trend is seen for all stations. Tehran-Mehrabad airport has the strongest increasing trend: the extinction was around 0.3 km^-1 in the early 1970s but has increased to around 0.4 km^-1 at present.
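    The extinction calculation and trend fit described above can be sketched as follows, using the Koschmieder relation with the conventional 2% contrast threshold; the visibility values are illustrative, not the study's data.

```python
import numpy as np

def extinction_from_visibility(visual_range_km, contrast_threshold=0.02):
    """Koschmieder relation: extinction (km^-1) from visual range (km),
    beta = -ln(epsilon) / V; the conventional 2% contrast threshold
    gives the familiar beta = 3.912 / V."""
    return -np.log(contrast_threshold) / np.asarray(visual_range_km, float)

# Example: midday visual ranges (km) over several years and a linear
# least-squares trend of the corresponding extinction coefficients.
years = np.arange(1970, 1976)
visibility = np.array([13.0, 12.5, 12.0, 11.0, 10.5, 10.0])
beta = extinction_from_visibility(visibility)
slope, intercept = np.polyfit(years, beta, 1)
print(f"extinction trend: {slope:.4f} km^-1 per year")
```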
    Airport visibility data, which inherently undervalue the true visibility, are most appropriately summarized by cumulative percentiles; the Nth cumulative percentile is the visibility that is equaled or exceeded N percent of the time, and visibility data lend themselves well to this treatment. Daily visibility observations over the last four decades are investigated at the 10th and 90th cumulative percentiles to show the threshold visibility at each airport; the 10th and 90th cumulative percentiles are used to identify the frequency of 'good' and 'poor' visibility, respectively. The results show no distinctive difference in the 40-year poor or good visibilities among the stations; however, Tehran airport has the smallest visual range of all. The good visibility has the largest threshold value, around 20 km, at the Shiraz and Mashhad airports.
    A monthly comparison of extinction over the 40 years shows a difference of about 2 km between the winter months and the rest of the year, which may be due to the effect of weather or the concentration of pollutants in different months. The results of a correlation analysis indicate that the difference may be due to the variation of relative humidity between months. To minimize the effect of humidity, days with relative humidity above 70 percent or cloudiness above 4/8 of the sky were removed from the visibility trend analysis. Detailed analyses show that the trends of the screened days are nearly parallel to the trends of the raw data, with only a slight difference at each airport. An increase in extinction since 1970 is also observed when the absolute values of extinction are examined. The extinction trend is not significantly changed at the Tehran and Isfahan airports, which may emphasize the role of air pollution in atmospheric extinction.
    Keywords: Extinction coefficient, Horizontal visibility, Airport, relative humidity, Long-term trend