Hamid Reza Siahkoohi
-
Seismic wave propagation modeling, a key step in seismic imaging, plays an important role in petroleum geomechanics studies based on non-destructive subsurface imaging. The most accurate non-destructive seismic imaging methods require seismic modeling based on the numerical solution of the partial differential equation of wave propagation. In the simplest case, this equation is solved under constant-density conditions in an acoustic medium. Many studies aim to extend modeling methods to incorporate more realistic conditions. One such consideration, of particular importance in onshore geomechanics studies, is accounting for topography. In this research, we used the immersed boundary method to solve the wave equation in the time domain in media with complex topography, and extended the method to wave propagation in visco-acoustic media with variable density. Because of the absorption mechanism (dissipation and dispersion), the wave equation in visco-acoustic media is complex-valued, and solving it in the time domain is very challenging. In this study, one of the most recent methods for incorporating the effect of absorption into time-domain wave propagation is used. Finally, the performance of the developed method is examined by modeling wave propagation in visco-acoustic media with complex topography and variable density: the amplitude reduction caused by absorption in the visco-acoustic medium is clearly visible, and the modeled visco-acoustic wavefield amplitude is, as expected, zero along the topography.
Keywords: non-destructive imaging, seismic modeling, visco-acoustic, topography, finite difference
Seismic wave propagation modeling, one of the key steps in seismic imaging, plays an important role in oil geomechanics studies based on non-destructive subsurface imaging. Two of the most accurate non-destructive seismic imaging methods are full waveform inversion (FWI) and least-squares reverse time migration (LSRTM). Since the general approach of these methods to recovering geomechanical properties is based on minimizing data residuals, an accurate forward modeling operator makes the inversion results independent of modeling errors.
Seismic modeling is based on the numerical solution of the partial differential equation of wave propagation using methods such as finite differences or finite elements. In the simplest case, this equation is solved under conditions of constant density in an acoustic medium. Many studies have sought to extend modeling methods to account for realistic conditions. One of these considerations, which has a significant impact in onshore geomechanics studies, is topography. In this research, we used the immersed boundary method to solve the wave equation in the time domain in media with complex topography, and we extended this method to wave propagation in a visco-acoustic medium with variable density. In visco-acoustic media, including the effect of absorption (dispersion and dissipation) in the modeling requires a complex-valued wave equation, which is challenging to solve in the time domain. In this study, a novel method is used to incorporate the effect of absorption in time-domain wave propagation. Finally, the performance of the proposed method is investigated by modeling wave propagation in visco-acoustic models with complex topography and variable density.
Keywords: non-destructive imaging, seismic modeling, visco-acoustic, topography, finite difference
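In its simplest constant-density acoustic form, the modeling described above reduces to explicit finite-difference time-stepping of the wave equation. The sketch below uses illustrative grid, wavelet and velocity choices (not the authors' setup) and periodic boundaries via `np.roll`; the paper's topography handling, variable density and visco-acoustic absorption are not included:

```python
import numpy as np

def acoustic_fd(vel, wavelet, isrc, jsrc, irec, jrec, dt, dx, nt):
    """March p_tt = v^2 * laplacian(p) forward in time on a regular grid
    (second order in time and space; edges are periodic via np.roll --
    real schemes add absorbing boundaries)."""
    p_prev = np.zeros_like(vel)
    p_curr = np.zeros_like(vel)
    trace = np.zeros(nt)
    c2 = (vel * dt / dx) ** 2                 # squared Courant number per cell
    for it in range(nt):
        lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
               np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr)
        p_next = 2.0 * p_curr - p_prev + c2 * lap
        p_next[isrc, jsrc] += dt ** 2 * wavelet[it]   # inject source wavelet
        p_prev, p_curr = p_curr, p_next
        trace[it] = p_curr[irec, jrec]                # record at receiver
    return p_curr, trace

# usage: 25 Hz Ricker wavelet in a homogeneous 2000 m/s model
nt, dt, dx = 300, 0.001, 10.0
t = np.arange(nt) * dt
arg = (np.pi * 25.0 * (t - 0.04)) ** 2
ricker = (1.0 - 2.0 * arg) * np.exp(-arg)
vel = np.full((101, 101), 2000.0)
p_final, trace = acoustic_fd(vel, ricker, 50, 50, 10, 50, dt, dx, nt)
```

With `vel * dt / dx = 0.2`, the scheme satisfies the 2-D stability bound of about 0.707, so the time stepping remains stable.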
-
Coherency of the reflection events in pre-stack seismic data is a determining factor in the final quality of subsurface imaging with these data. This coherency decreases with the complexity of the wave propagation medium and with operational errors during seismic acquisition; if their effects are not compensated, seismic data processing faces serious challenges. One common error in seismic acquisition in mountainous areas is relocating seismic source stations without re-surveying them, owing to operational constraints. Given the difficulty of the acquisition operation, this type of error may also occur at receiver stations. In industry, methods exist for increasing the coherency of reflection events whose basic assumption is that the incoherencies are stationary in time and predictable in offset, and hence can be compensated by conventional static and dynamic corrections. However, some operational errors and complexities of the propagation medium violate this assumption and can make those shifts unpredictable. Source or receiver relocations without re-surveying, and local velocity anomalies along the propagation path, which usually occur together in mountain seismic surveys, are among these cases. The effect of such decoherence factors in the data, combined with applying the usual static and dynamic correction relations, can create shifts that are non-stationary in time and unpredictable in space. Under these conditions, conventional static and dynamic corrections cannot adequately compensate the incoherencies of reflection events. By analyzing synthetic and real data, this paper attempts to explain the issue and proposes using a non-stationary time-shift correction on pre-stack seismic data to compensate for these incoherencies.
Keywords: mountain seismic operations, operational errors, incoherency of reflection events, non-stationary time shifts, velocity anomalies, residual statics
A determining factor in generating high-quality seismic images of subsurface layers is the coherency of reflection events in pre-stack seismic data. This coherency can be reduced by the complexity of the propagation medium or by operational errors during seismic acquisition, and processing the acquired data becomes challenging if these incoherencies are not compensated. In mountainous seismic acquisition projects, displacement of the source stations is one of the more common errors. Such displacement may also happen at receiver points, owing to access or operational difficulties during acquisition. In the processing stage, there are methods for improving coherency that rest on the assumption that incoherencies are stationary in time and predictable in offset. In particular, it is assumed that all reflection-event shifts are static in the time domain and predictable in the offset domain, so they can be compensated by conventional static and dynamic corrections. Nevertheless, there are circumstances in which this assumption is violated and the shifts are no longer predictable. Displacement of source or receiver stations without re-surveying them, and the presence of velocity anomalies in the wave propagation medium, are two examples; they often occur simultaneously in mountainous seismic terrain. Conventional static and dynamic corrections cannot properly rectify such non-stationary inconsistencies under these circumstances. Although applying a residual static correction can improve coherency for some reflection events, especially those with good amplitude and frequency content, for others, especially deep reflectors, it can actually make matters worse. For precise seismic reflection imaging, the magnitude of the non-stationary shifts plays a key role.
Small shifts reduce the frequency content of the final stack section. Larger shifts degrade the quality of stacked reflections and can ultimately eliminate them. In seismic operations, increasing the recorded frequency content to obtain better resolution usually involves spending significant time and money, an investment that the operational errors mentioned above can essentially waste. To avoid such errors, the activities of the different crews of a seismic project should be thoroughly checked, in particular the survey and drilling crews, for the accuracy of the elevations and coordinates of the shot points. Using both synthetic and real data, this paper explains how reflection-event incoherencies frequently occur during mountainous seismic acquisition, and then proposes an algorithm for compensating those incoherencies with a non-stationary time-shift correction in pre-stack seismic data.
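A common way to quantify such time-variant shifts (used here purely as an illustration, not as the paper's algorithm) is windowed cross-correlation of each trace against a pilot trace; the window length, step and the synthetic Gaussian "event" are assumptions:

```python
import numpy as np

def local_shifts(trace, pilot, win, step, max_lag):
    """Estimate a time-varying shift (in samples) between a trace and a
    pilot trace using windowed circular cross-correlation."""
    centers, shifts = [], []
    lags = np.arange(-max_lag, max_lag + 1)
    for start in range(0, len(trace) - win, step):
        a = trace[start:start + win]
        b = pilot[start:start + win]
        xc = [np.dot(a, np.roll(b, l)) for l in lags]   # circular xcorr
        centers.append(start + win // 2)
        shifts.append(int(lags[int(np.argmax(xc))]))
    return np.array(centers), np.array(shifts)

# usage: a smooth "event" delayed by 5 samples stands in for a shifted record
n = 400
t = np.arange(n)
pilot = np.exp(-((t - 200) / 10.0) ** 2)
trace = np.roll(pilot, 5)
centers, shifts = local_shifts(trace, pilot, win=80, step=40, max_lag=20)
# the window centred on the event (sample 200) recovers the 5-sample shift;
# interpolating the (centers, shifts) pairs gives a shift for every sample
```

A non-stationary correction would then resample each trace along the interpolated shift function rather than applying a single static shift.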
Keywords: mountainous seismic operation, operational errors, incoherency of reflection events, non-stationary time shifts, velocity anomalies, residual statics
-
Diffractions provide useful information about subsurface geological features such as faults, unconformities, folds and the like. In fault studies, the polarity reversal of waves diffracted from fault edges is a major challenge. In recent decades, several pre- and post-stack methods have been proposed for locating and characterizing diffractions, but most of them cannot adequately detect diffractions in the presence of the polarity reversal of waves diffracted from fault edges. The semblance function and Kirchhoff migration, as two conventional methods, run into trouble imaging diffractions from fault edges unless the polarity reversal is taken into account. This paper introduces two approaches to improve the quality of the diffractivity attribute whose performance is unaffected by polarity reversal. In the proposed approach, to overcome the polarity-reversal problem, the diffraction moveout curve is divided into several groups and the semblance function is then computed locally for each group. The local semblance used in this research obtains the travel times of the diffracted waves from the double-square-root equation. A method is also used to further improve the image of the diffraction sources: a tapered window effectively weights the local semblance as a function of time. The performance of the local semblance method, with and without taper weighting, is examined on synthetic and real seismic data without applying any polarity correction; the results show better focusing and resolution of the diffractions at fault edges.
Keywords: fault detection, diffraction imaging, polarity reversal, local semblance, tapered window, diffractivity attribute
Improving the diffractivity attribute to image faults using tapered local semblance in the post-stack domain
Diffractions carry useful and important information about subsurface features such as unconformities, faults, pinch-outs, and so on; much of the information about small-scale features is encoded in them. Polarity reversal across the moveout curves of diffractions generated at fault edges is a great challenge in seismic diffraction imaging. Over the last few decades, several conventional methods in the pre- and post-stack domains have been developed to characterize and locate diffractions, but most of them cannot deal with polarity reversal; some are time-consuming, and some require corrections to handle polarity changes, especially for diffractions caused by fault edges. Despite the large amount of research on diffraction imaging, very few studies have addressed the challenge of polarity reversal across moveout surfaces. We used the semblance function along the hyperbolic moveout curves of diffractions, whose travel times are calculated with the double-square-root equation. Both semblance and Kirchhoff migration fail to image diffractions from fault edges unless the polarity reversal is taken into account, because the diffraction moveout curve contains equal numbers of positive and negative wavelets. To solve this problem, we divided the global scanning window along the hyperbolic moveout surfaces into several sub-windows and performed the local semblance measurements over each sub-window separately. Every point in the image domain is treated as a potential diffraction point, which we call an image point.
The final semblance measure at each image point is calculated by averaging the semblance measurements from the subdivided smaller windows. We also contaminated the synthetic data with white Gaussian noise at different signal-to-noise ratios; the results showed no significant differences, because random arrivals in seismic data do not influence the semblance measurement. In the next step, to improve the diffraction imaging, we used a tapered local semblance, motivated by the interference of diffractions with dominant reflections, other data and even other diffractions, especially at far offsets from the diffraction apex. We call the proposed method the tapered local semblance method. It weights the data from top to bottom along the time axis: fewer traces are used at shallow parts and more traces at deeper parts, to reduce the harmful effect of interference. To accomplish this, we introduced a triangular taper that includes a few traces at early arrival times and more traces at late arrival times, instead of a boxcar aperture with a constant number of traces from the top to the bottom of the window. We tested several tapers with different apex angles to determine the optimum one. We evaluated both methods on synthetic data as well as a field-recorded dataset; neither required any polarity-reversal correction. The results show the ability of our workflow to achieve higher resolution and good localization of diffractions from fault edges in the synthetic data, and on the field-recorded dataset the tapered local semblance method revealed more diffractivity than the local semblance method.
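The semblance measure itself, and why polarity reversal defeats it globally, can be sketched directly; `tapered_semblance` below is an illustrative triangular-taper variant, not the authors' exact weighting, and the example windows are synthetic:

```python
import numpy as np

def semblance(window):
    """Classic semblance of a (time, trace) data window:
    sum over time of (stack)^2, normalized by N * total energy."""
    num = np.sum(np.sum(window, axis=1) ** 2)
    den = window.shape[1] * np.sum(window ** 2)
    return num / den if den > 0 else 0.0

def tapered_semblance(window):
    """Triangle-tapered variant: narrow aperture at the top of the window,
    full aperture at the bottom (an illustrative stand-in for the paper's
    tapered local semblance)."""
    nt, nx = window.shape
    mid = nx // 2
    num = den = 0.0
    for it in range(nt):
        half = max(1, int(round((it + 1) / nt * nx / 2)))  # growing aperture
        row = window[it, max(0, mid - half): mid + half]
        num += row.sum() ** 2
        den += len(row) * np.sum(row ** 2)
    return num / den if den > 0 else 0.0

# usage: a perfectly coherent window scores 1, while a polarity reversal
# across the traces destroys the global score despite perfect alignment
coherent = np.ones((20, 11))
reversed_half = coherent.copy()
reversed_half[:, :5] *= -1.0
s_coh = semblance(coherent)          # -> 1.0
s_rev = semblance(reversed_half)     # near zero: positive/negative cancel
s_tap = tapered_semblance(coherent)  # -> 1.0
```

Computing semblance locally over sub-windows that each lie on one side of the reversal is what restores a high score in the presence of polarity flips.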
Keywords: fault detection, diffraction imaging, polarity reversal, tapered local semblance, tapered aperture, diffractivity attribute
-
Given the complexity and variety of tectonic structures in the Middle East, using a method that yields a Moho depth most consistent with these structures is of particular importance. In this paper we compare the Moho depth obtained for the Middle East using two different approaches: 1) gravity inversion with spherical prisms, and 2) Moho depth estimation using spherical prisms and the CRUST1.0 seismic crustal model. In general, obtaining depth from gravity data is a nonlinear inverse problem. In both approaches the gravity data are inverted using Uieda's method. Given the extent of the region, using spherical prisms instead of flat prisms in the inversion accounts for the Earth's curvature and also makes the method efficient. The minimum Moho depth obtained from the first approach is 12 km, in parts of the Indian Ocean, and the maximum is 54 km, in the western Tibetan Plateau, correlating with plate boundaries and tectonic structures. The Moho depth from the second approach ranges from 7.5 to 49 km, with the minimum in parts of the Indian Ocean and the maximum in parts of the Zagros. Comparison shows that the results of the first approach, based on inversion of the gravity data, agree well with the plate boundaries and reveal all the tectonic structures of the region, unlike the second approach, which used the CRUST1.0 seismic crustal model to estimate the Moho depth.
Keywords: Moho depth, spherical prisms, gravity data inversion, CRUST1.0 crustal model
Study of the Moho in the Middle East and surrounding region is of great importance, because the region has a rich geological history and contains parts of the Eurasian, Indian, African and Arabian plates as the main plates, together with several small plates. Given the complexity and variety of tectonic structures in the Middle East, a method that yields a Moho depth model in accordance with these structures is of great importance. In this paper we compare the Moho depth obtained from two different methods: 1) gravity data inversion with spherical prisms (tesseroids), and 2) Moho depth evaluation using tesseroids and the CRUST1.0 crustal model. Determining Moho depth from gravity data is a nonlinear inverse problem. Given the extent of the study area, we use an efficient inversion method (Uieda's method) that accounts for the Earth's curvature by using spherical prisms instead of rectangular prisms. The method minimizes a goal function of the form Γ(p) = φ(p) + μθ(p), where φ(p) is the data-fidelity term, θ(p) is the penalty (regularization) term and μ is the regularization parameter. In addition to the Moho depth, three hyperparameters must be estimated, namely the regularization parameter (μ), the Moho reference level (zref) and the density contrast (Δρ); they are estimated in two steps during the inversion by hold-out cross-validation. To estimate the relief of the Moho from gravity data, one must first isolate the gravitational effect of the anomalous density distribution attributed to the Moho relief, which requires eliminating all other gravity effects from the observed data. In the first method, tesseroid modeling is used to calculate the gravity effect of the topography and sediments, and these effects are removed using global topography and crustal models.
In the second method, we first extract the Moho depth over the study region from the CRUST1.0 model, then evaluate the gravity effect arising from this anomalous Moho, and finally estimate the Moho depth by inversion. According to the results of the first method, the minimum Moho depth is about 12 km in parts of the Indian Ocean and the maximum is about 54 km in the west of the Tibetan Plateau, in accordance with the plate boundaries and well correlated with the prominent tectonic features of the Middle East region. The Moho depth obtained from the second method varies between 7.5 and 49 km; the minimum is in parts of the Indian Ocean and the maximum appears in parts of the Zagros in Iran. Comparing the results of the two methods demonstrates the acceptable performance of the adapted inversion procedure and of the spherical prisms, but the second method failed to estimate acceptable Moho depths, especially at the divergent boundaries of the Red Sea, the Gulf of Aden and the Indian Ocean. The results indicate that the CRUST1.0 model, at least over an area of large extent, is not a suitable model for gravity inversion and Moho depth estimation.
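The tesseroid inversion itself is involved; as a much simpler planar illustration of how a gravity residual maps into Moho relief (the authors use Uieda's spherical method, not this), one can sketch Bott's classical infinite-slab iteration:

```python
import math

G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)

def bott_update(residual_mgal, density_contrast):
    """One iteration of Bott's scheme: convert a gravity residual (mGal)
    into a Moho-relief correction (m) via the infinite-slab formula
    dg = 2*pi*G*drho*dz, i.e. dz = dg / (2*pi*G*drho)."""
    residual_si = residual_mgal * 1e-5          # mGal -> m/s^2
    return residual_si / (2.0 * math.pi * G * density_contrast)

# usage: a +20 mGal residual with a 400 kg/m^3 crust-mantle density contrast
dz = bott_update(20.0, 400.0)   # roughly 1.2 km of relief correction
```

In a full scheme this correction is applied to the current Moho model, the forward gravity effect is recomputed (with tesseroids, in the paper's case), and the loop repeats until the residual is small.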
Keywords: Moho depth, spherical prisms, gravity data inversion, CRUST1.0 crustal model
-
Estimating the shear-wave velocity profile of near-surface layers is highly important in engineering projects. This parameter is used, for example, in geotechnical projects for soil classification and in earthquake-engineering projects for determining the site response. Various methods currently exist for estimating the shear-wave velocity profile; among them, the MASW and MALW methods are widely used because of their speed of execution, low cost and, in some cases, non-destructive nature. One important problem facing surface-wave methods is the mode osculation phenomenon, which occurs in the presence of a strong velocity contrast (the common situation of young sediments deposited on bedrock). This phenomenon can lead to misinterpretation of the dispersion curve and to an erroneous velocity model, which can be very dangerous for decision-making in engineering projects. In this study, in addition to examining the conditions under which mode osculation arises, the performance of the two surface-wave analysis methods is compared in its presence. The results show that, unlike MASW, which is very sensitive to strong velocity contrasts, MALW performs well under these complex conditions. We also show that by using the HVSR method alongside the surface-wave methods, the osculation frequency can be predicted and the resulting H/V curve can be used in joint inversion with the surface-wave data. Clearly, this reduces the inherent non-uniqueness of geophysical methods and increases the accuracy of the final velocity model.
Keywords: multichannel analysis of surface waves, shear-wave velocity profile, microtremor, joint inversion
There are two types of seismic waves: those that propagate inside a medium (body waves) and those traveling along the Earth's surface (surface waves). In recent decades a number of papers dealing with surface waves have been published, but their theoretical description and first applications date back almost a century. Surface waves have in fact been used for many applications since the 1920s: non-destructive testing (even in medical applications), geotechnical studies and crustal seismology. Recently, interest in their applications has increased, both because of the growing demand for efficient methodologies in engineering projects and because recent regulations addressing seismic hazard assessment (for instance Eurocode 8) give the necessary emphasis to determining the vertical shear-wave velocity profile. This parameter is commonly used in geotechnical studies for classifying soil types. Among the various methods for estimating the shear-wave velocity profile, the MASW and MALW methods are the most popular because of their fast execution, low cost and non-destructive nature. These methods are based on analyzing the dispersive properties of Rayleigh and Love waves. In surface-wave methods, correct identification of the modes is essential to avoid serious errors in building the near-surface shear-wave velocity model. Here we consider the case of higher-mode misidentification known as "osculation", where at low frequencies the energy peak shifts from the fundamental to the first higher mode. This jump occurs around a well-defined frequency where the two modes get very close to each other. The problem is known to occur in complex subsurface situations, for example at inversely dispersive sites or in the presence of a strong impedance contrast, such as a soil layer resting on top of bedrock.
This phenomenon can cause a misleading interpretation of the dispersion curve by the operator, which is hazardous for engineering projects. In this paper we investigated the mode osculation phenomenon for both the MASW and MALW methods using synthetic and real datasets. We showed that MALW performs far better in the face of this problem, which is a main drawback of the MASW method. Generally, when a low-velocity layer is present in the subsurface, identification of the Rayleigh-wave fundamental mode (MASW method) becomes almost impossible, while the dispersion modes of the Love waves (MALW method) remain well separated, even in extreme conditions. In addition, we showed that single-station microtremor ellipticity analysis can be quite useful: it can warn of the presence of a strong impedance contrast, it indicates the critical frequency at which mode osculation takes place, and the HVSR data can be used as a constraint in the inversion of the surface-wave data. Thus, performing the HVSR method alongside MASW and MALW can not only predict the osculation frequency and the presence of strong impedance contrasts, but also support joint inversion of the surface-wave data, resulting in a more robust Vs profile. We evaluated the performance of the proposed methods on real and synthetic seismic data, with satisfying results.
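The single-station H/V analysis mentioned above can be sketched in its most basic form. The synthetic three-component record below, and the absence of windowing, smoothing and window averaging that real microtremor processing would add, are simplifying assumptions:

```python
import numpy as np

def hvsr(north, east, vertical, dt):
    """Single-station H/V spectral ratio from three-component records
    (bare-bones: one window, no spectral smoothing)."""
    freqs = np.fft.rfftfreq(len(vertical), dt)
    n_amp = np.abs(np.fft.rfft(north))
    e_amp = np.abs(np.fft.rfft(east))
    v_amp = np.abs(np.fft.rfft(vertical))
    h_amp = np.sqrt((n_amp ** 2 + e_amp ** 2) / 2.0)  # mean horizontal spectrum
    return freqs, h_amp / np.maximum(v_amp, 1e-12)    # guard against /0

# usage: synthetic record whose horizontal energy peaks at 2 Hz
dt = 0.01
t = np.arange(0, 20.0, dt)
north = 3.0 * np.sin(2 * np.pi * 2.0 * t)
east = 3.0 * np.cos(2 * np.pi * 2.0 * t)
vertical = np.sin(2 * np.pi * 5.0 * t)
freqs, ratio = hvsr(north, east, vertical, dt)
f_peak = freqs[np.argmax(ratio)]   # resonance-frequency estimate, ~2 Hz
```

The frequency of the H/V peak is the quantity that, per the abstract, flags the strong impedance contrast and the frequency at which mode osculation is expected.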
Keywords: MASW, MALW, shear-wave velocity profile, HVSR, joint inversion
-
Shear-wave velocity (Vs) is used in engineering projects (civil, geotechnical and geophysical) to determine the dynamic parameters of the soil and to classify it. The average shear-wave velocity down to 30 m depth (Vs30) is used in geotechnics to assess soil liquefaction, and in earthquake engineering to determine the soil period, the site amplification factor and attenuation. The average shear-wave velocity is usually obtained from the shear-wave velocity model derived by inverting the dispersion curve, from downhole seismic surveys, or from seismic refraction. Unfortunately, inverting the dispersion curve to estimate shear-wave velocity is the time-consuming part of surface-wave analysis. In this study we present a method that, without requiring dispersion-curve inversion, estimates not only the average shear-wave velocity (Vsz) but also the average compressional-wave velocity (Vpz) directly from the dispersion curves, down to the penetration depth of the surface waves. For this purpose we need a relationship between the various surface-wave wavelengths and their investigation depths. Estimating this relationship requires a (reference) shear-wave velocity model in the study area, which can be obtained from well data, a refraction profile, or by inverting a dispersion curve. Given the dependence of the Vsz values and the sensitivity of the estimated relationship to Poisson's ratio, the Poisson's ratio is then estimated and the average compressional-wave velocity profile down to depth z (Vpz) is derived from the Vsz profile. Applying the method to real and synthetic seismic data showed that the average shear- and compressional-wave velocities can be estimated with less than 10% uncertainty at sites with very large lateral variations.
Keywords: surface-wave analysis, dispersion curve, wavelength-depth relationship, average shear-wave velocity, average compressional-wave velocity
Shear-wave velocity (Vs), and its travel-time average from the surface to a depth of 30 m known as Vs30, are often used in engineering projects to determine soil parameters, evaluate the dynamic properties of the soil and classify it. This quantity is directly related to an important property of soil and rock, namely their shear strength. The average shear-wave velocity is used in geotechnics to assess soil liquefaction, and in earthquake engineering to determine the soil period, the site amplification coefficient and attenuation. Usually the average shear-wave velocity is obtained from a shear-wave refraction survey, from PS logging, or from the shear-wave velocity profile obtained by inverting the experimental dispersion curve of surface waves. Surface-wave analysis is one of the methods for estimating the shear-wave velocity profile, but inverting the dispersion curve is the time-consuming part of the process, and the inverse problem has a non-unique solution; this becomes more evident when the goal is a two- or three-dimensional shear-wave velocity model. This study provides a method to estimate the average shear-wave velocity (Vsz) as well as the average compressional-wave velocity (Vpz) directly from the dispersion curves of surface waves, without the need to invert the dispersion curves. For this purpose we exploit the relationship between surface-wave wavelength and investigation depth. Estimating the wavelength-depth relationship requires access to a shear-wave velocity model (a reference model) in the study area, which can be obtained from well data, refraction seismic profiles, or by inverting one of the experimental surface-wave dispersion curves. The Vsz is then estimated directly from the dispersion curve using the wavelength-depth relationship.
In addition, because the value of Vsz depends on Poisson's ratio and the estimated wavelength-depth relationship is sensitive to it, we estimate the Poisson's-ratio profile and the average compressional-wave velocity (Vpz) for the study area from the Vsz. For a given range of Poisson's ratios, theoretical dispersion curves of synthetic earth models are determined by forward modeling. Using these dispersion curves and the estimated average shear-wave velocity of the model, the wavelength-depth relationship corresponding to each Poisson's ratio is determined. In the next step, by comparing the experimental and estimated wavelength-depth relationships, one can estimate the Poisson's ratio at each depth. The average compressional-wave velocity (Vpz) is then estimated from the Vsz and the Poisson's ratios. We evaluated the performance of the proposed method by applying it to both a real MASW data set from the USA and synthetic seismic data. The synthetic tests, carried out over a layered earth model with very large lateral variations in shear- and compressional-wave velocities, showed that the average shear- and compressional-wave velocities are estimated with an uncertainty of less than 10%. According to the results, the proposed method can exploit the non-destructive advantages of the surface-wave method in engineering, geotechnical and earthquake-engineering projects to obtain the average shear-wave velocity Vsz.
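Once a layered reference model is available, the quantity Vsz that the method targets is the travel-time average velocity down to depth z (the same definition behind Vs30). A minimal sketch of that average, with an illustrative two-layer site:

```python
def average_velocity(thicknesses, velocities, z):
    """Travel-time average velocity from the surface down to depth z:
    Vz = z / sum(h_i / v_i), the definition behind Vsz and Vs30."""
    t_total, depth = 0.0, 0.0
    for h, v in zip(thicknesses, velocities):
        step = min(h, z - depth)       # clip the last layer at depth z
        t_total += step / v
        depth += step
        if depth >= z:
            break
    if depth < z:                      # extend the deepest layer downward
        t_total += (z - depth) / velocities[-1]
    return z / t_total

# usage: Vs30 for 10 m of 200 m/s soil over 400 m/s material
vs30 = average_velocity([10.0, 50.0], [200.0, 400.0], 30.0)   # -> 300.0
```

Note that this is a harmonic (travel-time) mean, not an arithmetic one, which is why slow shallow layers dominate Vs30.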
Keywords: surface-wave analysis, dispersion curve, wavelength-depth relationship, average shear-wave velocity, average compressional-wave velocity
-
Three-component seismic acquisition samples the seismic wavefield at each station in three directions and records P- and S-wave information simultaneously. In some applications, using the S-wave information together with the P-wave information provides a correct estimate of reservoir properties. Because uniform spatial sampling of the seismic waves is sometimes impossible during acquisition, owing to field conditions or equipment failures, interpolation methods must be used to reconstruct the missing traces. Conventional methods for reconstructing missing traces in three-component seismology are usually run on each component separately, which can damage subtle features in the record. In this research, a method is presented (based on the idea of Stanton and Sacchi) for reconstructing all three components simultaneously, using the quaternion Fourier transform and the projection onto convex sets algorithm (QPOCS). A vector representation of the complex data in the frequency domain can be obtained by placing the real and imaginary parts of each component into the arguments of a quaternion. This allows operators to be applied to both components simultaneously. The advantage of the method stems from the spectral overlap of the components in the frequency-wavenumber domain. The results of applying the method to synthetic and real three-component data are compared with those of applying the POCS algorithm to each component separately; the reconstructions obtained with the QPOCS algorithm are of better quality.
Keywords: three-dimensional interpolation, three-component seismic acquisition, Fourier transform, quaternion, convex set
Three-component (3C) seismic data acquisition samples the seismic wavefield at each station along three Cartesian coordinates simultaneously. Many reservoirs have been discovered and delineated using P waves alone, but P waves by themselves cannot provide a comprehensive description of reservoir characteristics. In some studies, S-wave information is required in addition to P-wave information to obtain a correct estimate of reservoir properties.
With three-component seismic acquisition, P- and S-wave information can be recorded simultaneously. In seismic surveys one often cannot sample the seismic wavefield uniformly along the spatial direction, owing to environmental limitations or instrument malfunctions, so interpolation methods must inevitably be used to reconstruct the missing traces. Reconstruction of missing or noisy traces can be done using projection onto convex sets (POCS). The POCS algorithm is simple and well suited to reconstructing irregularly missing traces on a regular grid using repeated Fourier transforms. Conventional reconstruction of missing traces in three-component acquisition usually applies POCS to each component separately, which can damage subtle features in the record. This research introduces a method to reconstruct all three components at once using the quaternion Fourier transform and projection onto convex sets (QPOCS). Quaternions are a non-commutative number system that extends the complex numbers. Just as ordinary complex numbers can be represented in two dimensions, quaternions can be represented in four dimensions. Quaternions were first introduced by William Rowan Hamilton while looking for a way to extend the complex numbers to three dimensions; he knew how to add and multiply three-dimensional numbers, but he sought a way to divide such numbers by each other. In 1843, Hamilton discovered that division requires a fourth dimension. Quaternion algebra is often denoted by H (in honor of Hamilton). A vector representation of two-component data in the frequency domain can be obtained by putting the real and imaginary parts of each component into the arguments of a quaternion. This representation allows operators to be applied to both components simultaneously.
The data are transformed to the frequency-wavenumber domain by the quaternion Fourier transform (QFT), and a single spectrum for all components is defined using the polar representation of quaternions. Quaternions have other applications in seismic data processing as well, such as computing spectral attributes, multi-component velocity analysis and multi-component deconvolution. The advantage of this method stems from the spectral overlap of the components in the frequency-wavenumber domain: the orthogonality of the input components is preserved (the signals are not mixed) and similarities between the components are maintained, which improves the quality of the reconstruction. The method was implemented in MATLAB, and the results of applying it to three-component synthetic and real seismic data are compared with those of the POCS algorithm applied to each component separately. The reconstructions obtained with the QPOCS algorithm are of better quality, and the output data contain a lower percentage of artifacts than POCS applied to each component alone.
Keywords: three-dimensional interpolation, three-component seismic acquisition, Fourier transform, quaternion, convex set
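The projection step that QPOCS generalizes can be shown in its scalar, single-component form; the shrinking-threshold schedule and the single-plane-wave example below are illustrative assumptions, and the quaternion transform itself is not reproduced:

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=50, p0=0.9):
    """Scalar POCS in the f-k domain: threshold the spectrum, transform
    back, and re-insert the known traces every iteration. The paper's
    QPOCS applies the same projection with a quaternion Fourier transform,
    so all three components share a single spectrum."""
    rec = data * mask
    for k in range(n_iter):
        spec = np.fft.fft2(rec)
        tau = p0 * np.abs(spec).max() * (1.0 - k / n_iter)  # shrinking threshold
        spec[np.abs(spec) < tau] = 0.0
        rec = np.real(np.fft.ifft2(spec))
        rec = data * mask + rec * (1.0 - mask)  # keep observed traces fixed
    return rec

# usage: one dipping plane wave with six dead traces
nt, nx = 64, 32
it, ix = np.meshgrid(np.arange(nt), np.arange(nx), indexing="ij")
data = np.cos(2 * np.pi * (3 * it / nt + 5 * ix / nx))
mask = np.ones((nt, nx))
mask[:, [3, 7, 12, 18, 25, 29]] = 0.0       # simulate missing traces
rec = pocs_interpolate(data, mask)
err = np.linalg.norm(rec - data) / np.linalg.norm(data)
```

Because the event is a single plane wave, its f-k spectrum is sparse and the iteration recovers the dead traces almost exactly; the quaternion version applies the identical threshold-and-reinsert projection to one shared multi-component spectrum.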
-
Spatial aliasing is an unwanted side effect that produces artifacts during seismic data processing, imaging, and interpolation. It is often caused by insufficient spatial sampling of seismic data and frequently appears in CMP (common mid-point) gathers. To tackle this artifact, several techniques have been developed in the time-space domain as well as in frequency domains such as frequency-wavenumber, frequency-space, and frequency-time. The main advantages of seismic interpolation in the time-space domain over the frequency domain are: a) the frequency components of the initial signals are preserved, and b) the prior knowledge that a seismic event consists of many plane-wave segments can be used. Using the latter advantage, a seismic event can be predicted by following the continuity of seismic events in a trace-by-trace manner. This process, which has become popular in seismic data reconstruction and imaging in recent years, is known as predictive painting. We use predictive painting to predict the wavefronts and two-way traveltime curves in regularly sampled CMP gathers, and then increase the number of traces by cubic interpolation. The amplitude of each interpolated trace is obtained by averaging the amplitudes of the neighbouring traces. The performance of the proposed method is demonstrated on several synthetic seismic data examples as well as a field data set. Keywords: Spatial aliasing, interpolation, time-space, local slope, predictive painting
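The prediction step can be illustrated with a minimal slope-based sketch (this is not the predictive-painting operator itself, which propagates local slopes estimated by plane-wave destruction; the midpoint-averaging scheme and parameter names are illustrative):

```python
import numpy as np

def interp_midtrace(left, right, slope, dx, dt):
    """Predict the trace midway between two neighbours by shifting each
    along the local slope (seconds per unit offset) and averaging.
    dx is the offset from each neighbour to the interpolated position."""
    t = np.arange(len(left)) * dt
    shift = slope * dx
    # an event at time t0 on `left` arrives at t0 + shift on the midpoint trace
    pred_l = np.interp(t - shift, t, left, left=0.0, right=0.0)
    pred_r = np.interp(t + shift, t, right, left=0.0, right=0.0)
    return 0.5 * (pred_l + pred_r)
```

Averaging the two slope-shifted predictions keeps the event aligned, whereas a naive average of the neighbours would smear a dipping event into two bumps.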
-
Anisotropic diffusion filtering can, as an efficient method, reduce the level of random noise in many kinds of data; however, when applying this filter to seismic data with high noise levels, caution must be exercised regarding the appearance of unreal events in the section. In this paper, as a solution to this problem, an automatic intelligent framework is introduced in which the optimal filter output for each point of the input data is extracted through an adaptive neuro-fuzzy inference system. The training of the neuro-fuzzy network is determined using the output of the anisotropic diffusion filter together with fuzzy clustering by the C-means algorithm. The experiments carried out in this research show that, compared with the conventional anisotropic diffusion filter, the proposed method noticeably improves performance, by up to 32%, in achieving synthetic sections with a higher signal-to-noise ratio. On real data, in addition to attenuating noise in the section, it also preserves the coherent events of the section more precisely. Keywords: random noise attenuation, anisotropic diffusion filter, adaptive neuro-fuzzy inference system, fuzzy clustering. Summary: Anisotropic diffusion filtering (ADF) is widely used as an efficient method in random noise attenuation problems, and various modifications of its original version have been proposed. The main reason could be that ADF preserves edge features with acceptable performance while attenuating noise. In seismic data processing, however, it should be noted that using ADF can cause severe changes (artifacts) in zones that are highly contaminated with random noise. In this paper, the optimum filter output is derived by introducing an automatic framework based on two artificial intelligence (AI) algorithms: the adaptive neuro-fuzzy inference system (ANFIS) and fuzzy c-means clustering (FCM). The neuro-fuzzy network is trained using the original data, successive ADF values calculated for each data point, and the FCM output, combined in a weighted-averaging manner adapted to the estimated noise level.
The trained network is then generalized to all data, and thus the ANFIS-optimized version of ADF, called here AOADF, is achieved. Comparison of the results of the ADF and AOADF experiments reveals that in synthetic common mid-point (CMP) gathers, the proposed method improves the peak signal-to-noise ratio (PSNR) by up to 40% more than ADF (in the best case), and in real CMP and common-offset sorted gathers, the performance of AOADF is considerably better than that of ADF in terms of random noise attenuation without adding unwanted artifacts and while preserving the continuity of coherent components. Introduction: As an inevitable phenomenon in seismic data acquisition, random noise affects the processing and interpretation of seismic reflection data. Although random noise is expected to decrease dramatically with increasing stack fold, observation of random noise at far offsets and later arrivals, especially in relatively deep acquisitions, is common. This paper intends to enhance the signal-to-noise ratio (SNR) of seismic reflection data by attenuating background random noise while preserving reflections, utilizing the powerful potential of ANFIS and FCM in model discrimination and the ability of ADF in random noise attenuation. The proposed method mainly seeks to improve the performance of ADF in zones where the input data is highly contaminated with random noise and where the ADF output usually causes severe unwanted artifacts. The latter goal is achieved mainly by increasing the weight of the FCM output value in the averaging scheme designed for calculating the optimum output. Methodology and Approaches: In the interior of a segment of the input data, nonlinear isotropic diffusion behaves almost like a linear diffusion filter, but at edges diffusion is inhibited. Therefore, noise at edges cannot be eliminated successfully by that process.
As a solution to this problem, anisotropic models take into account not only the modulus of the edge detector but also its direction. ANFIS, as a neuro-fuzzy system, combines the learning capabilities of neural networks with the functionality of a fuzzy inference system. FCM is a robust method for data analysis and model construction, more natural than hard clustering in almost all problems. Data on the boundaries between several classes are not forced to belong fully to one of the classes; rather, they are assigned membership degrees between 0 and 1 indicating their partial membership. In the method adopted in this research, at the very first stage, ADF with several different diffusion coefficients (5 to 80) is calculated for a small part of the input data. At the next step, the standard deviations of the ADF values are compared with each other: a point with a higher standard deviation can be considered a noise-related data point, and vice versa. Hence, sorting the points and selecting, say, the first five percent of the data yields a subset that is less likely to be masked by random noise. Using this set of selected data and a weighted average of the initial ADF values, the original values of the input data, and the output of the FCM analysis, the training pairs for the ANFIS network are constructed. The AOADF output is then obtained automatically by generalizing the trained network to all data. Results and Conclusions: Although ADF is a strong method for random noise attenuation in many problems, in seismic data processing ADF can cause artifacts in zones that are highly contaminated with random noise.
In this research, as the comparison of the results of AOADF and ADF on synthetic and real seismic datasets indicates, the AOADF method performs considerably better in random noise attenuation and in preserving the continuity of coherent events without adding significant artifacts. Keywords: Anisotropic Diffusion, ANFIS, Random Noise Attenuation, Seismic, Fuzzy C-Means Clustering
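The edge-stopping behaviour of ADF described above can be sketched with the classic Perona-Malik scheme (a simplified stand-in for the filter used in the paper; `kappa` and `step` are illustrative parameters):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=30, kappa=0.1, step=0.2):
    """Perona-Malik diffusion: smooth the interior of segments while the
    conductance g() shuts diffusion down across strong gradients (edges)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping conductance
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u  # gradients to the four neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Small (noise-scale) gradients get conductance near 1 and are smoothed; a gradient much larger than `kappa` gets conductance near 0, so reflector-like edges survive.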
-
Kirchhoff migration is one of the simplest and most common algorithms for migrating seismic data. Since the Kirchhoff migration operator is the adjoint of the modeling operator, it cannot correctly reconstruct reflection amplitudes, and the final migrated image will not have sufficient resolution. Least-squares migration was introduced to address this problem and to recover amplitudes correctly, but because of the large dimensions of the matrices involved, the problem must be solved iteratively, which is time-consuming. Although the least-squares solution improves amplitudes compared with the adjoint solution, the resulting image still lacks sufficient resolution. In this paper, a high-resolution migration method is presented that uses 1-norm regularization to inject sparsity into the least-squares Kirchhoff solution. Seismic migration is formulated as a sparsity-constrained optimization problem and solved with the Bregmanized operator splitting algorithm. Desirable features of this algorithm include fast convergence and the ability to solve constrained problems without matrix inversion, using only the migration and modeling operators. Results on simulated data show much better performance of the proposed algorithm in terms of resolution compared with the conventional Kirchhoff migration algorithm. Least-squares migration can also reduce the effects of incomplete data in the migrated image; hence the proposed method, by enhancing the quality of the least-squares migrated image, produces a migrated image of incomplete data with higher resolution. Results of applying the method to synthetic and real data demonstrate its good performance.
Keywords: Kirchhoff migration, least-squares migration, resolution, 1-norm regularization, sparsity, sparse optimization, Bregmanized operator splitting
For decades, Kirchhoff migration has been one of the simplest migration algorithms and the most frequently used migration method in industry, owing to its relatively low computational cost and its flexibility in handling acquisition and topography irregularities. The standard seismic migration operator can be regarded as the adjoint of a seismic forward modeling operator, which acts on a set of subsurface parameters to generate the observed data. Such adjoint operators provide only an approximate inverse of the forward modeling operator and recover only the times of events (Claerbout, 1992). They cannot retrieve the amplitudes of reflections, which leads to a decrease in the resolution of the final migrated image. The standard seismic migration (adjoint) operators can be modified to better approximate the inverse operators. Least-squares migration (LSM) techniques have been developed to fully invert the forward modeling procedure by minimizing the difference between observed and modeled data in a least-squares sense. LSM is able to reduce (Kirchhoff) migration artifacts, enhance resolution, and retrieve seismic amplitudes. Although implementing LSM instead of conventional migration leads to resolution enhancement, it also brings new numerical and computational challenges that need to be addressed properly. Due to the ill-conditioned nature of the inverse operator and the incompleteness of the data, the method generates unavoidable artifacts which severely degrade the resolution of the migrated image obtained by the non-regularized LSM method. The instability of LSM methods suggests developing a regularized algorithm capable of including reasonable physical constraints.
When the seismic wavelet is included in the migration operator, migration generates the earth reflectivity image, which can be considered sparse; applying a sparseness constraint, e.g., via minimization of the 1-norm of the reflectivity model, therefore helps to regularize the model and prevent noisy artifacts (Gholami and Sacchi, 2013).
In this article, based on Bregmanized operator splitting (BOS), we propose a high-resolution migration algorithm that applies sparseness constraints to the solution of least-squares Kirchhoff migration (LSKM). Bregmanized operator splitting is employed as the solver of the resulting sparsity-promoting LSKM for its simplicity, efficiency, stability, and fast convergence. Independence from matrix inversion and a fast convergence rate are two main properties of the proposed algorithm. Numerical results from field and synthetic seismic data show that migrated sections generated by this 1-norm regularized Kirchhoff migration method are more focused than those generated by conventional Kirchhoff or least-squares migration.
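The matrix-inversion-free character claimed above can be illustrated with a simplified proximal-gradient (ISTA) iteration for the same sparsity-promoting objective; BOS augments such splitting steps with Bregman updates of the data residual, which are omitted in this sketch:

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sparse_migration(A, d, lam, n_iter=500):
    """Proximal-gradient iterations for min 0.5||A m - d||^2 + lam ||m||_1,
    touching the data only through A (modeling) and A^T (migration)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ m - d)             # adjoint (migration) of the residual
        m = soft_threshold(m - step * grad, step * lam)
    return m
```

Only operator applications appear inside the loop; no normal-equation matrix is ever formed or inverted, which is the property the abstract attributes to BOS.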
Regular spatial sampling of the data at the Nyquist rate is another major challenge, which may not be achieved in practice due to coarse source-receiver distributions and possible gaps in the recording lines. The proposed model-based migration algorithm is able to handle these incompleteness issues and is stable in the presence of noise in the data. In this article, we tested the performance of the proposed method on synthetic data in the presence of coarse sampling and acquisition gaps. The results confirm that the proposed sparsity-promoting migration is able to generate accurate migrated images from incomplete and inaccurate data. Keywords: Kirchhoff migration, Inverse operator, Least-squares, Bregmanized operator splitting, Sparsity-constrained, Incomplete data
-
Inverse methods are used to solve problems in many scientific disciplines, but the conditions governing geophysical data complicate their use. These conditions include the non-repeatability of geophysical data acquisition due to high operational costs, as well as the effect that discretization has on the solution of the problem. In general, geophysical inverse problems are classified as ill-posed problems. This means that choosing a suitable solution requires regularization and the imposition of constraints consistent with the physics governing the problem. Estimating a velocity model from the inversion of seismic traveltimes is one such problem in seismology. Tikhonov regularization and total variation (TV) regularization are among the conventional methods for solving such problems. The first method is very capable of reconstructing smooth signals but does not recover discontinuities and edges well; in contrast, the second method reconstructs signal edges well but distorts smooth parts. In this paper, a regularization method based on the combination of second-order Tikhonov and total variation solutions is presented. To estimate the velocity model from seismic traveltimes in a vertical seismic profile, the capabilities of the conventional regularization methods are compared with the combined method on synthetic models. The results of applying the methods to velocity models containing both edges and smooth parts indicate that the combined method overcomes the shortcomings of the two conventional methods. In addition, the three methods are applied to traveltimes recorded in a real vertical seismic profile (VSP), and the results are compared and discussed. Keywords: traveltime inversion, inverse problems, Tikhonov regularization, total variation regularization, vertical seismic profile, regularization parameter. Summary: A variety of methods have been presented to invert arrival times of seismic waves for the velocity distribution over the survey area. In the real world, the velocity distribution model consists of blocky structures as well as smoothly varying parts.
In such cases, implementation of Tikhonov regularization will recover the smoothly varying portion of the velocity model, while total variation (TV) regularization is capable of recovering the blocky parts of the velocity model. In this research, a technique for solving inverse problems based on a combination of second-order Tikhonov and TV regularizations is proposed. The methods are tested on both synthetic and real arrival times, and the results are presented.
Introduction: Inverse problems are applied in different disciplines, but in geophysical applications they become complicated due to the special conditions of geophysical data: geophysical data are usually collected only once because of high acquisition costs, and discretization affects the solution of the problem. In general, the problems encountered in geophysical inversion are classified as ill-posed inverse problems. Inverse problems are divided into three categories: over-determined, even-determined, and under-determined. Over-determined systems often have no solution, although it is possible for an over-determined system of equations to have many solutions or exactly one solution. For even-determined problems, a unique solution is possible. A system with fewer equations than unknowns is under-determined; in many cases such systems have infinitely many solutions, and regularization is required to choose a suitable model from those fitting the data. Two conventional regularization methods are Tikhonov regularization and TV regularization.
Methodology and Approaches: For a linear inverse problem $d = Gm + e$, where $d \in \mathbb{R}^m$ is the data, $m \in \mathbb{R}^n$ is the model parameter vector, $e \in \mathbb{R}^m$ is noise, and $G \in \mathbb{R}^{m \times n}$ is the forward operator, Tikhonov regularization is expressed as $J = \|d - Gm\|_2^2 + \lambda^2 \|Lm\|_2^2$, where $\lambda > 0$ is the regularization parameter controlling the conditioning of the problem and the matrix $L$ is a regularization operator. Tikhonov regularization has the advantage of keeping the problem linear, so that $m$ can be determined analytically by solving the normal equations $m = (G^TG + \lambda^2 L^TL)^{-1} G^T d$.
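The Tikhonov normal-equations solution is a one-liner in practice; a minimal sketch with the identity as the default regularization operator (operator choices here are illustrative):

```python
import numpy as np

def tikhonov(G, d, lam, L=None):
    """Solve min ||d - G m||^2 + lam^2 ||L m||^2 via the normal equations
    m = (G^T G + lam^2 L^T L)^{-1} G^T d. L defaults to the identity."""
    n = G.shape[1]
    if L is None:
        L = np.eye(n)
    return np.linalg.solve(G.T @ G + lam**2 * (L.T @ L), G.T @ d)
```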
The TV regularization restricts the domain of candidate solutions to those having a sparse gradient, and can be expressed as the minimization of $J = \|d - Gm\|_2^2 + \lambda \|Lm\|_1$, whose solution $m$ can be determined iteratively by solving the weighted normal equations $m_{k+1} = \big(G^TG + \tfrac{\lambda}{2} L^T W_k L\big)^{-1} G^T d$, where $W_k$ is a diagonal weighting matrix recomputed at each iteration. To recover models containing both blocky and smooth parts, a combined method is proposed. Assume the signal $x$ is a combination of two signals, $x = x_1 + x_2$, where $x_1$ is a blocky component and $x_2$ is a smooth component of $x$. The proposed method can be approximated by the new minimization problem $\{\hat{x}_1, \hat{x}_2\} = \arg\min_{x_1, x_2} \|y - G(x_1 + x_2)\|_2^2 + \lambda_1 \|L_1 x_1\|_1 + \lambda_2 \|L_2 x_2\|_2^2$, where $\lambda_1, \lambda_2$ are regularization parameters, and $L_1$ and $L_2$ are first-order and second-order derivative operators. An iteratively re-weighted least squares (IRLS) technique is used as a fast and efficient algorithm for minimizing the cost function. The solution is determined iteratively by solving the coupled weighted normal equations $\big(G^TG + \tfrac{\lambda_1}{2} L_1^T W L_1\big) x_1 = G^T y - G^TG\, x_2$ and $\big(G^TG + \lambda_2 L_2^T L_2\big) x_2 = G^T y - G^TG\, x_1$, where $W = \mathrm{diag}\big(1/(|(L_1 x_1^{(k-1)})_i| + \varepsilon)\big)$, $i = 1, \dots, M$, is a diagonal matrix and $\varepsilon$ is a small positive number. In this study, a method similar to the L-curve method is also presented for determination of the regularization parameters.
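The IRLS handling of the 1-norm term can be sketched for the TV part alone (the combined solver alternates two such solves; $G$, $\lambda$, and the difference operator here are illustrative):

```python
import numpy as np

def irls_tv(G, y, lam, n_iter=30, eps=1e-4):
    """IRLS for min ||y - G x||^2 + lam ||D x||_1 with D a first-difference
    operator: the 1-norm is replaced by re-weighted quadratic normal equations."""
    n = G.shape[1]
    D = np.diff(np.eye(n), axis=0)                 # first-order derivative operator
    x = np.linalg.lstsq(G, y, rcond=None)[0]       # unregularized start
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)            # IRLS weights approximating |.|_1
        A = G.T @ G + 0.5 * lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, G.T @ y)
    return x
```

Flat regions get huge weights, forcing their differences toward zero, while a genuine jump gets a weight near one and is left intact, which is why TV preserves blocky structure.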
Results and Conclusions: In this paper, a technique for solving inverse problems based on a combination of second-order Tikhonov and TV regularizations is proposed. The method eliminates their individual weaknesses and recovers both the blocky and the smooth portions of the model. Keywords: Traveltime Inversion, Inverse Problems, Tikhonov Regularization, TV Regularization, Vertical Seismic Profile (VSP), Regularization Parameters
-
Magnetotellurics is an exploration method based on measuring the Earth's natural electric and magnetic fields; it is increasingly used in geological applications, the petroleum industry, geothermal source detection, and crust and lithosphere studies. In this work, the discrete wavelet transform of magnetotelluric signals was performed. The discrete wavelet transform decomposes signals into coefficients at multiple scales, where the noise and signal portions are separable. Therefore, noise can be discarded at each scale: a threshold value is constructed from the coefficients of that scale, and the noise coefficients are removed by thresholding the coefficients with the proper values. Proportional threshold values can be used to remove white and 1/f noise from the time series. A new signal is then reconstructed from the cleaned coefficients. This approach is widely used in various fields of science, from image processing to seismic studies. This work tried to show the effectiveness of the technique in reducing pervasive noise in magnetotelluric signals. The results emphasize the advantage of wavelet techniques in the magnetotelluric data denoising process. Keywords: Wavelets, Magnetotelluric, time series, Denoising
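The decompose-threshold-reconstruct cycle described above can be sketched with a one-level Haar transform (real processing uses several decomposition levels and scale-dependent thresholds; the threshold value here is an illustrative assumption):

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar DWT, soft-threshold the detail coefficients, reconstruct.
    Expects an even-length signal."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation (low-pass) coefficients
    d = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail (high-pass) coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft thresholding
    out = np.empty_like(s)
    out[0::2] = (a + d) / np.sqrt(2)       # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```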
-
A key problem in seismic data processing and imaging is the correct estimation of the propagation velocity of seismic waves. Full waveform inversion is a modern method for estimating subsurface parameters such as velocity, density, and quality factor. It is a geophysical inverse problem in which one seeks the parameters that explain the recorded waveforms (seismograms). Full waveform inversion is cast as an optimization problem by defining a misfit function as the difference between the observed (recorded) and computed waveforms. In 2-D and 3-D imaging, the size of the model parameter space and the computational complexity of forward modeling do not permit the use of global optimization methods, so the misfit function must be minimized by local optimization methods. The minimization is usually carried out iteratively using gradient methods based on the first and second derivatives of the misfit function. This paper compares the performance of two optimization algorithms: the Gauss-Newton algorithm using the main diagonal of the pseudo-Hessian, and the quasi-Newton L-BFGS algorithm. The former is a standard Gauss-Newton algorithm with modifications to the explicit computation of the Hessian matrix in order to reduce the computational burden, while L-BFGS is a quasi-Newton algorithm that does not require explicit computation of the Hessian. The performance and efficiency of these two optimization algorithms are examined qualitatively and quantitatively on a synthetic velocity model. According to the results, both methods successfully reconstruct the true model in qualitative terms.
Moreover, a quantitative examination of the misfit between the true and inverted models for both algorithms leads to the conclusion that the Gauss-Newton algorithm with the main diagonal of the pseudo-Hessian performed relatively better over the 40 iterations carried out in this study. Keywords: seismic imaging, acoustic full waveform inversion, nonlinear optimization, Gauss-Newton optimization, quasi-Newton optimization, Limited-Memory BFGS, pseudo-Hessian. Summary: Today, the significance of velocity model estimation in seismic migration, as well as in time-to-depth conversion of seismic sections, is clear to every geophysicist. From a practical point of view, approaches such as well-logging techniques or direct observations cannot provide a reliable description of regional-scale physical properties of the earth. Seismic imaging is a tool to describe the earth's physical properties, and one imaging technique highly welcomed by geoscientists today is full waveform inversion (FWI). Full waveform inversion, an efficient method in seismic imaging, uses all the information available in the seismograms, including amplitude, phase, and first-arrival time, to judge the physical properties of the earth. In fact, FWI is posed as an optimization problem by defining a misfit function that measures the differences between the observed and computed waveforms. In FWI, the size of the model parameter space does not allow the use of global minimization algorithms; therefore, we minimize the misfit function by local optimization methods. A waveform inversion problem is usually solved iteratively by gradient-based methods. In this paper, we have compared two gradient optimization algorithms: the Gauss-Newton algorithm based on the main diagonal of the pseudo-Hessian matrix (GN-DPH) and the limited-memory BFGS (L-BFGS).
Although L-BFGS does not need computation of the Hessian matrix, according to numerical tests on synthetic models we found that the GN-DPH results are more accurate than those of L-BFGS after 40 iterations. Introduction: From a general point of view, in oil and gas exploration, approaches for delineating the physical properties of the earth can be divided into direct and indirect solutions. In large-scale seismic problems, we have to focus on indirect solutions; consequently, we use imaging techniques to extract the desired parameters of the earth. There are various approaches for imaging and estimating the earth's elastic parameters. FWI is one of the efficient imaging techniques that has recently become popular in the seismic community.
Methodology and Approaches: We have compared one of the Newton-based methods, namely Gauss-Newton based on the main diagonal of the pseudo-Hessian matrix (GN-DPH), with the limited-memory BFGS (L-BFGS). For the forward modeling, we solved the 2D acoustic wave equation using a finite-difference approximation equipped with a second-order absorbing boundary condition in the frequency domain. To estimate the gradient vector efficiently, the adjoint-state method (Plessix, 2006) was used. In the GN-DPH algorithm, the method developed by Shin et al. (2006) was applied to compute the Hessian matrix, and in the L-BFGS algorithm we followed a preconditioned L-BFGS approach developed by Nocedal (1980). In the numerical example, the performance of the aforementioned algorithms is examined.
Results and Conclusions: Comparing the two optimization methods in terms of gradient preconditioning, we found that both methods succeeded in estimating the true velocity model. Analysis of the model estimation error revealed that the GN-DPH method was more efficient than L-BFGS during the first 40 iterations performed in the study. The decrease of the normalized misfit function values also showed the advantage of the GN-DPH algorithm over L-BFGS. There are two justifications for this: first, the convergence rate of GN-DPH is of second order, whereas that of L-BFGS is superlinear; second, valuable information exists on the main diagonal of the linear part of the Hessian matrix, which acts as a preconditioner in the GN-DPH algorithm. On the other hand, it should be noted that with an increasing number of iterations, better results can be expected from L-BFGS, since its Hessian approximation includes both linear and nonlinear parts and is in that sense more complete than that of the other investigated method. It is also interesting to note that increasing the complexity of the true model yields a more nonlinear misfit function, which requires a more efficient Hessian for preconditioning the gradient vector; there, the importance of the L-BFGS algorithm will be highlighted. Keywords: Seismic Imaging, Acoustic Full Waveform Inversion, Nonlinear Optimization, Gauss-Newton Optimization, Quasi-Newton Optimization, Limited Memory-BFGS, Pseudo-Hessian
-
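The GN-DPH update can be caricatured in a few lines: precondition the gradient by the main diagonal of $J^TJ$ instead of inverting the full Hessian. This is a toy sketch on a generic least-squares misfit, not the frequency-domain FWI implementation; `step` is an illustrative damping factor:

```python
import numpy as np

def gn_diag_pseudo_hessian(forward, jacobian, d_obs, m0,
                           n_iter=500, step=0.5, eps=1e-8):
    """Gradient iterations preconditioned by the main diagonal of J^T J,
    a toy analogue of the GN-DPH scheme described in the abstract."""
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_iter):
        r = forward(m) - d_obs             # data residual
        J = jacobian(m)                    # Frechet-derivative (sensitivity) matrix
        diag_h = np.sum(J * J, axis=0)     # main diagonal of J^T J
        m -= step * (J.T @ r) / (diag_h + eps)
    return m
```

The diagonal rescales each parameter's gradient by its own sensitivity, which is the cheap preconditioning effect the abstract credits to the pseudo-Hessian.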
Time-frequency analysis of seismic data, which is nonstationary, is an important technique in seismic data processing and interpretation. Knowing how the frequency content of a signal varies with time is important, since it allows determination of the local peak frequency (the frequency with maximum amplitude) of seismic data. In this paper, a method is used for performing the time-frequency transform locally, based on inverse problems, which removes the need to select a time window. Since the solution of inverse problems is always accompanied by non-uniqueness and instability, this study uses shaping regularization, a quadratic regularization, to represent a nonstationary seismic signal in the time-frequency domain. The transform considered here is based on the Fourier transform, and the Fourier coefficients are determined by an inverse method in which the smoothing radius controls the shaping regularization. The key idea of this method is to minimize the error between the input signal and all of its Fourier components simultaneously, with control on time resolution, which leads to a smoother representation in the time-frequency domain. The performance of the method is examined by applying it to synthetic and real seismic data, and as a seismic application, the method is used to estimate the attenuation coefficient by determining the centroid frequency shift in seismic data. Keywords: spectral decomposition, quadratic regularization, attenuation coefficient, centroid frequency shift. Time-frequency analysis is an important technique in the processing and interpretation of nonstationary seismic data. Due to its limitations in analyzing nonstationary signals, the Fourier transform cannot be used for time-frequency representation. Time-frequency transforms such as the short-time Fourier transform, the wavelet transform, and the S-transform are common tools in the study of nonstationary characteristics of seismic data. Based on regularized least-squares inversion, Liu et al. (2009) proposed a new method of time-varying frequency characterization of nonstationary seismic signals. In this paper, we applied the method of Liu et al.
(2009) by designing an invertible nonstationary time-frequency transform called the local time-frequency (LTF) transform (Liu and Fomel, 2010). This method generates a time-frequency characterization without sliding windows. The LTF transform aims at depicting the nonstationary character of seismic data. It uses a Fourier basis to match the target signal under a regularized least-squares norm and provides an invertible time-frequency representation $d(x) \approx \sum_n b_n(x)\,\psi_n(x)$, where $b_n(x)$ are the Fourier coefficients and $\psi_n(x) = e^{i\Omega_n x}$ are the Fourier basis functions. The use of a nonstationary regression makes it possible for the coefficients to vary with $x$. In the linear case, they can be obtained by solving the least-squares problem $\min_m \|d - Lm\|_2^2$, where $L$ applies the Fourier basis to the coefficient vector $m$. The solution $\hat{m} = \big[\lambda^2 I + S(L^TL - \lambda^2 I)\big]^{-1} S L^T d$, where $\hat{m}$ denotes the least-squares estimate of $m$, $L^T$ denotes the adjoint operator, and $S$ is a smoothing (shaping) operator, was introduced by Fomel (2007) using shaping regularization. The $\lambda$ scaling in this solution controls the relative scaling of the forward operator $L$. The key idea is to minimize the error between the input signal and all of its Fourier components simultaneously using a regularized nonstationary regression (Fomel, 2009) with control on time resolution. The transform can provide LTF representations for common seismic data interpretation tasks such as Q-factor estimation. Seismic waves lose energy while traveling through the earth due to a variety of phenomena, among them attenuation. Attenuation refers to the loss of energy caused by phenomena other than geometrical spreading and depends on the characteristics of the transmitting medium. Generally, attenuation is expressed in terms of the quality factor (Q), a dimensionless parameter inversely proportional to the attenuation coefficient. Experiments show that the quality factor is controlled by the elastic properties of the formation and its fluid content. Hence, as one of the most important attributes in seismic exploration, it is used to directly identify hydrocarbon reserves on seismic sections (Hedlin et al., 2001).
In this paper, we present a procedure that uses the differences in seismic reflection time-frequency spectra to estimate relative seismic attenuation in a carbonate reservoir with fractures and voids. It is difficult to determine the seismic reflections at the top and bottom of such a reservoir, as required by the conventional amplitude-ratio and frequency-spectra-ratio methods. In this study, we instead use the difference of seismic reflection time-frequency coefficients to estimate the relative seismic attenuation. Gu and Stewart (2006) considered the special case where the incident spectrum has a Gaussian distribution and discussed the downshift of the reflection centroid frequency. Here, we give an analytical expression for the difference of reflection spectra corresponding to two frequencies in terms of the attenuation coefficient. We show that the differences of reflection spectra at the two frequencies that are symmetric about the centroid frequency and separated by twice the incident signal's standard deviation can be used to calculate the attenuation coefficient. As a time-frequency representation tool, the LTF transform (Liu and Fomel, 2010) is used to study the seismic wave attenuation coefficient in synthetic and real field data examples. Keywords: attenuation coefficient estimation, regularized least-squares inversion, centroid frequency, local time-frequency transform
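The centroid-frequency route to attenuation can be sketched for the Gaussian-spectrum case discussed above (Gu and Stewart, 2006): for a Gaussian incident spectrum, the attenuation $e^{-\pi f t / Q}$ shifts the centroid down by exactly $\pi t \sigma_s^2 / Q$. The frequency grid and parameter values below are illustrative:

```python
import numpy as np

def centroid(freqs, spec):
    """Spectral centroid f_c = sum(f |S|) / sum(|S|)."""
    w = np.abs(spec)
    return np.sum(freqs * w) / np.sum(w)

def estimate_q(freqs, src_spec, rec_spec, traveltime):
    """Q from the centroid-frequency downshift, exact for a Gaussian incident
    spectrum: f_r = f_s - pi * t * sigma_s^2 / Q."""
    f_s = centroid(freqs, src_spec)
    f_r = centroid(freqs, rec_spec)
    w = np.abs(src_spec)
    var_s = np.sum((freqs - f_s) ** 2 * w) / np.sum(w)  # sigma_s^2 of the source
    return np.pi * traveltime * var_s / (f_s - f_r)
```

In practice the local spectra entering `centroid` would come from an LTF-type time-frequency decomposition rather than a single Fourier spectrum.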
-
At present, the main enhanced oil recovery methods applied in the country's oil reservoirs are water injection and gas injection. Gas injection, e.g. CO2, is one of the most practical methods of increasing the recovery factor of oil fields. In the context of seismic attribute studies, the compatibility of Gassmann theory and the Greenberg-Castagna relations with CO2-saturated conditions was investigated using laboratory data. Although these relations have been used for years and have given relatively valid and acceptable results, they are usually based on assumptions that sometimes produce results far from reality. In this research, CO2 in the dissolved phase was injected into sandstone samples at reservoir pressure at laboratory scale, and the injection process was studied by transmitting and recording elastic waves. By varying effective parameters such as confining pressure (at reservoir level), pore pressure (at reservoir level), the frequency of the transmitted wave, and CO2 concentration at each stage of the experiments, their effect on seismic wave velocity and amplitude was examined. Finally, based on the results of a wide range of experiments, relations were developed that estimate seismic wave velocity and amplitude as functions of effective pressure, frequency, and concentration. Validation shows that the agreement of the proposed relations with laboratory observations exceeds 90 percent, which is better than that of the other relations in use. Keywords: rock physics, elastic wave velocity, elastic wave amplitude, reservoir conditions, Gassmann relations, Greenberg-Castagna. Summary: Water injection and gas injection are the two major enhanced oil recovery methods in Iran. Gas (e.g. CO2) injection is one of the most applicable methods of enhancing oil recovery in oil fields. In order to study the behaviour of seismic attributes, the compatibility of the predictions made by Gassmann theory and the Greenberg-Castagna equations with CO2-saturated conditions is investigated using laboratory data. It should be noted that these equations are based on assumptions that do not always represent the real situation; therefore, their predictions are liable to be incompatible with real-world wave behaviour.
In this research, CO2 in the dissolved phase is injected into pressurized sandstone samples at laboratory scale, and elastic waves are utilized to investigate the injection process. The variation of the propagation velocity and amplitude of the seismic waves is studied against effective parameters, e.g. confining pressure (close to reservoir pressure), pore pressure (close to reservoir pressure), transmitted wave frequency, and CO2 density and phase. We have also used the laboratory data collected for wave propagation at the supercritical saturation state to investigate the compatibility of the predictions made by Gassmann theory and the Greenberg-Castagna equations. Based on the results of various laboratory experiments, we conclude that some of the developed equations are useful for estimating the velocity and amplitude of seismic waves. Verification confirms that the compatibility of the developed equations with laboratory results is more than 90 percent, and thus the developed equations can be preferred to other related popular equations.
Introduction
Rock physics plays an effective role in the estimation of petrophysical and reservoir parameters, e.g. porosity, permeability, rock type, saturation, pore pressure, and fracture density, using seismic attributes. As a result, various seismic attributes such as velocity, frequency and phase are used to estimate the above-mentioned petrophysical and reservoir parameters (Dodds et al. 2007, Adam 2006, Ruiping et al. 2006, Avseth 2005, Gray et al. 2002). In recent decades, various empirical relations have also been developed for this purpose (Pennebaker 1968, Eaton 1972, Reynolds 1970, Domenico 1977, Castagna et al. 1985, Greenberg and Castagna 1992, Castagna et al. 1993, Krzikalla and Muller 2007, Toms et al. 2007, Lebedev et al. 2009, Han et al. 2010, Han et al. 2014, Liu et al. 2010, Eftekharifar and Han 2011). Gassmann theory and the Greenberg-Castagna equations are widely utilized as basic rock physics equations in oil fields worldwide. However, these equations are based on unrealistic hypotheses, which cause their results to be not fully compatible with real situations: the hypotheses ignore the distribution of fluids and the real state of the rock and fluids, e.g. pore size, pore shape, fracture density, fracture aperture, heterogeneity, matrix fabric, pore pressure, confining pressure, fluid type, saturation, fluid distribution, viscosity, compressibility, etc. Therefore, these equations need to be modified, especially when the rock type, fluid type or reservoir conditions vary. In this paper, based on laboratory experiments, empirical rock physics equations are developed that are more compatible with reservoir conditions, presenting a new approach for estimating the velocity and amplitude of seismic waves.
Methodology and Approaches
In this research, a core holder was designed in which measuring and controlling the confining, radial, axial and pore pressures is feasible. Two transducers are mounted on the end caps of the core holder to send and receive seismic waves; the transducers are in contact with the plugs. The studied plugs, with various grain sizes, are taken from the Berea sandstone formation in the southwest of Australia. Three different CO2 densities (concentrations) are injected and dissolved into plugs saturated with distilled water. In the next stage, elastic waves of different frequencies are passed through the plugs under various pressure/density conditions, and the wave velocities are recorded. Based on these laboratory experiments, equations are developed using a multiple-regression method that are more compatible with reservoir conditions.
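The multiple-regression step can be sketched as an ordinary least-squares fit. Everything below (the linear model form, variable ranges and coefficients) is a hypothetical stand-in for the laboratory data, not the paper's actual relations:

```python
import numpy as np

# Hypothetical illustration of the multiple-regression step: fit a linear
# model v = a0 + a1*P_eff + a2*freq + a3*conc to synthetic "lab" samples.
rng = np.random.default_rng(0)
n = 50
P = rng.uniform(10, 40, n)      # effective pressure, MPa (synthetic)
f = rng.uniform(50, 500, n)     # transmission frequency, kHz (synthetic)
c = rng.uniform(0, 1, n)        # CO2 concentration fraction (synthetic)

true = np.array([2500.0, 8.0, 0.2, -120.0])   # assumed ground truth
v = true[0] + true[1] * P + true[2] * f + true[3] * c + rng.normal(0, 5, n)

# Design matrix with an intercept column, solved by least squares
A = np.column_stack([np.ones(n), P, f, c])
coef, *_ = np.linalg.lstsq(A, v, rcond=None)
```

With 50 noisy samples the recovered coefficients land close to the assumed ones; in the paper's setting the same machinery would be fed the measured velocities and amplitudes instead.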
Results and Conclusions
In this paper, novel equations have been developed, based on laboratory experiments, that are not only accurate but also general. These equations present a new approach to estimating the velocity and amplitude of seismic waves. Based on the results, the authors offer the following practical recommendations: All laboratory tests were carried out at room temperature; the authors suggest repeating similar research at reservoir temperature and developing new equations. In the current research, only two seismic attributes (velocity and amplitude) were studied; other seismic attributes should be investigated in future studies. The majority of Iranian oil fields are carbonate reservoirs, while this research was carried out on sandstone; similar studies should therefore be carried out on carbonate plugs. Repeating the laboratory tests with reflected waves would help approach more realistic results. In all laboratory tests, the authors investigated the effect of pressure growth; it would be useful for other researchers to investigate the effect of pressure drop on the results.
Keywords: Rock Physics, Velocity of Elastic Waves, Amplitude of Elastic Waves, Reservoir Conditions, Gassmann Theory, Greenberg-Castagna Equations, CO2 Injection, Sandstone -
The presence of noise has an undesirable effect on seismic data, and proper noise attenuation is one of the important steps in seismic data processing and interpretation. Noise, and its improper removal, prevents a correct image of the geological structures of an area from being formed for interpretation. Random noise usually has a Gaussian distribution, but at some receivers it takes considerable values that do not follow a Gaussian distribution; in this paper such noise is called erratic noise. Erratic noise can be produced by wind, incorrect polarity reversal, poor surface conditions, machinery, and so on. Although least-squares-based filters are optimal for removing random noise, they do not give satisfactory results for erratic noise because it is non-Gaussian. To overcome this problem, we introduce a new filter based on rank reduction of the Hankel matrix. In this method, after transforming the data to the frequency-space domain, a Hankel matrix is built for each frequency slice and its rank is reduced; then, using an iterative algorithm and weighting functions, a weighted combination of the original matrix and the rank-reduced matrix is obtained until satisfactory convergence is reached, and erratic noise is removed by assigning it zero weight.
This method is applicable to 2D and 3D seismic data with crossing dips. The performance of the filter was examined on real and synthetic seismic data, and the method attenuates erratic and random noise well.
Keywords: matrix rank reduction, singular value decomposition, Hankel matrix, Cadzow filter, random noise attenuation, erratic noise attenuation
The presence of noise in geophysical measurements has undesirable effects on seismic data. One of the important problems in seismic data processing is attenuating the noise properly while keeping the original signal. Contamination of seismic data with noise prevents obtaining a proper image of geological structures and hampers seismic data interpretation. At some receivers the noise takes erratic values, with amplitudes that are large relative to other receivers and that do not follow a Gaussian distribution. In reality, not all observed data follow the Gaussian distribution; there may be a group of atypical data far away from the majority. Atypical data are referred to as outliers or gross errors, which follow other distributions or no clear distribution at all. These are called erratic noises. Conventional methods for noise suppression assume a Gaussian noise distribution, and their performance degrades in the case of erratic noise. Rank-reduction-based techniques are applied to attenuate weak random seismic noise in a least-squares sense, but they are very sensitive to erratic noise and then give poor results: even a small amount of erratic noise extremely degrades the performance of rank-reduction methods. More robust estimates are needed that remain acceptable even when the data do not strictly follow the assumed distribution. Non-Gaussian, erratic noise is usually produced by wind, incorrect polarity, cultural and traffic noise, and so on.
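The Hankel rank-reduction core of this family of filters can be sketched for a single constant-frequency slice. This is one generic Cadzow/SSA pass (without the robust reweighting loop that the full method wraps around it); function names are ours:

```python
import numpy as np

def hankel_rank_reduce(slice_fx, rank):
    """One Cadzow/SSA step for a constant-frequency slice: embed the complex
    spatial samples in a Hankel matrix, truncate its SVD to `rank`, and
    average the anti-diagonals back into a filtered slice."""
    n = len(slice_fx)
    L = n // 2 + 1
    H = np.array([slice_fx[i:i + n - L + 1] for i in range(L)])
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vh[:rank]
    out = np.zeros(n, dtype=complex)
    counts = np.zeros(n)
    for i in range(Hr.shape[0]):
        for j in range(Hr.shape[1]):
            out[i + j] += Hr[i, j]       # sum each anti-diagonal
            counts[i + j] += 1
    return out / counts

# A single linear event is a complex exponential along space (rank 1),
# so rank-1 reduction suppresses additive random noise.
clean = np.exp(1j * 0.3 * np.arange(40))
rng = np.random.default_rng(1)
noisy = clean + 0.1 * (rng.standard_normal(40) + 1j * rng.standard_normal(40))
denoised = hankel_rank_reduce(noisy, rank=1)
```

As the surrounding text notes, this plain least-squares step is exactly the part that erratic (non-Gaussian) samples break, which is why the proposed method adds iterative reweighting on top of it.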
In order to solve this problem, a new filter based on iterated rank reduction of the Hankel matrix is introduced. The method is called iteratively reweighted rank reduction (IRRR), a combination of iteratively reweighted least squares (IRLS) and weighted low-rank approximation (WLRA). In this method, after transferring the data into the frequency domain, an individual Hankel matrix is created for each constant-frequency slice, and a rank-reduced matrix is obtained using singular value decomposition (SVD). Then, using the iterative algorithm, a weighted combination of the original matrix and the rank-reduced matrix is computed until the desired convergence is achieved. The parameter that controls the convergence of the method is the weighting function, whose role is to reduce, or completely remove, the erratic noise from the data; here we used Tukey's biweight function. In order to maintain the statistical performance and the robustness of the method, we define a regularization parameter τB, calculated from the median and the median absolute deviation, two estimates that are not sensitive to erratic noise. The advantage of this method over other rank-reduction methods is that it attenuates erratic noise and random noise at the same time. The method is applicable to 2D and 3D seismic data. Its performance was tested on synthetic and real seismic data, and the results showed superior performance in attenuating erratic noise.
Keywords: Matrix rank reduction, Singular value decomposition, Hankel matrix, Random noise attenuation, Erratic noise attenuation -
Buried channels are one of the stratigraphic hydrocarbon traps. They are often filled with a variety of porous and permeable sediments, so they are important in the exploration of oil and gas reservoirs. In reflection seismic data, high-frequency components are sensitive to channel thickness, whereas low-frequency components are sensitive to the channel infill material. Therefore, decomposing seismic data into its spectral components provides useful information about both the thickness and the infill material of buried channels. A 4D spectral dataset is produced by applying spectral decomposition to a 3D seismic data cube, which is decomposed into several single-frequency 3D cubes. Since different frequencies carry different types of information, a single-frequency cube cannot show all subsurface information simultaneously. Therefore, we used the color stacking method and constructed RGB plots, which represent more information than single-frequency volumes. In this paper, we applied three methods, the deconvolutive short-time Fourier transform (DSTFT), the S transform (ST) and the short-time Fourier transform (STFT), to land seismic data from an oil field in the south-west of Iran. We used the resulting spectral volumes to create RGB color-stacking plots for tracing buried channels. According to the results, the RGB plots based on the DSTFT method revealed more detail than the ST and STFT methods.
Keywords: buried channels, spectral decomposition, deconvolutive short time Fourier transform, color stacking method
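RGB color stacking itself is a small operation: three single-frequency amplitude maps are normalized and assigned to the red, green and blue channels. A sketch (assuming `low`, `mid`, `high` are 2D amplitude slices taken from the decomposed cubes; the random maps stand in for real data):

```python
import numpy as np

def rgb_stack(low, mid, high):
    """Color-stack three single-frequency amplitude maps into one RGB image:
    each map is normalized to [0, 1] and mapped to one color channel."""
    def norm(a):
        a = np.abs(a).astype(float)
        span = a.max() - a.min()
        return (a - a.min()) / span if span > 0 else np.zeros_like(a)
    return np.stack([norm(low), norm(mid), norm(high)], axis=-1)

# Stand-in amplitude maps for three frequency cubes at one time slice
rng = np.random.default_rng(0)
low, mid, high = (rng.random((64, 64)) for _ in range(3))
img = rgb_stack(low, mid, high)   # ready for e.g. matplotlib imshow
```

Where all three channels are bright the image tends toward white (energy at every frequency), while channel-thickness tuning shows up as a color bias toward one band.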
-
When the source power is not matched to the length of the refraction profile or to the depth of the exploration target, or when the source wavelet is strongly attenuated, identifying the first arrivals, especially at far offsets, becomes difficult. To strengthen the first arrivals, a method based on supervirtual interferometry (SVI) is introduced. The method rests on the difference between the travel-path geometry of refracted waves and that of other seismic waves. Strengthening the first arrivals in this way increases the signal-to-noise ratio of the refraction data in proportion to the square root of the number of sources used in the supervirtual interferometry procedure. In this study the SVI method is applied to synthetic and real 2D refraction data. Among the most important advantages of the method is the increase in signal-to-noise ratio.
Keywords: supervirtual interferometry, refracted waves, signal-to-noise ratio
Refraction profiles usually need to be between five and ten times as long as the required depth of investigation (Kearey, 2002). As the investigation depth increases, the source waves travel longer paths and, because of attenuation, strong head waves cannot be recorded. Sometimes it is not possible to use a source appropriate to the profile length. Whenever the seismic source energy is underestimated for long refraction profiles or deep targets, or in highly absorptive media, wave attenuation makes the identification of first breaks complicated, particularly at far offsets. This can lead to inaccurate estimates of the deeper velocity distribution. A method based on supervirtual refraction interferometry (SVI) is proposed to improve first-arrival detection. The benefits are that traveltime picking errors can be greatly reduced for noisy head-wave arrivals, and velocity and depth calculations will consequently be correct. Application of the method improves the signal-to-noise ratio of the first arrivals by a factor of √n, where n is the number of post-critical source points used in SVI.
In this study, supervirtual interferometry is applied to a numerical example and to field data from the south-west of Iran, and its efficiency in enhancing the signal-to-noise ratio (SNR) of the first arrivals is presented. The interferometry formula in the x-ω domain for the cross-correlation method, based on wavefield reciprocity and time-reversal invariance (Wapenaar and Fokkema, 2006), is

2ℜ{G(x_A, x_B, ω)} ≈ (2/ρc) ∮_S G*(x_A, x, ω) G(x_B, x, ω) d²x,

where x is the position vector of a source/receiver in Cartesian coordinates, A and B are points in the medium surrounded by the closed surface S, G(x_A, x_B, ω) is the Green's function between the receiver at A and a source at B, and G(x_A, x, ω), G(x_B, x, ω) are the Green's functions between the receivers at A, B and a source at x, at angular frequency ω. c, ρ, ℜ and * denote compressional wave velocity, density, real part and complex conjugate, respectively. The product of the complex conjugate of the Green's function at A and the Green's function at B in the frequency domain is equivalent to correlation in the time domain. Thus, by integrating the cross-correlation of the wavefields recorded at A and B over the closed surface S on which the sources are located, G(x_A, x_B, ω) becomes recoverable. There are two stages in applying SVI: first, correlating every receiver record with the first to generate virtual refracted traces (correlation gathers) and stacking the result over all post-critical sources to enhance their quality; second, convolving the first receiver's data with the virtual traces to create supervirtual head-wave arrivals for each source. Since the arrival time, like the travel geometry, of the virtual head waves is the same for all post-critical sources, stacking enhances them and attenuates other waves. Because of the random nature of incoherent noise, it is reduced by the summation of virtual refracted traces, while differences in arrival times weaken and smear coherent noise. The comparison of the SNR graphs of synthetic and real data before and after SVI provides evidence for the success of the method.
The SNRs of all supervirtual traces are greater than one and differ noticeably from those of the initial traces, especially for far-offset receivers. In the interferometry formula the sources/receivers are placed all around the closed surface S, but in this work the sources are linear. The linear source geometry and the limited number of sources cause unwanted effects on the supervirtual records.
Keywords: supervirtual interferometry, refracted waves, signal-to-noise ratio (SNR)
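The first SVI stage described above — cross-correlating receiver pairs and stacking over post-critical sources — can be sketched on synthetic spike data (function and variable names are ours, not from the paper):

```python
import numpy as np

def virtual_refraction_stack(records_a, records_b):
    """Cross-correlate the traces recorded at receiver B with those at
    receiver A for every post-critical source (rows) and stack.  The
    head-wave correlation lag is source-independent, so stacking n sources
    raises its SNR roughly by sqrt(n)."""
    n = records_a.shape[1]
    A = np.fft.rfft(records_a, 2 * n, axis=1)
    B = np.fft.rfft(records_b, 2 * n, axis=1)
    xcorr = np.fft.irfft(np.conj(A) * B, 2 * n, axis=1)   # correlation per source
    return xcorr.sum(axis=0)                               # stack over sources

# Synthetic test: the head wave reaches B 5 samples after A for every
# source, while the absolute arrival time varies from source to source.
rng = np.random.default_rng(0)
n_src, n_t, lag = 20, 120, 5
rec_a = 0.1 * rng.standard_normal((n_src, n_t))   # background noise
rec_b = 0.1 * rng.standard_normal((n_src, n_t))
for k in range(n_src):
    t0 = int(rng.integers(10, 90))
    rec_a[k, t0] += 1.0
    rec_b[k, t0 + lag] += 1.0
stacked = virtual_refraction_stack(rec_a, rec_b)
```

The stacked correlation peaks at the common receiver-to-receiver lag even though no single source's arrival time is known, which is exactly what makes the later convolution step able to rebuild enhanced (supervirtual) first arrivals.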
-
Acquired seismic data are generally accompanied by noise. One of the most important noises in land operations is ground roll. Ground roll belongs to the class of coherent noises and has lower velocity, lower frequency and stronger amplitude than the signals; because of its dispersive nature, it masks the reflection signals in a fan-shaped zone, at near offsets for shallow depths and at farther offsets for greater depths. Several methods exist for attenuating this noise, the most common being 1D and 2D frequency filters and velocity filters. In this paper a new filtering method combining the generalized S transform and empirical mode decomposition is used to attenuate coherent ground-roll noise. The seismic data are transferred from the t-x domain to the t-f-x domain using the generalized S transform; empirical mode decomposition is then applied along the spatial direction for each particular time and frequency. To attenuate the coherent ground-roll noise, the first intrinsic mode function, which corresponds to the high-wavenumber range, i.e. to steeply dipping coherent seismic events, is removed, and the data are then transferred back to the t-x domain using the inverse empirical mode decomposition and the inverse S transform, with the ground roll attenuated. The performance of this method is examined on real seismic data and compared with empirical mode decomposition in the f-x domain; the results show the superiority of the method introduced in this paper.
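The spatial first-IMF removal this method relies on can be illustrated with a deliberately simplified sifting loop (piecewise-linear envelopes instead of the cubic splines of real EMD implementations; a sketch, not the authors' code):

```python
import numpy as np

def first_imf(x, n_sift=10):
    """Crude first IMF by sifting with piecewise-linear envelopes: at each
    pass, subtract the mean of the upper and lower extrema envelopes."""
    h = x.astype(float).copy()
    idx = np.arange(len(x))
    for _ in range(n_sift):
        d = np.diff(h)
        maxima = np.where((np.hstack([d, -1]) < 0) & (np.hstack([1, d]) > 0))[0]
        minima = np.where((np.hstack([d, 1]) > 0) & (np.hstack([-1, d]) < 0))[0]
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = np.interp(idx, maxima, h[maxima])   # upper envelope
        lower = np.interp(idx, minima, h[minima])   # lower envelope
        h = h - 0.5 * (upper + lower)               # remove local mean
    return h

# A slowly varying "signal" plus a high-wavenumber "ground-roll-like" part:
# subtracting IMF 1 keeps the slow component, as in the t-f-x filter.
i = np.arange(256)
slow = np.sin(0.1 * i)
fast = np.sin(1.0 * i)
x = slow + fast
residual = x - first_imf(x)
```

In the actual method this operation is applied along x for each (t, f) pair of the generalized-S-transform volume rather than to a 1D trace.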
Keywords: coherent noise, ground roll noise, generalized S transform, empirical mode decomposition, seismic data
Seismic data are commonly contaminated by noise. Ground roll is one of the most important noises in onshore reflection seismic data acquisition. It is a coherent noise with lower velocity, lower frequency, and stronger amplitude than the reflected seismic signals. Due to its dispersive nature, ground roll covers fan-shaped zones about the source at near offsets. There are several methods for ground-roll suppression, such as 1D and 2D frequency filtering and velocity filters. In this paper, a new filtering method based on the generalized S transform and empirical mode decomposition is used to attenuate the coherent ground-roll noise. In the proposed method, the t-x domain seismic data are transformed to the t-f-x domain using a generalized S transform. Then, empirical mode decomposition is applied to the transformed data along the x-direction. The first intrinsic mode function, related to high wavenumbers, is removed to attenuate the ground roll. The denoised seismic data are obtained by stacking the remaining intrinsic mode functions and transforming back into the t-x domain using an inverse S transform. The efficiency of the method has been tested on real seismic data and compared with empirical mode decomposition in the f-x domain. The obtained results show that the proposed method attenuates the ground roll better than the combination of empirical mode decomposition and the Fourier transform.
Keywords: Coherent Noise, Ground Roll Noise, Generalized S Transform, Empirical Mode Decomposition, Seismic Data -
The standard deviation of the Gaussian windows used in the S transform is defined, for each frequency component, as the inverse of frequency. In this paper an algorithm is proposed in which, for each frequency component, the standard deviation of the Gaussian window used in the S transform is found through an optimization process using an energy concentration measure, such that the resulting time-frequency map has maximum energy concentration. An experiment on a non-stationary signal shows, both qualitatively and quantitatively, the superior performance of the proposed method compared with the STFT and SST. In addition, several local spectral attributes are extracted from the time-frequency analysis of a seismic data set from a gas reservoir in Iran and used to detect gas-bearing zones and low-frequency shadows. It is shown that the attributes obtained with the proposed time-frequency method have higher resolution and energy concentration than those obtained from the S transform, so gas-bearing zones and low-frequency shadows are interpreted more accurately with the proposed method.
Keywords: optimization, S transform, time-frequency analysis, energy concentration, gas reservoir, attribute
A seismic attribute is a quantitative measure of a seismic characteristic of interest. There are many seismic attributes. In recent years, time-frequency (TF) attributes have been developed, which require TF analysis of the seismic data. A high-resolution TF representation (TFR) can yield more accurate TF attributes. There are several TFR methods, including the short-time Fourier transform, wavelet transforms, the S-transform, the Wigner-Ville distribution, the Hilbert-Huang transform, etc. In this paper, the S-transform is considered and an algorithm is proposed to improve its resolution. In Fourier-based TFR methods, the width of the window used is the main factor affecting resolution. The standard S-transform (SST) employs a Gaussian window whose standard deviation, which controls the window width, changes inversely with frequency (Stockwell et al., 1996). Using a frequency-dependent window for TF decomposition was a novel idea; however, the TF resolution of the SST is far from ideal: it shows weak temporal resolution at low frequencies and weak spectral resolution at high-frequency components. Later, the generalized S-transform was proposed, using an arbitrary window function whose shape is controlled by several free parameters (McFadden et al., 1999; Pinnegar and Mansinha, 2003). Another approach to improving the resolution of a TFR is based on the energy concentration concept (Gholami, 2013; Djurovic et al., 2008). Following this approach, in this paper an algorithm is proposed to find the optimum windows for the S-transform so as to obtain a TFR with maximum energy concentration. To this end, an optimization problem is defined in which an energy concentration measure (ECM) conditions the windows so that the TFR has maximum energy concentration. Here, we use a Gaussian as the window function.
Then, different windows are constructed from a range of standard deviation values in a non-parametric form, and different TFRs are constructed with these windows. The optimum TFR is the one with maximum energy concentration. The optimization is performed for each frequency component individually, so there is an optimum window width for each frequency component. There are several ECMs used in different applications (Hurley and Rickard, 2009); in this paper, we employ the modified Shannon entropy as the ECM. The SST algorithm needs to be implemented in the frequency domain (Stockwell et al., 1996) because of the dependency of the standard deviation of the Gaussian window on frequency. However, the proposed method can also be implemented in the time domain, where the optimum windows are found adaptively for each time sample of the signal. We apply the proposed method to a synthetic signal to compare its performance with other TF analysis methods in providing a well-concentrated TF map. The comparison shows the superiority of the proposed method over the STFT and SST. We also perform a quantitative experiment to evaluate the performance of the TFRs; the results confirm the best performance for the proposed method compared with the STFT and SST. The proposed method is then employed to detect gas-bearing zones and low-frequency shadows in a seismic data set from a gas reservoir in Iran. For this aim, several TF seismic attributes are extracted, including instantaneous amplitude, dominant instantaneous frequency, sweetness factor, single-frequency section and cumulative relative amplitude percentile (C80). The attributes are also extracted with the SST for comparison with those of the proposed method.
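The per-frequency window search can be mimicked by scanning a few candidate widths and keeping the one that minimizes the Shannon entropy of the normalized row energy — a toy version of the optimization (window shape, entropy form and scan grid are our assumptions, not the paper's exact scheme):

```python
import numpy as np

def st_row(x, f, t, sigma):
    """One frequency row of a Gaussian-window time-frequency transform:
    demodulate at frequency f, then localize with a window of standard
    deviation sigma (in seconds)."""
    dt = t[1] - t[0]
    demod = x * np.exp(-2j * np.pi * f * t)
    k = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    w = np.exp(-k ** 2 / (2 * sigma ** 2))
    w /= w.sum()
    return np.convolve(demod, w, mode='same')

def shannon_entropy(row):
    """Entropy of the normalized row energy: lower = more concentrated."""
    p = np.abs(row) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def best_sigma(x, f, t, sigmas):
    """Keep the candidate window width that gives minimum entropy
    (maximum energy concentration) for this frequency component."""
    ents = [shannon_entropy(st_row(x, f, t, s)) for s in sigmas]
    return sigmas[int(np.argmin(ents))]

# An impulsive 50 Hz burst: a short window should win at this frequency.
t = np.arange(0.0, 1.0, 0.001)
x = np.exp(-(t - 0.5) ** 2 / (2 * 0.01 ** 2)) * np.cos(2 * np.pi * 50 * t)
best = best_sigma(x, 50.0, t, [0.005, 0.05, 0.1])
```

Repeating the scan for every frequency row yields one adaptive width per component, which is the essence of the proposed frequency-by-frequency optimization.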
The results show that the attributes obtained by the proposed method have higher resolution, so that gas-bearing zones and low-frequency shadows are better localized on the attribute sections obtained by the proposed method.
Keywords: time-frequency analysis, S-transform, optimization, energy concentration, attribute, gas reservoir -
In this paper, using the empirical mode decomposition method, the instantaneous frequency spectrum, or Hilbert spectrum, is introduced and applied to ground-roll removal, random noise attenuation, and spectral decomposition of seismic sections. The proposed Hilbert spectrum does not suffer from the shortcomings of the instantaneous frequency obtained from the conventional Hilbert transform. The superiority of the Hilbert spectrum in processing and interpretation is demonstrated by applying the method to synthetic and real seismic data. In spectral decomposition of seismic sections, constant instantaneous-frequency sections are used for the first time to detect low-frequency zones of a seismic section, and the results show the acceptable performance of constant instantaneous-frequency sections in spectral decomposition of seismic data.
Keywords: empirical mode decomposition, low-frequency shadow, Hilbert spectrum, instantaneous frequency, ground roll, random noise
In this paper some new applications of empirical mode decomposition (EMD) and the Hilbert spectrum in seismic ground-roll attenuation, random noise attenuation and spectral decomposition are introduced. The Hilbert spectrum is a time-frequency representation for the Hilbert-Huang transform, obtained by combining the instantaneous frequency (IF) concept with the intrinsic mode functions of empirical mode decomposition. This time-frequency representation is well suited to analyzing non-stationary data. The advantages and performance of the spectrum in seismic random noise attenuation and ground-roll removal are tested here by applying it to real and synthetic seismic data, and the results were satisfactory. In random noise attenuation, the instantaneous-frequency filtering operation differs from other time-frequency decomposition methods, and the characteristics of this type of filtering are also discussed. For spectral decomposition we introduce a new method: constant-frequency sections can be extracted using empirical mode decomposition and the Hilbert-Huang transform. In addition, we have used the instantaneous frequency separately to construct constant instantaneous-frequency sections to detect the low-frequency shadow zone beneath the reservoir. Spectral decomposition using constant instantaneous-frequency sections has advantages over other conventional time-frequency decomposition methods: constant-frequency sections obtained through the Hilbert-Huang transform are time-consuming to compute, whereas by using the instantaneous frequency separately the massive computation of empirical mode decomposition is avoided and the results do not differ from those of the Hilbert-Huang transform. Here we explain how the instantaneous frequency spectrum can be obtained from the intrinsic mode functions (IMFs).
The empirical mode decomposition method developed by Huang et al. (1998) is a powerful signal analysis technique known to be highly suitable for the analysis of non-stationary and non-linear signals such as seismic data. EMD decomposes data into functions called intrinsic mode functions, but it suffers from a problem called mode mixing. Wu and Huang (2009) proposed the ensemble empirical mode decomposition (EEMD) to solve the mode-mixing problem of EMD; however, EEMD is not a complete decomposition, and the original signal is not recovered exactly by summing all IMFs. Torres et al. (2011) proposed the complete ensemble empirical mode decomposition (CEEMD) algorithm, which overcomes mode mixing and provides an exact reconstruction of the original signal. In this paper we used the CEEMD algorithm combined with the Hilbert transform and the analytic signal to evaluate the instantaneous frequency; there are other methods to calculate the IF of signals (for more information refer to Huang et al., 2009). The analytic signal is obtained from the signal and its Hilbert transform: we can write z(t) = x(t) + iH[x(t)], where H[x(t)] is the Hilbert transform of x(t) and z(t) is the analytic signal. Its IF can then be computed as ω(t) = dφ(t)/dt, where φ(t) = arctan(H[x(t)]/x(t)) is the instantaneous phase and ω(t) is the instantaneous frequency. In addition, for any given time we can obtain the instantaneous amplitude of the signal x(t) using a(t) = √(x(t)² + H[x(t)]²). Having each time with its corresponding frequency and instantaneous amplitude, we can show a 3D plot of time-frequency-amplitude, which is a time-frequency representation (TFR) similar to the STFT and S spectra. This TFR is called the instantaneous frequency spectrum or Hilbert spectrum; when the instantaneous frequency is calculated from the IMFs, the time-frequency analysis method is called the Hilbert-Huang transform. Here we demonstrated the performance of the Hilbert spectrum in attenuating random and coherent seismic noise as well as in identifying the low-frequency shadow zone on seismic sections.
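The relations above translate directly into a few lines of code; the FFT-based Hilbert transform below is the standard construction (equivalent to `scipy.signal.hilbert`), shown with NumPy only:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero the negative frequencies and
    double the positive ones, giving z(t) = x(t) + i*H[x](t)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, dt):
    """IF as the time derivative of the unwrapped instantaneous phase."""
    z = analytic_signal(x)
    phase = np.unwrap(np.angle(z))
    return np.diff(phase) / (2.0 * np.pi * dt)

# A pure 20 Hz tone should give a flat instantaneous frequency of 20 Hz.
dt = 0.001
t = np.arange(0.0, 1.0, dt)
x = np.cos(2 * np.pi * 20.0 * t)
f_inst = instantaneous_frequency(x, dt)
```

Applied to each IMF of the CEEMD instead of the raw trace, this same computation yields the time-frequency-amplitude triplets that populate the Hilbert spectrum.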
The results were acceptable, with no evidence of the negative frequencies or spikes that are common in the conventional instantaneous frequency.
Keywords: Empirical mode decomposition, Hilbert spectrum, Instantaneous frequency, Random noise, Ground roll, Low-frequency shadow -
International Journal of Mining & Geo-Engineering, Volume:49 Issue: 1, Winter and Spring 2015, PP 1-18
This study deals with 3D recovery of the magnetic susceptibility model by incorporating sparsity-based constraints in the inversion algorithm. For this purpose, the area under prospect was divided into a large number of rectangular prisms in a mesh with unknown susceptibilities. Tikhonov cost functions with two sparsity functions were used to recover the smooth parts as well as the sharp boundaries of the model parameters. A pre-selected basis, namely a wavelet basis, can recover the regions of smooth behaviour of the susceptibility distribution, while the Haar or finite-difference (FD) domains yield a solution with rough boundaries. Therefore, a regularizer function that benefits from the advantages of both wavelets and Haar/FD operators in representing the 3D magnetic susceptibility distribution was chosen as a candidate for modeling magnetic anomalies. The optimum wavelet and the parameter β, which controls the weight of the two sparsifying operators, were also considered. The algorithm assumed that there is no remanent magnetization and that the observed magnetometry data represent only the induced magnetization effect. The proposed approach was applied to noise-corrupted synthetic data in order to demonstrate its suitability for 3D inversion of magnetic data. On obtaining satisfactory results, a case study pertaining to ground-based measurements of the magnetic anomaly over a porphyry-Cu deposit, the Now Chun deposit, located in Kerman province of Iran, was presented for 3D inversion. The low susceptibility in the constructed model coincides with the known location of copper ore mineralization.
Keywords: Magnetic anomaly, Sparsity constraints, Wavelet transform, 3D modeling
-
The presence of coherent noise accompanying the main events in seismic data reduces the signal-to-noise ratio and creates many problems in processing and interpretation. The radial trace transform, by highlighting the difference in frequency content between the main events and coherent noise, is a suitable method for attenuating coherent noise, especially ground roll, in seismic data. In this paper, the local radial trace is defined using the radial trace transform; combining this transform with a mean filter, ground roll is then attenuated in synthetic and real seismic data. To assess the efficiency of the method, the results are compared with those of local velocity filtering and f-k filtering. The results obtained in this study show that combining the local radial trace with a mean filter is a suitable tool for attenuating ground roll.
Keywords: radial trace transform, local radial trace, mean filter, linear coherent noise, ground roll
Reflection seismic data are often contaminated by a variety of coherent and incoherent noises. Ground roll is a type of surface wave; it is a form of coherent linear noise that propagates along the surface of the earth at low velocity and low frequency, and it often obscures the seismic reflection data. Various methods have been introduced to attenuate ground roll in reflection seismic data, most of them based on a transform. Deighan and Watts (1997) tried to eliminate ground roll using the wavelet transform. A number of researchers used the Karhunen-Loeve transform to suppress ground-roll noise (Liu, 1999; Montagne and Vasconcelos, 2006). Porsani et al. (2009) presented the singular value decomposition (SVD) filtering method for ground-roll attenuation; in this method, the seismic data must first be NMO corrected. Naghizadeh and Sacchi (2011) eliminated ground-roll noise with a scale- and direction-guided curvelet transform. One of the most important and common methods for attenuating coherent linear noise such as ground roll in a prestack gather is based on the radial trace transform, a simple coordinate transform of offset-time (X-T) into radial-time (R-T) coordinates. This transform was introduced and developed in the Stanford Exploration Project by Claerbout (1975), Ottolini (1979, 1982) and Taner (1980), who used it for migration of seismic data and multiple attenuation. Claerbout (1983) and Henley (1999, 2000 and 2003) attenuated coherent linear noise with the radial trace transform. We can separate coherent linear noise from reflection seismic data when the transform coordinate paths are appropriately aligned with the coherent noise direction in the X-T domain.
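The local radial trace mean filtering idea — average each sample along its radial trajectory through the origin to estimate the linear noise, then subtract — can be sketched as follows (the velocity, spacings and window width are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def radial_mean_filter(data, dt, dx, v, half_width=5):
    """Estimate linear noise of apparent velocity v by averaging each sample
    along its radial trajectory t = x/v over neighbouring traces, then
    subtract the estimate from the (nt x nx) gather."""
    nt, nx = data.shape
    noise = np.zeros_like(data)
    for ix in range(nx):
        lo, hi = max(0, ix - half_width), min(nx, ix + half_width + 1)
        for jx in range(lo, hi):
            # Time shift that aligns trace jx with trace ix along t = x/v
            shift = int(round((jx - ix) * dx / (v * dt)))
            noise[:, ix] += np.roll(data[:, jx], -shift)
        noise[:, ix] /= (hi - lo)
    return data - noise

# Synthetic gather: a linear "ground roll" event through the origin at
# v = 500 m/s plus a flat "reflection" at sample 50.
nt, nx, dt, dx, v = 200, 40, 0.004, 10.0, 500.0
data = np.zeros((nt, nx))
for ix in range(nx):
    data[5 * ix, ix] += 1.0        # ground roll: 5 samples moveout per trace
data[50, :] += 1.0                 # flat reflection
filtered = radial_mean_filter(data, dt, dx, v)
```

Because the noise estimate is computed trace-locally, no global forward/inverse radial transform (and hence no interpolation) is needed, which is the practical advantage the study emphasizes.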
Applying the radial trace transform to a seismic shot gather causes coherent linear noise, which is distributed across many traces in the X-T domain, to map into relatively few radial traces and to shift to lower frequencies. The most straightforward technique to reduce coherent noise in the R-T domain is then to apply a high-pass (low-cut) filter to the radial traces. In practice, interpolation is an essential stage of the radial trace transform and strongly affects its performance: applying the transform and its inverse can leave gaps in the data, although advanced interpolation methods can reduce this missing-data problem. In this study, we used a different method based on radial trace filtering, named local radial trace mean filtering, that does not require a forward/inverse radial trace transform and consequently needs no interpolation algorithms. To investigate the efficiency of the proposed method, we tested local radial trace mean filtering on both synthetic and real seismic data and compared the results with those of f-k filtering for ground-roll suppression. To investigate further, we applied the method to synthetic seismic data contaminated by random noise and found that the method is not sensitive to the presence of random noise. Also, in cases where the receiver intervals are irregular, local radial trace mean filtering remains very effective, unlike f-k filtering. We can therefore say that local radial trace mean filtering is a suitable tool for coherent linear noise attenuation, especially ground roll.

Keywords: radial trace transform, local radial trace, mean filter, coherent linear noise, ground roll
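A minimal sketch of the idea behind a local radial-trace mean filter follows. It is our own illustrative reconstruction, not the authors' code: offsets are measured in trace units from the source at (t, x) = (0, 0), nearest-neighbour interpolation replaces anything more sophisticated, and the window parameter is hypothetical.

```python
import numpy as np

def local_radial_mean_filter(gather, half_win=3):
    """For each sample, average the gather along the straight line of
    constant apparent velocity x / t through the source point (the local
    radial trace) over a small window of neighbouring traces, and
    subtract the mean. Linear events through the source, such as ground
    roll, are nearly constant along these paths and are strongly
    attenuated, and no forward/inverse radial transform (hence no
    interpolation onto a radial grid) is needed."""
    n_t, n_x = gather.shape
    out = gather.astype(float).copy()
    ti = np.arange(n_t)
    for j in range(n_x):
        xj = j + 1.0                    # offset of trace j in trace units
        lo, hi = max(0, j - half_win), min(n_x, j + half_win + 1)
        mean = np.zeros(n_t)
        for k in range(lo, hi):
            # time at which the radial through sample (i, j) crosses trace k
            tk = np.rint(ti * (k + 1.0) / xj).astype(int)
            valid = tk < n_t
            mean[valid] += gather[tk[valid], k]
        out[:, j] -= mean / (hi - lo)
    return out
```

On a toy gather containing only a linear event through the source, the event cancels exactly, since every trace in the window contributes the same amplitude along the radial path.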
Seismic imaging depends strongly on the quality of the seismic data. Structural and stratigraphic interpretation is far easier on seismic sections that contain the least random and coherent noise. Among the many noises seen in seismic sections, random noise forms an important class, and attenuating it is very difficult, especially when the signal-to-noise ratio is low. Various methods have been proposed for random noise attenuation, each with its own advantages and disadvantages. In this paper, a new method is used to attenuate random noise in the t-f-x domain. Using the generalized S-transform, the seismic section is transformed from the t-x domain to the t-f-x domain; empirical mode decomposition (EMD) is then applied along the spatial direction for each time and frequency. The first intrinsic mode function (IMF) obtained by EMD represents the fastest oscillations in the data, meaning that the largest wavenumber components (associated with noise) reside in the first IMF; subtracting the first IMF from the original signal therefore increases the signal-to-noise ratio. Since random noise is present at all frequencies, this first-IMF removal can be applied at every frequency to attenuate random noise in seismic sections. In the new method, an interval-thresholding algorithm is instead applied to the IMFs contaminated by random noise; after noise attenuation, the seismic sections are transformed back to the t-x domain by the inverse S-transform. The algorithm is applied to synthetic and real seismic sections and compared with the method combining the Fourier transform and empirical mode decomposition.
Keywords: random noise, generalized S-transform, empirical mode decomposition, intrinsic mode functions, interval thresholding

Seismic records are generally affected by various types of noise, such as ground roll, multiples, random noise, and reflections and reflected refractions from near-surface structures. Random noise, resulting from random oscillations during data acquisition, is one of the most important and harmful noises in seismic data and exists at all times and frequencies. Much effort has been made to attenuate this type of noise. The prediction filter is commonly used for random noise attenuation and can be applied in various domains, such as the f-x domain (Harris and White, 1997) and the discrete cosine domain (Lu and Liu, 2007). Bekara and van der Baan (2009) attenuated coherent and random noise using a combination of empirical mode decomposition (EMD) and the Fourier transform. EMD decomposes a time series into a finite set of signals called intrinsic mode functions (IMFs), which represent the different oscillations embedded in the data. An IMF must satisfy two conditions: (1) the number of extrema and the number of zero crossings must be equal or differ at most by one, and (2) at any point, the mean of the local-maxima envelope and the local-minima envelope must be zero. Bekara and van der Baan (2009) transformed the seismic section from the t-x domain to the f-x domain by the Fourier transform, applied EMD to each constant-frequency slice, and removed the first IMF. In this FX-EMD method, the denoised seismic section is obtained by reversing the EMD and the Fourier transform, respectively. The time-frequency transform of a signal shows the variation of its frequency content with time; ideally, a time-frequency representation provides information about the frequency content at each instant without mutual interference from adjacent instants.
Stockwell (1996) introduced the S-transform, a combination of the short-time Fourier transform (STFT) and the wavelet transform: it uses a frequency-dependent window length together with the Fourier kernel. However, the S-transform suffers from poor energy concentration in the time-frequency domain, with poor time resolution at lower frequencies and poor frequency resolution at higher frequencies. Sahu et al. (2009) proposed a modified Gaussian window that scales with frequency in an efficient way to obtain a better energy concentration for the S-transform. Han-peng et al. (2011) used a time-frequency transform instead of the Fourier transform (TFX-EMD) to account for the nonstationarity of seismic data; they obtained the denoised seismic section by applying EMD to each constant-frequency slice and removing the first IMF in the t-f-x domain. Because random noise is also present in the other IMFs, removing only the first IMF is not always an appropriate approach. One of the best algorithms for EMD-based noise attenuation is interval thresholding of the IMFs, whose main idea is to determine an appropriate threshold value and apply it to the selected IMFs. In this study, we modified the TFX-EMD algorithm by changing the EMD denoising strategy: we used interval thresholding of the IMFs instead of removing the first IMF in the t-f-x domain. We evaluated the efficiency of our method on both synthetic and real seismic sections and compared the results with those of the FX-EMD and traditional TFX-EMD methods. The comparison shows that the new EMD denoising strategy in the t-f-x domain can effectively suppress random noise and performs better than the other two approaches; moreover, unlike FX-EMD and traditional TFX-EMD, our method preserves steep events.

Keywords: random noise, generalized S-transform, empirical mode decomposition, intrinsic mode function, interval thresholding
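The interval-thresholding step can be sketched as follows. This is a generic illustration of EMD interval thresholding, not the paper's code: the threshold value and the synthetic signal standing in for an IMF are our own, and in the real workflow the IMFs would come from applying EMD to constant-frequency slices in the t-f-x domain.

```python
import numpy as np

def interval_threshold(imf, thr):
    """Zero an IMF interval-by-interval rather than sample-by-sample:
    split the IMF at its zero crossings and keep an interval only if the
    peak magnitude inside it exceeds the threshold. This avoids the
    discontinuities that plain hard thresholding would introduce in the
    middle of an oscillation."""
    out = np.zeros_like(imf)
    s = np.signbit(imf)
    zc = np.where(s[1:] != s[:-1])[0] + 1      # zero-crossing indices
    edges = np.concatenate(([0], zc, [imf.size]))
    for a, b in zip(edges[:-1], edges[1:]):
        if np.abs(imf[a:b]).max() > thr:
            out[a:b] = imf[a:b]
    return out

# Demo on a stand-in "IMF": a sine whose middle half-cycle is weak.
n = 300
k = np.arange(n)
env = np.where((k >= 100) & (k < 150), 0.05, 1.0)
imf = env * np.sin(k * np.pi / 50.0)
den = interval_threshold(imf, thr=0.5)   # weak half-cycle is zeroed whole
```

In practice the threshold is often tied to the estimated noise level of each IMF, for example a universal threshold of the form sigma * sqrt(2 ln N).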
Time-series analysis has many applications in seismic studies today. Because the earth acts as a low-pass filter on propagating seismic waves, the frequency content of seismic waves changes with time. Time-frequency representations are suitable tools for analyzing seismic signals whose frequency content varies with time. Spectral decomposition of a 3-D seismic data set produces, for every frequency, a cube of the same size as the data itself. Using this huge volume of data at once is very difficult, both computationally and for interpretation. Several methods exist for reducing the data volume while still exploiting all the information in the single-frequency cubes. In the color-stacking method, RGB images are made from three separate single-frequency sections, tripling the information displayed compared with a single-frequency display; nevertheless, a large part of the information is still ignored. To solve this problem, principal component analysis was proposed to reduce the data volume while using all the available information. In this paper, in order to identify buried channels in 3-D seismic data, we first generate single-frequency images and color-stack them, showing that the quality of the resulting images depends on the chosen frequency components; we then remove this dependence by applying principal component analysis to the single-frequency images, producing a single image that contains all frequency components. The results show that the PCA images carry more detail and delineate the channel branches more precisely than the other methods.
Keywords: buried channels, spectral decomposition, short-time Fourier transform, S-transform, principal component analysis, color stacking

Spectral decomposition of time series plays a significant role in seismic data processing and interpretation. Since the earth acts as a low-pass filter, it changes the frequency content of passing seismic waves. Conventional representations of signals in the time domain or the frequency domain cannot show time and frequency information simultaneously. Time-frequency transforms raised spectral decomposition to a new level by showing both at once. They generate a high volume of spectral components, which contain useful information about the reservoir and can be decomposed into single-frequency volumes. These single-frequency volumes can overload the limited space of a computer hard disk, and an interpreter cannot easily investigate them individually; it is therefore important to reduce the data volume without losing information, so frequency slices are extracted from these volumes and used for interpretation. An expert interpreter can infer information about channel content and lateral variations of thickness from them. Since different frequencies carry different types of information (low frequencies are sensitive to channel content, high frequencies to channel boundaries), single slices cannot show all of this information simultaneously. RGB images can therefore be produced by plotting three different frequency slices against the red, green and blue components. An RGB image, sometimes referred to as a true-color image, defines red, green and blue color components for each individual pixel, with intensities between 0 and 1. Although this method obviates some drawbacks of single-frequency plots, it uses only three slices and practically ignores a large part of the information; moreover, the frequency choice is not unique, so different choices result in different images.
Principal component analysis is a statistical method for identifying patterns in data and expressing them in a way that highlights their similarities and differences. To find the major patterns, the technique reduces the dimensionality of the data without loss of information. It introduces a new set of orthogonal axes through the data set, called eigenvectors, along which the data variance is maximized; their importance is proportional to the corresponding eigenvalues. The projection of the single-frequency slices onto the eigenvectors gives the principal component (PC) bands. The amount of total variance that each PC band represents is proportional to its eigenvalue; after normalizing by the sum of all eigenvalues, each eigenvalue gives the percentage of the total spectral variance that its principal component represents. Thus the first PC band (largest eigenvalue) best represents the spectral variance in the data, the second PC band (second-largest eigenvalue) best represents the variance not captured by the first PC, and so on. PC bands with the smallest eigenvalues represent a small portion of the variance and can be regarded as random noise, so choosing the PC bands with the largest eigenvalues is an effective approach for data denoising, image processing and, in our case, determining the major trends in the data set. More than 80 percent of the spectral variation can be represented by plotting the three largest principal components against the red, green and blue components of an RGB image. In this paper, we applied spectral decomposition to land seismic data from an oil field in south-west Iran using the short-time Fourier transform (STFT) and the S-transform. We then constructed and investigated single-frequency slices, produced RGB images by the color-stacking method to improve interpretation, and finally used principal component analysis to exploit the whole frequency bandwidth.
Our results show that the PCA-based images delineate the channel and its branches more precisely than the other methods.

Keywords: buried channels, spectral decomposition, short-time Fourier transform, S-transform, principal component analysis, color stacking
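The PCA color-stacking workflow described above can be sketched as follows. This is a generic illustration with hypothetical names, using an SVD of the mean-removed single-frequency slices (each pixel treated as an observation with one spectral amplitude per frequency); it is not the exact processing applied to the field data.

```python
import numpy as np

def pc_bands(freq_slices, n_pc=3):
    """Project single-frequency slices onto their principal components.
    freq_slices : (n_f, n_y, n_x) array, one constant-frequency amplitude
    map per row. Returns the first n_pc PC bands as images, plus the
    fraction of the total spectral variance each band explains."""
    n_f, n_y, n_x = freq_slices.shape
    X = freq_slices.reshape(n_f, -1).T       # (n_pixels, n_f) observations
    X = X - X.mean(axis=0)                   # remove each band's mean
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)          # normalized eigenvalue fractions
    bands = (X @ Vt[:n_pc].T).T.reshape(n_pc, n_y, n_x)
    return bands, explained[:n_pc]

def to_rgb(bands):
    """Scale the three largest PC bands to [0, 1] and stack them as the
    red, green and blue planes of one composite image."""
    rgb = np.empty(bands.shape[1:] + (3,))
    for i in range(3):
        b = bands[i]
        rgb[..., i] = (b - b.min()) / (b.max() - b.min() + 1e-12)
    return rgb
```

Unlike picking three frequency slices by hand, the three PC bands are selected by the data themselves and typically carry most of the spectral variance, which is why the composite image does not depend on an arbitrary frequency choice.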