First-author biography: Zhang Baojin, born in 1973, Doctor of Science, mainly engaged in seismic data processing and in research on forward and inverse methods of wave theory.
(1. Guangzhou Marine Geological Survey, Guangzhou 510760; 2. Department of Earth Sciences, Sun Yat-sen University, Guangzhou 510275)
Abstract: Visual effect is one of the important criteria for judging the quality of seismic data processing. This paper discusses the factors that affect the visual effect of processed seismic data from three aspects: signal processing, focusing quality, and signal-to-noise ratio. Good signal processing is reflected in wavelets with few periods, small sidelobe amplitudes, rich frequency content, and a well-balanced energy relationship among frequency components. Good focusing quality is equivalent to obtaining a close-up image of the strata; the keys are an appropriate imaging method and the quality of the corresponding velocity analysis, whose basic principles this paper discusses. To improve the signal-to-noise ratio while ensuring the reliability of imaging, the key is to balance effective signal against noise during noise attenuation and to keep the denoising well targeted.
Keywords: seismic data processing; wavelet processing; deconvolution; velocity analysis; denoising
1 Preface
How to evaluate the quality of a seismic profile can be puzzling. A geologist facing a seismic profile may have a keen sense of whether it is good or not; if it seems poor, data processing naturally comes to mind, yet it may not be clear where in the processing things went wrong. Diagnosing deficient modules and parameters from the appearance of a processed profile is one of the basic skills of a processor, but an interpreter may not fully understand the key steps of the processing chain. In examining the result of seismic data processing, the first thing to consider is its visual effect, since geologists also rely on the image for their subsequent in-depth geological analysis; ensuring a good visual effect of the section is therefore essential. In this paper, the author summarizes some experience and understanding accumulated in practice, and discusses several key technologies in data processing from the perspective of the visual effect of seismic data, which may serve as a reference for processing and interpretation personnel in analyzing and judging profile quality.
We can examine the visual effect of the section from three aspects. The first is the quality of signal processing, the second is the focusing quality of reflected information, and the third is the signal-to-noise ratio of data.
2 Main technical aspects in evaluating the visual effect of a processed profile
2.1 Signal processing quality
Signal processing quality is judged by examining the waveform of the seismic trace, usually in terms of its frequency content and phase. Because the seismic trace is assumed to be the convolution of a wavelet with the reflection-coefficient series, and the reflectivity can be assumed to be white noise (infinite bandwidth, flat amplitude spectrum), the signal character of the trace is essentially that of the wavelet. Signal processing of the seismic trace is therefore generally regarded as wavelet processing.
In general, the wavelet after signal processing should have a short period, small sidelobe amplitudes, rich frequency content, a harmonious energy relationship among frequency components, and a sharp waveform; the broadband Ricker wavelet is a typical example (Yu Shoupeng, 1996). Broadly speaking, the wider the frequency band, the sharper the wavelet. If low-frequency information is relatively lacking, the profile loses its sense of large-scale layering, cannot portray major stratal packages well, and wave-group comparison across layers becomes unclear; deep reflections are especially sensitive to the low frequencies. If high-frequency components are relatively lacking, the profile lacks detail. Coordination of the energy relationship is also very important: if individual frequency components stand out (reverberation such as short-period multiples produces this), events acquire extra legs, which is equivalent to a dominant harmonic component and resembles a narrow band. A long wavelet duration has two causes: an unbalanced amplitude spectrum and a complicated phase spectrum. By analogy, with an unbalanced amplitude spectrum, even the zero-phase equivalent (essentially its autocorrelation) has a long duration; conversely, the seismic wavelet and the seismic trace share the same amplitude spectrum but differ in phase spectrum, and the trace lasts far longer than the wavelet.
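The convolutional model behind this discussion can be sketched in a few lines of numpy. The function names and parameters below are illustrative, not from the paper; the sketch shows that with a white reflectivity series the trace spectrum is shaped entirely by the wavelet, and that a zero-phase wavelet such as the Ricker is symmetric about time zero:

```python
import numpy as np

def ricker(f0, dt, n):
    """Zero-phase Ricker wavelet with peak frequency f0 (Hz), n samples."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(wavelet, reflectivity):
    """Convolutional model of the seismic trace: trace = wavelet * reflectivity."""
    return np.convolve(reflectivity, wavelet, mode="same")

rng = np.random.default_rng(0)
refl = rng.standard_normal(500)      # white reflectivity: flat amplitude spectrum
w = ricker(30.0, 0.002, 101)         # 30 Hz wavelet, 2 ms sampling
trace = synthetic_trace(w, refl)

# Since the reflectivity is white, the trace amplitude spectrum follows the wavelet's:
trace_spec = np.abs(np.fft.rfft(trace, 1024))
wavelet_spec = np.abs(np.fft.rfft(w, 1024))
```

Comparing `trace_spec` with `wavelet_spec` makes the "trace signal is essentially the wavelet signal" argument concrete.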
These expectations determine the main purposes and content of signal processing:
(1) Wavelet unification. Spectrum and energy variations caused by changes in field acquisition parameters (such as a change in the source wavelet caused by an air-gun fault) are corrected so that the wavelet spectrum and energy relationship are consistent from shot to shot. In routine processing of marine data this influence can sometimes be ignored.
(2) Broadening the frequency band while highlighting the dominant band (the band with high signal-to-noise ratio). This is inherently contradictory: broadening the band requires the energies of different frequency components to be as consistent as possible (a relatively balanced energy relationship), whereas suppressing noise and highlighting the dominant band weakens the low signal-to-noise bands and makes the high signal-to-noise band relatively prominent. The high- and low-frequency components of the raw data usually have low signal-to-noise ratio and weak energy, so boosting their energy while maintaining an acceptable signal-to-noise ratio is contradictory; the balance is usually sought through noise attenuation.
(3) Phase processing. Phase processing has three aims: first, to transform the wavelet to minimum phase so as to satisfy the assumptions of the various deconvolution processes and prepare for subsequent processing; second, to make the wavelet zero phase to obtain higher resolution; third, to reduce the number of phases (legs) of the wavelet.
The above tasks can be handled separately, for example by pure phase filtering or pure amplitude filtering, but more often the effect is combined. In practice, wavelet processing is carried out mainly through various signal processing tools including deconvolution (some classify deconvolution further as "deterministic" signal processing). Because deconvolution is in general not deterministic, improper processing readily causes negative effects.
There are many signal processing modules. Methods that can broaden the bandwidth include zero-phase amplitude filtering, spiking (pulse) deconvolution, predictive deconvolution (with a small prediction step), spectral equalization, wavelet shaping, inverse-Q filtering, time-variant spectral whitening, and so on. The filter module is the most commonly used; it removes frequency components with low signal-to-noise ratio and makes the effective wave more prominent, but sufficient bandwidth must be maintained (a dominant bandwidth of more than 2.5 octaves; Zhou, 2003). Predictive deconvolution is the most commonly used deconvolution module, relied on mainly to reduce the number of wavelet legs (eliminating interbed reverberation, shallow-water multiple energy, and the like). Predictive deconvolution does not necessarily broaden the bandwidth (when the prediction step is large); rather, it adjusts the energy relationship among frequency components, and when those components are unbalanced the resolution cannot be very high. Points to watch in predictive deconvolution (Yilmaz, translated by Huang Xude, 1993): amplitude processing (amplitude compensation must come first), the position of the operator-design time window (choose an interval of good effective wave, at least five times the operator length), the prediction step (use the lag of the first or second zero crossing of the autocorrelation), and the operator length (corresponding to the wavelet length before compression). Besides spectral analysis and waveform inspection, the autocorrelation is an effective means of evaluating the deconvolution result. Predictive deconvolution in effect subtracts the energy beyond the prediction lag of the autocorrelation, so that the wavelet spectrum retains the features within the prediction lag.
When the prediction step equals one sampling interval, predictive deconvolution becomes spiking deconvolution. Because their "optimal desired outputs" differ, they place different emphases on eliminating short-period multiples versus broadening the spectrum.
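The parameters just discussed (prediction step, operator length, prewhitening, design from the autocorrelation) can be made concrete with a minimal Wiener predictive deconvolution. This is a textbook-style sketch under the usual stationarity assumptions, with illustrative names, not the authors' production code:

```python
import numpy as np

def predictive_decon(trace, alpha, nop, eps=0.001):
    """Predictive deconvolution of one trace via Wiener prediction filtering.
    alpha : prediction step in samples (alpha=1 gives spiking deconvolution)
    nop   : prediction-operator length in samples
    eps   : white-noise (prewhitening) fraction for numerical stability
    """
    # Autocorrelation of the trace (assumed ~ autocorrelation of the wavelet).
    full = np.correlate(trace, trace, mode="full")
    r = full[len(trace) - 1:]                       # lags 0, 1, 2, ...
    # Normal equations: Toeplitz matrix of lags 0..nop-1, rhs lags alpha..alpha+nop-1.
    R = np.array([[r[abs(i - j)] for j in range(nop)] for i in range(nop)])
    R += eps * r[0] * np.eye(nop)                   # prewhitening
    a = np.linalg.solve(R, r[alpha:alpha + nop])    # prediction filter
    # Prediction-error filter subtracts the predictable (reverberant) energy.
    pef = np.zeros(alpha + nop)
    pef[0] = 1.0
    pef[alpha:] = -a
    return np.convolve(trace, pef)[:len(trace)]
```

Applying this to a trace with a lag-20-sample reverberation and an operator spanning that lag noticeably flattens the autocorrelation at the reverberation period, which is exactly how the text proposes to judge the deconvolution result.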
2.2 Focusing quality
Focusing quality concerns how well reflected-wave energy is focused. It depends mainly on the imaging method and the quality of the corresponding velocity analysis. Consideration of the stratigraphic structure enters mainly at the velocity-analysis stage, and a clear geological understanding helps the velocity analysis during processing.
The imaging methods of conventional (time-domain) processing include horizontal stacking, DMO (also called prestack partial migration; it suits dipping layers, and the lower the velocity, the more obvious its improvement over horizontal stacking), and prestack time migration (suited to areas of complex structure with little lateral velocity variation). The basic principles of velocity analysis in conventional processing are: the RMS velocity should not change too rapidly in time and should be as smooth as possible laterally (if there is an abrupt change, the reason should be analyzed; Zhou, 2003); the shape of the velocity field should broadly follow the shape of the seismic section; and velocity picks should be appropriately dense, neither too dense nor too sparse, preferably located on marker horizons. In general, the velocity-analysis method should match the imaging method, because different imaging methods require different velocities. For example, when strata dip, the DMO velocity is lower than the stacking velocity; if the stacking velocity is used for DMO stacking, dipping events may be under-corrected, degrading the imaging quality and leaving events smeared and lacking in detail. For prestack time migration, dipping events not only require a velocity lower than the stacking velocity, but their spatial position also shifts. Applying prestack time migration markedly improves the quality of velocity analysis and imaging in structurally complex areas.
Velocity analysis is the most labor-intensive step of processing, and one of the steps that most reflects the processor's experience and knowledge. If the velocity analysis is not accurate and detailed enough, the final profile will look "smeared" and high-frequency information will be severely attenuated. Just as a properly focused camera differs from a point-and-shoot, a good velocity analysis is equivalent to obtaining a "close-up" image of the target layer with a small "depth of field". In velocity-dependent noise attenuation (such as multiple attenuation), poor velocity-analysis accuracy causes serious loss of effective waves.
In general, velocity analysis is a process of comprehensive analysis and judgment, and of gradual trial. To do it well, attention should be paid to the following: (1) good signal processing quality (with good signal processing, the energy clusters on the velocity spectrum are more concentrated); (2) a well-prepared velocity spectrum, the basis of velocity analysis and judgment, including muting, filtering, equalization, a small number of stacked traces, and appropriate removal of interfering waves; (3) comprehensive consideration of all available information, including gathers, stacked profiles, small-stack velocity scans, and the velocity field; (4) a good choice of imaging method (preferably prestack time migration data).
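The velocity spectrum underlying this analysis is typically built by scanning a semblance measure over trial velocities along hyperbolic moveout curves. A minimal sketch (windowing and function names are illustrative assumptions, not the paper's modules):

```python
import numpy as np

def semblance(gather, offsets, dt, t0, v, win=5):
    """Semblance of a CMP gather (nt, nx) along t(x) = sqrt(t0^2 + x^2 / v^2),
    summed over a small vertical window of +/- win samples."""
    nt, nx = gather.shape
    num = den = 0.0
    for k in range(-win, win + 1):
        vals = []
        for j, x in enumerate(offsets):
            t = np.sqrt(t0 ** 2 + (x / v) ** 2) + k * dt
            i = int(round(t / dt))
            if 0 <= i < nt:
                vals.append(gather[i, j])
        vals = np.asarray(vals)
        num += vals.sum() ** 2        # stacked (coherent) energy
        den += (vals ** 2).sum()      # total energy along the curve
    return num / (nx * den + 1e-12)

# Scanning semblance(gather, offsets, dt, t0, v) over a grid of (t0, v)
# yields the velocity spectrum; picks follow the semblance maxima.
```

At the correct velocity the trial hyperbola tracks the event, the traces stack coherently, and semblance peaks; at a wrong velocity the energy cluster spreads out, which is why good signal processing concentrates the clusters.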
2.3 Signal-to-noise ratio
The balance between noise and effective signal is the main factor affecting visual effect. The most important aspect of noise attenuation is that it be well targeted; otherwise it easily leads to over-denoising, which not only blurs the contact relationships between strata but also reduces the credibility of the data.
Coherent noise has the greatest influence on marine seismic data and is the hardest to attenuate. It mainly includes: active interference such as ship noise; multiples (shallow-water multiples can be attenuated to some extent during wavelet processing); linear interference in the water layer (generally well developed); and other coherent noise.
F-k filtering can be used to attenuate linear noise, but in general one should not simply keep only the "effective" information; instead, the noise should be targeted and attenuated with moderate strength. F-k filtering is a global method: if only the reflections within the effective velocity range are retained while everything else is rejected, random noise acquires apparent coherence and local effective information is lost. It is better to identify the coherent noise and remove it with higher fidelity. Nonlinear coherent noise such as ship interference can first be identified and then, by a static-correction-like shift, transformed into linear coherent noise and attenuated as such. For multiples there are many methods; for example, a high-precision Radon transform can separate the effective waves from parabolic multiples, in which case the accuracy of the velocity analysis becomes the key.
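The f-k approach can be sketched as a fan filter that rejects apparent velocities below a cutoff, where the linear water-borne noise lives, while passing the reflection fan. The hard mask below is purely illustrative; a practical filter would taper the mask edges and attenuate rather than zero the reject zone, precisely to keep the "moderate attenuation strength" the text recommends:

```python
import numpy as np

def fk_fan_filter(data, dt, dx, vmin):
    """Attenuate linear events with apparent velocity below vmin (m/s).
    data: (nt, nx) shot or CMP panel; returns the filtered panel."""
    nt, nx = data.shape
    D = np.fft.fft2(data)
    f = np.fft.fftfreq(nt, dt)[:, None]   # temporal frequency (Hz)
    k = np.fft.fftfreq(nx, dx)[None, :]   # spatial wavenumber (1/m)
    # Keep the reflection fan |f| >= vmin * |k|; reject the low-velocity
    # region where linear noise lies. (Hard mask for illustration only.)
    mask = np.abs(f) >= vmin * np.abs(k)
    return np.fft.ifft2(D * mask).real
```

A flat event (infinite apparent velocity) passes essentially untouched, while a steep linear event well below `vmin` loses most of its energy, which is the intended selectivity.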
Marine random noise is generally strong at low frequencies, especially in poor sea conditions. Relatively speaking, low-frequency denoising is more reliable than high-frequency denoising, and for depth imaging the low-frequency components deserve extra attention. Prestack noise can be attenuated by frequency division: only the bands with strong noise are treated, while bands with weak noise receive little or no attenuation. In poststack denoising, methods with high denoising efficiency should be preferred, and the denoising strength can generally be adjusted according to credibility (Zhang Baojin et al., 2002).
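The frequency-division strategy can be sketched as: split the panel into bands with zero-phase filters, denoise only the noisy band, and recombine. The band split and the stand-in lateral smoother below are illustrative assumptions, not the authors' actual modules:

```python
import numpy as np

def band_split(panel, dt, f_split):
    """Zero-phase split of an (nt, nx) panel into low and high bands at f_split Hz."""
    nt = panel.shape[0]
    F = np.fft.rfft(panel, axis=0)
    f = np.fft.rfftfreq(nt, dt)
    low = np.fft.irfft(F * (f <= f_split)[:, None], nt, axis=0)
    high = np.fft.irfft(F * (f > f_split)[:, None], nt, axis=0)
    return low, high

def freq_division_denoise(panel, dt, f_split, strength=0.5):
    """Attenuate only the noisy low band; leave the quieter high band untouched.
    The low-band 'denoiser' here is a simple 3-trace lateral mean, standing in
    for whatever targeted method is actually applied; strength in [0, 1]."""
    low, high = band_split(panel, dt, f_split)
    smoothed = low.copy()
    smoothed[:, 1:-1] = (low[:, :-2] + low[:, 1:-1] + low[:, 2:]) / 3.0
    return (1 - strength) * low + strength * smoothed + high
```

Because the two bands are complementary, `strength=0` reconstructs the input exactly; raising `strength` attenuates incoherent low-band energy while the high band, where the noise is weak, passes through unchanged.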
3 A few examples
A few examples illustrate the preceding discussion. Because the scope is wide and the details many, space permits only a few typical cases, briefly explained.
Figure 1 compares data with and without predictive deconvolution. The data are a single shot record from a shallow-sea reflection survey with a water depth of about 50 m. Because shallow-sea-bottom data contain a large number of short-period multiples, deconvolution attenuates the wavelet energy outside the main lobe and reduces reverberation. Fig. 2 shows the spectra of the data in Fig. 1. Owing to the short-period multiples, the spectral energy is unbalanced; after deconvolution it is balanced, with the bandwidth essentially unchanged. Because the autocorrelation of a seismic trace is essentially equivalent to that of the wavelet, the autocorrelation is used both to design the deconvolution parameters and to evaluate how well deconvolution removes the wavelet sidelobes. Fig. 3 shows the autocorrelation of the data in Fig. 1; the wavelet sidelobe energy is effectively attenuated after predictive deconvolution. Fig. 4 compares the effect of statistical-wavelet deconvolution on wavelet shaping and bandwidth broadening; with a broader band, the wavelet becomes crisper and better able to portray both major stratal units and fine details.
Figure 1 Pre-stack data before and after predictive deconvolution
Fig. 2 Spectra of the data in Fig. 1
Fig. 3 Autocorrelation of the data in Fig. 1
Figure 5 shows the various information used in comprehensive velocity analysis, including the velocity spectrum, gathers, small stacks, the stacked profile, and velocity contours. With reference information this rich and reliable, the principles of velocity analysis can be applied more surely and a more suitable velocity field obtained.
Figs. 6 and 7 are examples of removing coherent and random noise, respectively. The f-k method should attenuate the coherent noise without imparting coherence to the random noise; if instead only the effective energy is kept, artifacts may appear wherever the coherent noise is not much weaker than the effective wave. The data of Fig. 7 contain strong low-frequency noise, but its band overlaps that of the signal: a simple low-cut filter would also remove the low-frequency components of the effective wave, and attenuating random noise across the whole band would badly damage the effective wave. Reducing the low-frequency noise by frequency division introduces as little negative influence as possible.
Fig. 4 Spectra before and after statistical-wavelet deconvolution
4 Discussion and conclusion
In essence, the visual effect of a processed profile is consistent with its processing quality. Compared with further depth-domain or lithologic processing, conventional processing is the most basic requirement for obtaining a good visual effect. Seismic data processing should be faithful to the original data: on the basis of preserving the character of the original data as much as possible, improve more and damage less. Inexperienced processors may achieve minor improvement at the cost of great damage, or even lose the character of the original data. When a seismic profile shows shortcomings, the main reason is usually that the processing was not well targeted, leading to excessive negative effects. It can be said that every processing method has certain side effects; as the saying goes, every medicine is part poison, and reasonable, accurate "medication" is the key to a good result. Sometimes knowing what to do is not hard, but knowing what not to do is, and that requires deep knowledge, rich experience, and accurate analysis of the problem. To achieve good pertinence, one must first have a clear understanding of the quality and characteristics of the data, which requires detailed investigation, analysis, and evaluation of the data: this is the first step. Second, the processing flow should be designed according to the data characteristics and the specific geological objectives, much as a traditional Chinese doctor prescribes only after looking, listening, asking, and taking the pulse; that is, targeted modules and combinations of modules should be chosen. This requires the processor to understand the characteristics of many modules deeply: the deeper the understanding, the more precisely targeted the selection. The third step is to test parameters for the chosen modules and module combinations.
Because there are many parameters, to avoid blind and wasteful experimentation one should grasp the key parameters and evaluation criteria at this stage. There is an extensive literature on processing technology (Li, 1994; Xiong, 1993, 1995, 2002; Yu Shoupeng, 1993; Zhou, 2003); these systematic discussions reflect their authors' rich experience and deep understanding of processing.
Fig. 5 Velocity spectrum, gathers, small stack, stacked profile, and velocity contours used for comprehensive velocity analysis
This paper has discussed the main factors affecting the visual effect of marine seismic data processing from three aspects: signal processing, imaging quality, and signal-to-noise ratio. In general, good signal processing yields wavelets with fewer periods and sharper waveforms; high velocity-analysis accuracy and a suitable imaging method give good focusing; and improving the signal-to-noise ratio with well-targeted processing makes the visual effect of the section better.
Fig. 6 Pre-stack data before and after coherent-noise attenuation
Fig. 7 Data before and after random-noise attenuation, and their difference
References
Li Zhongqing. 1994. The Road to Precise Exploration: Engineering Analysis of High-Resolution Exploration Systems. Beijing: Petroleum Industry Press.
Xiong Zhu. 1993. Application Technology of Seismic Data Digital Processing. Beijing: Petroleum Industry Press.
Xiong Zhu. 1995. Systematic Thinking on Seismic Data Processing Methods. Beijing: Petroleum Industry Press.
Xiong Zhu. 2002. Thoughts on Seismic Data Processing in Complex Areas. Beijing: Petroleum Industry Press.
Yu Shoupeng. 1993. High-Resolution Seismic Exploration. Beijing: Petroleum Industry Press.
Yu Shoupeng. 1996. The broadband Ricker wavelet. Petroleum Geophysical Exploration, 31(1): 606-615.
Zhang Baojin, Cheng Gu, Wang Yunjuan, et al. 2002. Denoising strength, denoising efficiency and amplitude fidelity. Petroleum Geophysical Exploration, 37(1): 1-6.
Zhou, Xiong Yong. 2003. Fine Processing of Seismic Data. Beijing: Petroleum Industry Press.
Yilmaz O. 1993. Seismic Data Processing (translated by Huang Xude). Beijing: Petroleum Industry Press.
Discussion on Seismic Data Processing from the Perspective of Visual Effect
Zhang Baojin 1, Cheng Gu 2, Feng Zhenyu 1, Wen Pengfei 1, Chen Cheng 1
(1. Guangzhou Marine Geological Survey, Guangzhou 510760; 2. Department of Earth Sciences, Sun Yat-sen University, Guangzhou 510275)