Yunus Levent Ekinci, Çağlayan Balkaya, Gökhan Göktürkler, Hanbing Ai, 3-D gravity inversion for the basement relief reconstruction through modified success-history-based adaptive differential evolution, Geophysical Journal International, Volume 235, Issue 1, October 2023, Pages 377–400, https://doi.org/10.1093/gji/ggad222
SUMMARY
A gravity inversion procedure using the success-history-based adaptive differential evolution (SHADE) algorithm is presented to reconstruct the 3-D basement relief geometry in sedimentary basins. We introduced exponential population size reduction (EPSR) to reduce the computational cost and used self-adaptive control parameters to solve this highly nonlinear inverse problem. Model parametrization was carried out by discretizing the sedimentary cover via juxtaposed right prisms, each placed below an observation point. Resolvability characteristics of the 3-D inverse problem were revealed through cost function topography landscapes. Fine-tuning the remaining control parameter, namely the population size, allowed us to get the best benefit from the algorithm. Additionally, a stabilizing function serving as a relative constraint was used to avoid undesired effects originating from the ill-posedness of the problem. In the synthetic data cases, the strategy we propose outperformed the linear population size reduction strategy, which has won various IEEE-CEC competitions. Thorough uncertainty assessments via probability density function and principal component analyses demonstrated the reliability of the obtained inverse models. In the real data case, residual gravity anomalies of two well-known major grabens of the Aegean Graben System (Türkiye), calculated via the finite element method, were inverted. The inverse solutions obtained for these basement reliefs, whose depths are still controversial, were determined to be statistically reliable. Moreover, these depths were found to be less than those reported in most previous studies. We conclude that SHADE with the EPSR strategy we propose is a powerful alternative inversion tool for highly nonlinear geophysical problems.
1 INTRODUCTION
Gravity method applications are frequently used in geophysical exploration studies. These applications range from small-scale studies such as cavity explorations or archaeo-geophysical works to large-scale regional geology investigations (Reynolds 1997). One of these applications is the determination of the sedimentary basin/basement relief interface. Such studies provide vital information for various subsequent investigations such as fossil fuel and geothermal energy explorations as well as hydrological studies (Bohidar et al. 2001; Chakravarthi et al. 2007; Silva et al. 2014; Roy et al. 2021a). Deposition of eroded continental material together with precipitation of chemical/organic remains in the aquatic environment leads to the formation of these sedimentary basins (Onajite 2014; Weissmann et al. 2015). The formation of oil, coal, gas and geothermal energy sources is the result of reactions that occur over millions of years between organic materials, sediments and water. The sediments are deposited on a metamorphic or igneous basement in various shapes and sizes. Inversion or inverse modelling studies, namely model parameter estimation procedures, are widely carried out in sedimentary basin/basement relief studies to reconstruct the interface between them using gravity data sets (Barbosa et al. 1997; Chakravarthi & Sundararajan 2004; Nabighian et al. 2005; Silva et al. 2006, 2010; Lima & Silva 2014; Pallero et al. 2015; Timur et al. 2019; Ekinci et al. 2021a). When the density of a predefined subsurface model geometry is estimated, the inverse problem is solved through a linear system of equations. Conversely, the inverse problem becomes nonlinear when the geometry of the subsurface model is estimated from known density value(s). Thus, from a mathematical perspective, estimating the basement relief geometry using gravity anomalies is a nonlinear inverse problem. Derivative-based local optimization algorithms that exhibit fast convergence rates are commonly used for solving these nonlinear basement relief gravity problems (Rao & Babu 1991; Barbosa et al. 1999; Sarı & Şalk 2006; Chakravarthi & Sundararajan 2007; Silva et al. 2010; Zhou 2013; Cai & Zhdanov 2015; Mojica & Bassrei 2015; Feng et al. 2018, 2020; Florio 2020). However, the inhomogeneous and complicated nature of the subsurface makes the inverse problem more challenging. The presence of noise in the data set and deficient observations, that is, scarcity and sparsity of the observed data, generally cause ambiguous solutions and error-prone estimates (Zhdanov 2002; Tarantola 2005). Moreover, derivative-based local-search optimizers mainly search the minima around the initial model guess rather than the global minimum situated in the model space (Sharma & Biswas 2013; Essa & Elhussein 2018; Ekinci et al. 2021b). Therefore, the success of local optimizers depends to a large extent on a well-chosen initial guess position in the model space (Menke 1989; Meju 1994). Unlike derivative-based algorithms, nature-inspired derivative-free global optimization algorithms perform different, efficient search procedures (Göktürkler & Balkaya 2012; Biswas 2015; Ekinci et al. 2016; Balkaya et al. 2017; Kaftan 2017; Essa & Munschy 2019; Ben et al. 2021; Roy et al. 2021b). Thus, these vector-based optimizers do not need well-designed initial model parameters to reach the unique global minimum, and they have the ability to avoid local minima situated in the model space through stochastic search procedures (Biswas & Sharma 2014; Montesinos et al. 2016; Ekinci et al. 2017; Alkan & Balkaya 2018; Essa & Elhussein 2020; Biswas & Rao 2021; Pallero et al. 2021). Besides, uncertainty appraisal analyses, which are an essential step for assessing model reliability in inverse problems, are also possible when global optimizers are used (Singh & Biswas 2016; Pallero et al. 2017; Ekinci et al. 2019; Turan-Karaoğlan & Göktürkler 2021; Roy et al. 2022). As a result, effective global optimization algorithms produce successful results in the global exploration and local exploitation steps of potential field data inversions (Biswas et al. 2017; Ekinci et al. 2020a; Sungkono 2020; Balkaya & Kaftan 2021; Essa 2021; Ben et al. 2022).
Following our previous work (Ekinci et al. 2021a), which provided 2-D inversion of gravity data sets to reconstruct basement relief geometry using a vector-based differential evolution (DE) algorithm, we present here an efficient and robust 3-D inversion scheme. This is the first study in the literature to apply a DE-based algorithm to a full 3-D geophysical problem. As the number of unknown model parameters increases in the 3-D case, inversion of the gravity data sets with classical DE requires extremely long computation times. Therefore, we adapted a variant of the DE algorithm to our inverse problem. The success-history-based adaptive DE (SHADE) algorithm (Tanabe & Fukunaga 2013) improves on the standard DE in two significant ways: it self-adapts its control parameters, namely the mutation scale factor and the crossover probability, and it performs the current-to-pbest/1/bin mutation approach used by adaptive DE with optional external archive (JADE) (Zhang & Sanderson 2009). The use of the linear population size reduction (LPSR) strategy with SHADE, namely the L-SHADE algorithm (Tanabe & Fukunaga 2014), significantly reduces the computational cost. Moreover, integrating a population reduction scheme into SHADE increases computational efficiency and convergence rate, a crucial issue for relatively large data sets in optimization problems. Despite the No Free Lunch theorems (Wolpert & Macready 1997), which reject the idea of general superiority of a particular metaheuristic algorithm, SHADE is considered one of the most efficient evolutionary algorithms in continuous optimization problems (Piotrowski 2018). Its variants have won many major competitions in evolutionary computation. In the last decade, DE-based algorithms have been successfully applied in many different science and engineering fields. Bilal et al. (2020) provided a useful summary of these promising optimizers. A very recent work (Piotrowski et al. 2023) performed on 22 real-world problems clearly showed that SHADE variants outperform Particle Swarm Optimization-based strategies, which are the most popular global optimization tools in geophysical anomaly inversion. Recently, there has been an increasing trend in the application of DE-based optimization algorithms for the inversion of geophysical potential field anomalies (Ekinci et al. 2016, 2017, 2019, 2020a, 2021a; Balkaya et al. 2017; Sungkono 2020; Du et al. 2021; Song et al. 2021; Roy et al. 2021b).
In the applications presented here, we examined the mathematical basis of the inversion problem using topography landscapes of the cost function |$( {{\rm{Cf}}} )$|. We carried out tuning studies and determined the optimum value of the algorithm-based control parameter. Some undesirable outcomes of the ill-posed character of the inversion were prevented by a relative constraint. Theoretical experiments showed that the exponential population size reduction (EPSR) strategy that we introduce outperforms LPSR. In the field data case, gravity anomalies of two major grabens of western Türkiye, where the basement relief depths are still controversial, were studied. Prior to the inversion procedures, residual gravity anomalies were obtained by means of a finite element method (FEM) with element shape functions (ESF). Probability density function (PDF) and principal component analysis (PCA) techniques enabled us to perform post-inversion uncertainty analyses in both theoretical and real data cases.
2 METHODOLOGY
2.1. Gravity anomaly equation
A set of vertical rectangular or square prisms placed side by side in a grid plane can be used to obtain the gravity responses of sedimentary basins over the basements. The gravity anomaly response of a prism (Fig. 1) with finite dimensions in Cartesian coordinates is given by (Plouff 1976; Blakely 1995)
where |$\gamma $| is the universal gravitational constant, |$\Delta \rho $| is the density contrast, and the top and bottom depths of the prismatic source are represented by |${z_1}$| and |${z_2}$|, respectively. x and y denote the horizontal coordinates of the prism. If the half-width and half-thickness of the prismatic source are defined by |$w$| and |$t$|, respectively (Fig. 1), the derivation of the integrals yields (Rao et al. 1990)
where r is the distance from the corners of the prismatic body to the |$P\,\,( {x,y} )$| as demonstrated in Fig. 1. After the substitutions, the relevant anomaly equation becomes
where
The other terms in the anomaly equations are given in the appendix. Calculation of the gravity response at any |$P\,\,( {x,y} )$| on the 2-D grid plane requires the repetitive use of the formidable anomaly equation (eq. 3) for each prism, and therefore computer time increases considerably. To reduce the computational cost, Rao et al. (1990) suggested an approximation to replace eq. (3). This approximation is practical beyond certain distances from the centre of the prismatic body: as the distance from the centre of the prism increases, the anomaly response calculated by the approximation approaches the true values. Thus, it is possible to reduce the computational cost significantly by calculating the gravity response of a line mass instead of a prismatic cell at such distances. The anomaly equation for a line mass is given as follows (Rao et al. 1990)
Substitutions lead to
where |$\Delta x$| and |$\Delta y$| are the data intervals in both directions. Definitions of |${r_1}$| and |${r_2}$| are given in the appendix. The approximate equation can be used for all data points except the ranges given below
It is clear that a large value of |${\rm{Lt}}$| causes the approximate equation to be applied to fewer data points. By performing trial-and-error studies, we observed that |${\rm{Lt}} = 2$| provides nearly exact gravity anomalies and significantly reduces the computational costs. Schematic representation of using exact and approximate equations is shown in Fig. 2.
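As a structural illustration, the following Python sketch shows how the exact prism expression (eq. 3) and the cheaper line-mass approximation are combined on the grid; the two response functions are passed in as callables because their closed forms live in the equations and the appendix, and the function and argument names are our own, not the authors' code.

```python
import numpy as np

def basin_forward(ny, nx, exact_fn, approx_fn, lt=2):
    """Accumulate the gravity response of all prismatic cells at all nodes.

    exact_fn(ci, cj, oi, oj) and approx_fn(ci, cj, oi, oj) return the
    response of cell (ci, cj) at observation node (oi, oj) via eq. (3)
    and the line-mass approximation, respectively. Within a Chebyshev
    distance of lt grid nodes from a cell centre the exact formula is
    used; elsewhere the approximation suffices (cf. Fig. 2 for Lt = 2).
    """
    g = np.zeros((ny, nx))
    for ci in range(ny):                      # loop over prismatic cells
        for cj in range(nx):
            for oi in range(ny):              # loop over observation nodes
                for oj in range(nx):
                    near = max(abs(oi - ci), abs(oj - cj)) <= lt
                    g[oi, oj] += (exact_fn if near else approx_fn)(ci, cj, oi, oj)
    return g
```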

Illustration showing a hypothetical 3-D basin model composed of prismatic cells.

Illustration showing at which nodes in the grid plane the exact and approximate equations are valid when |$\text {Lt} = 2$|. This illustration is valid for the prismatic cells (1,1) and (6,7).
2.2. SHADE algorithm
The DE is a powerful derivative-free and stochastic approach used in numerical optimization problems. This evolutionary population-based algorithm is built on efficiently integrated mutation and crossover operations for exploring and exploiting the search space (Storn & Price 1997; Price et al. 2005). Each of the individuals that form the population in the algorithm provides a possible solution to the optimization problem under investigation. In short, the offspring generated by DE through mutation, crossover and selection operations in the evolutionary cycle are candidates for the optimum solution. Throughout this cycle, population diversity deteriorates as the number of generations increases, which may lead to premature convergence or stagnation of evolution. This is an undesirable case for the procedure, which depends mainly on the differences between population members. To achieve a better optimization, the user-defined control parameters of the algorithm should be tuned for different optimization problems; however, determining their respective values can be a time-consuming task due to repeated experiments. To address these shortcomings of the standard DE, researchers have proposed several improvements focusing on control parameters and mutation strategies, which are summarized in detail in Zhong & Cheng (2020, 2021). SHADE, an enhanced version of JADE (Zhang & Sanderson 2009), is a state-of-the-art DE scheme introduced by Tanabe & Fukunaga (2013) at IEEE-CEC. SHADE benefits from a historical memory |$({M_{CR}},\,\,{M_F})$| that stores a set of values for the crossover rate |$\textit {CR}$| and the scaling factor F that have worked successfully before, and generates new |$\textit {CR}$| and F values by sampling the parameter space near the previously stored values. In addition, a trial vector is produced using the current-to-pbest/1/bin mutation approach. Adaptation of the control parameters F and |$\textit {CR}$| relies on the weighted Lehmer mean, and the historical memory includes H entries |$( {| {{M_F}} | = | {{M_{CR}}} | = H} )$| (Brest et al. 2016). As in other evolutionary algorithms, an initial population containing a series of real parameter vectors |${x_i} = ( {{x_1},{x_2}, \ldots ,{x_D}} )$|, |$i = 1, \ldots , \textit {NP}$|, is randomly initialized, where D indicates the optimization dimension, namely the number of unknown parameters to be estimated, and |$\textit {NP}$| is the population number. The process of generating and selecting trial vectors is then repeated until a user-defined stopping criterion is reached. The following steps describe the SHADE algorithm in detail (Tanabe & Fukunaga 2014).
The first step is to assign control parameters based on the historical memory. As explained above, SHADE maintains a historical memory with H entries, including |${M_{CR}}$| and |${M_F}$| for the control parameters |$\textit {CR} \in [ {0,\,\,1} ]$| and |$F \in [ {0,\,\,1} ]$|. The magnitude of the differential mutation operator is controlled by F. Initially, all |${M_{CR,k}},\,\,{M_{F,k}}$| |$( {k = 1, \ldots, H} )$| values are assigned 0.5. In each generation G, the |$\textit {CR}_i$| and |${F_i}$| used by each individual are randomly generated by choosing an index |${r_i}$| from |$[ {1,\,\,H} ]$|, after which the definitions given below are applied. SHADE adopts the procedures of JADE for values that violate the boundaries of |$\textit {CR}_i$| and |${F_i}$|. Thus, the control parameters for each individual |${x_{i,G}}$| are assigned.
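A minimal Python sketch of this sampling step (our own illustration, not the authors' code), assuming the usual SHADE conventions of normal sampling for |$\textit {CR}_i$| and Cauchy sampling for |${F_i}$| with scale 0.1:

```python
import numpy as np

def sample_control_parameters(M_CR, M_F, rng):
    """Draw (CR_i, F_i) for one individual from the historical memory.

    CR_i is sampled from a normal distribution centred on a randomly
    chosen memory entry and clipped to [0, 1]; F_i is sampled from a
    Cauchy distribution, truncated to 1 from above and resampled while
    it is non-positive (the JADE/SHADE boundary-handling rules).
    """
    r = rng.integers(len(M_CR))               # random memory index r_i
    CR = float(np.clip(rng.normal(M_CR[r], 0.1), 0.0, 1.0))
    F = 0.0
    while F <= 0.0:                           # resample until F_i > 0
        F = M_F[r] + 0.1 * rng.standard_cauchy()
    return CR, min(F, 1.0)

# usage: rng = np.random.default_rng(42); CR_i, F_i = sample_control_parameters(M_CR, M_F, rng)
```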
The second step is the reproduction of trial vectors. Here, a mutant vector |${v_{i,G}}$| is produced from the current population members by the current-to-pbest/1/bin mutation strategy used by JADE. This is also a variant of the classical current-to-best/1 strategy of DE, in which the greediness is adjustable via a parameter p.
The individual |${x_{pbest,G}}$| in eq. (9) is randomly chosen from the top |$N \times p\,\,( {p \in [ {0,\,\,1} ]} )$| members of generation G, and the indices |${r_1}$| and |${r_2}$| differ from each other as well as from i, since they are randomly chosen from |$[ {1,\,\,N} ]$|. Here p, which trades off exploration and exploitation, controls the greediness of the mutation strategy. A bound-constraint correction is also applied by SHADE, as described in JADE, to mutated vector elements outside the limits of the search space (i.e. |$x_j^{min}$|, |$x_j^{max}$|). In this step, the trial vector |${u_{i,G}}$| is obtained by crossing the mutant vector |${v_{i,G}}$| with its parent vector |${x_{i,G}}$|, whereby binomial crossover is the commonly used operator.
where |$rand[ {0,\,\,1} )$| indicates a uniformly chosen number between |$[ {0,\,\,1} )$|, and |${j_{rand}}$| is an index of the decision variables selected from |$[ {1,\,\,D} ]$|.
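The mutation and crossover of this step can be sketched as follows; this is an illustrative sketch under the standard current-to-pbest/1/bin definitions, and the array layout and the midpoint-style bound repair follow JADE and are our assumptions:

```python
import numpy as np

def current_to_pbest_1_bin(pop, fitness, i, F, CR, p, archive, bounds, rng):
    """current-to-pbest/1 mutation followed by binomial crossover (a sketch).

    pop:     (NP, D) array of parameter vectors
    archive: list of replaced parent vectors (may be empty)
    bounds:  (D, 2) array of [min, max] per parameter
    """
    NP, D = pop.shape
    # pbest: one of the top 100*p per cent individuals, chosen at random
    n_best = max(2, int(round(p * NP)))
    pbest = pop[rng.choice(np.argsort(fitness)[:n_best])]
    # r1 from the population, r2 from population union archive, all distinct
    r1 = rng.integers(NP)
    while r1 == i:
        r1 = rng.integers(NP)
    union = np.vstack([pop] + ([np.asarray(archive)] if archive else []))
    r2 = rng.integers(len(union))
    while r2 == i or r2 == r1:
        r2 = rng.integers(len(union))
    v = pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - union[r2])
    # JADE-style bound repair: move halfway between the violated bound and the parent
    v = np.where(v < bounds[:, 0], (bounds[:, 0] + pop[i]) / 2, v)
    v = np.where(v > bounds[:, 1], (bounds[:, 1] + pop[i]) / 2, v)
    # binomial crossover; j_rand guarantees at least one mutant gene survives
    mask = rng.random(D) < CR
    mask[rng.integers(D)] = True
    return np.where(mask, v, pop[i])
```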
The third step involves a selection procedure to determine the survivors that will form the next generation, taking into account all the trial vectors |$( {{u_{i,G}},0 \le i \le N\,\,} )$| obtained from the previous step. As in the standard DE, the selection operator compares each individual |${x_{i,G}}$| with its corresponding trial vector |${u_{i,G}}$| and keeps the better vector in the population.
In the fourth step, SHADE uses an external archive, optionally used in JADE, to preserve diversity. When parent and trial vectors are compared, the defeated parent vectors that do not survive into the next generation are retained in the archive. When the archive is used, |${x_{{r_2},G}}$| in eq. (9) is chosen from the union of the population P and the archive A (|$P \cup A$|). If the size of the archive exceeds the predefined archive size |$| A |$|, randomly chosen individuals are removed to provide space for newly added ones.
In the fifth step, SHADE updates the historical memory. The |$\textit {CR}_i$| and |${F_i}$| values that produce a trial vector better than the parent individual are recorded in |${S_{CR}}$| and |${S_F}$| in each generation, and the memory contents are updated at the end of the generation (Tanabe & Fukunaga 2014).
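Steps three to five can be condensed into a single routine; the sketch below is our hedged reading of Tanabe & Fukunaga (2014), using the weighted Lehmer mean, with weights proportional to the fitness improvement, for both memory entries:

```python
import numpy as np

def select_and_update(pop, fitness, trials, trial_fitness,
                      archive, archive_size, M_CR, M_F, k, CRs, Fs, rng):
    """Selection, archive maintenance, and historical-memory update (a sketch).

    CRs, Fs hold the control parameters each trial vector was built with;
    k is the circular index of the memory cell to overwrite.
    """
    S_CR, S_F, improve = [], [], []
    for i in range(len(pop)):
        if trial_fitness[i] <= fitness[i]:            # trial survives
            if trial_fitness[i] < fitness[i]:         # strict improvement: success
                archive.append(pop[i].copy())         # losing parent goes to archive A
                S_CR.append(CRs[i]); S_F.append(Fs[i])
                improve.append(fitness[i] - trial_fitness[i])
            pop[i], fitness[i] = trials[i], trial_fitness[i]
    while len(archive) > archive_size:                # keep |A| bounded
        archive.pop(rng.integers(len(archive)))
    if S_F:                                           # weighted Lehmer mean update
        w = np.asarray(improve) / np.sum(improve)
        M_F[k] = np.sum(w * np.square(S_F)) / np.sum(w * np.asarray(S_F))
        M_CR[k] = np.sum(w * np.square(S_CR)) / np.sum(w * np.asarray(S_CR))
        k = (k + 1) % len(M_CR)
    return k
```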
2.2.1. L-SHADE algorithm
The L-SHADE algorithm (Tanabe & Fukunaga 2014) is an extended variant of SHADE based on a continuous |$\textit {NP}$| reduction strategy. The algorithm adjusts F and |$\textit {CR}$| and reduces |$\textit {NP}$|, whereas these parameters remain constant during the evolution cycle in the standard DE algorithm. The LPSR strategy is integrated into SHADE to improve its performance. L-SHADE is a state-of-the-art evolutionary algorithm, which was evaluated as the best DE-based algorithm at CEC2014 (Beijing, China). Additionally, many L-SHADE-based schemes have won various IEEE-CEC competitions in the last decade. These tests were performed on many different types of real-world problems and on many benchmark functions (Piotrowski 2018). The method is based on a simple specialization of SVPS (Laredo et al. 2009), which continuously reduces |$\textit {NP}$| as a linear function of the number of fitness evaluations. The population size of the next generation, |$\textit N\!{P_{G + 1}}$|, is calculated after each generation G using the definition given below
where |$\textit {NFE}$| and |$max\_\textit {NFE}$| denote the current and maximum number of fitness evaluations, respectively. The |$\textit {NP}$| is equal to |$\textit N\!{P^{\rm max}}$| in the first generation, and decreases to |$\textit N\!{P^{\rm min}}$| at the end of the last generation. Since L-SHADE uses the current-to-pbest mutation operator specified in eq. (9), the smallest possible |$\textit N\!{P^{\rm min}}$| value is 4.
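In code, the linear schedule takes the following form (a sketch; the rounding convention matches the usual L-SHADE definition):

```python
def lpsr_population_size(nfe, max_nfe, np_max, np_min=4):
    """Linear population size reduction (LPSR).

    NP is interpolated linearly from np_max at NFE = 0 down to np_min at
    NFE = max_nfe; after each generation the worst individuals are deleted
    until the population matches the returned size.
    """
    return round(((np_min - np_max) / max_nfe) * nfe + np_max)
```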
2.2.2. E-SHADE algorithm
Here we introduce an exponential |$\textit {NP}$| reduction strategy, named EPSR, instead of LPSR to further improve the SHADE performance. The population size of the next generation, |$N\!{P_{G + 1}}$|, is calculated after each generation G as follows
where |$\lambda $| is a constant indicating the decay rate that reduces |$N\!{P^{\rm max}}$| to the smallest |$\textit N{P^{\rm min}}$| value. This rate is determined before the inversion procedure by a simple search between 0 and 1e-3 with a step size of 1e-9, and the value that equates |$N\!{P^{\rm max}}$| to |$N\!{P^{\rm min}}$| at the end of the run is assigned to |$\lambda $|. Fig. 3 gives a pseudo-code of the SHADE algorithm together with the LPSR and EPSR approaches.
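Since the displayed EPSR equation did not survive extraction here, the sketch below assumes the natural exponential form |$N\!{P_{G+1}} = {\rm round}( N\!{P^{\rm max}}\,{e^{ - \lambda \cdot \textit {NFE}}} )$|; the grid search mirrors the text (its closed-form solution would be |$\lambda = \ln ( N\!{P^{\rm max}}/N\!{P^{\rm min}} )/max\_\textit {NFE}$|):

```python
import math

def epsr_decay_rate(np_max, np_min, max_nfe, step=1e-9, upper=1e-3):
    """Scan lambda over [0, 1e-3] with step 1e-9, as described in the text,
    and keep the first value that brings np_max down to np_min at the
    final fitness evaluation."""
    lam = 0.0
    while lam <= upper:
        if round(np_max * math.exp(-lam * max_nfe)) <= np_min:
            return lam
        lam += step
    return upper

def epsr_population_size(nfe, np_max, np_min, lam):
    """Exponential population size reduction: NP decays as exp(-lambda * NFE)."""
    return max(np_min, round(np_max * math.exp(-lam * nfe)))
```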
2.3. FEM
The gravity background can be efficiently approximated through FEM with ESF (Mallick & Sharma 1999). An eight-node element is utilized on a four-sided real map space |$( {{x_r},{y_r}} )$| with centre position |$( {{x_{cp}},{y_{cp}}} )$|. The side lengths of this space are defined as |$2x$| and |$2y$|. The definitions given below are used to transform the real space into the reference space
where |$\alpha $| and |$\beta $| are the non-dimensional coordinates of the reference space, ranging between |$[ { - 1,\,\,1} ]$|. A weighted sum of the eight nodal gravity values is used to approximate the regional effect at a point through the following expression (Mallick & Sharma 1999)
where |${N_i}( {\alpha ,\beta } )$| denote ESF and they are defined as follows (Cheung & Yeo 1979)
ESF are equal to 1 at node i and zero at other nodes. In the last step, the regional anomaly is transformed into real map space by the following definitions (Mallick & Sharma 1999)
where |${M_i}( {\alpha ,\beta } )$| denote ESF and |${x_{ri}}$| and |${y_{ri}}$| are the coordinates of the nodes in the real map space.
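A compact Python sketch of this regional approximation, assuming the standard eight-node serendipity shape-function family (the node ordering and helper names are ours; the coordinate transform follows the side-length definitions above):

```python
import numpy as np

# Reference-space node positions (alpha_i, beta_i): four corners, four mid-sides.
NODES = [(-1, -1), (1, -1), (1, 1), (-1, 1),
         (0, -1), (1, 0), (0, 1), (-1, 0)]

def to_reference(xr, yr, xcp, ycp, half_x, half_y):
    """Map a real-space point to (alpha, beta) in [-1, 1] x [-1, 1]."""
    return (xr - xcp) / half_x, (yr - ycp) / half_y

def shape_functions(alpha, beta):
    """Serendipity shape functions N_i: 1 at node i, 0 at the other seven."""
    N = []
    for ai, bi in NODES:
        if ai != 0 and bi != 0:     # corner node
            N.append(0.25 * (1 + ai * alpha) * (1 + bi * beta)
                     * (ai * alpha + bi * beta - 1))
        elif ai == 0:               # mid-side node on a horizontal edge
            N.append(0.5 * (1 - alpha ** 2) * (1 + bi * beta))
        else:                       # mid-side node on a vertical edge
            N.append(0.5 * (1 + ai * alpha) * (1 - beta ** 2))
    return np.asarray(N)

def regional_at(alpha, beta, node_gravity):
    """Regional field as the N_i-weighted sum of the eight nodal values."""
    return float(shape_functions(alpha, beta) @ np.asarray(node_gravity))
```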
3 SYNTHETIC DATA EXPERIMENTS
3.1. Modal analyses of the inverse problem
|${\rm{Cf}}$| minimization searches for an input that results in the minimum output of the function. However, the problem is not simple in geophysics, as the inverse problem has an ill-posed and non-unique background. There may be many parameter sets in the model space that can minimize the |${\rm{Cf}}$| to a desirably small value. In optimization problems, the |${\rm{Cf}}$| surface has either a convex or a non-convex topography. If the surface is smooth and convex, a bowl-shaped valley including the single global minimum dominates the |${\rm{Cf}}$| topography. In some cases, the global minimum may be the deepest point of a deep narrow valley with steep sides. Such topographies are the properties of unimodal functions. On the other hand, in some types of problems, the |${\rm{Cf}}$| topographies show a non-convex character, namely they comprise both the unique global minimum and multiple valleys containing deceptive minima. This undesired character may cause an optimization to get stuck at a local minimum. These types of inverse problems have a multimodal nature and are more challenging. Additionally, the difficulty of solving unimodal functions also varies based on the topographical changes on the |${\rm{Cf}}$| surface. To better define the character of the inverse problem, the angles that the contour lines make with the axes in error topography maps should be analysed (Ekinci et al. 2021a; Ai et al. 2022). These maps, showing the |${\rm{Cf}}$| topographies in the vicinity of the global minimum, provide information about the resolutions and dependencies of the parameter pairs. Thus, thanks to these analyses, it is possible to have preliminary information on whether the solutions to be obtained tend to contain uncertainties or not. In order to carry out these vital analyses, the gravity response of a theoretically designed basin model was produced using eq. (3). Assuming that the hypothetical basin model covers an area of 20 km × 30 km, we considered side-by-side right square prisms, each placed below a grid point. A total of 176 prismatic cells, each with a square base of 2 km side length, was used. As shown in Fig. 4, the maximum and minimum thicknesses of the sedimentary basin are 1.2 and 0.2 km, respectively. The density contrast between the sedimentary cover and the basement is 0.1 g cm−3. The right-hand panel in Fig. 4 demonstrates the corresponding gravity response. Here, we used the following expression for |${\rm{Cf}}$|
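The displayed expression was lost in extraction; an RMS-type misfit consistent with the symbols defined next would read (our reconstruction, not necessarily the authors' exact normalization)

$${\rm{Cf}} = \sqrt {\frac{1}{N}\mathop \sum \limits_{i = 1}^N {{\left[ {\Delta g_i^{\rm obs} - \Delta {g_i}( m )} \right]}^2}} $$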
where |$\Delta {g^{\rm obs}}$| is the observed anomaly and |$\Delta g( m )$| represents the gravity response of the predicted model m. Here we present six topography maps (Fig. 5) that reflect the general character of the inverse problem. Numerous trial-and-error runs showed that other maps not presented here have the same character as those shown. While calculating the |${\rm{Cf}}$| topography for any parameter pair, the thicknesses of the other prismatic cells are fixed at their actual values. In the images, white points mark the global optima in the |${\rm{Cf}}$| topographies and the axis intervals correspond to the search bounds. Fig. 5 shows that contours surround the global minima on the topographic maps and that the contour lines appear in two different characters, nearly circular and elliptical. Thus, it can be stated that the presented inverse problem has a unimodal characteristic. The almost circular contour lines indicate that the parameters are uncorrelated and can be solved independently. The elliptical shapes, on the other hand, carry information about parameter dependencies. On some maps it is clear that the global minima are surrounded by horizontal or vertical ellipses. This behaviour of the contours indicates no correlation between the parameters, and therefore independent solutions can be achieved. However, it should be noted that the solution range of the parameter corresponding to the major axis of the ellipse is wider, and therefore it can be solved with less precision than the other parameter. Unlike the others, the major and minor axes of the ellipse make angles with the parameter axes, as seen in the upper middle panel of Fig. 5. This tilted ellipse indicates that the thicknesses of these two prismatic cells are negatively correlated and therefore cannot be solved independently. As the thickness of one cell increases, the other decreases, so only the sum of the thickness values can be estimated. This negative correlation is also observed between adjacent cells. This last situation originates from the non-unique nature of the considered gravity inversion problem. Therefore, the optimization solutions of neighbouring cells may be subject to some uncertainties. To overcome this problem, the optimizer should be efficient in both global exploration and local exploitation steps. Moreover, these modal analyses showed that some relative constraints should be used so that the thickness values do not reach unreasonable values in negatively correlated cell pairs. On the other hand, it should be noted that unclosed or banana-shaped contours around the global minima, which would make successful parameter approximation almost impossible, were not observed on the |${\rm{Cf}}$| surface maps.

Hypothetical basin model with bottom depths and corresponding gravity anomaly map.

|$\text {Cf}$| topography landscapes for some prismatic cell pairs, showing the character of presented inverse problem.
3.2. Search space generation
In contrast to local optimizers, a global optimization algorithm uses a range of values as the initial guess instead of a single value for each model parameter to be optimized. Therefore, these optimizers have the ability to perform a more comprehensive search in the predefined search space. It has been reported that the Bouguer slab formula (Bott 1960) given below can be used to calculate an appropriate search space in basement relief estimation studies (Pallero et al. 2015, 2017; Ekinci et al. 2021a)
Accordingly, the lower and upper limits of search space for the cell thicknesses are designed as follows
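Because the slab formula and the bound definitions (eq. 23) did not survive extraction here, the sketch below only illustrates the idea: the slab thickness implied by each station's anomaly anchors the bounds, which are then bracketed by illustrative factors of our own choosing.

```python
import numpy as np

# 2*pi*G expressed for mGal, km and g/cm^3 units:
# g_slab [mGal] ~= 41.91 * density_contrast [g/cm^3] * thickness [km]
TWO_PI_G = 41.91

def slab_thickness(anomaly_mgal, density_contrast):
    """Infinite-slab (Bott 1960) thickness estimate below one station."""
    return np.abs(anomaly_mgal) / (TWO_PI_G * abs(density_contrast))

def search_bounds(anomaly_mgal, density_contrast, lo=0.5, hi=1.5):
    """Bracket the slab estimate; the +/- 50 per cent factors are purely
    illustrative, since the exact bound definitions of eq. (23) are not
    reproduced here."""
    t = slab_thickness(np.asarray(anomaly_mgal), density_contrast)
    return lo * t, hi * t
```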
3.3. Parameter tuning and synthetic data inversion
The parameter tuning process is another vital step in parameter estimation studies performed with global optimization algorithms. Algorithm-based control parameters are the structural elements of global optimization algorithms. The optimum values of these elements are not unique and may vary depending on the optimization problem. Therefore, the optimum values need to be determined to obtain the best performance of the algorithm used (Ekinci et al. 2021a). Contrary to the classical DE algorithm, the control parameters |$\textit {CR}$| and F are selected by means of a historical memory in the SHADE algorithm. Therefore, there is no need to perform tuning studies for these parameters. However, the effects of |$\textit {NP}$| should be investigated before the inversion process. We set |$\textit {NP} = D \times K$|, where |$K \in \{ {5,\,\,10,\,\,15,\,\,20} \}$|, and performed experimental inversion studies to observe the effects on the model solutions and also on the convergence behaviour of the algorithm. Secondly, we compared the EPSR strategy that we designed for the |$\textit {NP}$| reduction process with the standard LPSR. In the optimizations, we used a relative constraint to avoid the problems that arise from the non-uniqueness of the inverse problem. As mentioned in the modal analyses section using |${\rm{Cf}}$| topography maps, negatively correlated adjacent cells may lead to geologically unreasonable solutions such as saw-tooth-shaped bottom surfaces for basin models. Therefore, as we did in Ekinci et al. (2021a), a balancing/regularization function was included as a relative constraint in the optimization algorithm. After many trial-and-error studies, we found that the isotropic Gaussian function given below helps us to avoid abrupt and inappropriate thickness changes between adjacent cells in our problem.
where s represents the standard deviation (std) of the distribution. In the applications we used |$s = 1$|. This constraint was applied periodically at the end of every 500th generation. While the effect of this function on depth balancing is particularly large in adjacent cells, it decreases in the other cells as a function of distance from the window centre. This 2-D function with a kernel size of 5 × 5 can be considered a global optimization equivalent of the regularization term applied to obtain a smooth basin model in Barbosa et al. (1997) and Silva et al. (2006, 2009), namely in local optimization. In Case 1, we inverted the gravity anomaly shown in Fig. 4 using the LPSR and EPSR strategies independently. We ran both algorithms with the same initial population in order to make comparisons under equal conditions. Additionally, 20 independent runs were performed to carry out some statistical analyses. Search space limits for cell thicknesses were determined using eq. (23). We set the generation/iteration number |$( \textit {GN} )$| as a multiple of the number of cells used. Here we used |$\textit {GN} = D \times 500$|.
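A sketch of this relative constraint, assuming (as the kernel description suggests) that it acts as a 5 × 5 Gaussian smoothing window over a thickness grid; applying it to each individual's thickness map is our assumption:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, s=1.0):
    """Isotropic 2-D Gaussian window (here 5 x 5 with std s = 1),
    normalized so that the weights sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * s ** 2))
    return k / k.sum()

def relative_constraint(thickness_grid, kernel=None):
    """Smooth the cell-thickness grid to suppress saw-tooth bottom
    geometries; called at the end of every 500th generation."""
    if kernel is None:
        kernel = gaussian_kernel()
    return convolve(thickness_grid, kernel, mode='nearest')
```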
Fig. 6 shows the per cent errors in cell thicknesses obtained with both |$\textit {NP}$| reduction strategies, together with the effects of different |$\textit {NP}$| values on the solutions. It can be clearly observed that in the case of |$\textit {NP} = D \times 5$|, the error rates in the prism thicknesses obtained through the L-SHADE and E-SHADE algorithms are lower than the others. This is due to the inadequacy of the |$\textit {GN}$| used for larger values of |$\textit {NP}$|. However, we preferred not to increase the |$\textit {GN}$| value, because doing so does not affect the solutions appreciably while increasing the computational cost considerably. In addition, we also observed that the per cent error rates for the cell thicknesses obtained with the E-SHADE strategy are lower. The inverse basin models calculated with both strategies and the corresponding gravity anomalies are demonstrated in Fig. 7.

Plots showing the per cent errors in cell thicknesses obtained by LPSR and EPSR strategies for different |$\textit {NP}$| values.

Inverse basin models of Case 1 calculated with L-SHADE and E-SHADE optimization algorithms with the use of eq. (24) and the corresponding gravity anomalies.
Although no difference between the two inverse models is apparent in the figure, the interpretation made for Fig. 6 becomes clearer in Fig. 8. Given the differences between the true and calculated gravity values at the data points, it is clear that the L-SHADE algorithm produced slightly larger misfits. Both algorithms showed steadily decreasing |${\rm{Cf}}$| behaviour, but as seen from the lower left-hand panel, the |${\rm{Cf}}$| value of the model solution of L-SHADE is higher at the end of the generations. The gravity values calculated via the E-SHADE algorithm fit well with the true values and lie on the 99 per cent confidence line (Fig. 8). Therefore, we used the E-SHADE algorithm in the subsequent optimizations.

Differences between true and calculated anomalies via both algorithms, |$\text {Cf}$| and |$\textit {NP}$| changes against generations, and data fit obtained via E-SHADE algorithm.
In the second case, we tested the robustness of the algorithm against noise. We contaminated the synthetic anomaly with pseudo-random values with a mean and std of 0.1 mGal to obtain a reasonably noisy data set. The added noise content caused a change in the amplitude values without affecting the overall character of the anomaly. The noisy gravity anomaly and the inversion results obtained from the best of 20 runs are given in Fig. 9.

Noisy anomaly used in Case 2 (upper left-hand panel) and the anomaly calculated via E-SHADE algorithm with eq. (24) (upper right-hand panel), the differences between them (lower left-hand panel), and the reconstructed inverse basin model (lower right-hand panel).
The E-SHADE algorithm with the relative constraint yielded a plausible basin model with a smooth bottom structure. It can also be seen from the lower left-hand panel that the difference between the true and calculated data at the data points is quite small, except for the points located at the edges of the basin model. Considering that the deepest part of the theoretical basin model is 1.2 km, the depth of 1.13 km (5.8 per cent error) obtained from the noisy data inversion is a reasonable result. Moreover, as shown in Fig. 12, the fit between the calculated and true gravity data sets is acceptable.
In the third case, we aimed to observe the effects on the solution when the isotropic function (eq. 24) is not used. The best solution obtained after 20 independent runs was considered. The calculated anomaly and the corresponding basin geometry are shown in the right-hand panels of Fig. 10. The difference between the true and calculated data sets is smaller, as can be seen in the lower left-hand panel. However, the depth to the bottommost point is 1.34 km (lower right-hand panel), which corresponds to an error of 11.7 per cent. In addition, irregular and abrupt changes in cell thicknesses are readily apparent in the deep regions of the inverse basin model. Although the agreement between the data sets in this case is excellent (Fig. 12), a geologically unreasonable basement relief geometry was obtained. This is due to the phenomenon of overfitting, that is, the generation of unnecessary model structure that occurs when the calculated data set follows a limited number of noise-bearing observations too closely (Menke 1989; Zhdanov 2002).

Noisy anomaly used in Case 3 (upper left-hand panel) and the anomaly calculated via E-SHADE algorithm without eq. (24) (upper right-hand panel), the differences between them (lower left-hand panel), and the reconstructed inverse basin model (lower right-hand panel).
In the fourth case, we doubled the noise level and observed the response of the E-SHADE optimization algorithm. The added noise content caused notable anomaly amplitude changes, mostly in the middle part of the grid (Fig. 11). It is clear that these random amplitude changes produce a more difficult case for determining the basement relief depths. The optimization process was performed with the use of eq. (24). The results of the best run are shown in Fig. 11. The differences between the two data sets are reasonable, and a smooth inverse basin model was obtained. The deepest point of the basement relief was calculated with an error of 5 per cent; considering the noise level, this error rate is negligible. Fig. 12 shows that the majority of the data lie on the confidence ellipses. Based on the findings of Case 4, we can state that the E-SHADE algorithm with the use of the relative constraint produced reasonable model solutions even in the inversion of data with high noise content.

Noisy anomaly used in Case 4 (upper left-hand panel) and the anomaly calculated via E-SHADE algorithm with eq. (24) (upper right-hand panel), the differences between them (lower left-hand panel), and the reconstructed inverse basin model (lower right-hand panel).

Data fits and the confidence ellipses for the synthetic data cases.
3.4. Uncertainty appraisal analyses
One of the opportunities offered by derivative-free global optimization algorithms is the ability to perform uncertainty appraisal analysis on the obtained solutions. Unlike local optimizers, global optimization algorithms are run multiple times for the data sets being inverted. Since scanning of the best parameter values starts at a random position in the model space each time, different solutions may be obtained in each run. The statistical distribution of the solution sets containing the model parameters provides information about the quality of the obtained solutions. Therefore, it is necessary to examine the consistency of all solutions obtained from independent runs. Additionally, the sampled nonlinear |${\rm{Cf}}$| landscape of the best run should be analysed for model uncertainty assessment.
3.4.1. PDF analysis
PDF is a fundamental building block for the statistical definition of an estimated parameter value (Talley et al. 2011). PDF analysis defines how the probability density is scattered over the range of values of an estimated parameter. The definition of this function for a normal distribution is given as follows
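The displayed formula was lost in extraction; for a normal distribution with mean a and std s it reads

$$P( x ) = \frac{1}{{s\sqrt {2\pi } }}\,\exp \left[ { - \frac{{{{( {x - a} )}^2}}}{{2{s^2}}}} \right]$$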
where s and a represent the std and average values, respectively, and x denotes a discrete variable of the parameter population. This function takes non-negative values and its integral over the entire parameter space, that is, the total area under the distribution curve, is equal to 1. The prismatic cells whose true thicknesses are given in Fig. 4 were used in the PDF analyses. Fig. 13 demonstrates the confidence intervals of the mean cell thicknesses with a probability of 95 per cent for both the noise-free and noisy data experiments. Additionally, the best values of each cell thickness obtained from the 20 independent runs are also given. The solutions obtained are very close to each other. Extremely low std values indicate that all solutions fall into very narrow ranges in Cases 1, 2 and 4. This finding reveals that the solutions are within the confidence intervals and carry practically no uncertainty. The third row in Fig. 13 shows the PDF curves of the solutions obtained without using the relative constraint (Case 3). It is seen that the std values are higher than in the other cases. Additionally, the best solution of the thickest cell (middle column) is outside the 95 per cent confidence interval. This can be neglected as the values change only within a very small range. Nevertheless, it should be noted that running the E-SHADE algorithm with the isotropic function given in eq. (24) provides more stable and robust solutions.

Plots showing the PDF analyses of the solutions for the synthetic data cases. Note that the solutions of the prismatic cells shown in Fig. 4 are used. True thickness values are given in the upper left-hand corners of the plots.
3.4.2. PCA
PCA is used to analyse and visualize multidimensional parameter/variable sets in a much lower-dimensional space. Although reducing the data dimensionality causes some loss of information, PCA performs this task by preserving the general character of the multidimensional variables (Jackson 1991; Jolliffe 2002; Alpaydın 2020). In brief, the method finds the projection of the initial set onto a new coordinate system defined by the directions of largest variance (Munoz-Romero et al. 2016). As described in the modal analysis section, |${\rm{Cf}}$| topography maps are generated for the parameter pair examined to determine parameter dependencies. In these 2-D maps, the two axes represent two model parameters, not the whole parameter set. By PCA, on the other hand, a large number of model parameters can be represented on two axes. Therefore, a nonlinear uncertainty appraisal analysis can be performed, which is an essential step in understanding the reliability of the model solutions (Pallero et al. 2015, 2017; Roy et al. 2021b; Sungkono 2023). To carry out this orthogonal linear transformation, the covariance matrix is calculated as follows
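The displayed definition was lost in extraction; a standard sample-covariance form consistent with the symbols defined next (with n the number of sampled models; whether the normalization is n or n − 1 is not recoverable here) is

$${\rm{Cov}} = \frac{1}{{n - 1}}\,( {M - \mu } ){( {M - \mu } )^{\rm T}}$$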
where M denotes a matrix containing all model parameter sets calculated during the optimization and |$\mu $| is the mean of each parameter set. The eigenvalues and eigenvectors of the matrix |${\rm{Cov}}$| are calculated to obtain the principal components. Then, using the first two components, the coordinates of the projection are generated. These coordinates and the vector containing the corresponding misfit values are used to map the |${\rm{Cf}}$| topography in 2-D space. The details of PCA in geophysical data inversion can be found in Pallero et al. (2015). In the analyses, we used the relative |${\rm{Cf}}$| as given below
All model solutions below a certain cutoff value (< 15 per cent) were taken into account to generate the principal component coordinates, and the misfit values were then plotted with respect to these coordinates. Fig. 14 shows the equivalence function topographies of the synthetic cases. In Case 1, it is clearly seen that the best and true models overlap in the lowest |${\rm{rCf}}$| region. As the PDF analysis also indicates, the model solution is reliable based on PCA. The best model is not located at the lowest misfit point in Case 2. In this noisy case, the relative constraint produced a smooth inverse model and avoided overfitting of the data. Thus, the obtained solution is not suspect. However, in Case 3, where eq. (24) is not used, the result is different. The |${\rm{rCf}}$| value of the best model is much lower than that of the true one. This is the overfitting situation, in which the optimization converges not only to the data but also to the noise component in the data. In the last case, despite the high noise level, the best model is located at a satisfactory point in the principal component space and there is no overfitting. It is clear that implementing the E-SHADE algorithm with the use of the isotropic function avoids getting trapped by the deceptive massive minima that exist in the |${\rm{rCf}}$| topographies and provides reliable model solutions. In addition, as can be observed from the images, the algorithm's ability to scan large sectors of the model space shows its efficiency.
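A sketch of how such maps can be produced (our own illustration; the relative misfit definition in the comment is an assumption, since the displayed |${\rm{rCf}}$| expression did not survive extraction):

```python
import numpy as np

def pca_misfit_map(models, rcf, cutoff=15.0):
    """Project sampled models onto the first two principal components.

    models: (D, n) matrix M, one model per column
    rcf:    relative Cf values in per cent for each model (e.g. an
            assumed form such as 100 * ||g_obs - g(m)|| / ||g_obs||)
    Returns the 2-D coordinates and misfits of the models below the
    cutoff, ready to be gridded and contoured as in Fig. 14.
    """
    rcf = np.asarray(rcf)
    keep = rcf < cutoff
    M = np.asarray(models)[:, keep]
    centred = M - M.mean(axis=1, keepdims=True)
    cov = centred @ centred.T / max(M.shape[1] - 1, 1)
    evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    pcs = evecs[:, ::-1][:, :2]             # two largest-variance directions
    coords = pcs.T @ centred                # shape (2, n_kept)
    return coords, rcf[keep]
```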

Maps showing the equivalence |$\rm rCf$| topographies in 2-D principal component space for the synthetic data cases. Maps are produced using the first two components and corresponding |$\rm rCf$| values. In the Cases 3 and 4, true model parameters are added into the last column of matrix M and their |$\rm rCf$| values are added to the end of the misfit vector. Smoothed contour lines are used for better illustrations.
4 REAL DATA EXPERIMENTS
4.1. Study area and gravity data
Türkiye is located in an earthquake-active region and has experienced many devastating earthquakes throughout history (Bozkurt 2001). The earthquakes (Mw 7.7, Mw 7.5 and Mw 6.4) that occurred on the East Anatolian Fault (EAF) and Dead Sea Fault (DSF) in February 2023 clearly show the very active tectonics of the country. The relative movements of the African, Arabian and Eurasian plates cause this activity (Fig. 15). This interaction has produced many significant fault systems in the country (Dewey & Şengör 1979; Şengör & Yılmaz 1981). Because of this interaction, the Anatolian Plate moves counterclockwise along the dextral North Anatolian Fault (NAF) and the sinistral EAF (McKenzie 1972; Burke & Şengör 1986). The Aegean and Cyprian Arcs, the NAF and the EAF control the neotectonics of Türkiye (Bozkurt 2001). The crustal extension in western Anatolia and the Aegean Sea is caused by the subduction of the African plate beneath the Anatolian plate along the Aegean and Cyprian subduction zones (Fig. 15) (Hancock & Barka 1987; Paton 1992). The N–S continental extension that has occurred in the West Anatolian Extensional Province (WAEP) over the last 5 Ma (Jackson & McKenzie 1984) is about 3–4 cm per year (Le Pichon et al. 1995). The geodynamic evolution of the region is remarkably influenced by the Aegean Arc, which is a very active plate boundary (Bozkurt 2001). The young horst and graben systems and the basin-bounding faults control the tectonism of the region (Dewey & Şengör 1979; Bozkurt 2001).
The neotectonic structure, namely the Aegean Graben System (AGS), is responsible for the sharp topography of the region (Fig. 16). The Bozdağ Horst (BH), Aydın Horst (AH), the asymmetrically growing Büyük Menderes Graben (BMG), the arc-shaped Gediz Graben (GDG), the Küçük Menderes Graben (KMG) and their bounding active fault systems (Fig. 17) are the most noteworthy neotectonic elements in western Anatolia (McKenzie 1978; Bozkurt 2000). They represent the properties of N–S directed extensional tectonics (Şengör et al. 1985). Detachment faults played an active role in the development of these grabens, which shape the Menderes Massif, a core complex of worldwide significance. These major grabens are composed predominantly of continental sedimentary deposits (Fig. 17). In the AGS, the BMG and GDG have the thickest sedimentary cover (Neogene volcano-sedimentary basin rocks plus alluvial deposits) compared to the others and overlie the Pre-Neogene basement rocks. Both grabens are controlled by normal fault systems and are about 150 km long (Paton 1992).

Türkiye is one of the countries with the largest geothermal energy potential (Gonenc et al. 2012). The tectonic system of the region has enabled the rich geothermal potential of western Anatolia. The AGS in western Anatolia hosts most of the geothermal fields in the country. To date, numerous investigations have been carried out to find geothermal energy sources. Additionally, many relevant gravity surveys provided information about the subsurface nature of sedimentary cover in the graben system. However, there are discrepancies in the interpretation of the basement relief depths in BMG and GDG (Ekinci et al. 2021a). These studies, most of which used local optimization methods, reported different thicknesses of sedimentary cover, ranging from 1.5 to 4.6 km for the BMG and 1.4 to 4.0 km for the GDG. Except for Ekinci et al. (2021a), the regional component in gravity anomalies was either neglected or not analysed with an effective algorithm. Additionally, the uncertainties in the solutions were not analysed. In other studies, the depths to the basement reliefs were reported as 2.0 km through resistivity data inversion in BMG (Bayrak et al. 2011) and 3.8 km based on magnetotelluric data inversion in GDG (Gürer et al. 2009).
Gravity data for the country were collected by a national institute, the General Directorate of Mineral Research and Exploration (MTA), between the years 1973 and 1988. All necessary corrections were made by the same institute. The anomaly map of the whole country was published by Ateş et al. (1999) and MTA (2006). The Bouguer anomaly map of the AGS, produced using a data spacing of 1 km, is shown in the upper panel of Fig. 18. High amplitudes are observed in the W of the anomaly map. These gravity highs are located near the Aegean Sea, and the amplitudes gradually decrease toward the E due to the thickening crust. Additionally, the gravity lows extending towards central Türkiye are most likely due to the continuation of the low-amplitude anomalies of the convex side of the Crete Island arc, while the gravity highs observed W of the Menderes Massif may be interpreted as the extension of the high-amplitude anomalies of the concave side of the island arc (Rabinowitz & Ryan 1970; Sarı & Şalk 2002). The relatively thick Neogene-aged volcano-sedimentary basin rocks plus alluvial deposits of the BMG and GDG cause a decrease in the gravity amplitude values (Fig. 18). However, low gravity amplitudes are not observed in the KMG, which is located between these two grabens. This finding suggests that the sedimentary cover over the basement there is not thick enough to produce remarkable gravity lows. Göktürkler et al. (2003) also reported a shallow Pre-Neogene basement in the KMG by analysing conductive heat transfer using 2-D models. Rojay et al. (2005) reported a maximum thickness of about 270 m and an average thickness of 50–60 m for the Quaternary fill in the graben. Thus, the sedimentary cover in the KMG is thinner than the covers in the BMG and GDG. As shown in the upper panel of Fig. 18, the gravity anomalies are consistent with this information and reflect well the geological setting of the region. However, long-wavelength effects should be eliminated from the Bouguer gravity anomalies in order to obtain more accurate results from the inversion procedure. Here, the 2-D FEM scheme with the use of ESF (Mallick & Sharma 1999) was applied to obtain the regional gravity anomalies reflecting the deeper source responses. The lower panel of Fig. 18 shows the residual gravity anomalies obtained by removing the regional background. It can be seen that the effects of the relatively thick sediment deposits of both grabens are more evident in this anomaly map. The geometries of these low-amplitude gravity anomalies are compatible with the approximately W–E (BMG) and NW–SE (GDG) oriented grabens seen in the topographic map (Fig. 16). The residual gravity anomaly map comprises some southern parts of the Gördeş, Demirci and Selendi basins, which are bounded by normal faults (Fig. 17). These areas are characterized by low gravity anomalies, as expected. The BH and AH produce high gravity anomalies due to the metamorphic rocks of the massif. In addition, the high gravity amplitudes in the southern part of the map are also due to these rocks. The bending of the BMG towards the southwest, west of Aydın Province, can be clearly traced in the residual anomaly map. In the western part of the map, close to the Aegean Sea coast, the gravity highs originating from crustal effects are also reduced, as seen in the lower panel of Fig. 18. Therefore, it can be concluded that the calculated residual gravity anomalies are compatible with the surface geology of the AGS.

Bouguer gravity (upper panel) and the residual gravity (lower panel) anomaly maps of the AGS. The grids used for the inversion are shown with rectangles.
4.2. Gravity data inversion
To calculate the interface topography between the sedimentary cover and the Pre-Neogene basement correctly, the grids with the lowest gravity anomalies in both grabens were considered for the inversion. The data points with the lowest amplitude values are located approximately at the grid centres. The residual anomalies used are shown in the lower panel of Fig. 18. The dimensions of both grids are 24 km × 12 km. A density contrast of 0.5 g cm−3 between the sedimentary fill and basement rocks was previously determined by MTA from the boreholes used for geothermal investigations (Sarı & Şalk 2006). In deep basins, the density contrast between the overlying sedimentary cover and the underlying basement rocks decreases with depth, because the density of the cover increases with depth due to increasing pressure. Thus, a variable density contrast is preferred for the inversion procedure in deep basins (Chai & Hinze 1988; Rao & Babu 1991; Mallesh et al. 2019; Oksum 2021). However, for basins that are relatively shallow, it is reasonable to use a fixed density contrast (Murty & Rao 1989; Barbosa et al. 1997; Mendonca 2004; Pallero et al. 2015; Ekinci et al. 2021a). Thus, as we did in Ekinci et al. (2021a), a density contrast of 0.5 g cm−3 was used for the inversion procedure. As with the synthetic experiments, 20 independent runs were carried out for both grid data sets. The E-SHADE algorithm was used for the optimizations. Upper and lower bounds of the search space were constructed using eq. (23). The stabilizing function, namely the relative constraint (eq. 24), was used to obtain plausible geological models. A total of 288 square prismatic cells with a base edge length of 1 km was used in both grids. In the optimizations, |$\textit {GN}$| and |$\textit {NP}$| were fixed at 144 000 |$( {D \times 500} )$| and 1440 |$( {D \times 5} )$|, respectively. Figs 19 and 20 illustrate the residual gravity anomaly maps and the inversion results of the BMG and GDG, respectively. The right-hand panels in these figures show the calculated anomalies and the best inverse basin models. The fits between the observed and calculated anomalies are given in Fig. 21. The maps showing the differences between the data sets (Figs 19 and 20, lower left-hand panels) reveal that the differences are slightly higher only in the cells located at the edges of the grids, and considerably lower in the other cells. The depths of the bottommost parts of the BMG and GDG were determined as 1.75 and 1.55 km, respectively. The solutions are consistent with the results of the 2-D nonlinear inversion study performed using the classical DE algorithm (Ekinci et al. 2021a). From a statistical point of view, the reliability of the solutions of some prismatic cells is illustrated in Fig. 22 by using PDF analyses. It is clear that the best solutions obtained are within the confidence interval limits. The |${\rm{rCf}}$| topographies in 2-D principal component space are shown in Fig. 23. The misfit values of both cases gradually decrease from the massive global maxima region to the global optima. This finding shows that E-SHADE has the ability to reach the global minimum by narrowing the |${\rm{rCf}}$| space. Also, the optimization algorithm is not caught by the deceptive closures, and there is no premature convergence. Thus, it can be stated that the algorithm we propose is effective in both global exploration and local exploitation steps.

Observed anomaly (upper left-hand panel) and calculated anomaly via E-SHADE algorithm with eq. (24) (upper right-hand panel), the differences between them (lower left-hand panel), and the reconstructed inverse basin model for BMG (lower right-hand panel).

Observed anomaly (upper left-hand panel) and calculated anomaly via E-SHADE algorithm with eq. (24) (upper right-hand panel), the differences between them (lower left-hand panel), and the reconstructed inverse basin model for GDG (lower right-hand panel).

Fits between the observed and calculated data sets and confidence ellipses for the real data cases.

Plots showing the PDF analyses of the solutions for the real data cases.

Maps showing the equivalence |$\text {rCf}$| topographies in 2-D principal component space for the real data cases. Maps are produced using the first two components and corresponding |$\text {rCf}$| values. Smoothed contour lines are used for better illustrations.
The depth solutions obtained for the grabens are shallower than most of those previously reported (Table 1). One of the reasons for obtaining deeper basement reliefs in previous studies is that an effective regional/residual separation technique was not applied to the Bouguer gravity anomalies. Additionally, considering the mathematical nature of the problem, not controlling the cell (3-D case) or block (2-D case) thickness changes during optimization may also produce misleading results.
Table 1. Basement relief depths of the BMG and GDG reported in previous studies.

| Previous studies | Anomaly | Technique | Regional removal | BMG depth (km) | GDG depth (km) |
|---|---|---|---|---|---|
| Paton (1992) | Gravity | FS | – | 1.5 | 1.4 |
| Sevinç & Ateş (1996) | Gravity | DBLO | – | 4.6 | – |
| Sarı & Şalk (2002) | Gravity | DBLO | – | 2.0 | 2.5 |
| Sarı & Şalk (2006) | Gravity | DBLO | PF | 3.5 | 2.0 |
| Chakravarthi & Sundararajan (2007) | Gravity | DBLO | – | – | 1.6 |
| Işık & Şenel (2009) | Gravity | DBLO | – | 3.9 | – |
| Silva et al. (2010) | Gravity | DBLO | – | 1.8 | – |
| Çiftçi et al. (2011) | Gravity | PS | – | 2–3 | – |
| Özyalın et al. (2012) | Gravity | DBLO | PF | – | 4.0 |
| Akay et al. (2013) | Gravity | PS | – | 4.3 | – |
| Lima & Silva (2014) | Gravity | DBLO | – | 1.7 | – |
| Chakravarthi et al. (2017) | Gravity | DBLO | – | 2.3 | 1.8 |
| Altınoğlu et al. (2018) | Gravity | DBLO | – | 4.1 | – |
| Timur et al. (2019) | Gravity | DBLO | – | 2.3 | – |
| Altınoğlu (2020) | Gravity | DBLO | PS | 2.4 | – |
| Ekinci et al. (2021a) | Gravity | DFGO | FEM | 1.5 | 1.4 |
| Gürer et al. (2009) | Magnetotelluric | DBLO | – | – | 3.8 |
| Bayrak et al. (2011) | Resistivity | DBLO | – | 2.0 | – |
Previous studies . | Anomaly . | Technique . | Regional removal . | Basement depth (km) . | |
---|---|---|---|---|---|
BMG | GDG | ||||
Paton (1992) | Gravity | FS | – | 1.5 | 1.4 |
Sevinç & Ateş (1996) | Gravity | DBLO | – | 4.6 | – |
Sarı & Şalk (2002) | Gravity | DBLO | – | 2.0 | 2.5 |
Sarı & Şalk (2006) | Gravity | DBLO | PF | 3.5 | 2.0 |
Chakravarthi & Sundararajan (2007) | Gravity | DBLO | – | – | 1.6 |
Işık & Şenel (2009) | Gravity | DBLO | – | 3.9 | – |
Silva et al. (2010) | Gravity | DBLO | – | 1.8 | – |
Çiftçi et al. (2011) | Gravity | PS | – | 2–3 | – |
Özyalın et al. (2012) | Gravity | DBLO | PF | – | 4.0 |
Akay et al. (2013) | Gravity | PS | – | 4.3 | |
Lima & Silva (2014) | Gravity | DBLO | – | 1.7 | – |
Chakravarthi et al. (2017) | Gravity | DBLO | – | 2.3 | 1.8 |
Altınoğlu et al. (2018) | Gravity | DBLO | – | 4.1 | – |
Timur et al. (2019) | Gravity | DBLO | – | 2.3 | – |
Altınoğlu (2020) | Gravity | DBLO | PS | 2.4 | – |
Ekinci et al. (2021a) | Gravity | DFGO | FEM | 1.5 | 1.4 |
Gürer et al. (2009) | Magnetotelluric | DBLO | – | 3.8 | |
Bayrak et al. (2011) | Resistivity | DBLO | 2.0 | – |
Abbreviations: DBLO: Derivative-based local optimization, DFGO: Derivative-free global optimization, FS: Forward solution, PS: Power spectrum, PF: Polynomial fitting.
The crustal-scale fault systems known as detachment faults bound the sedimentary cover, which consists of Neogene volcano-sedimentary rocks and alluvial fills (Fig. 17). Normal and/or oblique faults developed over the detachment faults bound the alluvial deposits of BMG and GDG. At depth, these faults dip northward along the southern margins of the grabens and southward along the northern margins (Ekinci et al. 2013). In the basement relief map of BMG (Fig. 19), the deepest parts clearly narrow towards the west owing to low-angle normal faults dipping in opposite directions (N and S) along the margins of the graben. The thickest parts of GDG are oriented NW–SE (Fig. 20), in good agreement with the orientation of the graben (Fig. 16). The depth of the basement relief decreases from the deepest point towards the NW, but a second deep area then appears. The northern and southern margins of GDG are bounded by curved normal and/or oblique faults developed over the detachment faults (Sözbilir et al. 2011; Fig. 17), and the graben runs parallel to these faults. The undulation of the basement topography along GDG is most likely related to the segmentation of the normal faults passing along the graben margins and to their subsurface dips, even though these segments are not shown on the regional-scale surface geological map (Fig. 17).
5 CONCLUSIONS
In this paper, we presented a nature-inspired 3-D gravity inversion scheme to reconstruct the interface topography between the sedimentary cover and the basement rocks. The inversion algorithm is based on an evolutionary procedure that maintains a historical memory of successful crossover-rate and scaling-factor values; at each generation, new values of these control parameters are generated from this memory. To reduce the computational cost of the optimization in this highly nonlinear inverse problem, we introduced the E-SHADE algorithm, which outperformed L-SHADE in our synthetic experiments. Moreover, we considerably reduced the computational cost by using a line-mass equation on predefined nodes of the grid plane instead of the computationally demanding forward equation of the right rectangular prism. Modality analyses of the objective function allowed us to better understand the mathematical characteristics of the inversion problem presented. Based on the information obtained from the |${\rm{Cf}}$| topography maps, a balancing/regularization function was included as a relative constraint in the inversion algorithm to overcome the ill-posedness of the problem. In this way, block thicknesses were prevented from reaching arbitrary values in the model space, thereby avoiding saw-tooth-shaped basement relief models. Uncertainty appraisal analyses showed that the solutions obtained from the noise-free and noisy synthetic data experiments are reliable.
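For readers unfamiliar with the mechanics summarized above, the Python sketch below illustrates the two key ingredients: the success-history memory for CR and F, and the exponential population size reduction. The memory length H = 6, the sampling distributions, the Lehmer-mean update and the minimum population of 4 follow the generic SHADE literature and are assumptions about the flavour of the scheme, not a reproduction of our implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_cr_f(m_cr, m_f):
    """Draw one (CR, F) pair from a randomly chosen slot of the
    success-history memory (normal draw for CR, Cauchy draw for F)."""
    k = rng.integers(len(m_cr))
    cr = float(np.clip(rng.normal(m_cr[k], 0.1), 0.0, 1.0))
    f = -1.0
    while f <= 0.0:                               # redraw non-positive F
        f = m_f[k] + 0.1 * rng.standard_cauchy()
    return cr, min(f, 1.0)

def update_memory(m_cr, m_f, pos, s_cr, s_f, dw):
    """After each generation, overwrite one memory slot with the weighted
    mean of successful CR values and the weighted Lehmer mean of successful
    F values; the weights dw (NumPy arrays) are the fitness improvements."""
    if s_cr.size:
        w = dw / dw.sum()
        m_cr[pos] = np.sum(w * s_cr)
        m_f[pos] = np.sum(w * s_f ** 2) / np.sum(w * s_f)
        pos = (pos + 1) % len(m_cr)
    return pos

def epsr(np_init, np_min, n_eval, max_eval):
    """Exponential population size reduction: NP decays geometrically from
    np_init towards np_min as the evaluation budget is consumed."""
    return int(round(np_init * (np_min / np_init) ** (n_eval / max_eval)))

# H = 6 memory slots, initialized at 0.5 (a common SHADE default)
m_cr, m_f = np.full(6, 0.5), np.full(6, 0.5)
print(epsr(1440, 4, 72_000, 144_000))   # population size at half budget: 76
```

One plausible reading of the savings over the linear schedule is visible in epsr: the geometric decay sheds most of the large initial population early in the run, so far fewer function evaluations are spent before the exploitation phase dominates.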
In the real data applications, two major grabens of the well-known AGS (western Anatolia, Türkiye), whose sedimentary package thicknesses are still controversial, were examined. The 3-D inversion algorithm, applied to the residual gravity anomalies calculated using the efficient 2-D FEM technique, revealed that the basement reliefs in both grabens are shallower than most previously suggested values. The Pre-Neogene rock topographies below the sedimentary cover conform to the geometries of the grabens. From a statistical point of view, the obtained solutions lie within the parameter confidence intervals based on the PDF analyses, and the PCA maps also indicate the solidity of the solutions. This study clearly demonstrated the efficiency of the derivative-free, population-based E-SHADE optimization algorithm. On the other hand, like other global optimization algorithms, it is at a disadvantage in terms of computational cost compared with derivative-based local optimizers. This drawback can be mitigated by powerful computers and effective parallel computing techniques. In return, the algorithm offers many advantages, such as enabling vital uncertainty appraisal analyses, performing efficient random-walk procedures, scanning every sector of the predefined model parameter space, requiring no well-designed initial model and having a robust and stable computational nature. Because of these advantages, the E-SHADE algorithm can be considered a very promising tool for the 2-D/3-D inversion of geophysical data sets.
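As a brief illustration of that remedy, note that independent restarts parallelize trivially; the sketch below farms runs like the 20 used in this study out to separate processes. Here run_eshade is a hypothetical entry point for one seeded optimization run, with a placeholder body, not part of our code base.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_eshade(seed: int):
    """Hypothetical single-run wrapper: seed -> (best_model, best_misfit).
    The body below is a placeholder returning a random stand-in result."""
    rng = np.random.default_rng(seed)
    model = rng.uniform(0.0, 2000.0, size=288)   # a 288-cell depth vector (m)
    return model, float(rng.random())            # stand-in misfit value

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_eshade, range(20)))  # 20 seeded runs
    best_model, best_misfit = min(results, key=lambda r: r[1])
    print(f"best misfit across runs: {best_misfit:.3f}")
```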
ACKNOWLEDGEMENTS
We thank Editors Prof. Jörg Renner and Prof. Bert Vermeersen and two anonymous reviewers for their suggestions that improved the quality of the manuscript.
DATA AVAILABILITY
The grid data sets used in the optimizations are available upon request from the corresponding author (YLE).
CONFLICTS OF INTEREST
The authors declare that there is no conflict of interest.
REFERENCES
APPENDIX
The definitions of the terms in eqs (3) and (6) are given as follows.