Abstract

Based on the reformulation of the problem of galactic shape measurement introduced by Tessore and Bridle, we develop and test new estimators which more efficiently solve what is now a statistical problem. Frequently, only a single measurement per galaxy is available from a telescope observation, and it may be distorted by noise introduced in the measuring process. To soften these effects, we investigate a number of approaches, such as scaling, data censoring, and mixtures of estimators. We then run simulations for various set-ups which emulate real-life conditions, to determine the practical suitability of all proposed estimation approaches.

1 INTRODUCTION

An important pillar of modern cosmology is the measurement of cosmic shear, which describes the subtle change of the observed light distribution of distant galaxies due to weak gravitational lensing by the large-scale structure of the universe (for an introduction, see Bartelmann & Schneider 2001; for a recent review, see Kilbinger 2015). The measurement depends on the accurate estimation of the shapes of millions to billions of individual galaxies, for which generally only a single observation is available. Recently, Tessore & Bridle (2018) have shown how a certain method for shape measurement can be reduced to a classical problem of parameter estimation. The shape of a galaxy is described by the complex-valued ellipticity ε, which for a perfectly elliptical galaxy with axial ratio q ∈ (0, 1] between the minor and major axes and position angle ϕ between the major axis and the first coordinate axis is defined as
|$\epsilon = \dfrac{1 - q}{1 + q} \, \mathrm{e}^{2\mathrm{i}\phi}$| (1)
For a perfectly round galaxy image, q = 1 and ε = 0. In the limit of a practically one-dimensional galaxy image with vanishing minor axis, q → 0 and |ε| → 1. Therefore, 0 ≤ |ε| < 1 for the absolute value |ε| of the ellipticity. For the complicated morphology of a real galaxy, it is necessary to define the ellipticity in terms of the moments of the observed brightness distribution. Introducing the so-called Stokes parameters u, |$v$|, and s, which are related to the central second-order image moments, the ellipticity becomes
|$\epsilon = \dfrac{u + \mathrm{i}v}{s + \sqrt{s^2 - u^2 - v^2}}$| (2)
where |$u^2 + v^2 < s^2$| is guaranteed by the Cauchy–Schwarz inequality. Estimation of the ellipticity (2) is rendered difficult by the noise in the observed images, as well as by image distortions due to atmospheric conditions and imperfect optics. Tessore & Bridle (2018) have shown that for a certain shape measurement method, the noisy image data can be reduced to three independent and jointly normal random variables:
|$X \sim \mathcal {N}(u, \sigma^2)$| (3)
|$Y \sim \mathcal {N}(v, \sigma^2)$| (4)
|$Z \sim \mathcal {N}(s, 4\sigma^2)$| (5)
so that the parameters u, |$v$|⁠, and s for the ellipticity (2) are recovered through the means of X, Y, and Z as |$\mathbb {E}[X] = u$|⁠, |$\mathbb {E}[Y] = v$|⁠, and |$\mathbb {E}[Z] = s$|⁠. The variance |$\sigma^2$| is determined by the amount of noise in the observed data and can be assumed known. In this way, the problem of measuring the ellipticity ε becomes a classical problem of parameter estimation.
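As a concrete illustration, this set-up can be simulated directly. The sketch below uses the definition (2) for the true ellipticity; the function names are ours, and Var[Z] = 4σ² is an assumption taken from the variance matching used later in Section 3.1:

```python
import math
import random

def ellipticity(u, v, s):
    """True ellipticity from the Stokes parameters, equation (2)."""
    return complex(u, v) / (s + math.sqrt(s * s - u * u - v * v))

def observe(u, v, s, sigma, rng=random):
    """One noisy observation (X, Y, Z) of the parameters (u, v, s).
    Var[Z] = 4 sigma^2 is an assumption, mirroring the matching in Section 3.1."""
    return rng.gauss(u, sigma), rng.gauss(v, sigma), rng.gauss(s, 2 * sigma)

# Example: u = 3, v = 4, s = 5 gives the borderline ellipticity 0.6 + 0.8i.
eps = ellipticity(3, 4, 5)
```

A single call to `observe` then yields one realization of the triple (X, Y, Z) from which the estimators below must recover ε.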

This paper is structured as follows. In the next section, we review previously proposed estimation techniques for the newly formulated statistical problem. We shed light on the well-understood restrictions the parameters need to fulfil, and explain how they inspired the novel estimation approaches listed there. After the methods section, we elaborate on the simulation set-up used to investigate the performance of the proposed estimators, and discuss their properties in relation to the composition of the parameter values. In the fourth and final section, we summarize our observations and indicate how the estimation methods should be chosen in regard to signal-to-noise ratio (SNR) values and different parameter combinations. Additionally, we motivate further research and new questions opened up by our results.

2 METHODS

The two main estimation methods are, on the one hand, the previously proposed unbiased estimation technique,
(6)
(7)
and, on the other hand, the rather crude mean estimator
|$\widehat{\epsilon }_{II} = \dfrac{X + \mathrm{i}Y}{Z + \sqrt{Z^2 - X^2 - Y^2}}$| (8)
where the first estimator was introduced by Tessore & Bridle (2018), constructed as an unbiased approximation approach.

As previously shown, both estimators are unbiased in the argument |$\arg \epsilon$| of the ellipticity, so the main room for improvement lies in the estimation of the modulus of the complex parameter ε.

In addition to the random sample values X, Y, and Z, we know certain restrictions which have to hold. While we do not know the means of the normal distributions, we do know the standard deviation fairly well, and it will therefore be presumed known in our simulation efforts. As mentioned in the introduction, a second restriction on the parameters u, |$v$|, and s can be derived via the Cauchy–Schwarz inequality for the underlying moments,
|$u^2 + v^2 < s^2.$| (9)
For a significantly larger sample size, we could expect the asymptotic behaviour of the sample means to hold, giving us a similar restriction to apply to the observations. This would serve as an approximation to the hard mean value rule:
|$\bar{X}^2 + \bar{Y}^2 < \bar{Z}^2.$| (10)
In our admittedly simplistic case of a single observation, this reduces to |$X^2 + Y^2 < Z^2$|. We do realize this is not a hard criterion due to the limited sample size; it may however still serve as an indicator of the viability of the sample values. In Table 1, we have given a number of estimators which seek to incorporate this information into ellipticity estimation.
Table 1.

Definition of crude estimators.
Secondly, and perhaps more obviously, the ellipticity's absolute value (i.e. its radius in the polar form of the complex value) should not exceed 1. An ellipticity of 1 would essentially describe an object with one dimension vanishing entirely. Hence, we desire the outcome of the estimation to fulfil this condition:
|$|\widehat{\epsilon }\,| < 1.$| (11)

Again, we have tried to incorporate this condition into our estimators. Most of them will be marked as scaled, as many of the estimators seek to rectify the initial estimate by adjusting its radius. For the crude estimator, we can easily show that both conditions are equivalent, but not so for the unbiased estimator.
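This equivalence can be checked numerically. The sketch below assumes the plug-in form |$\widehat{\epsilon }_{II} = (X + \mathrm{i}Y)/(Z + \sqrt{Z^2 - X^2 - Y^2})$| for the crude estimator: whenever the sample inequality holds (with Z > 0), the resulting modulus automatically stays below 1.

```python
import math
import random

def eps_crude(x, y, z):
    """Plug-in estimator; defined only when X^2 + Y^2 < Z^2 (with Z > 0)."""
    return complex(x, y) / (z + math.sqrt(z * z - x * x - y * y))

rng = random.Random(1)
for _ in range(10_000):
    x, y, z = rng.uniform(-5, 5), rng.uniform(-5, 5), rng.uniform(0.1, 5)
    if x * x + y * y < z * z:          # sample feasibility condition (10)
        assert abs(eps_crude(x, y, z)) < 1.0  # radius condition (11) follows
```

The reason is elementary: with r² = X² + Y² < Z², the denominator Z + √(Z² − r²) strictly exceeds r, so the modulus is below 1.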

If any of these conditions are violated by the data given, we have some indication of problematic data. We can then proceed with one of the following steps, or a combination thereof:

  • Censoring: The data in question can be discarded, since it is to be expected that an estimation based on the values would be flawed.

  • Scaling: Based on the relative values of the samples to each other, it can be concluded that the estimation effort would not accurately capture the correct radius of the ellipticity. The most straightforward case is an estimate which leaves the unit circle, whose radius has to be cropped to produce a permissible value.

  • Partial scaling: If we can pinpoint with sufficient confidence which sample values are flawed, we might scale only one or several values, instead of the entire outcome of the estimation. The scaled sample is then fed back into the other estimation techniques.

  • Mixed models: With several estimators having known properties such as over- or underestimation, we could produce a more reliable estimator by merging individual estimation methods together. This might lead to a dampening effect when investigating samples with values which would otherwise throw reliable estimators off.
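The four strategies above can be sketched as follows (an illustrative sketch with hypothetical helper names, not the exact definitions of Tables 1–6):

```python
import math

def censor(x, y, z):
    """Censoring: discard samples violating X^2 + Y^2 < Z^2."""
    return (x, y, z) if x * x + y * y < z * z else None

def scale_xy(x, y, z, margin=1e-9):
    """(Partial) scaling: shrink the directional components (X, Y)
    onto the feasibility boundary when the inequality is violated."""
    r = math.hypot(x, y)
    if r >= abs(z):
        f = (abs(z) - margin) / r
        x, y = f * x, f * y
    return x, y, z

def mix(est_a, est_b, x, y, z):
    """Mixed model: average two estimators to damp individual failure modes."""
    return 0.5 * (est_a(x, y, z) + est_b(x, y, z))
```

Rescaling the whole estimate rather than the inputs works analogously, dividing the complex output by its modulus when it leaves the unit disc.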

The crude estimators of Table 1 introduce our two main estimators, the integration-based unbiased estimator |$\widehat{\epsilon }_{I}$| and the estimator |$\widehat{\epsilon }_{II}$| as described above, with only minor modifications. Those include scaling and censoring of the data if one of the two feasibility conditions, |$X^2 + Y^2 < Z^2$| or a radius of the ellipticity below 1, is violated. Other variations of |$\widehat{\epsilon }_{II}$| modify the scaling component, which in this case is the denominator of the estimator, in case of presumably false sample values. For now, this is done either by rounding overshoots down to r = 1 or simply by avoiding the root altogether.

Next, we seek to rectify the sample values by either scaling the directional components X and Y down simultaneously to fulfil the inequality, or scaling Z up to achieve the same effect, see Table 2. The whole process is repeated for both base estimators, to enable an accurate comparison of their advantages and disadvantages and of their changing behaviour under data scaling and censoring. For the most part, these estimators differ in the scaling method. While the first four try to incorporate the known variance of the samples, the latter two introduce an arctangent scaling function, on which we rely for further estimators as well. The idea is to map a potentially infinite radius, or quotient of sample values, into the feasible range [0, 1).

Table 2.

Definition of scaled parameter estimators.
In short, we can write the transformation of the rescaling procedure using an inverse tangent function in the following way:
(12)
(13)
(14)
(15)
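The exact constants of the transformations (12)–(15) aside, the idea can be illustrated with a |$(2/\pi)\arctan$| map, which sends any non-negative radius into [0, 1) while preserving the argument (a sketch; the paper's rescaling may differ in detail):

```python
import math

def atan_rescale(eps):
    """Map an estimate of arbitrary modulus into the feasible disc |eps| < 1,
    preserving arg(eps). Illustrative form of the arctangent rescaling."""
    r = abs(eps)
    if r == 0:
        return 0j
    return (2 / math.pi) * math.atan(r) * eps / r
```

The map is monotone in the radius, so the ordering of estimates is preserved while overshoots are compressed towards the unit circle boundary.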

Equivalently, we can retrieve a scaled version of the X and Y values, depending on their relative contribution to the violation of the inequality. We investigate further how the different combinations of conditions, scaling methods, and underlying base estimators behave, see Table 3.

Table 3.

Definition for result and parameter scaling estimators (I).

Alternatively, we can scale the estimator after evaluation, see Table 4. Again, this can be done with regard to the entire radius itself, or, depending on how grossly the estimate seems to lean one way or the other, for the real and imaginary parts separately.

Table 4.

A definition for result-only scaling estimators.
Table 5.

Definition for result and parameter scaling estimators (II).

Furthermore, we can apply parameter scaling techniques directly to the input values, if we can reasonably assume that only one of the input variables strays far from its true value. Failing this, the entire estimate can still be scaled to fulfil either the radius condition or the mean value inequality, see Table 5.

In addition, we investigate averages of a selection of the previously introduced estimators, see Table 6. As stated earlier, the idea is to dampen the shortcomings a single estimator might have on its own. We simply take the arithmetic mean of the unbiased and crude versions of the more promising estimators. This concludes the list of candidates for a more accurate estimation, yet it is by no means an exhaustive compilation, as we have only used basic estimation approaches on which to improve via varying measures of censoring and scaling.
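As a sketch (assuming the plug-in form for the crude estimator; a cropped variant stands in for one of the scaled estimators), an averaged estimator in the style of Table 6 is simply:

```python
import math

def eps_crude(x, y, z):
    """Plug-in estimator (assumed form of the crude estimator)."""
    return complex(x, y) / (z + math.sqrt(z * z - x * x - y * y))

def eps_scaled(x, y, z):
    """Crude estimate with the modulus cropped to the unit disc."""
    e = eps_crude(x, y, z)
    return e / abs(e) if abs(e) > 1 else e

def averaged(est_a, est_b):
    """Arithmetic mean of two estimators, as in Table 6."""
    return lambda x, y, z: 0.5 * (est_a(x, y, z) + est_b(x, y, z))

est = averaged(eps_crude, eps_scaled)
```

When both constituents agree, the average reproduces them; when one fails in a known direction, the other pulls the combined estimate back.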

Table 6.

Definition for averaged estimators.

Estimator | Based on | Category
|$\widehat{\epsilon }_{46}(X,Y,Z) = \left(\widehat{\epsilon }_{1}(X,Y,Z) + \widehat{\epsilon }_{5}(X,Y,Z)\right)/2$| | I + II | Crude
|$\widehat{\epsilon }_{47}(X,Y,Z) = \left(\widehat{\epsilon }_{2}(X,Y,Z) + \widehat{\epsilon }_{6}(X,Y,Z)\right)/2$| | I + II | Scaled
|$\widehat{\epsilon }_{48}(X,Y,Z) = \left(\widehat{\epsilon }_{3}(X,Y,Z) + \widehat{\epsilon }_{7}(X,Y,Z)\right)/2$| | I + II | Censored
|$\widehat{\epsilon }_{49}(X,Y,Z) = \left(\widehat{\epsilon }_{11}(X,Y,Z) + \widehat{\epsilon }_{12}(X,Y,Z)\right)/2$| | I + II | Scaled
|$\widehat{\epsilon }_{50}(X,Y,Z) = \left(\widehat{\epsilon }_{13}(X,Y,Z) + \widehat{\epsilon }_{14}(X,Y,Z)\right)/2$| | I + II | Parameter scaled
|$\widehat{\epsilon }_{51}(X,Y,Z) = \left(\widehat{\epsilon }_{19}(X,Y,Z) + \widehat{\epsilon }_{20}(X,Y,Z)\right)/2$| | I + II | Parameter scaled
|$\widehat{\epsilon }_{52}(X,Y,Z) = \left(\widehat{\epsilon }_{25}(X,Y,Z) + \widehat{\epsilon }_{32}(X,Y,Z)\right)/2$| | I + II | Result scaled
|$\widehat{\epsilon }_{53}(X,Y,Z) = \left(\widehat{\epsilon }_{44}(X,Y,Z) + \widehat{\epsilon }_{45}(X,Y,Z)\right)/2$| | I + II | Mixed

We now wish to determine the performance and suitability of the new suggestions. We will therefore discuss each cluster under the criteria introduced in this section. The accuracy will be tested in two ways: on the one hand via predetermined values which are exemplary for certain regions of parameter sets, and on the other hand via truly random data. In the latter case, the mean values u, |$v$|, and s which determine the ellipticity need to be randomized as well. This approach gives us the chance to investigate potential areas of dominant performance, as well as the overall performance of each candidate in indiscriminate testing.

3 ESTIMATION COMPARISON

In this section, we investigate the strengths and weaknesses of the proposed estimators, and when and where their properties are of the most benefit. With the actual parameters u, |$v$|, and s known, we can determine the true ellipticity beforehand. To gain insight, we first design an experiment for specific pairings of the mean values u, |$v$|, and s. This helps determine the effective range of each individual estimator. Essentially, we begin with the true ε value, and then choose the mean parameter values accordingly. The X, Y, and Z values are then sampled from the distributions below:
|$X \sim \mathcal {N}(u, \sigma^2)$| (16)
|$Y \sim \mathcal {N}(v, \sigma^2)$| (17)
|$Z \sim \mathcal {N}(s, 4\sigma^2)$| (18)
The mean values u, |$v$|, and s, as well as the standard deviation σ, are deterministic, but chosen in several combinations to cover a large set of realistic parameter values.
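A minimal version of this experiment, using the plug-in estimator as an example (its form is an assumption), censoring infeasible samples, and taking the mean absolute error as the accuracy measure:

```python
import math
import random

def eps_true(u, v, s):
    """True ellipticity from equation (2)."""
    return complex(u, v) / (s + math.sqrt(s * s - u * u - v * v))

def eps_crude(x, y, z):
    """Plug-in estimator; infeasible samples are censored (returned as None)."""
    d = z * z - x * x - y * y
    if d <= 0 or z <= 0:
        return None
    return complex(x, y) / (z + math.sqrt(d))

def mean_abs_error(u, v, s, sigma, n=25_000, seed=0):
    """Mean absolute error of the estimator over n noisy observations."""
    rng = random.Random(seed)
    truth, errors = eps_true(u, v, s), []
    for _ in range(n):
        e = eps_crude(rng.gauss(u, sigma), rng.gauss(v, sigma), rng.gauss(s, 2 * sigma))
        if e is not None:
            errors.append(abs(e - truth))
    return sum(errors) / len(errors)
```

Repeating this over the grid of parameter combinations and sigma values reproduces the structure of the error tables below.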

The ellipticities range from close to the origin 0 + 0i to the border of the unit circle, i.e. |$\sqrt{\Im (\epsilon)^2 + \Re (\epsilon)^2} = r \approx 1$|⁠. We have conducted the experiment for the pairings of parameter values in Table 7. A visual representation of the corresponding ellipticity values can be found in Fig. 1.

Figure 1.

Overview of investigated ellipticity values.

Table 7.

An overview of the tested parameter combinations.

Combination | u | v | s
1 | 0 | 0 | 5
2 | 0 | 1 | 5
3 | 0 | 2 | 5
4 | 0 | 3 | 5
5 | 0 | 4 | 5
6 | 1 | 1 | 5
7 | 1 | 2 | 5
8 | 1 | 3 | 5
9 | 1 | 4 | 5
10 | 2 | 2 | 5
11 | 2 | 3 | 5
12 | 2 | 4 | 5
13 | 3 | 3 | 5
14 | 3 | 4 | 5
15 | -1 | -2 | 5

We pair these with the sigma values σ = 0.05, 0.1, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 5, which are used for sampling around the mean values u, |$v$|, and s. To minimize the variance of the experiment, we choose a sufficiently large sample size of n = 25 000 for each combination. Each estimator is tested against the true ellipticity value and an absolute error measure is taken. The mean values of these errors can be found in Tables 8–11.

Table 8.

Error table for the different estimators and sigma values for u = −2, |$v$| = −2, and s = 5, i.e. ε = −0.219 224 − 0.219 224i.

We discuss only selected pairings of values, which serve as examples for the various ellipticity locations. The full set of tables can be provided upon inquiry.

To investigate the influence of negative values on the estimators, we have included an example with negative mean values u = −2, |$v$| = −2, and s = 5. The results are listed in Table 8. In comparison to the positive values, we see that the negative values make very little difference in the performance of the estimators. As u and |$v$| are rather directly estimated through the values X and Y, negative sample values directly translate into the estimation. We will therefore consider positive mean values without loss of generality.

The colour coding is designed to help scan the experiment results more easily: higher error averages are shown in red, gradually turning into a green background colouring the lower the absolute error gets. The main estimators we compare our newly designed estimators against are the unaltered |$\widehat{\epsilon }_{I}$| and its scaled version, since they are considered the strongest contenders, rather than the cruder direct version based on |$\widehat{\epsilon }_{II}$|⁠, which simply plugs the single sample values into the ellipticity formula.

Around the origin, u = 0, |$v$| = 0, and s = 5 in Table 9, we see mostly indifference between the estimators, with a slight preference for estimators 30–32 and 37–38. The differences in performance naturally increase for larger variances. In this range, there is a slight preference for 11, 15, 17, and 19 as well as 23–29, in addition to the overall 30–32 performances. This, however, may be due to the scaling function in certain estimators, which for an ellipticity at the origin will automatically create better results. Therefore, this set of results has to be taken with a grain of salt.

Table 9.

Error table for the different estimators and sigma values for u = 0, |$v$| = 0, and s = 5, i.e. ε = 0 + 0i.

When we move a little further away from the origin, the picture changes slightly in Table 10. While estimators 23–29 still perform as well as or better than the original estimators and their modifications, especially for larger sigmas (for small sigmas there is little difference to the unbiased estimator, as these derivatives are based upon |$\widehat{\epsilon }_{I}$|⁠), we see the addition of estimators 11, 15, 17, and 19. Furthermore, estimator 44 works fairly well, with decreasing performance for larger variances. The average estimators from number 48 onwards also perform consistently well.

Table 10.

Error table for the different estimators and sigma values for u = 0, |$v$| = 3, and s = 5, i.e. ε = 0 + 0.333i.

For the combination u = 2, |$v$| = 2, and s = 5, the same candidates stand out again in Table 11, delivering sensible alternatives to the original estimators. We do recognize some issues with estimators 19 and 21, which grossly misestimate the true ellipticity for larger sigma values. For number 19 this might be due to singularities, but there seems to be some intrinsic problem with estimator 21, which produces arbitrarily large values.

Table 11.

Error table for the different estimators and sigma values for u = 2, |$v$| = 2, and s = 5, i.e. ε = 0.219 + 0.219i.

In the matrix of scatter plots in Figs 2 and 3, we can see the behaviour of each estimator with a distribution of empirical ellipticities. Each individual estimate for a random experiment is marked by a blue dot; the true value is marked by a single red spot. Furthermore, we have added the average of all estimates over all random experiments, to give a sense of the bias.

Figure 2.

Matrix of the performance of estimators 1–28, with true values and bias added. Real x-axis versus imaginary y-axis.

Figure 3.

Matrix of the performance of estimators 29–53, with true values and bias added. Real x-axis versus imaginary y-axis.

Observing these distributions makes evident where some estimators have their largest shortcomings. The unscaled estimators often overshoot the feasible range of the unit circle, and therefore produce avoidable overestimation. On the other hand, some of the scaling approaches, especially the parameter scaling approaches, distort the actual value. We can see this in the way the scaled values on the unit circle boundary exhibit a grossly miscalculated angle.

On the other hand, we can see some decent performances, where the individual estimates exhibit a tighter grouping around the true value without producing a noticeable bias shift. These candidates seem promising, and we review their practical potential by looking into their range-specific behaviour. We see this confirmed at the border of the unit circle with parameter values u = 3, |$v$| = 4, and s = 5 in Table 12. The problems with estimators 19 and 20 persist for this configuration, yet our favoured estimators exhibit decent performance compared to the original estimators. Additionally, the |$\widehat{\epsilon }_{II}$|-based estimators perform rather well for mid-range variance values around σ = 1.

Table 12.

Error table for the different estimators and sigma values for u = 3, |$v$| = 4, and s = 5, i.e. ε = 0.6 + 0.8i.

3.1 Non-perfect ellipticity

Sampling the observations around the means u, |$v$|, and s via a normal distribution implicitly assumes the shape of the galaxies to be perfectly elliptical, which of course is a strong assumption. We have therefore run a simulation in which the X, Y, and Z samples are distributed according to a non-central Student-t distribution (Gosset 1908).

We keep the previous values u, |$v$|, and s as the means, as well as the standard deviation σ, and retrieve the corresponding characterizing parameters ν and μ, which are the degrees of freedom and non-centrality parameter, respectively.

If |$T \sim S(\nu, \mu)$| is a non-central Student-t random variable with ν > 2, its mean and variance are
|$\mathbb {E}[T] = \mu \sqrt{\dfrac{\nu }{2}}\, \dfrac{\Gamma \left(\frac{\nu - 1}{2}\right)}{\Gamma \left(\frac{\nu }{2}\right)}, \qquad \mathrm{Var}[T] = \dfrac{\nu (1 + \mu ^2)}{\nu - 2} - \mathbb {E}[T]^2.$| (19)

Assume that |$X \sim S(\nu_X, \mu_X)$|⁠, |$Y \sim S(\nu_Y, \mu_Y)$|⁠, and |$Z \sim S(\nu_Z, \mu_Z)$|⁠. By setting Var[X] = Var[Y] = |$\sigma^2$|, Var[Z] = |$4\sigma^2$|, |$\mathbb {E}[X] = u$|⁠, |$\mathbb {E}[Y] = v$|⁠, and |$\mathbb {E}[Z] = s$|⁠, we can compute the values of ν and μ. We note that we have to assume σ > 1 due to the nature of the non-central Student-t distribution. This enables us to repeat the exact same experiment we have done under the assumption of normally distributed measurements.
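This moment matching can be sketched with the mean and variance relations of equation (19) and a bisection in ν (the search bounds and helper names are ours; variances at or below 1, i.e. σ ≤ 1, are not attainable, consistent with the restriction above):

```python
import math

def nct_mean_coeff(nu):
    """c(nu) such that E[T] = mu * c(nu) for the non-central Student-t."""
    return math.sqrt(nu / 2) * math.exp(math.lgamma((nu - 1) / 2) - math.lgamma(nu / 2))

def match_nct(mean, sigma, lo=2.000001, hi=1e6, iters=200):
    """Find (nu, mu) so the non-central t has the given mean and
    standard deviation sigma; requires sigma > 1."""
    def var(nu):
        mu = mean / nct_mean_coeff(nu)          # enforces the target mean
        return nu * (1 + mu * mu) / (nu - 2) - mean * mean
    target = sigma * sigma
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if var(mid) > target:
            lo = mid   # variance decreases with nu: need larger nu
        else:
            hi = mid
    nu = 0.5 * (lo + hi)
    return nu, mean / nct_mean_coeff(nu)
```

For example, `match_nct(2.0, 2.0)` returns parameters whose implied mean and variance reproduce u = 2 and σ² = 4 to numerical precision.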

We have listed the results as before for an example problem with values u = 2, |$v$| = 2, and s = 5 across different standard deviations, which translate to the SNR values 4.7619, 4, 3.333, 2.5, 2, 1.75, 1.5, 1.25, and 1. The matrix can be seen in Table 13.

Table 13.

Error table for the different estimators and sigma values for u = 2, |$v$| = 2, and s = 5, i.e. ε = 0.219 + 0.219i, assuming Student-t distribution.

Furthermore, we have repeated the graphic representation of the estimator spread and behaviour in Figs 4 and 5. The colour scheme remains the same as previously introduced to help distinguish the classes of estimators.

Figure 4.

Matrix of the performance of estimators 1–28, with true values and bias added for the Student-t distribution. Real x-axis versus imaginary y-axis.

Figure 5.

Matrix of the performance of estimators 29–53, with true values and bias added for the Student-t distribution. Real x-axis versus imaginary y-axis.

The most striking change we observe in the tables is how much more vulnerable the estimators have become to low SNR values. This is especially visible in the |$\widehat{\epsilon }_{I}$|-based estimators. The scaling approaches with a dampening parameter seem to work better across all SNR ranges, even if sometimes only slightly so. This of course depends on the position of the true mean values: the closer the true ellipticity gets to the unit circle boundary, the better the scaling approach works in comparison to the original estimator. We observed similar behaviour in the normal sampling experiment; here we restricted ourselves to only certain value pairings, and the full error tables may be provided upon request.

In the figures, we see much of the behaviour from the previous experiment, yet notice a recurring pattern in the individual estimators, which seem to group along the axes, giving the estimate cloud a diamond-shaped appearance. Generally, however, most of the scaling approaches seem to do their job and resize estimates based on measurements exceeding the unit circle to a sensible new approximation.

3.2 Performance by signal-to-noise ratio

With the investigation of the performance of the estimators for predetermined parameter sets complete, we have settled on a number of promising new candidates. While the original estimator delivers a solid performance, it neglects the potential infeasibility of output values and the input's violation of the ellipticity formula rules.

To further test the estimators' performance on randomly selected parameter ranges, we now first select the ellipticity. As a first approximation, this can be done via a two-dimensional normal distribution:
|$\left(\Re (\epsilon), \Im (\epsilon)\right)^{\mathrm{T}} \sim \mathcal {N}(0, \sigma_e^2 I_2).$| (20)
This technically allows for ellipticities outside the feasible range of the unit circle. However, this can be somewhat mitigated by choosing the sampling variance accordingly. Alternatively, we can proceed directly to the truncated normal distribution, which essentially crops the normal distribution to our needs. A rather simple modification of the normal distribution from Horrace (2005) could be considered (the R package tmvtnorm is available on CRAN; Wilhelm & Manjunath 2010):
|$f(x; \mu, \sigma, a, b) = \dfrac{\phi \left(\frac{x - \mu }{\sigma }\right)}{\sigma \left[\Phi \left(\frac{b - \mu }{\sigma }\right) - \Phi \left(\frac{a - \mu }{\sigma }\right)\right]}, \qquad a \le x \le b,$| (21)
which provides a rescaled probability density, restrained between a and b, which can be limited to the unit circle. However, in the tests we have conducted, the truncated normal samples place too much mass near the boundaries, distorting the results. It is hard to determine what constitutes a close approximation to the real-life underlying processes; we therefore rely on the untruncated normal distribution (also suggested in Tessore & Bridle 2018).
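Without external packages, samples from the truncated density (21) can be drawn by inverse-CDF sampling (a sketch using Python's statistics.NormalDist; the R package tmvtnorm mentioned above offers equivalent functionality in R):

```python
import random
from statistics import NormalDist

def truncnorm_sample(mu, sigma, a, b, rng=random):
    """Draw one value from a normal(mu, sigma^2) distribution truncated to
    [a, b], by inverting the CDF on the truncated probability mass."""
    nd = NormalDist(mu, sigma)
    lo, hi = nd.cdf(a), nd.cdf(b)
    return nd.inv_cdf(lo + rng.random() * (hi - lo))
```

Every draw lands inside [a, b] by construction, which is exactly the boundary pile-up effect described above when the bounds are tight.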
Once the ellipticity samples are randomly drawn from the chosen distribution, we are still left with the task of constructing the respective parameters u, |$v$|, and s accordingly. We characterize each experiment by its SNR (Schroeder 1999; Bushberg 2006), against which we later plot the mean absolute error of each estimator:
|$\mathrm{SNR} = \dfrac{s}{\sigma_o}.$| (22)
Since this provides a set of s values across the SNR range, we can determine u and |$v$| through the real and imaginary parts of the ellipticity:
|$u + \mathrm{i}v = \dfrac{2 s \epsilon }{1 + |\epsilon |^2}.$| (23)
From this, we can in turn sample the actual observations X, Y, and Z. To summarize, with the variance |$\sigma_e^2$| for the initial ellipticity sampling and |$\sigma_o^2$| for the observation sampling, we draw values as follows:
|$\Re (\epsilon) \sim \mathcal {N}(0, \sigma_e^2), \quad \Im (\epsilon) \sim \mathcal {N}(0, \sigma_e^2)$| (24)
|$X \sim \mathcal {N}(u, \sigma_o^2)$| (25)
|$Y \sim \mathcal {N}(v, \sigma_o^2)$| (26)
|$Z \sim \mathcal {N}(s, 4\sigma_o^2)$| (27)

This procedure of procuring the necessary randomized observations ensures that we do not focus on singular areas and parameter combinations as before. This should provide a more complete view of how the estimators are likely to perform in practical applications.
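The whole pipeline can be sketched as follows. The inversion u + iv = 2sε/(1 + |ε|²) follows from solving the ellipticity definition (2) for u and v at fixed s; Var[Z] = 4σo² is again an assumption, and the function name is ours:

```python
import math
import random

def draw_experiment(snr, sigma_e=0.2, sigma_o=1.0, rng=random):
    """Draw one randomized experiment: true ellipticity, parameters, observation."""
    # Ellipticity from an (untruncated) two-dimensional normal distribution.
    eps = complex(rng.gauss(0, sigma_e), rng.gauss(0, sigma_e))
    # The SNR fixes s; inverting the ellipticity formula gives u and v.
    s = snr * sigma_o
    uv = 2 * s * eps / (1 + abs(eps) ** 2)
    u, v = uv.real, uv.imag
    # Noisy observation of (u, v, s); Var[Z] = 4 sigma_o^2 is assumed here.
    x = rng.gauss(u, sigma_o)
    y = rng.gauss(v, sigma_o)
    z = rng.gauss(s, 2 * sigma_o)
    return eps, (u, v, s), (x, y, z)
```

Note that the constructed parameters always satisfy u² + v² ≤ s², since 2r/(1 + r²) ≤ 1 for any radius r, so the drawn parameter triple is feasible by construction.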

We have settled on the standard deviations σe = 0.2 and σo = 1 and an SNR range of [0, 100], which should cover most values of interest. Since there are more points to investigate, we have lowered the sample size for each individual SNR to n = 500 for a first glance at the estimator performance in relation to the SNR. The results are visualized in Figs 6 and 7, where we split the estimators into two groups for better visibility.

Figure 6. The SNR versus mean absolute errors for estimators 1–28, normal distribution. SNR range on x-axis versus average absolute error on y-axis.

Figure 7. The SNR versus mean absolute errors for estimators 29–53, normal distribution. SNR range on x-axis versus average absolute error on y-axis.

The sample size mainly controls the variance of the error dots in the plots. We split the plot matrices into two groups, 1–28 and 29–53. For most estimators we observe the generally expected shape of the curve: with higher values of s = SNR · σo, the absolute error subsides. Since s appears only in the denominator of the fraction, a higher s value diminishes the ellipticity ε, and we get a smaller spread in the initial samples.

Furthermore, we can see that some estimators are susceptible to failure, mostly due to the lack of a scaling mechanism. This becomes visible in some of the outliers, which become more frequent as s → 0 and lead to a divergence of the ellipticity value. We therefore measure the success of the other estimators mainly against |$\widehat{\epsilon }_{2}$| and |$\widehat{\epsilon }_{5}$|⁠, since the original unbiased estimator fails for ellipticities near the unit circle (as previously discussed).

We see many of the trends from the static parameter tables confirmed. The odd-numbered estimators based on the unbiased estimator, for example 15, 17, 19, 21, and 23, exhibit a lower mean absolute error, which is especially visible in the low-SNR region near the coordinate origin. The y-intercept for |$\widehat{\epsilon }_{2}$| lies between 0.9 and 1, a value which a number of estimators undercut.

For the parameter and result based scaling techniques, we see good success in estimators such as numbers 44 or 49; especially in the tail regions there is fairly quick convergence. Unfortunately, the approach does not seem stable for small SNR values, where the error values scatter considerably and eventually break down due to missing values in the experiment output. We now move on to a direct comparison of all methods, to give an impression of how the performances relate to one another, and whether there is significant potential in the newly constructed empirical ellipticities.

Figs 8 and 9 depict all approaches in a single frame, showing the averages of absolute error and variance, respectively. The crude and unbiased estimators (⁠|$\widehat{\epsilon }_{2}$| to be precise) are highlighted in red and superimposed over the bulk of the other estimators. The plots primarily give us a first glance at the outcome of the experiment, before we highlight single estimation approaches. As we can see, several monochrome point clouds take shape underneath the red line plots. This is a first indication that stronger estimators exist for arbitrary parameter values.

Figure 8. Average absolute error, SNR = [0, 100], logarithmic x-scale. SNR range on x-axis versus average absolute error on y-axis.

Figure 9. Average estimator variance, SNR = [0, 100], logarithmic x-scale. SNR range on x-axis versus average variance on y-axis.

We single out the approaches found to deliver the strongest results, based on the error tables for fixed ellipticity values and the individual and comparative plots. We narrow down the observed estimators to the preferred selection, in order to make the plots more readable. Since the SNR area of interest is below 30, we have omitted the larger tail of the plot, which served to check the estimators for convergence. In the focus plots, we get a much clearer picture of the comparative performance of the estimators, depicted in Figs 10 and 11.

Figure 10. Average absolute error of selected estimators, SNR = [0, 30]. SNR range on x-axis versus average absolute error on y-axis.

Figure 11. Average variance of selected estimators, SNR = [0, 30]. SNR range on x-axis versus average variance on y-axis.

Starting with the largest SNR values, we can see that estimator 39 exhibits the lowest average error, converging towards zero the fastest. This advantage sets in roughly from SNR = 20 onwards, whereas for smaller SNR values, that is, higher relative noise levels, the estimator performs more weakly, eventually only matching the benchmark of |$\widehat{\epsilon }_{II}$|⁠. The variances in this interval are very tightly grouped, with estimator 37 narrowly beating the aforementioned approach number 39.

The next interval clearly marked by a difference in accuracy covers SNR = [10, 20], edging closer to the region of practical applicability. Both the scaled estimator 15 and the averaged estimator 50 show a reduction in absolute error by as much as 25–30 per cent. The variance plots show an even clearer gap to the next-best competitor, where the faint beige error points of estimator 50 improve on the scaled unbiased estimator once again, by roughly 60 per cent. It should be noted that there are some outliers visible in the plot for both approaches, suggesting robustness issues. This becomes even more evident when both estimators fail due to missing values, calling into question their practical use for SNR ≤ 10, and their robustness in general.

Lastly, and most importantly, we focus on [0, 10], the interval of the highest practical interest. Once more it becomes clear which approach delivers the best results, as estimator 23 consistently undercuts the other approximations throughout the range in question. The absolute difference is about 0.075–0.1, roughly an error reduction of 10 per cent. Perhaps more importantly, we do not see the same number of outliers as for estimators 15 and 50, suggesting that the estimator is more stable and less susceptible to variations in the input parameters. Additionally, we see a reduction in variance, as estimator 23 exhibits the lowest variances over the range [0, 10]. The related estimators 24, 25, and onwards perform similarly, albeit less accurately. Possibly the parameter β = 0.25 is not yet optimally chosen and can be tuned for even better results in the future.

We have analysed and discussed the test results for all newly suggested approaches, and have settled on a recommended estimation technique for a given SNR range. As a last step, we have averaged the mean absolute values across different SNR intervals, thus putting our observations from the visualised results into comprehensible numbers. Table 14 lists the mean error results, whereas Table 15 contains the variances summarized in the same way.

Table 14. Average absolute error over the respective SNR ranges.

                            SNR range
Category    Estimator   [0, 100]    [0, 50]     [0, 30]     [0, 15]
Crude       2           0.06164     0.10051     0.14769     0.23746
            5           0.20597     0.38931     0.62909     1.19773
Scaled      15          2.04E+82    4.08E+82    6.80E+82    1.37E+83
R & P       17          0.05287     0.08320     0.11972     0.18952
            19          4.95E+304   8.23E+155   1.96E+104   1.37E+83
Dampening   23          0.04935     0.07624     0.10838     0.16960
            24          0.05194     0.08137     0.11676     0.18446
            25          0.05496     0.08733     0.12643     0.20127
            37          0.03898     0.07675     0.12479     0.22175
            39          0.04900     0.08906     0.13926     0.23855
            41          0.07119     0.11396     0.16599     0.26420
R & P II    44          3.29E+82    6.59E+82    1.10E+83    2.21E+83
Mixed       48          1.75E+82    3.51E+82    5.86E+82    1.18E+83
            50          1.66E+82    3.33E+82    5.56E+82    1.12E+83
Table 15. Average variance over the respective SNR ranges.

                            SNR range
Category    Estimator   [0, 100]    [0, 50]     [0, 30]     [0, 15]
Crude       2           0.15750     0.26457     0.37684     0.56244
            5           0.18141     0.31217     0.45510     0.70801
Scaled      15          1.37E+38    2.74E+38    4.57E+38    9.17E+38
R & P       17          0.14649     0.24270     0.34125     0.50089
            19          6.77E+163   1.62E+75    2.36E+49    9.17E+38
Dampening   23          0.14105     0.23173     0.32296     0.46767
            24          0.14515     0.24003     0.33692     0.49363
            25          0.14961     0.24900     0.35170     0.51967
            37          0.19766     0.26113     0.34431     0.52014
            39          0.15969     0.25069     0.35927     0.55942
            41          0.17986     0.29598     0.41850     0.61958
R & P II    44          2.21E+38    4.43E+38    7.38E+38    1.48E+39
Mixed       48          2.35E+38    4.71E+38    7.87E+38    1.58E+39
            50          2.24E+38    4.48E+38    7.47E+38    1.50E+39

Most strikingly, we recognize the stability issues visible for estimators 15, 19, 44, 48, and 50. The tremendously high errors and variances are due to a flaw inherent to these estimation techniques: they allow for arbitrarily large estimates, caused by unchecked or unscaled problematic input parameters. Starting with the largest SNR range, [0, 100], we recognize the lowest average error values from estimators 37 and 39, closely followed by number 23, all offering a better option than the benchmark |$\widehat{\epsilon }_{2}$|⁠. However, this behaviour does not persist over all SNR intervals. We had observed earlier that estimators 37 and 39 give their strongest performance only from SNR = 30 onwards, which lies outside the realistically sensible ranges. If we draw the circle a little closer and omit results above SNR = 50, estimator 23 delivers the best results, undercutting all other approaches. As we reduce the investigated interval, the effect becomes stronger, with estimator 23 reducing the error of |$\widehat{\epsilon }_{2}$| by almost 29 per cent for the narrowest range, [0, 15].

For the variance, the observations are even clearer: the scaled estimator 23 exhibits the smallest variance over all listed ranges. Realistically speaking, the smallest range, [0, 15], is of the highest significance for practical applications, leaving us with a clear recommendation as to which estimator has the largest potential in practice.

In a more categorical overview, we investigate how the different types of estimators perform in a given setting. The first category, dubbed 'crude' estimators due to their straightforward nature and simplistic scaling and censoring approach, does not perform remarkably better than any baseline method. In the 'parameter-only' scaling section, only estimator 15 performed noticeably better than the baseline approach. The more sophisticated estimators, which scale either the parameters or the entire result based on certain parameter values, perform considerably better. This may lead us to the conclusion that result scaling is the preferable general approach, a claim supported by the section of exclusively result-scaling approaches. Censoring, on the other hand, only seems to increase performance slightly, certainly not to a degree that would allow for routine discarding of data. Another problem with data censoring is the censoring criterion: while $X^2 + Y^2 < Z^2$ effectively works for the Type 2 basic estimator, it does not necessarily omit corrupted data for the Type 1 estimator. The most complex estimators, in the second-to-last category, perform best in the very high SNR region, beginning with a ratio of 17 or higher, which puts them outside the most common region of ratio values.

What strikes us is that result-only scaling introduces only very little bias. As the basic estimator is unbiased, it has to trade some degree of underestimation of the ellipticity (as we always scale down) for a generally more accurate result overall, observable in Fig. 2. However, the slight bias only occurs in the ellipticity estimate of an individual galaxy; as the scaling is centred towards the origin (only the radius, not the angle, is corrected), no bias will appear in an average of galactic ellipticities. The dampening parameter β could be analysed more thoroughly for this purpose, as the values we chose were somewhat arbitrary, intended to give a more diverse picture. It is conceivable that a dependence of the optimal β on the SNR or the standard deviation σ could be derived either empirically or theoretically.
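The angle-preserving nature of radius-only scaling can be checked numerically; the concrete shrink factor r/(r + β) below is a hypothetical stand-in for the dampening functions discussed here, not the exact form used by our estimators:

```python
import numpy as np

rng = np.random.default_rng(2)

def dampen(eps, beta=0.25):
    """Radius-only scaling: shrink the modulus towards the origin while
    leaving the angle of each estimate untouched."""
    r = np.abs(eps)
    return eps * (r / (r + beta))   # positive real factor < 1

eps = rng.normal(0.3, 0.2, 10_000) + 1j * rng.normal(0.2, 0.2, 10_000)
scaled = dampen(eps)

# every modulus shrinks (the underestimation trade-off) ...
assert np.all(np.abs(scaled) <= np.abs(eps))
# ... but no angle changes, since we multiply by a positive real factor
assert np.allclose(np.angle(scaled), np.angle(eps))
```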

Other scaling functions besides the inverse tangent may be considered as well, as only the asymptotic values of the scaling function are fixed. A number of functions with the correct properties and various optimisation parameters may be tested, to increase accuracy while further minimizing the variance and bias of the estimation. We leave this question for future work, as it would have gone beyond the scope of this paper.
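Candidate scaling functions sharing the fixed asymptotics (identity-like near the origin, bounded by one) can be compared on those properties alone; the prefactors below are illustrative choices, not those used for our estimators:

```python
import numpy as np

def scale_atan(r):
    """Inverse-tangent radius map: approximately r near 0, asymptote 1."""
    return (2.0 / np.pi) * np.arctan(r * np.pi / 2.0)

def scale_tanh(r):
    """A possible alternative with the same asymptotic behaviour."""
    return np.tanh(r)

r = np.linspace(0.0, 5.0, 1_000)
for f in (scale_atan, scale_tanh):
    y = f(r)
    assert y[0] == 0.0                # the origin stays fixed
    assert np.all(y < 1.0)            # estimates never leave the unit circle
    assert np.all(np.diff(y) > 0.0)   # strictly monotone in the radius
```

Both maps have unit slope at the origin, so small, well-measured ellipticities are left essentially unchanged while large outliers are pulled back inside the unit circle.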

At this point, we want to stress that these simulations are highly idealized. The assumption of normally distributed samples corresponds to galaxies exhibiting the shape of perfect ellipses, which in nature will generally not be the case. Furthermore, the process of deriving the measurement values from the actual galaxy image is much more complex than our simulation can do justice to. Various steps along the actual measurement process may introduce bias or distortions which this experimental set-up cannot reproduce. The performance analysis of this section should therefore be seen as a simple assessment of the theoretical applicability of these approaches, rather than an emulation of practical effects.

4 CONCLUSIONS

In this paper, we have introduced and discussed a number of extensions to the base estimators |$\widehat{\epsilon }_{I}$| and |$\widehat{\epsilon }_{II}$|⁠. Due to the limited sample size, we were, strictly speaking, not able to employ common statistical methods. Instead, we employed scaling and censoring techniques based on the conditions the parameters have to fulfill, or that are at least desirable for the coherence of the input data.

As we have demonstrated in the simulation experiments, and have explained in the discussion part of Section 3, some of the proposed techniques have definite potential for practical use. Both for fixed underlying means and randomised ellipticities, we see strong performances for a number of our proposed approaches, which retain sufficient numerical stability to be of use in application scenarios.

Other application areas may be considered as well, as the estimation problem is no longer strictly bound to the original setting. We would have to reconfigure the scaling criterion according to the new application, and might reconsider the scaling function, but the general approach may be much more widely applicable.

We believe there might be even further room for improvement if the estimators are used in accordance with the SNR or the true ellipticity region (for example, r ≈ 0 or r ≈ 1). As the variance is known in most cases, it is possible to create confidence regions for the true location of the ellipticity. Hence, an estimator selection algorithm could be developed which, based on the known input factors, chooses the optimal approach. As touched upon in the simulation section, scaling functions other than the inverse tangent may be considered as well. As the dampening parameter β in the most successful category of estimators had tremendous influence on the performance of the estimator, an optimal parameter dependent on σ would be desirable and could help to minimize the error further. We have given first possible starting points with respect to selective estimators.

ACKNOWLEDGEMENTS

We would like to thank Sarah Bridle and Nicolas Tessore for bringing this problem to our attention. We also thank them for their helpful comments which greatly contributed to this paper. We would furthermore like to express our thanks to the referee, whose comments were very constructive and helped to improve this paper.

REFERENCES

Bartelmann M., Schneider P., 2001, Phys. Rep., 340, 4
Bushberg J. T., 2006, The Essential Physics of Medical Imaging. Lippincott Williams and Wilkins, Philadelphia
Gosset W. S., 1908, Biometrika, 6, 1
Horrace W. C., 2005, J. Multivariate Anal., 94, 209
Kilbinger M., 2015, Rep. Prog. Phys., 78, 086901
Schneider P., Kochanek C. S., Wambsganss J., 2006, Gravitational Lensing: Strong, Weak and Micro. Springer Verlag, New York
Schroeder D. J., 1999, Astronomical Optics. Academic Press, London
Tessore N., Bridle S., 2018, New Astron., 69, 58
Wilhelm S., Manjunath B. G., 2010, The R Journal, 2
