Abstract

Computed tomography (CT) imaging of the thorax is widely used for the detection and monitoring of pulmonary embolism (PE). However, CT images can contain artifacts arising from the acquisition or the image reconstruction process. Radiologists often have to distinguish between such artifacts and actual PEs. We provide a proof of concept, in the form of a scalable hypothesis testing method for CT, that enables quantifying the uncertainty of possible PEs. In particular, we introduce a Bayesian framework to quantify the uncertainty of an observed compact structure that can be identified as a PE. We assess the ability of the method to operate in high-noise environments and with insufficient data.

Significance Statement

Computed tomography (CT) imaging in medicine is widely used to visualize internal organs for diagnostic purposes. In the context of pulmonary embolism (PE) detection in the setting of acute chest pain, PEs can appear in CT scans as small structures with weak amplitude. PE detection can therefore be challenging in practice for clinicians, who have to decide whether such structures are PEs or not. This ambiguity can arise from imperfect data acquisition (e.g. insufficient data, a high-noise environment). In this work, we propose a computational tool to help clinicians decide whether an observed structure is a PE or an artifact due to imperfect data. Our method quantifies the uncertainty of the structure, leveraging optimization and Bayesian theory.

Introduction

Pulmonary embolism detection with computed tomography angiography

Most medical imaging modalities such as computed tomography (CT), ultrasound, and magnetic resonance imaging are the result of an intricate image reconstruction process that uses noisy and incomplete captured data. In particular, CT is a popular imaging modality used to diagnose various types of pathologies, such as acute inflammatory conditions, strokes, and malignancy. X-rays are passed through the patient's body from multiple angles, and an attenuation coefficient is calculated depending on the densities of the different tissues the X-rays pass through. A reconstruction algorithm is then used to create the final 3D image. This process can create artifacts, i.e. structures not present in the ground-truth image being captured (1). Such artifacts can interfere with conclusions drawn by radiologists, who must then infer whether structures appearing in CT images are pathological or artifactual, arising from inaccuracies in the data acquisition.

This is quite common when assessing CT scans for the presence of acute pulmonary embolism (PE), a major cause of mortality with approximately 30,000 deaths per year in the United Kingdom (2). Assessment and detection of PE and its cardiovascular complications is routinely performed with CT pulmonary angiography (CTPA) (3). Chronic thromboembolic pulmonary hypertension is also a potential long-term disabling complication of acute PE, and CTPA is an important diagnostic tool as well as being useful to assess operability (4). However, a variety of patient- and protocol-related factors can result in image artifacts that may impact the clinicoradiological confidence of image interpretation. A false positive diagnosis can result in inappropriate patient treatment with anticoagulation, which is associated with an unnecessary increase in bleeding rates (5).

In this context, quantifying uncertainty of the PE-like structures observed in reconstructed CT thorax images would improve diagnosis accuracy. In this paper, we present an uncertainty quantification (UQ) framework to perform hypothesis tests on PE-like structures, and determine whether they are present in the patient thorax or are artifacts arising from inaccurate data acquisition.

Bayesian inference for imaging

Reconstruction of images from CT data can be formulated as an inverse problem. The objective is to find an estimate x† of an unknown image x̄ (i.e. the patient's thorax) from measurements y acquired with a CT scanner (6, 7). Following a Bayesian framework (8), the image and the data are related through a statistical model. The estimate x† is then inferred from y according to its posterior distribution, which combines information from the likelihood, related to the observations y, and the prior, used to introduce a priori information about the target image. The prior regularizes the model, helping to overcome the ill-posedness and/or ill-conditioning of the inverse problem. Common choices impose feasibility constraints, or promote smoothness or sparsity of x̄, possibly in some transformed domain such as wavelet, Fourier, or total variation (TV) (9).

Sampling methods, e.g. Markov chain Monte Carlo (MCMC) methods, draw random samples from the posterior distribution. These samples can then be used to form estimators [e.g. the minimum mean square error estimator, posterior mean, or maximum a posteriori (MAP) estimator], and to perform UQ through confidence intervals and hypothesis testing (10, 11). The main drawback of these methods is their high computational cost, which makes them inefficient for high-dimensional problems such as those encountered in imaging. Indeed, for CT imaging, the dimension of x̄ is often of the order of 10^8 in the case of high-resolution lung scans (12). Although multiple works have emerged in recent years to help scale sampling methods, e.g. (13–15), they usually remain prohibitive in such high dimensions.

Methods of choice for handling high-dimensional problems are proximal splitting optimization algorithms (16–18). These are known to be very efficient at forming MAP estimates. Nevertheless, they only provide a point estimate, without quantifying the uncertainty of the delivered solution. To overcome this issue, a Bayesian Uncertainty Quantification by Optimization (BUQO) approach was recently proposed in Refs. (19–21) to perform hypothesis tests on particular structures appearing in MAP estimates. The method determines whether the structures of interest are real, or are reconstruction artifacts due to acquisition inaccuracies. BUQO has the advantage of being scalable to high-dimensional problems, as the UQ problem is recast in an optimization framework that leverages proximal splitting algorithms.

UQ for PE

UQ is a key tool to assist clinicians in accurate decision-making. Ill-posed and ill-conditioned inverse problems result in high uncertainty about the estimate. In this work, we focus on quantifying the uncertainty of PE-like structures in CT thorax images. Specifically, we design a method based on BUQO to determine whether these structures are PEs, or reconstruction artifacts.

Methods

In this section, we describe the steps of the proposed PE UQ technique. First, we form the CT image using an optimization algorithm (see below). Second, we identify PE-like structures in the image estimate, and postulate the null hypothesis that these structures are not present in the ground-truth image, i.e. they are not in the patient's thorax but instead are reconstruction artifacts arising from the ill-posedness of the problem. Third, we use our method to decide whether the null hypothesis can be rejected.

Bayesian inference and optimization for CT imaging

In general, the gantry of a CT scanner, which includes multiple X-ray sources and multiple detectors, rotates around the patient's chest. This generates an M-dimensional array of data, denoted by y, consisting of attenuated X-ray intensities (6, 7). The pattern of attenuation is determined by the geometry of the area through which the beams are directed. The aim of CT reconstruction is to recover a voxel array of dimension N, denoted by x̄, that represents the geometry of the organs inside the thorax, given the observed noisy data y. This can be reasonably approximated as a linear inverse problem of the form

y = Φx̄ + w,  (1)

where Φ represents the CT measurement operator described above and w is a realization of additive independent and identically distributed (i.i.d.) random noise.
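As a toy illustration of the measurement model in Eq. 1, the sketch below simulates noisy linear measurements of a sparse image; the random matrix standing in for Φ is an assumption for illustration only (a real system would use a discretized Radon transform), and all sizes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the CT operator Phi: an M x N matrix (a real scanner
# would be modeled by a discretized Radon transform).
M, N = 30, 16
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Sparse nonnegative "ground-truth" image x_bar.
x_true = np.zeros(N)
x_true[[3, 7, 11]] = [1.0, 0.5, 2.0]

# i.i.d. Gaussian noise w, and the measurements y = Phi x_bar + w (Eq. 1).
sigma = 0.01
w = sigma * rng.standard_normal(M)
y = Phi @ x_true + w

print(y.shape)  # (30,)
```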

Using a Bayesian formulation, the posterior distribution of the problem, which combines information from the likelihood and the prior, can be expressed as

p(x | y) ∝ exp(−f_y(Φx) − g(x)),  (2)

where f_y is such that exp(−f_y(Φ·)) defines a log-concave likelihood associated with the statistical model of Eq. 1, and exp(−g) is a log-concave prior distribution for x̄. The usual approach to estimate x̄ is the MAP approach, which consists in defining x† as a minimizer of the negative logarithm of Eq. 2, i.e.

x† ∈ argmin_x { f_y(Φx) + g(x) }.  (3)

In this work, we assume that the exact noise distribution is unknown, but that it has bounded energy, i.e. ‖w‖₂ ≤ ε for some ε > 0, where ‖·‖₂ is the usual Euclidean norm. A typical choice is then to take f_y(Φx) to be the indicator function of the ℓ2 ball B₂(y, ε), centered at y with radius ε. In addition, a common choice for the prior term g(x) is to promote sparsity of the image of interest in some basis (e.g. wavelet or TV). Eq. 3 can then be rewritten as

x† ∈ argmin_{x ≥ 0} ‖Ψx‖₁  subject to  ‖Φx − y‖₂ ≤ ε,  (4)

where the operator Ψ models a linear transform, chosen such that Ψx̄ has only a few nonzero coefficients. Eq. 4 can be solved efficiently using proximal splitting algorithms (16–18).
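A problem of the form of Eq. 4 can be attacked with a primal-dual proximal splitting algorithm. The sketch below uses the Chambolle–Pock scheme on a small dense toy problem, under the simplifying assumption that the sparsity operator Ψ is the identity; it is an illustrative sketch, not the authors' implementation, and all helper names are hypothetical.

```python
import numpy as np

def solve_map(Phi, y, eps, n_iter=5000):
    """Chambolle-Pock sketch for Eq. 4 with Psi = identity (assumption):
        minimize ||x||_1  s.t.  ||Phi x - y||_2 <= eps,  x >= 0."""
    M, N = Phi.shape
    # Step sizes: tau * sigma * ||K||^2 < 1, with K = [I; Phi].
    Lk = np.sqrt(1.0 + np.linalg.norm(Phi, 2) ** 2)
    tau = sigma = 0.99 / Lk
    x = np.zeros(N)
    x_bar = x.copy()
    z1 = np.zeros(N)   # dual variable for the l1 term
    z2 = np.zeros(M)   # dual variable for the data-fidelity ball
    for _ in range(n_iter):
        # prox of (||.||_1)^*: projection onto the l_inf unit ball.
        z1 = np.clip(z1 + sigma * x_bar, -1.0, 1.0)
        # prox of the conjugate of the l2-ball indicator (Moreau identity).
        u = z2 + sigma * (Phi @ x_bar)
        v = u / sigma
        d = v - y
        nd = np.linalg.norm(d)
        proj = y + d * (eps / nd) if nd > eps else v   # Proj onto B2(y, eps)
        z2 = u - sigma * proj
        # Primal step, with projection onto the nonnegative orthant.
        x_next = np.maximum(0.0, x - tau * (z1 + Phi.T @ z2))
        x_bar = 2.0 * x_next - x
        x = x_next
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 20)) / np.sqrt(40)
x_true = np.zeros(20)
x_true[[2, 9]] = [1.0, 2.0]
y = Phi @ x_true
x_map = solve_map(Phi, y, eps=0.1)
print(np.linalg.norm(Phi @ x_map - y))  # data-fidelity residual
```

The two dual updates are the proximity operators of the Fenchel conjugates of the ℓ1 norm and of the ball indicator; the primal update enforces nonnegativity.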

High-dimensional hypothesis testing

The method described in the previous section provides a point estimate x† of x̄, without additional information regarding its uncertainty. In this work, we propose to perform a hypothesis test on structures that can be identified as PEs in the MAP estimate.

To illustrate our approach, we recall the basics of hypothesis testing. Typically, we postulate a null hypothesis, i.e. we make a claim about the distribution of the observed data. We use the observed data to compute a statistic θ̂, and reject the null hypothesis or not depending on whether θ̂ lies in a high-probability interval (see Fig. 1).

Fig. 1. Difference between traditional hypothesis testing and our method. In traditional hypothesis testing, one computes the credible interval I_α and the test statistic θ̂ from the data; the null hypothesis H0 is rejected if θ̂ is not in the credible region. Similarly, in our proposed method, we compute the high posterior density region C_α and an image x_S with the structure removed (analogous to the test statistic). We reject the null hypothesis (which states that the structure is absent) if x_S does not lie inside the credible region. This is determined by the distance between x_S and x_C, the two elements of S and C_α, respectively, that are closest to each other. If this distance is zero, we conclude that x_C ∈ S; otherwise, x_C ∉ S.

This can be extended to computational imaging (20, 21), to quantify uncertainty about structures appearing in the MAP estimate x†, obtained by solving Eq. 4. In this context, we postulate the null hypothesis H0 and the alternative hypothesis H1 as follows:

  • H0: The structure is absent from the true image

  • H1: The structure is present in the true image

Formally, using Bayesian decision theory (10), we conclude that H0 is rejected in favor of H1 if P(H0 | y) ≤ α, where α ∈ (0, 1) denotes the significance level of the test. This probability can be approximated by MCMC approaches (11); however, that becomes intractable for high-dimensional problems such as CT imaging. To overcome this difficulty, we introduce a subset S of R^N, associated with H0, containing all the possible images without the structure of interest.

To perform the hypothesis test, we compare S with a posterior credible set C*_α, corresponding to the set of possible solutions where most of the posterior probability mass of x | y lies (19). Formally, C*_α satisfies P(x ∈ C*_α | y) = 1 − α. Again, computing such a probability in high dimension is intractable. Instead, Pereyra (19) introduced a conservative credible region C_α, in the sense that P(x ∈ C_α | y) ≥ 1 − α, which requires no additional computation beyond building the MAP estimate x†, i.e. solving Eq. 4. Note that, by construction, we have x† ∈ C_α, and C_α amounts to defining a feasibility set around x†.

The BUQO approach adopted in this work consists in determining whether the intersection of S and C_α is empty. If it is empty, then P(x ∈ S | y) = P(H0 | y) ≤ 1 − (1 − α) = α, hence H0 is rejected. To determine whether S ∩ C_α = Ø, we aim to find an image belonging to S ∩ C_α. If such an image exists, then S ∩ C_α ≠ Ø, and it is possible to find (at least) one image supported by the data y without the structure of interest; hence H0 cannot be rejected. Otherwise S ∩ C_α = Ø, and H0 is rejected (see the second row of Fig. 1).

Hypothesis test for PE detection

In this section, we explain how the proposed method determines whether S ∩ C_α is empty or not. In addition, we give mathematical definitions of the sets S and C_α, tailored to the PE UQ problem.

To find the image in S closest to the MAP estimate x†, one can project x† onto S. We denote by x_S = Proj_S(x†) this projected image. The first step is to verify whether x_S ∈ C_α. If so, we have found an image in the intersection, x_S ∈ S ∩ C_α, and H0 cannot be rejected, i.e. we are uncertain whether the PE is present. If x_S ∉ C_α, this does not mean that C_α ∩ S is empty, and there might still be an image belonging to both sets. To ascertain whether the intersection is empty, we equivalently compute the distance between S and C_α, denoted dist(S, C_α), and verify whether it is zero or positive. If dist(S, C_α) > 0, then C_α ∩ S = Ø, so H0 is rejected in favor of H1. Otherwise, if dist(S, C_α) = 0, there exists (at least) one image in the intersection, and hence H0 cannot be rejected.
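The geometric test above can be illustrated on toy convex sets: for two closed convex sets, alternating projections converge to a pair of closest points (Cheney–Goldstein), so the distance, and hence the emptiness of the intersection, can be read off. The disks and helper names below are illustrative assumptions, not the paper's high-dimensional sets.

```python
import numpy as np

def proj_disk(p, center, radius):
    """Euclidean projection onto a closed disk."""
    d = p - center
    n = np.linalg.norm(d)
    return p if n <= radius else center + d * (radius / n)

def set_distance(c1, r1, c2, r2, n_iter=200):
    """Alternating projections between two disks; the iterates converge
    to a pair of points realizing dist(S1, S2)."""
    p = np.array(c1, float)
    for _ in range(n_iter):
        q = proj_disk(p, np.array(c2, float), r2)
        p = proj_disk(q, np.array(c1, float), r1)
    return np.linalg.norm(p - q), p, q

# Disjoint disks (centers 4 apart, unit radii): distance 2 -> "reject H0".
d, p, q = set_distance((0, 0), 1.0, (4, 0), 1.0)
print(round(d, 6))  # 2.0
```

Overlapping disks would give distance zero, i.e. an image exists in both sets and H0 cannot be rejected.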

To evaluate dist(S, C_α), we need to minimize the distance between an element x_C of C_α and an element x_S of S, i.e. we want to solve

dist(S, C_α) = min_{(x_S, x_C) ∈ S × C_α} ‖x_S − x_C‖₂.  (5)

For our problem, the conservative credible set associated with Eq. 4 is defined as C_α := {x ≥ 0 | ‖Φx − y‖₂ ≤ ε and ‖Ψx‖₁ ≤ η_α}, where η_α = ‖Ψx†‖₁ + N + √(16 N log(3/α)). Given a candidate area that is to be assessed for the presence of a PE, we define the set S as the set of images that do not contain PE-like structures at the candidate location. In particular, we want the pixel intensity profile within the structure's area to be similar to the pixel intensity profile of a neighborhood of the structure. To this aim, we propose to define S as the intersection of three sets, i.e. S := I ∩ E ∩ S′, given by

I := [0, +∞)^N,  (6)
E := {x ∈ R^N | ‖Mx − μpix 1‖₂ ≤ rpix},  (7)
S′ := {x ∈ R^N | ‖∇(Mx) − μ 1‖₂ ≤ r},  (8)

where M : R^N → R^{N_S} is a linear operator selecting the pixels of the image corresponding to the PE area, 1 denotes the all-ones vector, and ∇ a discrete gradient operator. The first set I is the positive orthant, ensuring that images in S are intensity images. The second set E controls the energy in the structure, ensuring that pixels inside the structure's area take values around a predefined mean value μpix, chosen according to its neighborhood. The third set S′ is a smoothness constraint, controlling the pixel intensity variation in the structure's area so that it is close to a mean value μ corresponding to the variations in its neighborhood. For both E and S′, rpix and r are positive predefined constants controlling the similarity between the structure's area and its neighborhood.
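Because M is a selection operator, projecting onto a set constraining only the masked pixels leaves all other pixels untouched. The sketch below assumes the energy set is an ℓ2 ball of radius rpix around the constant value μpix on the mask (an assumption consistent with the description above); the helper names are hypothetical.

```python
import numpy as np

def proj_energy_set(x, mask, mu_pix, r_pix):
    """Projection onto {x : ||Mx - mu_pix * 1||_2 <= r_pix}, where M
    selects the pixels inside the PE mask. Since M is a selection
    operator, only the masked pixels are modified."""
    x = x.astype(float).copy()
    s = x[mask]
    d = s - mu_pix
    n = np.linalg.norm(d)
    if n > r_pix:
        # Pull the masked subvector onto the ball around mu_pix.
        x[mask] = mu_pix + d * (r_pix / n)
    return x

img = np.array([0.0, 5.0, 5.0, 0.0])
mask = np.array([False, True, True, False])
out = proj_energy_set(img, mask, mu_pix=1.0, r_pix=1.0)
print(out)
```

A projection onto the smoothness set would follow the same pattern, with the gradient of the masked region in place of the raw intensities.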

Experiments

In this section, we present experimental results on synthetic CT data. We apply the BUQO method to real CT slices that contain a PE and assess the ability of the algorithm to detect the PEs under different noise levels and detector setups. We also apply the BUQO method to test for the presence of reconstruction artifacts that were created when simulating the forward problem. The selected reconstruction artifacts were deemed by trained radiologists to be similar enough to PEs to be included in our study. Hence, although we did not produce artifacts via routinely encountered mechanisms such as beam hardening, their appearance had enough clinical significance for our purposes.

Experiment settings

Dataset description

CTPA was performed on multidetector array scanners (SOMATOMⓇ Drive and Definition Edge, Siemens Healthineers, Erlangen, Germany). The parameters were as follows: 128×0.6mm slice thickness, 1.2 pitch, 0.5 s rotation time, 145 kVp tube voltage, and 120 mAs with automatic dose modulation. Sixty milliliters of nonionic intravenous contrast medium (iohexol, 350 mg iodine/ml; Omnipaque 350, Amersham Health, England) were administered at 6 ml/s via an 18 G cannula. The acquisition was triggered by bolus tracking of the main pulmonary artery, with a threshold of 100 Hounsfield units (HU) and 4-s delay after triggering. The study received approval from the Research Ethics Committee and Health Research Authority (IRAS ID 284089). Informed written consent was not required.

Measurements

From these data, we consider two slices of reconstructed clinical images containing PEs. Using these slices, we simulate data to study the effect of CT acquisition quality on PE detection. To this end, we consider the model described in Eq. 1, with a forward operator Φ modeling a parallel-beam geometry with a fixed number of detectors D = 450 and a variable number of acquisition angles Ma ∈ {50, 100, 200, 300, 450}. We generate w in Eq. 1 as a realization of an i.i.d. Gaussian noise vector of size Ma × D and variance σ². We then reconstruct the CT image by solving Eq. 4 to obtain the MAP estimate.

PE definition

To create the masks related to the operator M in Eqs. 7 and 8, we used MITK (22). Two types of masks were created by experienced clinical radiologists: masks identifying the location of real PEs appearing in the CTPA scans, and masks identifying the location of PE-like artifacts appearing in the CTPA scans due to the low quality of the acquired data. In Fig. 2 we show, for both slices, the PEs and the artifacts of interest arising from the reconstruction process.

Fig. 2. Left: Output of BUQO when used to quantify uncertainty of reconstruction artifacts. The forward problem parameters are (Ma, σ) = (50, 0.175) for all the artifacts. Right: Output of BUQO when used to quantify uncertainty of PEs, as the value of ρα increases. The forward problem parameters are (from left to right column): (Ma, σ) = (50, 0.007), (Ma, σ) = (200, 0.035), and (Ma, σ) = (450, 0.007). First row: MAP estimates, zoomed in on the structures of interest. Second row: output image x_C from BUQO. Third row: difference images |x_S − x_C|.

The set S, defined above, captures the pixel profile of an artery that does not have a PE. In the definition of S, some parameters related to the energy and smoothness constraints must be chosen (see Eqs. 7 and 8, respectively). We propose to choose them automatically, by looking at histograms of pixel intensities and gradients in a neighborhood of the mask. Precisely, we sample pixels around the area of interest and compute the histogram of the intensities of the sampled pixels. Then, in Eq. 7, μpix is set to the median of this histogram, and rpix is set to the maximum of two quantities: the difference between the upper 60th percentile and the median, and the difference between the median and the lower 60th percentile. The same is done to compute μ and r in Eq. 8, but with the histogram of sampled gradients instead.
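The percentile rule above can be sketched as follows; we interpret "upper/lower 60th percentile" as the 60th and 40th percentiles, which is an assumption about the authors' convention, and the function name is hypothetical.

```python
import numpy as np

def pick_params(samples):
    """Choose (mu, r) from neighborhood samples: mu is the median of the
    histogram; r is the larger one-sided spread between the median and
    the 60th / 40th percentiles (interpretation of 'upper/lower 60th
    percentile' is an assumption)."""
    mu = np.median(samples)
    hi = np.percentile(samples, 60)
    lo = np.percentile(samples, 40)
    return mu, max(hi - mu, mu - lo)

# Intensities sampled around the area of interest (toy values).
vals = np.array([1.0, 1.1, 0.9, 1.2, 0.8, 1.0, 1.05])
mu_pix, r_pix = pick_params(vals)
print(mu_pix, r_pix)
```

Applying the same function to sampled gradients would yield the parameters of the smoothness constraint.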

Result interpretation

To assess the effect of the acquisition quality (i.e. noise level σ and number of angles Ma) on the ability of our method to detect true structures, we introduce a structure confidence quantity

ρα = dist(S, C_α) / ‖x† − x_S‖₂.  (9)

If ρα = 0, then dist(S, C_α) = 0, and we can conclude that there exists an image without the observed structure that lies in the credible set C_α. If ρα > 0, then dist(S, C_α) > 0, and the null hypothesis is rejected. The closer ρα is to one, the more certain we are that the null hypothesis should be rejected, and thus that the structure of interest is present in the true image. In practice, numerical errors must be taken into account, and the two above conditions should be relaxed to ρα ≤ δ and ρα > δ, respectively, for some tolerance δ to be determined by the user.

Note that ρα provides more information than a binary accept/reject hypothesis test. It can be interpreted as the percentage of the structure's energy that is confirmed by the data. So when a selected PE-like structure is probed for UQ, ρα provides the percentage of the structure's energy that can be trusted.
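Given the distance between the two sets and the projection x_S of the MAP estimate onto S, the structure confidence is a one-line computation. The sketch below assumes the normalization ρα = dist(S, C_α)/‖x† − x_S‖₂, which matches the stated properties (ρα = 0 iff the sets intersect, and ρα ≤ 1 since x† ∈ C_α); the function name is hypothetical.

```python
import numpy as np

def structure_confidence(x_map, x_S, dist_S_C):
    """Fraction of the structure's energy confirmed by the data,
    assuming rho = dist(S, C_alpha) / ||x_map - x_S||_2."""
    denom = np.linalg.norm(x_map - x_S)
    return 0.0 if denom == 0 else dist_S_C / denom

# If the credible set already contains an image without the structure,
# the distance is zero and H0 cannot be rejected.
print(structure_confidence(np.array([1.0, 2.0]), np.array([1.0, 0.0]), 0.0))  # 0.0
```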

In Fig. 1, we compare our method to traditional hypothesis testing in statistics. It is therefore natural to interpret ρα as the analog of a P-value. However, accepting or rejecting the null hypothesis in our case does not depend on a hard threshold on ρα, for two reasons. First, traditional hypothesis testing is a frequentist method, where one would typically take the output of models at face value; ours is a Bayesian method, where one is more interested in the prior and posterior distributions. As such, ρα shows us the percentage of the structure that can be explained by the data, and setting a threshold for accepting or rejecting the null hypothesis should be an application-specific matter. Second, the proposed method does not only generate ρα; it also generates x_C and x_S, whose qualitative contribution to the decision is as important as the quantitative contribution of ρα. Figure 2 shows the images x_C and difference images |x_C − x_S| for different detector settings, and therefore different values of ρα. It can be seen that positive values of ρα do not necessarily correspond to images that would be considered normal by a radiologist. However, very high values of ρα (close to 1) tend to correspond to high-fidelity images, which mimic real CT scans very well.

Results

Confidence with respect to measurements

We show in Fig. 3 the behavior of ρα for two assessed PE structures, with respect to the noise level σ for a fixed number of angles Ma, and with respect to the number of angles for a fixed noise level. It can be observed that the ability of the algorithm to confirm the presence of PEs improves as the noise level decreases and the number of angles increases.

Fig. 3. Structure confidence ρα as a function of the number of angles Ma (left) and noise level σ (right). High and low structure confidence are illustrated with qualitative examples of x_C, x_S, and x†. Both plots show that as the data quality (i.e. number of angles and signal-to-noise ratio) increases, the structure confidence increases too, and we are more certain of the presence of the structure.

For the PE structure in Fig. 3 (left), we provide additional results in Fig. 2 (right). The images show the results of BUQO when considering (Ma, σ) = (50, 0.007), (Ma, σ) = (200, 0.035), and (Ma, σ) = (450, 0.007). In particular, the last row shows the differences (in absolute value) between x_S and x_C. This corresponds to the residual PE structure probed by BUQO. It can be seen as a 2D map version of the quantity ρα, giving the intensity value per pixel that is validated by the data. When the acquisition quality improves (i.e. σ decreases and/or Ma increases), the per-pixel intensity validated by the data increases.

In Fig. 2 (left), we show the results of BUQO for three PE-like structures that are reconstruction artifacts. For these structures, the last row shows that the intensity value per pixel validated by the data is equal to 0 (i.e. ρα = 0). Hence, our method cannot reject H0: the data do not support the existence of the structure.

Complexity

In our experiments (see Fig. 4), we found that the numerical complexity of the proposed UQ method is usually negligible compared with that of the reconstruction algorithm providing the MAP estimate. The computational bottleneck is usually the evaluation of the forward operator and its adjoint. The complexity is assessed in terms of the total number of iterations (i.e. the number of evaluations of the forward operators and their adjoints) needed to reach convergence of the algorithms used to compute the MAP estimate and to run BUQO (primal-dual algorithms in both cases). Convergence is assumed to have occurred when all constraints are satisfied and the estimates are stable, up to a fixed tolerance.

Fig. 4. We measure computational cost as the number of forward operator evaluations needed. The figure shows a histogram of the BUQO computational cost relative to (as a ratio of) the CT reconstruction cost across our experiments. In the majority of cases, the computational cost of BUQO amounts to 20% of that of the image reconstruction. The overhead is not only small but also independent of the imaging setting.

Discussion

We have introduced a UQ method for CT imaging that can be used to assess PE-like structures observed in CT scans. We simulated different acquisition environments by varying the number of measurements and the noise level in the forward problem, and used the resulting MAP estimates to investigate the behavior of the proposed method in quantifying the uncertainty of PE-like structures. Our method demonstrates diminishing confidence as data quality decreases, while correctly identifying reconstruction artifacts produced in simulation using low-quality data. In this closing section, we go over the strengths and weaknesses of the proposed method.

Manual annotations

The proposed method requires three inputs, namely the MAP estimate, the mask that isolates the area under investigation, and the set S, which represents our prior knowledge.

Currently, the mask is the result of a time-consuming manual segmentation exercise performed by experienced clinical radiologists; this step could be replaced by PE and artifact detection models that leverage artificial intelligence (23, 24).

The set S is built using a constraint defined in the gradient domain of the image (which is unsuitable for artifacts appearing close to a boundary) and relies on manual sampling (which is time consuming). Instead, the set S could be the result of a data-driven method, such as generative artificial intelligence (25). Hence, a full pipeline would consist of data-driven methods for PE and artifact detection, a generative model to define the set S, and finally the BUQO method to quantify the uncertainty of observed features.

Clinical use

Acute PE carries significant associated morbidity and mortality, so improving the degree of radiologist certainty in the positive identification of acute PEs in clinical practice is paramount. It is also important to improve the degree of radiologist certainty in identifying artifacts as such, rather than as false positive PEs, in order to avoid inappropriate treatment with anticoagulation and unnecessary bleeding risks. Further work is needed to validate the described method in clinical practice. In particular, the artifact creation should be made more realistic, to reflect those encountered in routine clinical care. Those presented in this paper served the purpose of a proof-of-concept study only.

Note

1

Here, N is the product of the individual dimensions of the 3D voxel array.

Funding

M.J.E. acknowledges support from the EPSRC (EP/S026045/1, EP/T026693/1, EP/V026259/1) and the Leverhulme Trust (ECF-2019-478). A.R. acknowledges support from the Royal Society of Edinburgh. All authors were supported by the Research Capability Funding of the Royal United Hospital.

Author Contributions

A.M.R.—conceptualization, methodology, software, data curation, writing-original draft preparation, investigation, visualization. H.K.—data curation, resources, validation. J.R.—data curation, resources. J.S.—funding acquisition. J.C.L.R.—project administration, funding acquisition, resources, validation. M.J.E.—conceptualization, methodology, writing–reviewing and editing, supervision, project administration, funding acquisition. A.R.—conceptualization, methodology, software, writing–reviewing and editing, supervision, project administration, funding acquisition.

Preprints

This manuscript was posted on arXiv as a preprint (arXiv:2301.02467).

Data Availability

The MATLAB code to reproduce the results, along with links to the data, can be found at https://github.com/adwaye/test_ct.

References

1. Withers PJ, et al. 2021. X-ray computed tomography. Nat Rev Methods Primers. 1(1):1–21.

2. The prevention of venous thromboembolism in hospitalised patients, House of Commons Report, HC 99, 2005.

3. Meinel FG, et al. 2015. Predictive value of computed tomography in acute pulmonary embolism: systematic review and meta-analysis. Am J Med. 128(7):747–759.

4. Kim NH, et al. 2019. Chronic thromboembolic pulmonary hypertension. Eur Respir J. 53(1):1801915.

5. Kempny A, et al. 2019. Incidence, mortality and bleeding rates associated with pulmonary embolism in England between 1997 and 2015. Int J Cardiol. 277:229–234.

6. Hansen PC, Jørgensen J, Lionheart WRB. 2021. Computed tomography: algorithms, insight, and just enough theory. Philadelphia: Society for Industrial and Applied Mathematics.

7. Seeram E. 2015. Computed tomography—e-book: physical principles, clinical applications, and quality control. Elsevier Health Sciences.

8. Kaipio JP, Somersalo E. 2006. Statistical and computational inverse problems. Vol. 160. New York (NY): Springer.

9. Bredies K, Lorenz DA. 2018. Mathematical image processing. 1st ed. Basel: Birkhäuser.

10. Robert C. 2007. The Bayesian choice: from decision-theoretic foundations to computational implementation. New York (NY): Springer Science & Business Media.

11. Robert CP, Casella G. 2004. Monte Carlo statistical methods. New York (NY): Springer.

12. Siemens SOMATOM manual [accessed 2022 Aug 30]. https://www.manualslib.com/manual/524455/Siemens-Somatom.html.

13. Pereyra M. 2016. Proximal Markov chain Monte Carlo algorithms. Stat Comput. 26(4):745–760.

14. Thouvenin P-A, Repetti A, Chainais P. 2022. A distributed Gibbs sampler with hypergraph structure for high-dimensional inverse problems. arXiv, arXiv:2210.02341, preprint: not peer reviewed.

15. Vono M, Dobigeon N, Chainais P. 2021. Asymptotically exact data augmentation: models, properties, and algorithms. J Comput Graph Stat. 30(2):335–348.

16. Chambolle A, Pock T. 2016. An introduction to continuous optimization for imaging. Acta Numer. 25:161–319.

17. Combettes PL, Pesquet J-C. 2010. Proximal splitting methods in signal processing. In: Bauschke HH, et al., editors. Fixed-point algorithms for inverse problems in science and engineering. New York: Springer-Verlag. p. 185–212.

18. Komodakis N, Pesquet J-C. 2015. Playing with duality: an overview of recent primal-dual approaches for solving large-scale optimization problems. IEEE Signal Process Mag. 32(6):31–54.

19. Pereyra M. 2017. Maximum-a-posteriori estimation with Bayesian confidence regions. SIAM J Imaging Sci. 10(1):285–302.

20. Repetti A, Pereyra M, Wiaux Y. 2018. Uncertainty quantification in imaging: when convex optimization meets Bayesian analysis. 2018 26th European Signal Processing Conference (EUSIPCO). p. 2668–2672.

21. Repetti A, Pereyra M, Wiaux Y. 2019. Scalable Bayesian uncertainty quantification in imaging inverse problems via convex optimization. SIAM J Imaging Sci. 12(1):87–118.

22. Wolf I, et al. 2004. The Medical Imaging Interaction Toolkit (MITK): a toolkit facilitating the creation of interactive software by extending VTK and ITK. Medical Imaging 2004: Visualization, Image-Guided Procedures, and Display. Vol. 5367. SPIE. p. 16–27.

23. Dasegowda G, et al. 2023. Auto-detection of motion artifacts on CT pulmonary angiograms with a physician-trained AI algorithm. Diagnostics. 13(4):778.

24. Soffer S, et al. 2021. Deep learning for pulmonary embolism detection on computed tomography pulmonary angiogram: a systematic review and meta-analysis. Sci Rep. 11(1):1–8.

25. Wang T, et al. 2021. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys. 22(1):11–36.

Author notes

M.J.E. and A.R. are joint last authors.

Competing Interest: J.C.L.R. makes the following disclosures: speaker’s fees—Sanofi, consultancy fees—NHSX, physician services—HeartFlow, co-founder and share holder—Heart & Lung Imaging LTD, part-time employee and share holder—RadNet. J.S. has received speakers fees, consultancy fees, and travel grants from AstraZeneca, Chiesi, MSD, and Janssen Pharmaceuticals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Editor: Rui Reis