Imaging: the next generation
Astronomy & Geophysics, Volume 64, Issue 1, February 2023, Pages 1.17–1.20, https://doi.org/10.1093/astrogeo/atac095
Abstract
Sue Bowler talks to the editors of a RASTI Special Issue about interferometric imaging and the computational challenges posed by the next wave of radio astronomy instruments
The new open-access journal RAS Techniques and Instruments – RASTI – is publishing a Special Issue on the topic of Next-Generation Interferometric Image Reconstruction. The editors are looking for submissions proposing new algorithms and software, or addressing modern scientific computing challenges (including software development technologies, reproducibility and data challenges) related to image formation with current and future radio interferometric arrays.

MeerKAT radio telescope image at 1000 MHz showing collimated synchrotron threads linking the two lobes of radio galaxy ESO 137–006
The issue will be overseen by a special Editorial Board comprising RASTI editors Prof. Yves Wiaux of Heriot-Watt Edinburgh and Prof. Anna Scaife of the University of Manchester, together with guest editors Dr Kazunori Akiyama of MIT Haystack Observatory and Prof. Oleg Smirnov of Rhodes University & South African Radio Astronomy Observatory. We talked about the current and future challenges with instruments such as the Square Kilometre Array (SKA) that call for a focus on interferometric imaging.
Let's start with introductions…
Yves Wiaux: The four of us are experts in astronomical data processing. I am an expert in computational imaging in particular, and aim to bring the newest developments in applied mathematics, signal processing and computer science to radio astronomical imaging. I entered the field of radio-interferometric imaging nearly 15 years ago, proposing new image reconstruction techniques drawn from the theory of compressive sensing and optimisation and, more recently, leveraging artificial intelligence. My focus is on advancing not only astronomical imaging, but also computational imaging theory and algorithms. The unprecedented challenges raised in astronomical imaging, which target extreme image resolution and dynamic range while coping with extreme data volumes and image dimensions, offer a fantastic playground for pushing the frontiers of computational imaging quite significantly.
Kazunori Akiyama: I'm an astrophysicist interested in black holes and heavily involved in the Event Horizon Telescope (EHT) collaboration, the international group of scientists that released the first images of black holes in 2019, where I have spent the last ten years since my PhD. I joined the EHT in 2010, before the Event Horizon Telescope even had its name and when the collaboration was only a few tens of people. I worked on the interferometric imaging algorithms and co-founded the imaging working group of the EHT Collaboration. I'm now a co-coordinator of the Algorithm and Inference working group for the Next Generation Event Horizon Telescope (ngEHT) collaboration, which is super-relevant for this special issue.
Oleg Smirnov: I'm not an astrophysicist at all; I'm more of a mathematics and software guy. My interest is in the entire data processing cycle: how to turn the data into something that scientists can look at. And the particularly challenging thing about radio interferometry is that it takes a very long processing cycle to turn the raw data into a direct image of the sky. It's a lot easier in optical astronomy, where I started with my PhD, also working on algorithms and techniques.
What are the challenges in this field that make this special issue of RASTI so timely?
YW: The new generation of radio telescope arrays, such as the upcoming Square Kilometre Array (SKA) and its existing pathfinders MeerKAT and ASKAP at radio frequencies, but also the Event Horizon Telescope (EHT) and the Very Large Telescope Interferometer (VLTI) at millimetre and optical/near-infrared frequencies, are set to observe the sky in an extreme regime of sensitivity and resolution, revealing complex emission of major scientific interest across frequencies and scales. Many algorithmic and data processing challenges arise in our quest to endow these instruments with their expected acute vision. Today, image reconstruction relies heavily on the CLEAN algorithm, a conceptually simple algorithm that has been refined over decades and has served the field beautifully so far. But in an era of extreme resolution and dynamic range its capabilities are being severely stressed. Add to that a lack of theoretical robustness in the combined process of calibration and imaging, the absence of uncertainty quantification functionality, and the fact that CLEAN often requires significant manual intervention, when algorithm automation is necessary to cope with the extreme data volumes upon us. In this context, radio-interferometric imaging needs to be reinvented, and multiple groups around the world have embarked on this endeavour. The special issue aims to host submissions proposing new algorithms and software, or addressing modern scientific computing challenges (including software development technologies, reproducibility and data challenges) related to image formation with current and future interferometric arrays.
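For readers less familiar with CLEAN, the sketch below conveys its conceptual simplicity using the classic Högbom variant: repeatedly find the brightest pixel of the dirty image, record a fraction of that peak as a point-source component, and subtract a correspondingly scaled and shifted copy of the dirty beam. This is only an illustrative toy (assuming a centred, peak-normalised dirty beam array roughly twice the image size), not any of the production implementations used by observatory software.

```python
import numpy as np

def hogbom_clean(dirty_image, dirty_beam, gain=0.1, n_iter=500, threshold=0.0):
    """Toy Hogbom CLEAN.

    Assumes `dirty_beam` is an odd-sized, centred, peak-normalised array of
    shape (2*ny - 1, 2*nx - 1), so a full beam patch can be cut out around
    any pixel of the (ny, nx) dirty image.
    """
    residual = dirty_image.astype(float)
    model = np.zeros_like(residual)          # accumulated point-source (delta) components
    ny, nx = residual.shape
    cy, cx = dirty_beam.shape[0] // 2, dirty_beam.shape[1] // 2  # beam centre

    for _ in range(n_iter):
        # 1. Find the brightest pixel in the current residual image.
        py, px = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[py, px]
        if peak < threshold:
            break
        # 2. Record a fraction (the loop gain) of the peak as a model component.
        model[py, px] += gain * peak
        # 3. Subtract the scaled dirty beam, centred on that pixel, from the residual.
        beam_patch = dirty_beam[cy - py:cy - py + ny, cx - px:cx - px + nx]
        residual -= gain * peak * beam_patch

    # In practice the model is then convolved with an idealised 'clean' (Gaussian)
    # beam and the final residual added back to form the restored image.
    return model, residual
```

Modern multi-scale and wide-band variants of CLEAN build on this same iterative loop.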
OS: The older telescopes had relatively limited capabilities in terms of sensitivity, the field of view they would give you, and the data rates, so you could do a much simpler job with the data and get most of the potential out of it. Whereas with the new generation of telescopes, the engineers have pushed the capabilities so much that there is just so much more to be seen in the data. So for us, it's a much bigger challenge to exploit all these new capabilities and effectively mine the scientific information out of the data.
When I started as a postdoc, there was a bit of a lull in techniques development; in fact, barely anyone was working in that field. I think this is because a lot of progress had been made in the 1980s on algorithms like selfcal and CLEAN, and packages such as AIPS, so it was natural that in the subsequent decade people concentrated instead on exploiting those new tools and mining their data. But since about the year 2000, all these new telescopes started being designed and built, e.g. LOFAR, MeerKAT, ASKAP, the EHT… Suddenly, there was a lot more interest in new techniques for them. A lot of new methods have been proposed and implemented in the past 20 years, and I think another big problem for us is that we are still trying to digest it all. CLEAN has been around for 40 years; it's well-trusted, people understand its shortcomings, and they know how to use it effectively. With all the new algorithms that we've created, that operational experience is not there yet.
KA: The original EHT images are just two-dimensional, and capture only a blurry ring of light from hot plasma at the edge of the black hole. The ngEHT will have a sharper and more sensitive eye, providing a finer view of the black hole's photon ring as well as of the much fainter and more extended emission from the accretion flow and jet surrounding the black hole. As the EHT's capabilities are extended towards the ngEHT, there is a growing need for dedicated imaging algorithms to produce multi-colour, full-polarisation movies of black holes with a much wider field of view and a few orders of magnitude higher dynamic range. And that's why, even after the first imaging of black holes, imaging techniques are still a central area of active development in this field. This issue comes at the right time, because many experts are exploring new imaging techniques for the ngEHT and we're seeing new techniques popping up almost every month. So it's a very hot area in the field right now.
YW: It is worth mentioning the imbalance between the resources made available for the development of new instruments and the much smaller amounts made available for the research needed to address the algorithmic and software challenges. Jointly approaching the questions of instrument design (data acquisition) and of algorithm and software design (data processing) is paramount, to avoid losing information in the process of image formation.
OS: Anecdotally, when I was entering the field, there was almost a culture of “Let's just build the instrument!” And the software was almost an after-thought, you know, it was assumed that someone, usually a postdoc, would just come along and write it.
KA: Connected interferometers such as ALMA usually have very good calibration. The EHT, on the other hand, is very, very different. Each telescope is located at a different site, with no common sky. We need to solve for the atmospheric effects at each telescope site simultaneously with the imaging, because measurements of these effects are challenging or even impossible at short and sub-millimetre wavelengths, and often much more inaccurate than at other radio wavelengths. In part this is because the angular resolution of arrays such as ALMA is low enough that some of the bright sources on the sky are observed as unresolved point sources, and some sources with known intensities, like planets, are observable; these ‘known’ sources allow accurate calibration of the data. But the EHT has an extremely high angular resolution – a few tens of microarcseconds. We are observing a really, really tiny part of the universe, never seen before and still not well known. We have learned that every single source we have observed has complex morphology evolving dynamically over time. So we don't have any good sources for calibration.
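As a rough illustration of why calibration and imaging are so entangled for the EHT, the visibility measured on the baseline between stations p and q can be written, in a simplified scalar form of the interferometric measurement equation (ignoring polarisation, the primary beam and wide-field effects), as in the sketch below, where I(l, m) is the sky brightness, (u_pq, v_pq) are the baseline coordinates in wavelengths, n_pq is noise, and g_p and g_q are complex station gains that absorb, among other things, the rapidly varying atmospheric phase at each site.

```latex
% Simplified scalar measurement equation for baseline (p, q):
V_{pq} \;=\; g_p\, g_q^{*} \int I(l, m)\, e^{-2\pi i\,(u_{pq} l + v_{pq} m)}\, \mathrm{d}l\, \mathrm{d}m \;+\; n_{pq}
```

With no compact calibrator of known structure available, the gains and the sky image have to be inferred together, which is the joint calibration and imaging problem described above.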
OS: In optical astronomy, it's a lot more direct, because, you know, a telescope, whether it's an amateur telescope in your backyard, or the JWST, pretty much directly takes an image. What happens in the data processing is much more direct, and in many ways, a more intuitive process. For interferometry you have a long processing chain, with many steps and algorithms, and the algorithms tend to have knobs on them and different settings; it's not always obvious how changing a setting will affect the results. It's been quite a challenge, digesting all these new developments, and figuring out the best way to put them to use to make the optimal images, and the best products from these instruments. It's basically about exploiting the full potential of what the engineering has done.
YW: The joint requirements for precision, robustness, and scalability of the next-generation algorithms for image reconstruction in radio interferometry are not met by any existing algorithmic structure. This really calls for the communities of signal processing, applied mathematics, and computer science to push the frontiers of computational imaging. That requires a ‘conversation’ between these communities and the radio astronomy community, which has started to take place already.
OS: In the past, we still had so-called ‘Compleat Radio Astronomers’, someone who could understand the entire signal chain, from the signal to the receiver on the telescope, through to the imaging and down to the astrophysics, with a broad and yet deep enough understanding of all those steps. Now, given how complex the instruments have become, and how complex their engineering has become, it's just impossible for one brain to contain all that information. So there is a lot of specialisation necessary. Some astronomers just want to use the telescope and get an image; they're perfectly happy to treat it as a black box. And then there are the ‘black belt’ users who have an understanding of, perhaps not the entire engineering side, but at least all of the data processing. The black belts are extremely valuable to us because they are the ones who are providing the testing. We are driving the improvements, but they are validating the improvements and they can give the most feedback. But once we get to something like the SKA, the processing almost has to become a black box, because nobody is going to be doing the data reduction on their laptop. By that point we will have to understand our algorithms and be sure they work well enough to say to the end-user astronomer, “go do your astrophysics, because we've taken care of everything else”. That's a big, big challenge.
I think this is where instruments like MeerKAT, LOFAR and ASKAP play such a large role. I think the biggest progress is actually coming from getting the ‘black belt’ users to run those instruments with our software, with our algorithms, and giving their feedback, as a rehearsal for the SKA.

The first EHT array is sparse, with limited angular resolution, so even the finest image in the history of astronomy, of the M87 supermassive black hole in polarised light, is only “a very blurry doughnut”
What about VLBI?
KA: We know we need new algorithms for the ngEHT because the EHT array has evolved. The first full EHT array observation used five geographic sites – pretty sparse coverage to provide the finest image in the history of astronomy! But because that first array is so sparse, with limited angular resolution, we only see a very blurry doughnut. It was the highest angular resolution ever achieved, but that is still not enough to see the substructure of the ring, and it doesn't have a lot of sensitivity to extended structure outside the ring from, for instance, the flow of plasma falling into the black hole or the jet escaping from the black hole's strong gravitational pull. Once we have denser coverage, the issue is that we need a really good imaging algorithm. For the ngEHT it will need to handle the complicated nature of the flow, spanning two or three orders of magnitude in both spatial scale and radio intensity, at multiple radio frequencies. That calls for new, dedicated development of imaging techniques.
Another thing is that a black hole is pretty dynamic. Our first image of M87 looks pretty static, but we have already seen some dynamics within the observation window of a week. The M87 black hole is one of the most massive black holes in the universe, but Sgr A* is just 4 million solar masses, more than a thousand times less. When we see something evolving over a week for M87, it can happen for Sgr A* in seven minutes. The first image of Sgr A* was a static reconstruction of the time-averaged morphology over a night but, you know, we really want to make the movie. That means dedicated algorithm development for the time domain as well.
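That comparison is just a linear scaling with black-hole mass: every characteristic timescale near a black hole scales as GM/c³. Taking the EHT-measured mass of M87* to be roughly 6.5 billion solar masses, the sketch below shows how a week of evolution at M87* maps onto minutes at Sgr A*.

```latex
t_{\mathrm{Sgr\,A^*}} \;\approx\; t_{\mathrm{M87^*}} \times \frac{M_{\mathrm{Sgr\,A^*}}}{M_{\mathrm{M87^*}}}
\;\approx\; 1\ \mathrm{week} \times \frac{4\times10^{6}\,M_\odot}{6.5\times10^{9}\,M_\odot}
\;\approx\; 6\ \mathrm{minutes}
```

This is consistent with the ‘week for M87, minutes for Sgr A*’ comparison above.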
What do you hope this RASTI special issue will achieve?
OS: I think the value of this endeavour is putting the spotlight on the next generation of algorithms. But there's the question not only of developing much better techniques, but also of having them adopted, with the whole scientific process of tailoring, testing and validating them.
YW: Validating those techniques to a level where the community at large feels comfortable using them is an important question, a science question and a question of mutual education between communities – that's also a role of the special issue.
OS: I also think we need to make substantial progress in addressing the reproducibility question. When you have a very long workflow from the data to the result, the issue of reproducibility becomes very important. And in radio astronomy, we have been playing fast and loose with that. A lot of science papers say, “We got the data, and then we followed standard reduction procedures”. And that's all they say, which, in practice, means that some other group with the same data will not necessarily be able to reproduce the same images. With a long and complicated workflow, this just becomes exponentially harder. So I think that that's another challenge for us, to bring a lot more rigour to this process. In a sense, we're making the problem even bigger with our algorithmic work, because we're putting a lot more different options into the tool chain. I think that's another reason why this special issue, which actually stresses the algorithms, the techniques, and the way we deploy them, can be so valuable.
KA: I have been emphasising why EHT imaging needs to be specialised, but of course some of the components are shared with other techniques. For instance, once we have a denser array, it becomes more similar to other interferometers, sharing the common needs of dealing with a wider field of view and higher dynamic range images. I hope that this special issue helps us to see the common problems, and how people working on the different interferometers overcome these challenges. I also hope that this issue will highlight the entire spectrum of radio interferometers and the imagery they produce, what they have in common and where they differ, to give a good overview of what's going on in this field.
RAS Techniques and Instruments (RASTI) welcomes contributions to a special issue devoted to new developments in image reconstruction techniques for aperture synthesis by interferometry in astronomy. The new generation of radio telescope arrays, such as the upcoming Square Kilometre Array (SKA) and its existing pathfinders MeerKAT and ASKAP at radio frequencies, but also the Event Horizon Telescope (EHT) and the Very Large Telescope Interferometer (VLTI) at millimetre and optical/near-infrared frequencies, are set to observe the sky in an extreme regime of sensitivity and resolution, revealing complex emission of major astrophysical interest across frequencies and scales. Many algorithmic and data processing challenges arise in our quest to endow these instruments with their expected acute vision. The special issue is open for submissions proposing new algorithms and software, or addressing modern scientific computing challenges (including software development technologies, reproducibility and data challenges) related to image formation with current and future interferometric arrays.
Submissions will be considered on a rolling basis until 15 October 2023.
Submissions should be made to the RASTI ScholarOne portal (mc.manuscriptcentral.com/rasti) as normal; Author Guidelines are available at the main journal webpage (https://dbpia.nl.go.kr/rasti/pages/general-instructions). Authors should be sure to select the special issue title during the submission process. All papers will go through the usual RASTI review process, except that they will be handled by a special editorial board assembled for this issue. Papers will be published online as they are accepted, and combined into a special volume to be finalised early in 2024.
More information about RAS Techniques and Instruments and the submission process is available at https://dbpia.nl.go.kr/rasti.
For scientific queries, please contact Yves Wiaux at [email protected]. For all other queries, please contact [email protected].
AUTHORS
Sue Bowler is editor of A&G. She is intrigued by the power and beauty of the images that result from these techniques, and hopes for lots more from the next generation of instruments.
Prof. Yves Wiaux ([email protected]) is a theoretical physicist by education. He holds a chair of imaging sciences at the School of Engineering and Physical Sciences of Heriot-Watt University Edinburgh. He specialises in astronomical and biomedical imaging applications, with a particular focus on radio-interferometric imaging in astronomy. He also chairs the Biomedical and Astronomical Signal Processing Frontiers Conference series and is a member of the RASTI Editorial Board.
Dr Kazunori Akiyama is an astrophysicist at MIT Haystack Observatory and has been a member of the EHT Collaboration since 2010. He co-led the EHT imaging working group after the Collaboration was established in 2017, with Michael Johnson until 2020, and Katie Bouman & Jose Gómez since then. He has played roles in observations, imaging techniques and software development for the EHT.
Distinguished Prof. Oleg Smirnov holds the SKA Research Chair in Radio Astronomy Techniques and Technologies (RATT) at Rhodes University, and heads the Radio Astronomy Research Group at the South African Radio Astronomy Observatory (SARAO). He received his PhD in Astronomy & Astrophysics in 1998 from the Institute of Astronomy of the Russian Academy of Sciences.