Computational imaging problems are traditionally formulated as ill-posed inverse problems, where prior knowledge is essential for recovering high-quality images from limited or corrupted measurements. Classical approaches rely on explicitly defined priors, leading to optimization problems that are mathematically principled but often limited in their ability to capture the complexity of real-world data.
In this talk, I will argue that recent advances in generative modeling are driving a fundamental shift: from explicit priors to learned restoration operators that implicitly encode data distributions. Rather than specifying a prior as a function, we now access it through learned models that can be queried but not explicitly written down. This perspective reframes inverse problems as the design of iterative algorithms that interact with these operators.
I will present an optimization-centric view that connects classical proximal methods, plug-and-play algorithms, and modern generative models within this operator-centric framework. I will highlight three key ideas: (i) general restoration operators implicitly define priors, (ii) stochasticity can be used as a principled tool to improve optimization behavior in highly ill-posed problems, and (iii) priors can be learned directly from measurements without access to clean training data. The talk will cover both the theoretical foundations of these ideas and their applications in biomedical image reconstruction.
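The plug-and-play idea mentioned above can be made concrete in a few lines: take a proximal gradient iteration for the data-fidelity term and replace the proximal step with a generic restoration operator. Below is a minimal sketch; the soft-thresholding denoiser stands in for a learned restoration operator, and the forward model `A`, step size, and toy data are all illustrative assumptions, not the method from the talk.

```python
import numpy as np

def pnp_pgd(y, A, denoise, step=1.0, iters=200):
    """Plug-and-play proximal gradient descent for min_x ||Ax - y||^2,
    with the prior supplied implicitly by a restoration operator."""
    x = A.T @ y  # simple initialization: adjoint applied to the measurements
    for _ in range(iters):
        grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
        x = denoise(x - step * grad)   # restoration operator replaces the proximal map
    return x

# Toy problem: recover a sparse signal from noisy Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)) / np.sqrt(40)
x_true = np.zeros(20)
x_true[5], x_true[12] = 1.0, -0.5
y = A @ x_true + 0.01 * rng.standard_normal(40)

# Soft thresholding as a stand-in denoiser (a trained network would go here).
denoise = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.01, 0.0)
x_hat = pnp_pgd(y, A, denoise, step=0.5)
print(np.linalg.norm(x_hat - x_true))  # much smaller than the initialization error
```

With this particular denoiser the iteration reduces to ISTA for a LASSO problem, which is exactly the sense in which a restoration operator implicitly defines a prior.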
Reconstructing physical phenomena from indirect observations lies at the heart of scientific measurement and discovery, and is a unifying challenge across medical imaging, geophysical exploration, remote sensing, astronomy, and nondestructive testing. Classical approaches to such inverse problems have drawn on optimization, signal processing, and careful exploitation of physical forward models, yielding principled methods with well-understood theoretical properties. Machine learning offers transformative new capabilities in this space, yet off-the-shelf data-driven approaches fail to leverage our collective, if partial, understanding of the underlying physics. Such approaches can be brittle, data-hungry, and difficult to interpret.
In this talk, we explore how physical structure can be systematically integrated into learned reconstruction pipelines to yield methods that are faster, more accurate, and more data-efficient than either purely classical or purely data-driven alternatives. We develop these ideas through a progression of linear and nonlinear inverse problems, using nonlinear inverse scattering — where one seeks to recover an inhomogeneous medium from scattered acoustic or electromagnetic wave measurements — as a rich example that illustrates the challenges of nonlinearity, non-locality, and ill-posedness at scale. We show how classical algorithmic insights, such as frequency-continuation strategies and recursive linearization, can inspire neural network architectures and training procedures that respect the structure of the problem. Looking ahead, we identify key open problems and opportunities for the imaging community: the need for uncertainty quantification and robustness to distribution shift, theoretical foundations for learned solvers, and scalability to high-fidelity forward models.
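Frequency continuation can be illustrated on a deliberately simple scalar model (a stand-in for intuition, not an actual wave-scattering solver): the misfit at a single frequency `omega` becomes highly oscillatory as `omega` grows, so a direct high-frequency solve stalls in a spurious local minimum, while sweeping from low to high frequency with warm starts tracks the true solution. All quantities below are illustrative assumptions.

```python
import numpy as np

def solve_at_frequency(omega, y, x0, iters=500):
    """Gradient descent on the single-frequency misfit (sin(omega*x) - y)^2.
    The step size shrinks with omega, matching the O(omega^2) curvature."""
    x, step = x0, 0.5 / omega**2
    for _ in range(iters):
        r = np.sin(omega * x) - y
        x -= step * 2.0 * r * omega * np.cos(omega * x)
    return x

def frequency_continuation(x_true, omegas, x0=0.0):
    """Sweep frequencies low to high, warm-starting each solve with the
    previous estimate, in the spirit of classical continuation strategies."""
    x = x0
    for omega in omegas:
        y = np.sin(omega * x_true)        # synthetic measurement at this frequency
        x = solve_at_frequency(omega, y, x)
    return x

x_true = 1.3
x_cont = frequency_continuation(x_true, omegas=[0.5, 1.0, 2.0, 4.0, 8.0])
x_direct = solve_at_frequency(8.0, np.sin(8.0 * x_true), x0=0.0)
print(x_cont, x_direct)  # the continuation estimate lands near 1.3; the direct solve does not
```

The low-frequency misfit has a wide basin of attraction that places the iterate inside the narrow basin of the next frequency, which is the mechanism recursive-linearization-style training schedules aim to mimic.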
Score diffusions implemented with deep networks can synthesize high-quality images and videos and simulate complex physics. They learn and sample high-dimensional probability distributions. Are they memorizing or generalizing? How can they circumvent the curse of dimensionality? What are the underlying structures that they learn? We demonstrate that they do generalize, and that multiscale organization plays a central role. We show that deep network estimates involve sparse representations, with harmonic-analysis transforms adapted to the image geometry.
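In its simplest form, sampling from a score-based model amounts to Langevin dynamics driven by the score of the target distribution. The sketch below uses the analytic score of a one-dimensional Gaussian as a stand-in for a trained network; the mean, variance, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Unadjusted Langevin dynamics targeting N(mu, sigma^2). For this Gaussian,
# score(x) = (mu - x) / sigma^2; a trained network would replace this function.
rng = np.random.default_rng(0)
mu, sigma = 3.0, 0.5
score = lambda x: (mu - x) / sigma**2

x = rng.standard_normal(10_000)   # initialize 10,000 chains from a standard normal
eps = 0.01                         # step size
for _ in range(2_000):
    x += eps * score(x) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

print(x.mean(), x.std())  # close to mu = 3.0 and sigma = 0.5
```

The same recursion, with a time-dependent learned score, underlies the samplers whose generalization behavior the talk examines.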