Expert-to-Non-Expert (ETON) Talks

Background

The Expert-to-Non-Expert (ETON) Talk series consists of lectures that offer an overview of significant advancements and emerging topics in signal processing, and is designed to bridge knowledge gaps for both students and industry professionals. The lectures are delivered by the original inventors or by leading experts in the field, providing valuable insights and fostering a deeper understanding of cutting-edge developments.


Computational Light-Field Microscopy for Neuroscience

Presenter:
Pier Luigi Dragotti, EEE Department, Imperial College London

Abstract:
Understanding how networks of neurons process information is one of the key challenges in modern neuroscience. A necessary step toward this goal is the ability to observe the dynamics of large populations of neurons over a large area of the brain. Light-field microscopy (LFM), a scanless microscopy technique, is a particularly attractive candidate for high-speed 3D imaging: it captures volumetric information in a single snapshot, allowing volumetric imaging at video frame rates. In this talk, we review fundamental aspects of LFM and then present computational methods, based on generalized sampling theory and on physics-inspired deep learning, for neuron localization and activity estimation. We also show how the unfolding technique, an approach that embeds priors and models in the neural network architecture, can be successfully employed in this context. We conclude by outlining opportunities for the computational imaging community to have an impact in this emerging research field.
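For readers unfamiliar with unfolding (also called algorithm unrolling), the core idea is to take a fixed number of iterations of a classical optimization algorithm and treat each iteration as one layer of a network, so that model-based structure is built into the architecture. Below is a minimal NumPy sketch of unrolled ISTA for sparse recovery, the textbook example behind learned variants such as LISTA; it is an illustration of the general idea, not the specific method presented in the talk, and all names (`unrolled_ista`, `soft_threshold`, `lam`) are our own.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink values toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(y, A, n_layers=100, lam=0.05):
    """Run a fixed number of ISTA iterations, viewed as network layers.

    In a learned unfolding network, W1, W2 and the threshold would be
    trainable per layer; here they are fixed by the model A, which is
    what makes the architecture "physics-" or model-inspired.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    W1 = A.T / L                             # input (analysis) weight
    W2 = np.eye(A.shape[1]) - A.T @ A / L    # recurrent (update) weight
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                # each iteration = one layer
        x = soft_threshold(W2 @ x + W1 @ y, lam / L)
    return x

# Toy usage: recover a sparse vector from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = unrolled_ista(y, A)
```

Replacing the fixed `W1`, `W2`, and threshold with learned per-layer parameters, trained end to end, yields exactly the kind of prior-embedding network the abstract refers to.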


Can I Trust the AI “I” Want to Learn How to Build? A Non-Expert’s Path to Responsible Image Processing Systems

Presenter:
Shreya Verma, Technical Lead at Boeing, Seattle, Washington, United States

Abstract:
Image processing systems increasingly influence real-world decisions, from medical imaging and aviation vision systems to large-scale visual analytics and generative models. As AI tools become more accessible, many practitioners can now build image-based models before fully understanding when and whether those models should be trusted. This gap between technical capability and responsible use is especially critical in high-stakes domains.

This talk addresses a central question for emerging and experienced practitioners alike: What does it mean to trust an image-based AI system? Designed for a non-expert to expert audience, the session reframes trust as the outcome of concrete choices across the image processing pipeline, including data curation, labeling practices, evaluation metrics, and deployment context.

Using intuitive examples from healthcare and aviation, the talk illustrates how common decisions, such as optimizing for accuracy alone, training on narrow visual distributions, or deploying opaque models, can introduce bias, brittleness, and hidden risk. Rather than focusing on algorithms or mathematical detail, the session provides a practical mental model for progressing from “I can train an image model” to “I understand its limitations, uncertainty, and real-world impact.”

Attendees will leave with a structured framework for evaluating image processing systems not just by performance, but by their suitability, reliability, and accountability in real-world applications.