Satellite Workshops

Background

We invite submissions to a wide variety of Satellite Workshops dedicated to focused or emerging topics in image processing, vision, and imaging not specifically covered in the main conference. More information about each workshop is provided in the list below.

Manuscripts submitted to satellite workshops will undergo a double-anonymous peer review, and accepted papers will be included in the IEEE Xplore Digital Library as part of the ICIP 2026 Satellite Workshop Proceedings. The process is managed by the workshop organizers and follows the conference review standards. Submitted manuscripts must conform to the conference’s style, format, and length requirements (see Author Kit).


Workshop page:
https://mcl.usc.edu/learning-beyond-deep-learning-lbdl/

Organizers:
– C.-C. Jay Kuo, University of Southern California, USA
– Ling Guan, Toronto Metropolitan University (formerly Ryerson University), Canada

Workshop description:

There has been rapid development in artificial intelligence and machine learning technologies over the last decade. At their core lie large amounts of annotated training data and deep learning networks. Although deep learning networks have significantly impacted various application domains, they have several shortcomings. They are mathematically intractable, vulnerable to adversarial attacks, and require substantial training data. Furthermore, their large model sizes make deployment on mobile and edge devices a significant challenge. Developing new learning paradigms beyond deep learning is desirable. Yet, progress in this direction remains slow and sparse, despite advancements in recent years. This workshop invites researchers of common interest to contribute and generate momentum for future breakthroughs. The new learning paradigms will feature one or more of the following characteristics: interpretability, smaller model sizes, lower computational complexity, and high performance. LBDL II is the sequel to the LBDL workshop held at ICIP 2025. Accepted papers will be published in the IEEE Xplore ICIP 2026 Workshop Proceedings.

Workshop page:
https://www.ieeecfm.org/

Organizers:
– Changsheng Gao, Nanyang Technological University, Singapore
– Ying Liu, Santa Clara University, USA
– Heming Sun, Yokohama National University, Japan
– Hyomin Choi, InterDigital, USA
– Dandan Ding, Hangzhou Normal University, China
– Fengqing Maggie Zhu, Purdue University, USA
– Zhan Ma, Nanjing University, China
– Ivan V. Bajić, Simon Fraser University, Canada
– Zhu Li, University of Missouri, USA
– Lu Yu, Zhejiang University, China

Workshop description:

Multimedia signals—images, video, audio, and 3D data—have traditionally been compressed for human perception. With the rise of edge AI, multimodal models, autonomous systems, and advanced wireless networks enabling large-scale machine-to-machine (M2M) communication, machines are now primary consumers of multimedia data.

This shift requires rethinking compression pipelines. Beyond perceptual quality and bitrate, methods must support downstream tasks like detection, segmentation, tracking, recognition, and multimodal reasoning, while meeting constraints on bandwidth, latency, complexity, energy, privacy, and robustness.

This workshop brings together experts in compression, computer vision, multimodal learning, and communications to explore algorithms, representations, systems, and standards for multimedia coding optimized for machine intelligence and joint human–machine use.

Workshop page:
https://dl-hsa.com/

Organizers:
– Emanuela Marasco, Virginia Commonwealth University, USA
– Thirimachos Bourlai, University of Georgia, USA

Workshop description:

This workshop focuses on leveraging deep learning and foundation models to address key challenges such as spectral complexity, high dimensionality, and the preservation of spectral band integrity, the last of which is often overlooked in conventional methods. The workshop aims to foster innovation and collaboration across domains including healthcare, environmental monitoring, agriculture, public safety, and defense. It will feature regular and invited papers, keynote talks from academia and industry, and a panel discussion to promote dialogue and emerging research directions.

Workshop page:
https://iplab.dmi.unict.it/mmforwild

Organizers:
– Sebastiano Battiato, University of Catania, Italy
– Giulia Boato, University of Trento, Italy
– Alessandro Ortis, University of Catania, Italy
– Nasir Memon, New York University, USA

Workshop description:

MMForWILD offers a forum for proposing multimedia forensic solutions that meet the operational needs of forensics and intelligence operators. The workshop is targeted both at researchers working on innovative multimedia technology and at experts developing tools in the field. The goal of the workshop is to attract papers investigating the use of multimedia forensics outside the controlled environment of research laboratories. It intends to offer a venue for theory- and data-driven techniques addressing the trustworthiness of media data and the ability to verify their integrity to prevent harmful misuse, seeking solutions at the intersection of signal processing, deep learning, and multimedia analysis.

Workshop page:
https://sites.google.com/unimib.it/cciw2026/

Organizers:
– Raimondo Schettini, University of Milano-Bicocca, Italy
– Simone Bianco, University of Milano-Bicocca, Italy
– Marco Buzzelli, University of Milano-Bicocca, Italy
– Alain Tremeau, Université Jean-Monnet, France

Workshop description:

The workshop solicits contributions on computational color imaging beyond RGB and emerging sensing modalities, physically based and perceptually motivated color modeling within contemporary image processing and AI pipelines, learning strategies under limited, biased, or synthetic data regimes, perceptually grounded evaluation methodologies and realistic benchmarks, and application-oriented studies in domains strongly represented at ICIP such as computational photography, food analysis, medical imaging, cultural heritage, and extended reality.

Workshop page:
https://sites.google.com/view/mlow-icip26-satellite-workshop/home

Organizers:
– Jihyong Oh, Chung-Ang University, Republic of Korea
– Victoria Vesna, UCLA, USA
– Juan L. Gonzalez Bello, Flawless AI, USA
– JungHyuk Im, Innerverz, Republic of Korea
– Zeyu Xiao, National University of Singapore, Singapore
– Marcos V. Conde, University of Würzburg, Germany
– Ziyu Wan, Microsoft AI, USA
– Hyeokjun Kweon, Chung-Ang University, Republic of Korea
– Rebecca Ruige Xu, Syracuse University, USA
– Rui Zhao, Nanyang Technological University, Singapore
– Hak Gu Kim, Chung-Ang University, Republic of Korea
– Ryszard W. Kluszczyński, University of Lodz, Poland
– Haley Marks, UCLA, USA

Workshop description:

MLOW focuses on interactive metaverse systems that can perceive, communicate, and act within 3D and 4D environments. While ICIP traditionally emphasizes image processing foundations, emerging immersive systems now require deeper integration of multimodal perception, neural rendering, generative modeling, and human-centered design. This workshop addresses the intersection between technical foundations and cultural, affective, and creative dimensions of interactive AI. On the technical side, topics include multimodal VLM and LLM grounding, 3D and 4D perception, dynamic scene understanding, neural rendering, and embodied AI operating across visual and linguistic modalities. On the human dimension side, MLOW explores how AI systems engage with cultural context, diversity, affect, and artistic expression in immersive environments. The workshop complements ICIP 2026 by extending image processing research toward interactive and socially grounded metaverse applications. It stimulates new momentum by connecting image processing, generative modeling, XR systems, and interdisciplinary collaborations with art and human-computer interaction communities. MLOW also promotes synergies between IEEE Signal Processing Society and related societies working on multimedia, robotics, and computational creativity. By bringing together technical researchers, artists, and system builders, the workshop aims to establish a new research direction in culturally-aware interactive metaverse AI.

Workshop page:
https://mmlab-cv.github.io/4dh-workshop/

Organizers:
– Giulia Martinelli, University of Trento, Italy
– Nicola Garau, University of Trento, Italy
– Nicola Conci, University of Trento, Italy

Workshop description:

4D Humans refers to the reconstruction of temporally consistent 3D human models, the 4th dimension being time. Starting from monocular or multi-view image sequences, the objective is to recover human shape, pose, motion, and appearance in a coherent representation.

4D human reconstruction has the potential to change how we represent ourselves and interact in digital environments. From realistic avatar creation to motion analysis in fields like healthcare, sports, and human-robot interaction, and on to more perceptive embodied AI systems, the ability to recover dynamic humans directly from video opens the door to more natural and human-centered technologies.

This workshop aims to expand awareness of 4D humans research and to build a community within ICIP, with a strong emphasis on image- and video-based methods, identifying open challenges and sharing emerging solutions.

Workshop page:
https://sites.google.com/view/scifor-2026/home

Organizers:
– João Phillipe Cardenuto, Universidade Estadual de Campinas, Brazil
– Daniel Moreira, Loyola University Chicago, USA
– Anderson Rocha, Universidade Estadual de Campinas, Brazil

Workshop description:

The number of scientific paper retractions due to forged images has quadrupled over the past two decades, mainly due to image duplication and manipulation. Image duplication, particularly copy–move forgery, remains a primary problem. Yet, detection in biomedical figures remains largely dependent on manual screening, as automatic general-purpose forensic tools often fail in the complex domain of biomedical imaging.

To catalyze solutions to the problem, we organized the RECOD.ai/LUC Scientific Image Forgery Detection Competition, sponsored by the IEEE Signal Processing Society and Kaggle research programs. Featuring a novel benchmark dataset derived from over 2,000 retracted articles, the competition attracted more than 1,500 teams worldwide, yielding a diverse set of approaches and practical insights into what works and fails when cutting-edge AI solutions are deployed to scientific images.

This workshop will present the findings from this competition, detailing the dataset, evaluation protocols, and the practical lessons learned when deploying AI for research integrity. Winning teams are invited to present their solutions and design choices, providing a snapshot of the current state of the art. The workshop will also feature contributed papers on image forensics applied to scientific images, and it will conclude with a panel of experts from image forensics and biomedical imaging to discuss emerging threats, key challenges, and open questions for the community.

Workshop page:
https://zoi.utia.cas.cz/index.php/icip2026

Organizers:
– Barbara Zitová, Czech Academy of Sciences, Czechia
– Matthew C. Stamm, Drexel University, USA
– Babak Mahdian, Czech Academy of Sciences, Czechia
– Adam Novozámský, Czech Academy of Sciences, Czechia

Workshop description:

This workshop addresses the transition from binary classification to decision-grade forensic architectures. This direction is supported by emerging research suggesting that forensic models, despite strong benchmarks, may degrade under domain shift and overfit to generator-specific artifacts. The program targets five research pillars: Quality-to-Confidence Modeling (with dynamic evidence weighting and dataset shift), Calibrated Uncertainty (utilizing conformal prediction and proper scoring rules), Multi-Source Evidence Fusion (using evidence graphs), Operational Robustness, and Reliability-Centric Open Evaluation Challenge. The workshop will host an open evaluation challenge on synthetic image detection and localization of manipulated content, sponsored by ULRI’s Digital Safety Research Institute, using sequestered data with unseen generators and manipulation types to mirror real media-authentication conditions. It will provide a forum for high-performing teams to present approaches.

Workshop page:
https://qciworkshop2026icip.lovable.app/

Organizers:
– Sayantan Dutta, GE HealthCare, India
– Adrian Basarab, Université de Lyon, France
– Denis Kouamé, Université de Toulouse, France


Workshop page:
https://events.aimicroscopy.org/icip-2026/

Organizers:
– Doğa Gürsoy, Argonne National Laboratory, USA
– Jizhou Li, The Chinese University of Hong Kong, Hong Kong

Workshop description:

Computational microscopy is transforming the quantitative characterization of advanced materials and biological systems, shifting the primary bottleneck from hardware limitations to computational challenges. This workshop will highlight emerging computational methodologies for next-generation X-ray and electron microscopy, with a strong emphasis on scalable and robust algorithms that integrate seamlessly into scientific and industrial workflows. By identifying open challenges and incorporating industry perspectives, the workshop seeks to advance quantitative, high-dimensional imaging techniques that directly drive technological innovation and scientific discovery.

Workshop page:
https://sites.google.com/view/comic-at-ieee-icip-2026/

Organizers:
– Erdem Sahin, Tampere University, Finland
– Jani Mäkinen, Tampere University, Finland
– Ugur Akpinar, Tampere University, Finland

Abstract:

Optical microscopy enables noninvasive acquisition of visual information across spatial, spectral, and temporal dimensions, and its long history is marked by steady advances that expand what can be captured. Computational microscopy continues this progress by merging hardware design with computational algorithms for image reconstruction, enhancement, and analysis, increasing information content beyond traditional methods and enabling functionalities like snapshot 3D imaging. Machine learning and AI further accelerate development through data-driven optimization, though challenges remain, especially in the life sciences where high-quality annotated data are difficult to obtain and standardize. Future advances will rely on multidisciplinary research combining accurate image formation models, precise optics/photonics fabrication, and advanced AI. This workshop brings together experts across these fields to foster collaboration and inspire new innovations in computational microscopy.

Workshop page:
https://hydroimaging.github.io/

Organizers:
– Mourad Oussalah, University of Oulu, Finland
– Olof Mogren, RISE Research Institutes of Sweden, Sweden
– Jukka Heikkonen, University of Turku, Finland
– Getnet Demil, University of Oulu, Finland
– Farhan Humayun, University of Turku, Finland

Workshop description:

Earth Observation is a rapidly growing research field that brings together computer vision, machine learning, and signal/image processing to provide valuable information about climate change and the water cycle and to predict environmental attributes (e.g., water quality, snow depth, vegetation cover, pollution).

Recent advances in satellite observation analysis, in reconciling discrepancies among on-site sensory modalities and land-surface modelling, and the availability of large-scale datasets (e.g., Copernicus) together with a growing number of pretrained models offer new opportunities for scalable, computationally efficient, and observation-driven prediction suitable for operational and/or pre-operational use.

This workshop aims to bring together experts in machine vision, satellite observation, machine learning, remote sensing, hydrology, and land-surface modelling to report the latest findings on the use of such technologies for land-surface processes, water management, and climate monitoring at varying spatial, temporal, and spectral resolutions. This implicitly fosters collaboration between the computer vision, remote sensing, and environmental monitoring communities, enhancing new computer-vision-based environmental research, and builds a bridge between several IEEE societies (e.g., GRSS, SPS, SSIT, CIS).

Workshop page:
https://events.tuni.fi/planetarymissionsimaging/

Organizers:
– Sampsa Pursiainen, Tampere University, Finland
– Pamela Such, SETI Institute, USA
– Christelle Eyraud, Aix-Marseille University, France
– Tomas Kohout, University of Turku, Finland
– Topi Pajala, Tampere University, Finland
– Ozgur Karatekin, Royal Observatory of Belgium, Belgium
– Alexandra Koulouri, Tampere University, Finland
– Camilo Andres Reyes, SpaceIn, Colombia

Workshop description:

This one-day satellite workshop focuses on advanced image processing methods for planetary exploration and small-body missions. Modern space missions rely on diverse imaging systems such as optical cameras, hyperspectral sensors, radar and ground-penetrating radar, LiDAR, and in-situ microscopes. These instruments operate in extreme environments characterized by low illumination, strong noise, dust contamination, radiation, limited energy resources, and restricted data transmission. The workshop aims to bring together researchers addressing these challenges. Particular attention is given to algorithmic approaches for low signal-to-noise imaging, radar and subsurface reconstruction, multispectral and hyperspectral analysis, three-dimensional shape reconstruction, multimodal data fusion, physics-informed learning, and onboard processing methods.

Workshop page:
https://sites.google.com/view/ai-for-scientific-imaging

Organizers:
– Samuel Pinilla, Diamond Light Source, UK
– Ahmet Mete Elbir, Istinye University, Turkey
– Gianfelice Cinque, Diamond Light Source, UK

Workshop description:

Scientific imaging constitutes an important bedrock of modern research, providing essential capability to visualize phenomena across scales that span orders of magnitude, from the intricate diffraction patterns of a crystal in crystallography to the coherent synthetic aperture required for high-resolution inverse synthetic aperture radar 3D reconstruction. Its importance derives from the unique ability to translate physical interactions into quantifiable, high-dimensional data that drives fundamental breakthroughs. The application domains are remarkably diverse yet united by common ill-posed inverse problems: astronomical imaging pushes against the diffraction limit; infrared spectroscopy seeks to unmix complex chemical signatures; and synchrotron hyperspectral imaging generates massive, multimodal datasets. Emerging frontiers such as diffractive optical imaging exploit wavefront shaping to see through scattering media, while multi-agent learning for radar enables distributed, cooperative sensing networks that behave as a single adaptive system. Concurrently, 5G-based detection and imaging leverage ubiquitous communications infrastructure for passive sensing, and 3D inverse SAR imaging reconstructs volumetric target information from limited-aspect reflectivity data. In this regard, AI provides data-driven representations that serve as powerful regularizers, effectively recovering lost phase information, denoising low-SNR spectroscopic signatures, and enabling real-time fusion across heterogeneous multimodal measurements. Throughout this workshop, we will examine the theoretical foundations and algorithmic innovations driving AI-enabled Scientific Imaging and Synthetic Apertures, with concrete real-world use cases.
Our objective is to forge connections between historically siloed imaging communities and synergies within IEEE societies, revealing how shared mathematical frameworks and learning-based approaches can bridge gaps between Synthetic Aperture and Scientific Imaging.

Workshop page:
https://alumni.media.mit.edu/~ayush/ToF2026/

Organizers:
– Miguel Heredia Conde, University of Wuppertal, Germany
– Peter Vouras, U.S. Department of Defense, USA

Workshop description:

The ICIP-2026 workshop, “Time-Resolved Computational Imaging,” highlights the use of high-resolution measurements of time delay to produce images of exceptional quality and information content. For example, these images may provide depth and three-dimensional views of objects in a factory that a robot can interpret. In most imaging disciplines, the exposure times required by the hardware are often beyond the scale of time variations in the scene. Hence, the delay dimension is often ignored and variability over time is regarded as a distortion. Time-resolved imaging sensors, which offer not only 2D spatial (angular) resolution but also high resolution over the temporal dimension, are attractive for multiple scientific disciplines.

Workshop page:
https://sites.google.com/view/icip2026-cvuia/home

Organizers:
– Alexandre Bernardino, University of Lisbon, Portugal
– Athira Nambiar, SRM Institute of Science and Technology, India
– Nuno Gracias, University of Girona, Spain

Workshop description:

Aquatic ecosystems are vital for climate regulation, biodiversity, and global biogeochemical cycles, yet many underwater regions remain poorly explored due to the technical complexity of long-term observation. Advances in sensing technologies, robotic platforms, and artificial intelligence now enable large-scale collection of underwater imagery, creating new opportunities for automated environmental monitoring and scientific discovery. However, reliable analysis remains challenging because of light attenuation, scattering, turbidity, non-uniform illumination, and limited positioning accuracy. Addressing these constraints requires interdisciplinary collaboration across computer vision, robotics, marine science, and environmental engineering to develop robust algorithms and imaging methods specifically designed for underwater environments. This workshop invites contributions focused on computational approaches for underwater image understanding. Topics of interest include image restoration and enhancement in degraded environments; detection, segmentation, and tracking of marine organisms; visual navigation and mapping; real-time processing for robot navigation; multimodal, spectral, and acoustic imaging techniques; ecological monitoring and behavioral analysis; data-efficient and physics-informed learning methods; synthetic data generation and benchmarking; digital twins of underwater ecosystems; and calibration of underwater cameras and sensing systems.

Workshop page:
https://deepastronomy.net

Organizers:
– Lu Fang, Tsinghua University, China
– Sergio Javier González Manrique, Instituto de Astrofísica de Canarias & Universidad de La Laguna, Spain
– Esteban Vera, Pontificia Universidad Católica de Valparaíso, Chile
– Liangcai Cao, Tsinghua University, China
– Chuan Li, Nanjing University, China
– Shangbin Yang, Chinese Academy of Sciences, China
– Xiaoli Yan, Yunnan Astronomical Observatory, China
– Yu Huang, Purple Mountain Observatory, China

Workshop description:

Ground-based astronomical imaging generates high-dimensional data under extreme physical constraints, including atmospheric variability, low photon counts, and instrumental distortions. These conditions pose challenging inverse problems in denoising, super-resolution, deconvolution, spectroscopic reconstruction, and multimodal data fusion. Recent advances in artificial intelligence (AI) and learning-based computational imaging provide powerful tools for addressing these challenges. The “DeepAstronomy Workshop” will explore AI-driven methods for astronomical image restoration, reconstruction, and scientific inference, with emphasis on physics-informed learning, generative modeling, scalable inverse algorithms, and smart imaging paradigms. Aligned with ICIP’s core themes in image processing and learning-based vision, the workshop aims to connect the signal processing, machine learning, and astrophysics communities and to position astronomy as a demanding real-world testbed for next-generation AI imaging systems.

Workshop page:
https://arti-icip.github.io/

Organizers:
– Pai Chet Ng, Singapore Institute of Technology, Singapore
– Guang Hua, Singapore Institute of Technology, Singapore
– Fani Deligianni, University of Glasgow, United Kingdom
– Konstantinos N. Plataniotis, University of Toronto, Canada

Abstract:

Recent breakthroughs in agentic AI have transitioned computer vision from static, feed-forward perception to dynamic systems capable of reasoning, planning, and iterative verification.

Concurrently, the proliferation of synthetic media, adversarial attacks, and sophisticated data degradation has made reliability and authenticity central challenges for the signal processing community. The Agentic Reasoning for Trustworthy Imagery (ARTI) workshop introduces a novel paradigm: moving beyond passive image processing toward “active” systems that reason about visual uncertainty, provenance, and manipulation.

We invite submissions on a wide range of topics related to agentic reasoning and trustworthy imagery, including but not limited to:

✓ Reasoning-guided restoration
✓ Watermark-aware generation
✓ Multi-agent visual forensics
✓ Adversarial-aware perception
✓ Iterative visual verification
✓ Adaptive imaging pipelines
✓ Multimodal trust gap solutions
✓ Self-correcting vision models
✓ Generative AI for visual reasoning
✓ Robustness in human-AI collaborative vision
✓ Explainable agentic vision systems
✓ Privacy-preserving visual reasoning

Workshop page:
https://sites.google.com/view/icip-3dvpc

Organizers:
– Zhu Li, University of Missouri, USA
– Li Li, University of Science and Technology of China, China
– Chuanmin Jia, Peking University, China
– Anique Akhtar, Qualcomm, USA

Workshop description:

Volumetric video is transforming immersive media by enabling full three-dimensional scene capture, representation, and interaction. Emerging formats—including neural radiance fields, 3D Gaussian splatting, point clouds, and mesh-based models—are driving new applications in virtual and augmented reality, telepresence, immersive communication, and interactive storytelling. However, the widespread deployment of volumetric video remains constrained by fundamental challenges across acquisition, representation, compression, transmission, and quality evaluation. This workshop aims to bring together researchers working on next-generation volumetric visual data processing and delivery. Core themes include neural and hybrid 3D representations, scalable compression, real-time streaming systems, perceptual quality assessment, and end-to-end immersive media pipelines. The workshop will emphasize both algorithmic advances and system-level integration, bridging computer vision, graphics, signal processing, and communication networks. The proposed topic merits a dedicated forum due to the rapid emergence of neural scene representations and their profound implications for multimedia systems, which are not yet fully addressed in traditional image and video processing tracks. The workshop will complement the ICIP 2026 technical program by fostering interdisciplinary dialogue and highlighting new research directions at the intersection of visual representation learning and multimedia delivery. By connecting academic researchers, industry practitioners, and standards communities, the workshop aims to stimulate collaboration, accelerate practical deployment, and create momentum toward scalable, high-quality immersive media technologies across multiple IEEE communities, including signal processing, communications, and visualization.

Workshop page:
https://voxellab.pl/mpastive/

Organizers:
– Tatjana Pladere, University of Latvia, Latvia
– Dorota Kamińska, Lodz University of Technology, Poland
– Grzegorz Zwoliński, Lodz University of Technology, Poland

Workshop description:

Virtual environments can serve as platforms for training spatial navigation and complex skills in healthcare, education, rehabilitation, and workforce development. The effectiveness of these platforms depends on advances in multimodal signal processing that enable accurate scene understanding, real-time interaction, adaptive feedback, and perceptually coherent simulation. Modern virtual, augmented, and mixed reality systems rely on visual sensing, 3D reconstruction, motion tracking, gaze estimation, and multimodal sensor fusion to create immersive and responsive training experiences. Progress in spatial signal representation, learning-based perception, and real-time processing architectures is essential for improving realism, stability, and personalization in these environments. This workshop aims to bring together researchers and practitioners working on signal processing, computer vision, and multimodal learning methods that support interaction and skill acquisition in immersive systems.

Workshop page:
https://kuvaspace.com/en/resources/ieee-icip-2026-kuva-space-workshop

Organizers:
– Arthur Vandenhoeke, Kuva Space, Finland
– Olli Eloranta, Kuva Space, Finland

Workshop description:

The next decade of Earth Observation (EO) will be defined by AI-native, spaceborne intelligence, delivering automated analytical products in near real time. This workshop focuses on advances in image processing, deep learning, and space systems enabling rapid-response EO through onboard data interpretation.

Emerging platforms with onboard GPUs, VIS–SWIR hyperspectral sensors, and inter-satellite links process massive data volumes immediately, transmitting insights instead of raw imagery and reducing latency from hours to seconds.

We invite contributions on spaceborne AI and real-time image processing, including hyperspectral compression, restoration, super-resolution, atmospheric correction, segmentation, anomaly detection, multi-sensor fusion (optical, RF, AIS), and ultra-low-latency inference. Applications in event detection, maritime security, wildfire/disaster response, and food security are especially encouraged.

Complementing ICIP 2026, the workshop addresses real-time AI under strict compute/power constraints, bridging imaging, machine learning, signal processing, aerospace engineering, and responsible AI. It emphasizes on-orbit adaptive compression and multi-sensor harmonization to accelerate timely, actionable EO intelligence for food security, wildfire monitoring, maritime safety, and critical infrastructure protection, transforming EO into active, rapid-response Earth intelligence.


Important Dates

  • Satellite Workshop paper submission due date: May 13, 2026
  • Satellite Workshop paper acceptance notification: June 10, 2026
  • Satellite Workshop camera-ready due date: July 1, 2026
  • Satellite Workshop author registration due date: July 16, 2026

Inquiries
Send specific workshop inquiries via e-mail to the workshop organizers. Their contacts are available on the corresponding workshop webpages.

Send general workshop inquiries via e-mail to [email protected]


Submit Workshop Paper