The ICIP 2026 program will include the following Grand Challenges:
Organized by:
Deepayan Bhowmik, Newcastle University, UK
Touradj Ebrahimi, EPFL, Switzerland
Sabrina Caldwell, University of New South Wales, Australia and Australian National University
Frederik Temmermans, Vrije Universiteit Brussel & imec, Belgium
Grand Challenge website: https://jpeg-trust-community.github.io/watermarking/benchmark/index.html
Proposal submission platform: Details are provided through the Grand Challenge website.
Short description: Digital watermarking, in use for several decades, has been increasingly adopted as a method for embedding information directly into media assets in a way that can be both imperceptible and robust. This technique establishes a link between the metadata and the content, one that is challenging to disrupt without compromising the intended usage of the media asset itself. Since the rapid rise of generative AI, watermarking has gained further popularity, both within industry and among policymakers, as a way to signal whether a media asset is AI-generated or AI-manipulated. Such watermarking is equally beneficial for media assets created outside the context of AI (e.g., photographs, edited images).
This grand challenge aims to assess watermarking performance (e.g., embedding distortion and robustness against attacks) along various evaluation criteria set out by the JPEG Trust Part 3: Media Asset Watermarking initiative. JPEG Trust (ISO/IEC 21617) is an international standardisation effort that provides a framework for establishing trust in media. This framework includes aspects of authenticity, provenance, attribution, intellectual property rights, and integrity of the media assets throughout their life cycle.
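The evaluation criteria above weigh embedding distortion against payload recovery. As a minimal sketch of how those two quantities might be measured, the LSB scheme below is purely illustrative and is not part of JPEG Trust or the challenge's evaluation protocol:

```python
import numpy as np

def embed_lsb(image, bits):
    """Embed a bit string into the least-significant bits of the first pixels."""
    flat = image.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Recover the embedded bit string."""
    return image.flatten()[:n_bits] & 1

def psnr(original, watermarked):
    """Embedding distortion: higher PSNR means less visible change."""
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255**2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # synthetic cover image
payload = rng.integers(0, 2, 128, dtype=np.uint8)        # 128-bit watermark

marked = embed_lsb(cover, payload)
recovered = extract_lsb(marked, len(payload))
print(psnr(cover, marked))            # embedding distortion (high = imperceptible)
print(np.mean(recovered == payload))  # bit accuracy without attack: 1.0
```

A real benchmark would additionally apply attacks (compression, cropping, noise) between embedding and extraction and report bit accuracy under each.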
Organized by:
Álvaro García Martín, VPULab, Escuela Politécnica Superior, Universidad Autónoma de Madrid
José M. Martínez, VPULab, Escuela Politécnica Superior, Universidad Autónoma de Madrid
Paula Moral de Eusebio, VPULab, Escuela Politécnica Superior, Universidad Autónoma de Madrid
Grand Challenge website: http://www-vpu.eps.uam.es/challenges/UrbanReIDChallenge2026/
Proposal submission platform: https://www.kaggle.com/competitions/urban-elements-re-id-challenge-2026/. For inquiries, please contact: [email protected]
Short description: Building on the foundations of our previous work, the 2026 edition of the Urban Elements ReID Challenge addresses the growing need for intelligent, automated infrastructure management. While traditional Re-Identification has focused on humans and vehicles, our mission is to decode the complex visual identity of the objects that actually define our cities. This year, we are introducing a significant layer of difficulty: Traffic Sign Re-Identification. Unlike larger urban assets, traffic signs demand a much more granular analysis, challenging algorithms to distinguish between objects that are designed to look identical. The previous edition of the challenge included a dataset with objects such as trash bins, containers, and pedestrian crossings, consisting of around 5,000 images and 397 different identities. This new edition adds the traffic sign class, with around 10,000 images and 1,000 identities.
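Re-identification is typically cast as embedding retrieval: each detected object is mapped to a descriptor, and a query is matched against a gallery by similarity ranking. The sketch below uses synthetic descriptors; the 128-dimensional embeddings and cosine metric are illustrative assumptions, not a pipeline prescribed by the challenge:

```python
import numpy as np

def rank_gallery(query, gallery):
    """Rank gallery embeddings by cosine similarity to the query (best first)."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    return np.argsort(-sims)

rng = np.random.default_rng(42)
gallery = rng.normal(size=(1000, 128))             # 1,000 identities, 128-d descriptors
query = gallery[7] + 0.05 * rng.normal(size=128)   # noisy second view of identity 7

ranking = rank_gallery(query, gallery)
print(ranking[0])  # → 7 (correct rank-1 match)
```

Challenge metrics such as rank-1 accuracy and mAP are computed from exactly this kind of ranked list, aggregated over all queries.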
Organized by:
Jakub Nalepa, Silesian University of Technology, KP Labs, Poland
Krzysztof Kotowski, KP Labs, Poland
Bartosz Grabowski, KP Labs, Poland
Panče Panov, Bias Variance Labs, Slovenia
Tadej Tomanič, Bias Variance Labs, Slovenia
Alice Baudhuin, Bias Variance Labs, Slovenia
Jan Sotošek, Bias Variance Labs, Slovenia
Kevin Halsall, Telespazio UK, United Kingdom
James Harding, Telespazio UK, United Kingdom
Leonardo De Laurentiis, Mission Management & Product Quality Division, European Space Agency, Italy
Roberto Del Prete, Φ-lab, European Space Agency, Italy
Lorenzo Papa, Φ-lab, European Space Agency, Italy
Gabriele Meoni, Φ-lab, European Space Agency, Italy
Grand Challenge website: https://challenges.philab.esa.int/portfolio/clearsar-track-1/
Proposal submission platform: The link to the challenge submission platform is available at the challenge website.
Short description: Synthetic Aperture Radar (SAR) data play an important role in Earth observation by enabling consistent monitoring independent of weather conditions or daylight. With its powerful SAR instrument, the Copernicus Sentinel-1 satellite enables science, innovation, and commercial applications, from environmental monitoring and precision agriculture to urban planning and emergency response.
However, SAR systems operate in shared frequency bands and can therefore be affected by radio-frequency interference (RFI), which can lead to image artifacts, undetected biases or even complete data loss. Comprehensive filtering techniques are required to avoid image degradation and enable strong downstream analytics.
Most existing approaches to RFI detection rely on large raw SAR products, which are not always suitable for operational processing. In practice, Sentinel-1 workflows predominantly use compact products such as quicklooks and Level-2 ground range detected (GRD) imagery. The lack of robust RFI mitigation at these processing levels represents a limitation for large-scale and automated use of Sentinel-1 data.
The ClearSAR Challenge aims to address this gap by encouraging the development and benchmarking of automated RFI detection methods that are scalable and compatible with the Sentinel-1 processing chain.
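RFI often manifests as anomalously bright stripes in detected imagery. As a toy illustration of the detection task only, and not the challenge's method, a robust per-row statistic can flag such stripes in a synthetic quicklook-like array:

```python
import numpy as np

def flag_rfi_rows(intensity, k=7.0):
    """Flag rows whose mean intensity deviates from the robust image
    statistics by more than k median absolute deviations. A toy stand-in
    for RFI stripe detection in SAR quicklooks."""
    row_means = intensity.mean(axis=1)
    med = np.median(row_means)
    mad = np.median(np.abs(row_means - med)) + 1e-12
    return np.where(np.abs(row_means - med) > k * mad)[0]

rng = np.random.default_rng(1)
img = rng.rayleigh(scale=1.0, size=(256, 256))  # speckle-like clean background
img[100] += 20.0                                # synthetic interference stripe
print(flag_rfi_rows(img))                       # flags row 100
```

Real Sentinel-1 RFI is far more varied (localized bursts, frequency-dependent patterns), which is why the challenge targets learned, scalable detectors rather than fixed thresholds.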
Organized by:
Chaker Larabi, XLIM Laboratory, Université de Poitiers, France
Didier Nicholson, Ektacom, France
Abderrezzaq Sendjasni, CNRS/XLIM, Université de Poitiers, France
Melan Vijayaratnam, Ektacom, France
Gabriele Facciolo, Centre Borelli, École Normale Supérieure Paris-Saclay, France
Alexandre Cilia, Service National de Police Scientifique, France
Fanny Pagès, Service National de Police Scientifique, France
Hugo Lami, Ceraps, Université de Lille, France
Frédérique Fégé, Service National de Police Scientifique, France
Grand Challenge website: https://xlim-perception.github.io/icip_xlpsr/index.html
Proposal submission platform: Submissions are managed through our Codabench competition platform. For inquiries, please contact: [email protected]
Short description: The Extreme License Plate Super-Resolution (XLPSR) Challenge aims to advance the development of super-resolution algorithms capable of recovering readable license plates from severely degraded real-world video footage. The challenge is built upon the IMPROVED dataset, a new collection of over 4,500 images extracted from 17 different camera devices—including surveillance cameras, smartphones, and professional cameras—deployed at the Saint-Laurent-de-Mûre circuit in France. The dataset captures distinct French license plates under diverse conditions, including variable lighting, weather, viewing angles, and distances, with natural degradations such as motion blur, compression artifacts, and sensor noise. Each sequence contains 10 consecutive frames, allowing participants to explore both single-image and multi-image super-resolution approaches. The challenge is structured around three splits: a development set with full annotations, a public validation set for leaderboard benchmarking, and a blind test set for final evaluation. By focusing on a highly constrained and operationally relevant object—the French license plate—this challenge aims to push the boundaries of extreme super-resolution while ensuring functional fidelity for downstream automatic recognition. The ultimate goal is to foster reliable and hallucination-free reconstruction methods applicable to intelligent transportation systems, urban surveillance, and forensic evidence analysis.
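One reason the 10-frame sequences matter: even naive temporal fusion of co-registered frames suppresses zero-mean noise relative to a single frame. The sketch below uses synthetic data as an illustrative baseline only; real footage would additionally require sub-pixel alignment and deblurring before fusion:

```python
import numpy as np

def temporal_average(frames):
    """Naive multi-image baseline: average co-registered frames to suppress
    zero-mean noise before any single-image upscaling step."""
    return np.mean(frames, axis=0)

def rmse(a, b):
    """Root-mean-square error against the clean reference."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(3)
clean = rng.uniform(0, 255, (32, 96))                        # hypothetical plate crop
frames = [clean + rng.normal(0, 25, clean.shape) for _ in range(10)]

single = frames[0]
fused = temporal_average(frames)
print(rmse(single, clean))  # ≈ 25 (one noisy frame)
print(rmse(fused, clean))   # noise reduced by roughly √10
```

This is why multi-image approaches can outperform single-image ones on this benchmark, and why the evaluation emphasizes functional fidelity (plate readability) over pixel metrics alone.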