The following short courses will be offered at ICIP 2026:
Presenter:
– Paul A. Rodriguez, Pontificia Universidad Católica del Perú, Peru
While the impact of convolutional neural networks (CNNs), deep learning (DL), and artificial intelligence (AI) is still being assessed across many technological and ethical aspects of our societies, stochastic optimizers, which encompass stochastic gradient descent (SGD) and its variants (e.g. Momentum, Adam, AdamW), and the selection of their associated hyperparameters play a crucial role in the successful training of such models.
The SGD algorithm, which may be succinctly described as classical gradient descent (GD) driven by a (very) noisy gradient, has two hyperparameters, the learning rate (LR) and the batch size (BSz), which directly affect the practical rate of convergence as well as the overall performance of the model. However, more effective (and popular) algorithms, such as Momentum, Adam, AdamW, and derived methods, have several hyperparameters whose influence is neither as direct nor as well understood as SGD's LR and BSz.
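To make the role of the two hyperparameters concrete, a mini-batch SGD update can be sketched on a toy scalar problem in plain Python (the data, loss, learning rate, and batch size below are illustrative assumptions, not course material):

```python
import random

# Toy problem: fit a scalar w to minimize mean((w - x)^2) over the data,
# whose minimizer is the sample mean. Mini-batches make the gradient noisy.
random.seed(0)
data = [2.0 * random.random() + 1.0 for _ in range(1000)]  # points around 2.0

def sgd(data, lr=0.1, batch_size=32, epochs=20):
    w = 0.0
    for _ in range(epochs):
        random.shuffle(data)
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # Noisy estimate of the full gradient, computed on the mini-batch
            grad = sum(2.0 * (w - x) for x in batch) / len(batch)
            w -= lr * grad  # the LR scales every update step
    return w

w_hat = sgd(data)
```

Raising the LR or shrinking the batch size makes the iterates noisier; lowering the LR slows convergence — exactly the trade-off the course examines.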
As will be detailed below, the primary objective of this six-hour course is to combine the essential theoretical aspects of the SGD algorithm and its variants' hyperparameters, and their influence on performance, with hands-on experience implementing (in TensorFlow or PyTorch) different methods to fine-tune the most influential hyperparameters, from grid/random search strategies to Bayesian optimization, also covering warm-start techniques and metaheuristic methods.
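As a flavour of the hands-on part, random search over the LR and batch size might look like the following sketch (the validation-loss surface here is a made-up stand-in for a real training run, and the sampling ranges are assumptions):

```python
import math
import random

random.seed(1)

# Hypothetical validation-loss surface for illustration only: lowest around
# lr = 1e-2 and batch_size = 64. A real search would train a model instead.
def val_loss(lr, batch_size):
    return (math.log10(lr) + 2.0) ** 2 + (math.log2(batch_size) - 6.0) ** 2

def random_search(n_trials=50):
    best_loss, best_cfg = float("inf"), None
    for _ in range(n_trials):
        # Sample the LR log-uniformly and the batch size from powers of two
        lr = 10 ** random.uniform(-4, 0)
        bsz = 2 ** random.randint(3, 9)
        loss = val_loss(lr, bsz)
        if loss < best_loss:
            best_loss, best_cfg = loss, (lr, bsz)
    return best_loss, best_cfg

best_loss, (best_lr, best_bsz) = random_search()
```

Grid search replaces the sampling with a fixed Cartesian product of values; Bayesian optimization replaces it with a surrogate model that proposes the next trial.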
Presenters:
– Deepayan Bhowmik, Newcastle University, UK
– Touradj Ebrahimi, EPFL, Switzerland
– Frederik Temmermans, Vrije Universiteit Brussel, Belgium and imec, Belgium
The rise of technologies such as Generative AI and mobile phone cameras has led to large-scale media content creation and consumption. While this progress opens up opportunities, especially in creative industries, it also opens the door to cyber attacks, piracy, fake media distribution, and concerns around trust and privacy. Manipulated media has caused social unrest, spread political rumors, and incited hate crimes in recent years.
Media modifications are not always negative; they are now a standard part of many production pipelines and new knowledge creation. However, in many domains, creators need to declare the types of modifications performed. Failing to do so can call media trustworthiness into question or suggest an intent to hide manipulations. Such a need triggered an initiative by the JPEG committee (https://jpeg.org/) to standardise a way to annotate media assets (regardless of the intent) and securely link the assets and annotations together. This JPEG Trust standard (published in January 2025) ensures interoperability between a wide range of applications dealing with media asset creation and modification, providing a set of standard mechanisms to describe and embed information about the creation and modification of media assets.
This short course aims to present a holistic understanding of the topic, media authenticity in the age of AI, and potential solutions through state-of-the-art deepfake/manipulation detection algorithms, watermarking, and relevant standards such as JPEG Trust and the Coalition for Content Provenance and Authenticity (C2PA), along with hands-on coding exercises on some implementations and use case scenarios.
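To give a feel for the kind of hands-on exercise involved, a fragile least-significant-bit (LSB) watermark, one of the simplest watermarking schemes, can be sketched as follows (this toy version is an illustrative assumption, not the course's actual material):

```python
# Fragile LSB watermarking on a grayscale "image" represented as a flat
# list of 8-bit pixel values.

def embed(pixels, bits):
    # Clear each pixel's least-significant bit, then write one watermark bit
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels):
    # Read the watermark back out of the LSBs
    return [p & 1 for p in pixels]

image = [120, 37, 255, 0, 88, 201]
mark = [1, 0, 1, 1, 0, 1]
stamped = embed(image, mark)     # pixel values change by at most 1
recovered = extract(stamped)
```

Because any edit to a stamped pixel disturbs its LSB, the recovered mark no longer matches after tampering — the property that makes fragile watermarks useful for integrity checks, in contrast to the robust watermarks and provenance metadata (JPEG Trust, C2PA) the course also covers.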
© Copyright 2025 IEEE – All rights reserved. A public charity, IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity.