Annotation

Quantum computing relies on physical systems that are inherently subject to random noise. While scalability is critical for reaching quantum advantage in applications such as cryptography or quantum chemistry, the range of computational architectures, and of disciplines needed to master them, is remarkable. In this workshop we will consider both the software and hardware aspects of quantum computing technologies: the deep tech behind machines that are already available (e.g., the VLQ superconducting-qubit system at IT4I), and the software development that both imposes limits and creates opportunities.

Benefits for attendees, what will they learn

Participants of the workshop will gain an interdisciplinary understanding of the current landscape of quantum computing, spanning both hardware and software perspectives. They will explore how light-based and superconducting architectures can perform large-scale computations, how to rigorously assess claims of quantum advantage in practice, and how innovative circuit designs—such as dynamic quantum circuits—can reduce computational depth and resource demands. The talks will also show how hybrid quantum–classical workflows can overcome current hardware limitations, particularly in data-intensive fields like imaging analytics. Overall, the workshop will equip attendees with insights into the physical principles, algorithmic strategies, and practical challenges shaping today’s and tomorrow’s quantum technologies.

Agenda

Talk Title: Photonic computing for large-scale AI applications

Speaker: Laurent Daudet

Abstract: Using light to perform computations has unique advantages, including massive parallelism and low energy per operation. In this work, we leverage light scattering to perform randomness-based statistical computations useful in current machine learning pipelines. I will present the physical principles, the hardware/software integration efforts within the startup LightOn, and AI applications ranging from image classification and randomised HPC algorithms to billion-scale training of language models.
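The scattering-based computation described above can be sketched numerically. Below is a minimal NumPy simulation of an optical random projection, assuming a fixed complex Gaussian transmission matrix and intensity-only detection, as in published descriptions of optical processing units; all names and sizes are illustrative, not LightOn's actual stack.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_features = 64, 1024
# The scattering medium acts as a fixed random complex Gaussian matrix R;
# it never needs to be stored or multiplied explicitly in the real hardware.
R = (rng.normal(size=(n_features, d))
     + 1j * rng.normal(size=(n_features, d))) / np.sqrt(2)

def optical_random_features(x):
    """Intensity detection after scattering: y = |R x|^2,
    a nonlinear random-feature map computed in a single optical pass."""
    return np.abs(R @ x) ** 2

x = rng.normal(size=d)            # a toy input vector
y = optical_random_features(x)    # 1024 nonlinear random features
print(y.shape)                    # (1024,)
```

The features `y` can then feed a cheap linear model, which is the usual way such random-feature maps are used in machine learning pipelines.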

About the speaker: Laurent Daudet is Professor of Physics at Université Paris Cité, one of France's leading universities. He has held various academic positions, including Fellow of the Institut Universitaire de France, Associate Professor at Université Pierre et Marie Curie (now Sorbonne University), and visiting positions at the University of London, the University of Tokyo, and Stanford University. Laurent has authored or co-authored more than 200 scientific publications and has been a consultant to various small and large companies. He is a physics graduate from Ecole Normale Supérieure in Paris and holds a Ph.D. in Applied Mathematics from Marseille University. In 2016, Laurent co-founded LightOn, a startup devoted to large-scale AI, originating in photonic computing and now deeply rooted in Generative AI.

 

Talk Title: Quantum runtime (dis)advantage

Speaker: Łukasz Pawela

Abstract: We (re)evaluate recent claims of quantum advantage in annealing- and gate-based algorithms, testing whether reported speedups survive rigorous end-to-end runtime definitions and comparison against strong classical baselines. Conventional analyses often omit substantial overheads (readout, transpilation, thermalisation, etc.), yielding biased assessments. While excluding seemingly unimportant parts of the computation may appear reasonable, on most current quantum hardware a clean separation between "pure compute" and "overhead" cannot be experimentally justified, which may distort "supremacy" results. In contrast, on most classical hardware the total runtime differs from the pure compute time only by a weakly varying constant, leading to robust claims.

We scrutinize two important milestones: (1) quantum annealing for approximate QUBO, which uses a sensible time-to-… metric but proxies runtime by the (non-measurable) annealing time; (2) a restricted Simon's problem, whose advantageous scaling in oracle calls is undisputed; yet, as we demonstrate, the estimated runtime of the quantum experiment exceeds that of a tuned classical baseline.
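The accounting issue the abstract raises can be illustrated with a toy calculation. All numbers below are invented for illustration only, not measurements from any device; the point is how a quoted "compute" time can shrink next to the end-to-end total.

```python
# Toy end-to-end runtime accounting (microseconds; all figures illustrative).
overheads_us = {
    "programming":    10_000,   # loading the problem onto the device (once)
    "thermalisation":  1_000,   # per-sample settling time
    "anneal":             20,   # the part usually quoted as "runtime"
    "readout":           100,   # per-sample measurement
}

n_samples = 1_000
per_sample = (overheads_us["thermalisation"]
              + overheads_us["anneal"]
              + overheads_us["readout"])
total_us = overheads_us["programming"] + n_samples * per_sample

quoted_us = n_samples * overheads_us["anneal"]
print(f"quoted 'compute' time: {quoted_us / 1e6:.3f} s")   # 0.020 s
print(f"end-to-end time:       {total_us / 1e6:.3f} s")    # 1.130 s
print(f"ratio: {total_us / quoted_us:.1f}x")               # 56.5x
```

With these (made-up) figures the end-to-end runtime is over fifty times the quoted annealing time, which is exactly the kind of gap a fair classical comparison must account for.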

About the speaker: Łukasz Pawela is a quantum computing scientist specialising in quantum information theory, including discrimination and certification of quantum measurements and channels, probabilistic quantum error correction, random quantum channels and states, quantum walks, quantum control, and benchmarking of near-term devices. He builds practical tooling used by researchers and engineers, including QuantumInformation.jl, MatrixEnsembles.jl, omnisolver, SpinGlassPEPS, and PyQBench, which support simulation, optimisation, and performance assessment across gate-based and annealing paradigms. An associate professor at the Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, he also works hands-on in HPC, delivering GPU-accelerated and distributed workflows for large-scale quantum simulation and annealing workloads. He is a co-author of 50 scientific papers and has led four research projects in the area of quantum computing and the simulation and benchmarking of near-term quantum devices.

 

Talk Title: Dynamic Circuits for Efficient Quantum Computation: Logarithmic-Depth Approximate Quantum Fourier Transform on a Line

Speaker: Elisa Baumer Marty

Abstract: Dynamic quantum circuits—those that incorporate mid-circuit measurements and feed-forward operations—offer significant advantages for entanglement distribution and circuit depth reduction. In this talk, we illustrate these benefits through introductory examples and then focus on a major application: implementing the Approximate Quantum Fourier Transform (AQFT) in logarithmic depth using only 4n qubits arranged on a line with nearest-neighbour connectivity. For certain input states, we further reduce the qubit count to 2n. A key contribution of our construction is a new implementation of an adder with logarithmic depth under linear connectivity constraints, enabling the more efficient AQFT realisation and broader applications in quantum algorithm design.
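As background for this talk, the generic AQFT idea of dropping the smallest controlled rotations can be sketched with a small NumPy circuit simulator. This is only the textbook approximation, not the dynamic-circuit, nearest-neighbour construction of the talk; all helper names below are our own illustrative choices.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_1q(gate, q, n, U):
    """Apply a single-qubit gate on qubit q (qubit 0 = most significant)."""
    ops = [np.eye(2)] * n
    ops[q] = gate
    G = ops[0]
    for op in ops[1:]:
        G = np.kron(G, op)
    return G @ U

def apply_cp(theta, a, b, n, U):
    """Controlled phase CP(theta): phase e^{i theta} when qubits a, b are both 1."""
    dim = 2 ** n
    phases = np.ones(dim, dtype=complex)
    for s in range(dim):
        if (s >> (n - 1 - a)) & 1 and (s >> (n - 1 - b)) & 1:
            phases[s] = np.exp(1j * theta)
    return np.diag(phases) @ U

def qft_unitary(n, approx_degree=None):
    """Textbook QFT circuit (final qubit reversal omitted: it is identical in
    both variants and cancels in the fidelity comparison below).
    approx_degree=m keeps only rotations pi/2^k with k < m (the AQFT)."""
    U = np.eye(2 ** n, dtype=complex)
    for j in range(n):
        U = apply_1q(H, j, n, U)
        for k in range(j + 1, n):
            if approx_degree is not None and k - j >= approx_degree:
                continue  # AQFT: drop the smallest controlled rotations
            U = apply_cp(np.pi / 2 ** (k - j), j, k, n, U)
    return U

n = 5
full = qft_unitary(n)
aqft = qft_unitary(n, approx_degree=3)
fidelity = abs(np.trace(full.conj().T @ aqft)) / 2 ** n
print(f"fidelity of AQFT(m=3) vs full QFT on {n} qubits: {fidelity:.4f}")
```

Dropping rotations below the cutoff removes the gates whose angles shrink exponentially, which is what makes logarithmic-depth constructions of the AQFT possible at only a small cost in fidelity.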

 

Talk Title: Challenges of imaging data analytics on quantum hardware

Speaker: Arun Debnath

Abstract: Compelling practical applications of quantum information processing are fundamentally bottlenecked by the scarcity of fault-tolerant qubits. These limitations are projected to persist in the near term.

This presentation will focus on a novel approach to circumventing these hardware constraints within quantum algorithm-enhanced imaging data processing, analytics, and inference workflows. I will detail several theoretical proposals that integrate classical data-filtering techniques directly into the algorithm pipeline. These workflows significantly reduce the necessary quantum resource overhead, and this approach demonstrates early promise in achieving quantum hardware-accelerated speedups for automated imaging data analytics within the current technological horizon.
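The resource argument can be made concrete with a toy sketch: filtering an image classically before a (hypothetical) amplitude encoding shrinks the quantum register, since the qubit count grows only logarithmically with the number of encoded values. The block-averaging filter here is an arbitrary stand-in, not the speaker's method.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((256, 256))           # stand-in for one imaging sample

def qubits_for_amplitude_encoding(n_values):
    """Amplitude encoding stores n values in ceil(log2(n)) qubits; the state
    preparation cost still grows with n, so shrinking n classically helps."""
    return int(np.ceil(np.log2(n_values)))

def block_average(img, factor):
    """Classical pre-filter (illustrative choice): average factor x factor blocks."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

filtered = block_average(image, 16)       # 256x256 -> 16x16
print(qubits_for_amplitude_encoding(image.size),     # 16 qubits for raw data
      qubits_for_amplitude_encoding(filtered.size))  # 8 qubits after filtering
```

Halving the register from 16 to 8 qubits in this toy case also slashes the state-preparation circuit, which is the kind of overhead reduction hybrid pipelines target.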

About the speaker: Arun Debnath is a scientist and the founder of a deep-tech startup, Episteme99, which focuses on developing integrated machine learning and classical/quantum information processing protocols to accelerate precision imaging workflows. Beyond the startup, his current research interest lies at the intersection of photon-matter interactions and deep-learning techniques. He obtained his Ph.D. in Physics at LCAR-IRSAMC, CNRS & Université de Toulouse III.

 

Two additional talks are currently being prepared and will be announced soon, together with the exact timetable. 

Level

Intermediate

Language

English