Publications

The papers below are grouped by topic. For a chronological list, please see my Google Scholar profile.

My contributions are mainly theoretical, but I am also strongly interested in how these results meet practical needs. In particular, application-oriented papers are tagged accordingly.

Bandit algorithms, global optimization and beyond

  • Adaptive approximation of monotone functions.   [preprint]
    Pierre Gaillard, Sébastien Gerchinovitz, and Étienne de Montbrun.
    arXiv:2309.07530, 2023.
  • Certified multi-fidelity zeroth-order optimization.   [preprint]
    Étienne de Montbrun and Sébastien Gerchinovitz.
    arXiv:2309.00978, 2023.
  • Regret analysis of the Piyavskii-Shubert algorithm for global Lipschitz optimization.   [preprint]
    Clément Bouttier, Tommaso R. Cesari, Mélanie Ducoffe, and Sébastien Gerchinovitz.
    arXiv:2002.02390, 2022.
  • Instance-Dependent Bounds for Zeroth-order Lipschitz Optimization with Error Certificates.   [proceedings]
    François Bachoc, Tommaso R. Cesari, and Sébastien Gerchinovitz.
    Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 24180-24192, 2021.
  • The sample complexity of level set approximation.   [proceedings]
    François Bachoc, Tommaso R. Cesari, and Sébastien Gerchinovitz.
    Proceedings of the Twenty-Fourth International Conference on Artificial Intelligence and Statistics (AISTATS 2021, oral presentation), PMLR 130:424-432, 2021.
  • Diversity-Preserving K-Armed Bandits, Revisited.   [preprint]
    Hédi Hadiji, Sébastien Gerchinovitz, Jean-Michel Loubes, and Gilles Stoltz.
    Preprint, 2020.
  • Optimization of an SSP's Header Bidding Strategy using Thompson Sampling.   [pdf] [video]
    Grégoire Jauvion, Nicolas Grislain, Pascal Dkengne Sielenou, Aurélien Garivier, and Sébastien Gerchinovitz.
    Proceedings of KDD 2018, Applied Data Science track, 2018.
  • Refined lower bounds for adversarial bandits.   [proceedings] [preprint]
    Sébastien Gerchinovitz and Tor Lattimore.
    Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016.
  • A multiple-play bandit algorithm applied to recommender systems.   [proceedings]
    Jonathan Louëdec, Max Chevalier, Josiane Mothe, Aurélien Garivier, and Sébastien Gerchinovitz.
    Proceedings of the 28th International Florida Artificial Intelligence Research Society Conference (FLAIRS 2015), 67-72, 2015.
  • Adaptive simulated annealing with homogenization for aircraft trajectory optimization.   [proceedings]
    Clément Bouttier, Olivier Babando, Sébastien Gadat, Sébastien Gerchinovitz, Serge Laporte, and Florence Nicol.
    Operations Research Proceedings 2015, 569-574, 2017.

Deep learning

  • A general approximation lower bound in L^p norm, with applications to feed-forward neural networks.   [proceedings] [full paper]
    El Mehdi Achour, Armand Foucault, Sébastien Gerchinovitz, and François Malgouyres.
    Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
  • The loss landscape of deep linear neural networks: a second-order analysis.   [preprint]
    El Mehdi Achour, François Malgouyres, and Sébastien Gerchinovitz.
    Preprint, 2022.
  • Numerical influence of ReLU'(0) on backpropagation.   [paper with erratum]
    David Bertoin, Jérôme Bolte, Sébastien Gerchinovitz, and Edouard Pauwels.
    Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 468-479, 2021.

Uncertainty quantification for safety-critical systems

  • Object detection with probabilistic guarantees: a conformal prediction approach.   [preprint]
    Florence de Grancey, Jean-Luc Adam, Lucian Alecu, Sébastien Gerchinovitz, Franck Mamalet, and David Vigouroux.
    Proceedings of WAISE 2022 (best paper award).
  • Can we reconcile safety objectives with machine learning performances?   [proceedings]
    Hugues Bonnin, Eric Jenn, Lucian Alecu, Thomas Fel, Laurent Gardes, Sébastien Gerchinovitz, Ludovic Ponsolle, Franck Mamalet, Vincent Mussot, Cyril Cappi, Kévin Delmas, and Baptiste Lefevre.
    Proceedings of ERTS 2022 (best paper award).
  • A High-Probability Safety Guarantee for Shifted Neural Network Surrogates.   [pdf] [video]
    Mélanie Ducoffe, Sébastien Gerchinovitz, and Jayant Sen Gupta.
    Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020), 74-82, 2020.

(Non)parametric regression and classification

  • Optimal functional supervised classification with separation condition.   [journal] [preprint, with supplement]
    Sébastien Gadat, Sébastien Gerchinovitz, and Clément Marteau.
    Bernoulli 26(3), 1797-1831, 2020.
  • Uniform regret bounds over R^d for the sequential linear regression problem with the square loss.   [proceedings]
    Pierre Gaillard, Sébastien Gerchinovitz, Malo Huard, and Gilles Stoltz.
    Proceedings of the 30th international conference on Algorithmic Learning Theory (ALT 2019), PMLR 98:404-432, 2019.
  • Algorithmic chaining and the role of partial feedback in online nonparametric learning.   [long] [short]
    Nicolò Cesa-Bianchi, Pierre Gaillard, Claudio Gentile, and Sébastien Gerchinovitz.
    Proceedings of the 2017 Conference on Learning Theory (COLT 2017), PMLR 65:465-481, 2017.
  • A chaining algorithm for online nonparametric regression.   [proceedings] [video]
    Pierre Gaillard and Sébastien Gerchinovitz.
    Proceedings of the 28th Conference on Learning Theory (COLT 2015), PMLR 40:764-796, 2015.
  • Adaptive and optimal online linear regression on L1-balls.   [journal] [preprint]
    Sébastien Gerchinovitz and Jia Yuan Yu.
    Theoretical Computer Science 519:4-28, 2014.
    NB: A shorter version appeared in the proceedings of ALT 2011.
  • Sparsity regret bounds for individual sequences in online linear regression.   [journal]
    Sébastien Gerchinovitz.
    Journal of Machine Learning Research 14(22):729-769, 2013.
    NB: A shorter version appeared in the proceedings of COLT 2011. Here is a video of the talk.

Lower bounds theory

  • Fano's inequality for random variables.   [journal] [preprint]
    Sébastien Gerchinovitz, Pierre Ménard, and Gilles Stoltz.
    Statistical Science 35(2), 178-201, 2020.
  For specific proofs of lower bounds in learning, approximation, or optimization, see also the relevant papers in the sections above.

PhD thesis

Technical reports

Last update: December 2023