## 4 events

• Probability Seminar

### Tuesday 27 March 09:45-10:45 - Stéphane Villeneuve - IMT TSE

A stopping problem arising from principal-agent theory

Abstract: Principal-agent theory refers to a class of economic decision problems in which the objective of one actor (the principal) depends on an unobservable action of another actor (the agent). In this talk, we revisit the classical problem of the optimal exit time of a Brownian motion with drift, when the drift is controlled by an agent. We show how the decision problem embeds into a two-dimensional stopping problem, which we solve explicitly.

• Modelling, Analysis and Computation Seminar

### Tuesday 27 March 11:00-12:00 - Luis Vega - Departamento de Matematicas, Universidad del Pais Vasco/EHU

The Vortex Filament Equation: the Talbot effect and the transfer of energy and momentum

Abstract: I will present some recent results, obtained in collaboration with V. Banica and F. de la Hoz, on the evolution of vortex filaments according to the so-called Localized Induction Approximation (LIA). This approximation is given by a nonlinear geometric partial differential equation known as the Vortex Filament Equation (VFE). The aim of the talk is threefold. First, I will recall the Talbot effect of linear optics. Secondly, I will give some explicit solutions of the VFE in which this Talbot effect is also present. Finally, I will consider some questions concerning the transfer of energy and momentum for these explicit solutions.

Location: Salle MIP

• Statistics Seminar

### Tuesday 27 March 11:00-12:00 - Guillaume Lecué - CREST

Robust machine learning by median-of-means: theory and practice

Abstract: We introduce new estimators of least-squares minimizers based on median-of-means (MOM) estimators of the mean of real-valued random variables. These estimators achieve optimal rates of convergence under minimal assumptions on the dataset. The dataset may also have been corrupted by outliers, on which no assumption is made. We also analyze these new estimators with standard tools from robust statistics. In particular, we revisit the concept of breakdown point: we modify the original definition by studying the number of outliers that a dataset can contain without deteriorating the estimation properties of a given estimator. This new notion of breakdown number, which takes the statistical performance of the estimators into account, is non-asymptotic in nature and adapted to machine learning purposes. We prove that the breakdown number of our estimator is of the order of (number of observations) * (rate of convergence). For instance, the breakdown number of our estimators for the problem of estimating a d-dimensional vector with noise variance sigma^2 is sigma^2 d, and it becomes sigma^2 s log(ed/s) when this vector has only s non-zero components. Beyond this breakdown point, we prove that the rate of convergence achieved by our estimator is (number of outliers) / (number of observations).
Besides these theoretical guarantees, the major improvement brought by these new estimators is that they are easily computable in practice and are therefore well suited for robust machine learning. In fact, essentially any algorithm used to approximate the standard Empirical Risk Minimizer (or its regularized versions) has a robust version approximating our estimators. On top of being robust to outliers, the "MOM versions" of the algorithms are even faster than the original ones, less demanding in memory resources in some situations, and well adapted to distributed datasets, which makes them particularly attractive for large-scale data analysis. As a proof of concept, we study many algorithms for the classical LASSO estimator. It turns out that a first version of our algorithm can be improved considerably in practice by randomizing the blocks on which "local means" are computed at each step of the descent algorithm. A byproduct of this modification is that our algorithms come with a measure of data depth that can be used to detect outliers, which is another major issue in machine learning.
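The basic building block of the abstract above, the median-of-means estimator of a mean, can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation; the block count `n_blocks` and the random block assignment are illustrative choices (the talk mentions randomizing blocks at each descent step for the estimator of least-squares minimizers, a more elaborate procedure than what is shown here):

```python
import numpy as np

def median_of_means(x, n_blocks):
    """Median-of-means estimate of the mean of a 1-D sample.

    Split the data into n_blocks disjoint blocks, compute the
    empirical mean of each block, and return the median of the
    block means. A few grossly corrupted observations can only
    spoil the blocks that contain them, so as long as fewer than
    half of the blocks are contaminated the median stays stable.
    """
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(0)
    perm = rng.permutation(len(x))            # random block assignment
    blocks = np.array_split(x[perm], n_blocks)
    return float(np.median([b.mean() for b in blocks]))

# Clean standard Gaussian sample plus a handful of gross outliers:
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 1000), [1e6] * 5])

print(data.mean())                            # ruined by the outliers
print(median_of_means(data, n_blocks=11))     # still close to 0
```

With 11 blocks of roughly 91 points each, at most 5 blocks can contain an outlier, so the median lands on a clean block mean; choosing the number of blocks larger than twice the number of outliers is what drives the breakdown behaviour discussed in the abstract.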

Location: Salle 132, Bât 1R2

• Homotopy in Algebraic Geometry

### Tuesday 27 March 14:00-15:00 - Marcello Bernardara - IMT

Fibrations in Segre varieties and degree 6 del Pezzo surfaces

Abstract: Let B be a projective variety. I will show how to construct, from the data of a double cover S of B and an Azumaya algebra A of degree 9 on S, a fibration X -> B whose generic fiber is isomorphic to P2xP2. Moreover, if B is smooth, then X is smooth as well. These fibrations make it possible to construct fibrations Y -> B whose fibers are del Pezzo surfaces of degree 6, and to explain the presence of the category D(S,A) inside the category D(Y).