Institut de Mathématiques de Toulouse


Séminaire de Statistique

by Agnès Lagnoux

Organizers: Agnès Lagnoux, Jean-Marc Azais, and François Bachoc

Usual day and venue: Tuesdays at 11:00, room 106 (building 1R1).




  • Tuesday, March 6, 11:00 - Jérôme Morio - Onera Toulouse

    Reliability-based sensitivity estimators of rare event probability in the presence of distribution parameter uncertainty

    Abstract: This paper presents sensitivity estimators of a rare event probability in the context of uncertain distribution parameters (which are often imprecisely known or poorly estimated due to limited data). Since the distribution parameters are themselves affected by uncertainties, a possible solution is to consider a second probabilistic uncertainty level. By propagating this bi-level uncertainty, the failure probability becomes a random variable, and the mean of the distribution of failure probabilities (the "predictive failure probability", PFP) can be used as a new measure of safety. In this paper, an augmented framework (composed of both the basic variables and their probability distribution parameters) coupled with an adaptive importance sampling strategy is proposed to obtain an efficient estimation strategy for the PFP. Consequently, a double-loop procedure is avoided and the computational cost is decreased. Sensitivity estimators of the PFP are then derived with respect to deterministic hyper-parameters parametrizing the a priori modeling choices.

    (A toy single-loop sketch of the PFP estimator follows this entry.)

    Location: Room 106, building 1R1
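
    The "augmented framework" above has a simple toy illustration: since the PFP is the mean of the failure probability over the uncertain parameters, it is also the expectation of the failure indicator under the joint law of (parameters, basic variables), so one joint sampling loop replaces the nested double loop. The sketch below uses plain Monte Carlo instead of the talk's adaptive importance sampling; the limit-state function g and the Gaussian hyper-prior on mu are hypothetical choices for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def g(x):
        # Hypothetical limit-state function: failure occurs when g(x) <= 0.
        return 7.0 - x.sum(axis=-1)

    n = 200_000

    # Second uncertainty level: the mean of the basic variables is itself
    # random (hypothetical Gaussian hyper-prior on the parameter mu).
    mu = rng.normal(2.0, 0.3, size=(n, 1))

    # Basic variables drawn conditionally on their uncertain parameter.
    x = rng.normal(mu, 1.0, size=(n, 2))

    # Augmented-space estimator:
    #   PFP = E_mu[ P(g(X) <= 0 | mu) ] = E_{(mu, X)}[ 1{g(X) <= 0} ],
    # so a single joint sampling loop avoids the double-loop procedure.
    pfp_hat = (g(x) <= 0.0).mean()
    print(f"estimated predictive failure probability: {pfp_hat:.5f}")
    ```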


  • Tuesday, March 13, 11:00-12:00 - Fabrice Gamboa - IMT

    Approximate Optimal Designs for Multivariate Polynomial Regression

    Abstract: We introduce a new approach for computing approximate optimal designs for multivariate polynomial regression on compact (semi-algebraic) design spaces. We use the moment-sum-of-squares hierarchy of semidefinite programming problems to solve the approximate optimal design problem numerically. The geometry of the design is recovered via semidefinite programming duality theory. This work shows that the hierarchy converges to the approximate optimal design as the order of the hierarchy increases. Furthermore, we provide a dual certificate ensuring finite convergence of the hierarchy and showing that the approximate optimal design can be computed numerically with our method. As a byproduct, we revisit the equivalence theorem of experimental design theory: it is linked to the Christoffel polynomial and characterizes finite convergence of the moment-sum-of-squares hierarchies.

    (The classical design problem this builds on is recalled after this entry.)

    Location: Room 106, building 1R1
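
    For context, the approximate optimal design problem mentioned above has the following classical formulation, recalled here with standard notation (v_d, M(xi)) that is not taken from the paper; the equivalence theorem is the Kiefer-Wolfowitz theorem, whose link with the Christoffel polynomial the talk revisits.

    ```latex
    % Approximate D-optimal design for degree-d polynomial regression on a
    % compact design space X in R^n: maximize the log-determinant of the
    % information matrix over all probability measures xi on X.
    \[
      \xi^{\star} \in \arg\max_{\xi \in \mathcal{P}(\mathcal{X})} \log\det M(\xi),
      \qquad
      M(\xi) = \int_{\mathcal{X}} v_d(x)\, v_d(x)^{\top}\, \mathrm{d}\xi(x),
    \]
    % where v_d(x) is the vector of monomials of degree at most d.
    % Kiefer-Wolfowitz equivalence theorem: xi* is D-optimal if and only if
    \[
      v_d(x)^{\top} M(\xi^{\star})^{-1} v_d(x) \;\le\; \binom{n+d}{n}
      \quad \text{for all } x \in \mathcal{X},
    \]
    % with equality on the support of xi*. The left-hand side is the
    % reciprocal of the Christoffel function of xi*, which is how the
    % Christoffel polynomial enters the equivalence theorem.
    ```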


  • Tuesday, March 27, 11:00-12:00 - Guillaume Lecué - CREST

    Robust machine learning by median-of-means: theory and practice

    Abstract: We introduce new estimators of least-squares minimizers based on median-of-means (MOM) estimators of the mean of real-valued random variables. These estimators achieve optimal rates of convergence under minimal assumptions on the dataset, which may moreover have been corrupted by outliers on which no assumption is made. We also analyze these new estimators with standard tools from robust statistics. In particular, we revisit the concept of breakdown point: we modify the original definition by studying the number of outliers that a dataset can contain without deteriorating the estimation properties of a given estimator. This new notion of breakdown number, which takes into account the statistical performance of the estimators, is non-asymptotic in nature and adapted to machine learning purposes. We prove that the breakdown number of our estimator is of the order of (number of observations) x (rate of convergence). For instance, for the problem of estimating a d-dimensional vector with noise variance sigma^2, the breakdown number of our estimators is sigma^2 d, and it becomes sigma^2 s log(ed/s) when the vector has only s non-zero components. Beyond this breakdown point, we prove that the rate of convergence achieved by our estimator is (number of outliers) / (number of observations).
    Besides these theoretical guarantees, the major improvement brought by these new estimators is that they are easily computable in practice and are therefore well suited to robust machine learning. In fact, essentially any algorithm used to approximate the standard Empirical Risk Minimizer (or its regularized versions) has a robust version approximating our estimators. On top of being robust to outliers, the "MOM versions" of the algorithms are even faster than the original ones, less demanding in memory resources in some situations, and well adapted to distributed datasets, which makes them particularly attractive for large-scale data analysis. As a proof of concept, we study many algorithms for the classical LASSO estimator. It turns out that a first version of our algorithm can be improved considerably in practice by randomizing the blocks on which the "local means" are computed at each step of the descent algorithm. A byproduct of this modification is that our algorithms come with a measure of depth of the data that can be used to detect outliers, which is another major issue in machine learning.

    (Toy sketches of the basic MOM mean estimator and a randomized-block MOM descent follow this entry.)

    Location: Room 106, building 1R1
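
    The building block in the first paragraph of the abstract is the MOM estimator of a univariate mean, which is short enough to sketch; the block count and the toy contamination below are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    def median_of_means(x, n_blocks, rng):
        """Median-of-means (MOM) estimate of E[X] from an i.i.d. sample.

        The (shuffled) sample is split into n_blocks equal-size blocks and
        the median of the block-wise empirical means is returned; as long
        as fewer than half of the blocks are hit by outliers, the estimate
        stays close to the true mean.
        """
        x = rng.permutation(np.asarray(x, dtype=float))
        blocks = np.array_split(x, n_blocks)
        return np.median([b.mean() for b in blocks])

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=1.0, scale=1.0, size=1000)
    sample[:20] = 1e6                          # 20 gross outliers

    print(sample.mean())                       # ~2e4: ruined by the outliers
    print(median_of_means(sample, 50, rng))    # close to the true mean 1.0
    ```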
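    The second paragraph mentions descent algorithms whose blocks are re-randomized at each step. One minimal reading of that idea, sketched here for plain least squares rather than the LASSO (an assumption about the scheme, not the authors' algorithm): at every iteration, re-partition the data at random, pick the block whose empirical risk is the median of the block-wise risks, and take the gradient step on that block only. Learning rate, block count, and the toy data are illustrative.

    ```python
    import numpy as np

    def mom_gradient_descent(X, y, n_blocks, lr=0.05, n_iter=300, rng=None):
        """Sketch of a MOM-style gradient descent for least squares.

        Each iteration re-randomizes the partition into blocks, selects
        the block whose empirical squared risk is the median of the
        block-wise risks, and performs a gradient step on that median
        block only, so heavily contaminated blocks are ignored on the fly.
        """
        rng = rng or np.random.default_rng()
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            blocks = np.array_split(rng.permutation(n), n_blocks)
            risks = [np.mean((X[b] @ w - y[b]) ** 2) for b in blocks]
            b = blocks[np.argsort(risks)[len(risks) // 2]]  # median block
            w -= lr * 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        return w

    rng = np.random.default_rng(0)
    n, d = 1000, 5
    X = rng.normal(size=(n, d))
    w_true = np.arange(1.0, d + 1.0)
    y = X @ w_true + rng.normal(scale=0.5, size=n)
    y[:15] = 1e4                               # corrupt 15 responses

    print(mom_gradient_descent(X, y, n_blocks=50, rng=rng))  # ~ w_true
    ```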

