Institut de Mathématiques de Toulouse

Today's events


3 events


  • Probability Seminar

    Tuesday, January 9, 09:45-10:45 - Franziska Kuehn - IMT

    An introduction to Feller processes

    Abstract: In this talk we give a brief introduction to Feller processes. Roughly speaking, a Feller process is a space-inhomogeneous Markov process which behaves locally like a Lévy process, but whose Lévy triplet depends on the current position of the process. For this reason Feller processes are also called Lévy-type processes.
    Feller processes can be characterized by their symbol, which is the analogue of the characteristic exponent in the Lévy case. In this talk we explain why the symbol is a powerful tool for describing distributional and path properties of a Feller process. Moreover, we present examples of Feller processes (such as stable-like processes and solutions of stochastic differential equations driven by a Lévy process) and discuss some open problems.
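
    For readers unfamiliar with the symbol: up to sign conventions, it takes the state-dependent Lévy-Khintchine form below (a standard formula from the Lévy-type literature, added here for context and not part of the announcement; b(x), Q(x) and ν(x, dy) denote the position-dependent drift, diffusion and jump measure of the triplet), written in LaTeX notation:

        q(x, \xi) = -\, i\, b(x) \cdot \xi
                    + \tfrac{1}{2}\, \xi \cdot Q(x)\, \xi
                    + \int_{y \neq 0} \bigl( 1 - e^{i y \cdot \xi}
                        + i\, y \cdot \xi\, \mathbf{1}_{\{|y| < 1\}} \bigr)\, \nu(x, \mathrm{d}y)

    When the triplet does not depend on x, q(x, ξ) reduces to the characteristic exponent ψ(ξ) of a Lévy process.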

    Location: Salle MIP



  • MIP Seminar

    Tuesday, January 9, 11:00-12:00

    No seminar, due to the MIP newcomers' day



  • Statistics Seminar

    Tuesday, January 9, 11:00-12:00 - Sébastien Gadat - UT1

    Non-asymptotic bound for stochastic averaging (joint work with F. Panloup)

    Abstract: This work is devoted to the non-asymptotic control of the mean-squared error for the Ruppert-Polyak stochastic averaged gradient descent introduced in the seminal contributions of [1] and [2]. Starting from a standard stochastic gradient descent algorithm introduced by Robbins and Monro to minimize a smooth convex function f with noisy inputs, they define a sequence (θ_n)_{n≥1} through:
    θ_{n+1} = θ_n − γ_{n+1} ∇f(θ_n) + γ_{n+1} ΔM_{n+1},
    where −∇f(θ_n) + ΔM_{n+1} stands for a gradient step corrupted by an additive noise at each step. The averaging procedure introduced in [1, 2] consists in computing the Cesàro average:
    θ̄_n = (θ_1 + θ_2 + ... + θ_n) / n.
    In our main results, we establish non-asymptotic tight bounds (optimal with respect to the Cramér-Rao lower bound) in a very general framework that includes the uniformly strongly convex case as well as the one where the function f to be minimized satisfies a weaker Kurdyka-Łojasiewicz-type condition [3, 4]. In particular, this makes it possible to recover some pathological examples such as on-line learning for logistic regression (see [5]) and recursive quantile estimation (an even non-convex situation). Finally, our bound is optimal when the decreasing step (γ_n)_{n≥1} satisfies γ_n = n^{−β} with β = 3/4, leading to a second-order term in O(n^{−5/4}). (A minimal numerical sketch of the averaging procedure is given after the references.)
    References
    [1] D. Ruppert, Efficient estimations from a slowly convergent Robbins-Monro process, Technical Report 781, Cornell University Operations Research and Industrial Engineering (1988).
    [2] B. T. Polyak and A. Juditsky, Acceleration of Stochastic Approximation by Averaging, SIAM Journal on Control and Optimization, vol. 30 (4), 838-855 (1992).
    [3] S. Łojasiewicz, Une propriété topologique des sous-ensembles analytiques réels, Éditions du CNRS, Paris, Les Équations aux Dérivées Partielles, 87-89 (1963).
    [4] K. Kurdyka, On gradients of functions definable in o-minimal structures, Annales de l'Institut Fourier, vol. 48 (3), 769-783 (1998).
    [5] F. Bach, Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression, Journal of Machine Learning Research, vol. 15, 595-627 (2014).
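
    The following Python sketch (not code from the talk; the quadratic objective, the noise level, and the use of β = 3/4 on a toy problem are illustrative assumptions) shows the procedure described above: a Robbins-Monro gradient step with noisy gradients, followed by the Ruppert-Polyak Cesàro average of the iterates.

        import numpy as np

        def ruppert_polyak_sgd(grad, theta0, n_steps, beta=0.75, noise=1.0, seed=0):
            """Robbins-Monro SGD with steps gamma_n = n**(-beta), together with
            the Ruppert-Polyak Cesaro average (theta_1 + ... + theta_n) / n."""
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            avg = np.zeros_like(theta)
            for n in range(1, n_steps + 1):
                gamma = n ** (-beta)
                # Gradient step corrupted by additive noise: -grad f(theta_n) + Delta M_{n+1}
                theta = theta - gamma * (grad(theta) + noise * rng.standard_normal(theta.shape))
                # Running Cesaro average of the iterates theta_1, ..., theta_n
                avg += (theta - avg) / n
            return theta, avg

        # Toy strongly convex objective f(theta) = ||theta||^2 / 2, minimised at 0.
        grad_f = lambda theta: theta
        last, averaged = ruppert_polyak_sgd(grad_f, theta0=np.ones(5), n_steps=100_000)
        print("last iterate error:    ", np.linalg.norm(last))
        print("averaged iterate error:", np.linalg.norm(averaged))

    On this toy problem the averaged iterate typically ends up much closer to the minimiser than the last iterate, which is the variance-reduction effect that the non-asymptotic bounds above quantify.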

    Location: Salle 106, Bât. 1R1
