~~NOCACHE~~
/* DO NOT EDIT THIS FILE */
/* THIS FILE WAS GENERATED */
/* EDIT THE FILE "indexheader" INSTEAD */
/* OR ACCESS THE DATABASE */
{{page>.:indexheader}}
\\ ==== Next talks ====
[[en:seminaires:StatP6P7:index|Statistics seminar]]\\
Tuesday November 18, 2025, 10:45AM, Jussieu, room 15-16 201\\
**Vincent Rivoirard** (CEREMADE) //PCA for point processes//
\\
We introduce a novel statistical framework for the analysis of replicated point processes that allows the study of point pattern variability at the population level. By treating point process realizations as random measures, we adopt a functional-analysis perspective and propose a form of functional Principal Component Analysis (fPCA) for point processes. The originality of our method is to base the analysis on the cumulative mass functions of the random measures, which yields a direct and interpretable analysis. Key theoretical contributions include establishing a Karhunen-Loève expansion for the random measures and a Mercer theorem for covariance measures. We establish convergence in a strong sense, and introduce the concept of principal measures, which can be seen as latent processes governing the dynamics of the observed point patterns. We propose an easy-to-implement estimation strategy for the eigenelements, for which parametric rates are achieved. We fully characterize the solutions of our approach for Poisson and Hawkes processes, and validate our methodology via simulations and diverse applications in seismology, single-cell biology and neurosciences, demonstrating its versatility and effectiveness.
Joint work with Victor Panaretos (EPFL), Franck Picard (ENS de Lyon) and Angelina Roche (Université Paris Cité).
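For orientation, a textbook sketch of the two objects named in the abstract, written here for the cumulative mass function of a random measure; this is a standard general form, not the talk's exact statements, and the notation (m, \lambda_k, \varphi_k, \xi_k, K) is assumed for illustration.
<code latex>
% Karhunen-Loève expansion of the cumulative mass function F of a random
% measure: m is the mean function, (\lambda_k, \varphi_k) the eigenpairs of
% the covariance operator, and the scores \xi_k are uncorrelated random
% variables with unit variance (all notation assumed for illustration).
F(t) = m(t) + \sum_{k \ge 1} \sqrt{\lambda_k}\, \xi_k\, \varphi_k(t)
% Mercer-type decomposition of the associated covariance kernel K:
K(s, t) = \sum_{k \ge 1} \lambda_k\, \varphi_k(s)\, \varphi_k(t)
</code>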
[[en:seminaires:StatP6P7:index|Statistics seminar]]\\
Tuesday December 2, 2025, 10:45AM, Sophie Germain, room 1013\\
**Clément Royer** (Dauphine - LAMSADE) //Line-search methods with restarting for nonconvex optimization//
\\
Complexity guarantees have grown in importance in smooth nonconvex optimization in recent years, fueled by interest from machine learning and theoretical computer science. Gradient descent is arguably the simplest method that can be endowed with complexity results in this setting, yet numerous algorithmic variants outperform gradient descent in practice. When only noisy estimates of functions and derivatives are available, variants of gradient descent with complexity guarantees have also been proposed, though other strategies have again proven more efficient in practice.
In this talk, I will present a line-search algorithmic framework with restarting that is endowed with complexity guarantees. Taking nonlinear conjugate gradient as a special case, I will show that a suitable restarting condition has minimal impact on practical performance while enabling complexity results to be proven. I will then explain how the restarting approach extends to other schemes, such as quasi-Newton methods, as well as to recently proposed line-search techniques for noisy optimization. In the latter setting, I will discuss which conditions on the noise allow complexity guarantees to be obtained, and study the practical effect of noise on restarting.
This talk is based on joint work with Albert Berahas, Rémi Chan--Renous-Legoubin and Michael O'Neill.
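As a rough illustration of the kind of scheme the abstract describes (a sketch under assumptions, not the algorithm of the talk): a Fletcher-Reeves nonlinear conjugate gradient iteration with Armijo backtracking, where the search direction is restarted to steepest descent whenever it fails a sufficient-descent test. The function name, the threshold eta, and all parameter values below are illustrative choices.
<code python>
# Minimal sketch: restarted nonlinear conjugate gradient with a backtracking
# line search. Illustrative only; names and parameters are assumptions.
import numpy as np

def cg_with_restart(f, grad, x0, eta=1e-4, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = -g  # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Restarting condition: if d is not a sufficient descent direction,
        # fall back to -g (this is the kind of test that enables complexity
        # results while rarely triggering in practice).
        if g @ d > -eta * np.linalg.norm(g) * np.linalg.norm(d):
            d = -g
        # Armijo backtracking line search along d.
        t, c = 1.0, 1e-4
        while t > 1e-16 and f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        # Fletcher-Reeves update for the next direction.
        beta = (g_new @ g_new) / (g @ g)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example usage on a simple smooth function:
# x_min = cg_with_restart(lambda x: (x**2).sum(), lambda x: 2*x, np.ones(5))
</code>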
[[en:seminaires:StatP6P7:index|Statistics seminar]]\\
Tuesday December 9, 2025, 10:45AM, Jussieu, room 15-16 201\\
**Pallavi Basu** (Indian School of Business) //To be announced.//
\\
[[en:seminaires:StatP6P7:index|Statistics seminar]]\\
Tuesday January 20, 2026, 10:45AM, Sophie Germain, room 1013\\
**Laurent Oudre** (ENS Paris Saclay) //To be announced.//
\\
{{page>.:info}}
\\ ==== Previous talks ====
\\ === Year 2025 ===
{{page>.:statp6p72025}}
\\ === Year 2024 ===
{{page>.:statp6p72024}}
\\ === Year 2023 ===
{{page>.:statp6p72023}}
\\ === Year 2022 ===
{{page>.:statp6p72022}}
\\ === Year 2021 ===
{{page>.:statp6p72021}}