~~NOCACHE~~
/* DO NOT EDIT THIS FILE */
/* THIS FILE WAS GENERATED */
/* EDIT THE FILE "indexheader" INSTEAD */
/* OR ACCESS THE DATABASE */
{{page>.:indexheader}}
\\ ==== Upcoming sessions ====
[[seminaires:SeminaireMAV:index|Séminaire Modélisation aléatoire du vivant]]\\
Wednesday, October 1, 2025, 11:00 am, room 16-26.209\\
**Bixuan Liu** (LPSM) //Identifiability of the VAR(1) model in a stationary setting//
\\
[[seminaires:SeminaireMAV:index|Séminaire Modélisation aléatoire du vivant]]\\
Wednesday, November 5, 2025, 11:00 am, room 16-26.209\\
**TBA** //Not yet announced.//
\\
[[seminaires:SeminaireMAV:index|Séminaire Modélisation aléatoire du vivant]]\\
Wednesday, December 3, 2025, 11:00 am, room 16-26.209\\
**Luis Almeida** (LPSM) //Not yet announced.//
\\
[[seminaires:SeminaireMAV:index|Séminaire Modélisation aléatoire du vivant]]\\
Wednesday, January 7, 2026, 11:00 am, room 16-26.209\\
**Valentin Schmutz** (Univ. College London) //Concentration of measure in "low-rank" biological neural networks//
\\
Recurrent neural networks with low-rank connectivity matrices are general and tractable models of collective dynamics in large networks. This class of models dates back to the seminal works of J. Hopfield (1982) and S. Amari (1972), and it still plays an instrumental role in computational neuroscience today. To highlight the analytical tractability of these models, I will first review some recent theoretical results concerning the case where the low-rank connectivity is random, the rank is kept fixed, and the number of neurons tends to infinity. In this case, we find that (i) the dynamics of the network converges to a neural field equation, (ii) the dynamics can be reduced to a latent, low-dimensional dynamical system, and (iii) the latent dynamics can be fully solved in certain special cases.
In the second part of the presentation, I will show that low-rank connectivity is associated with remarkable concentration of measure phenomena in networks of biological neurons. Considering a feedforward network setup where neurons in the first layer, modelled as Cox processes, transmit stochastic spikes, I will present a theorem stating that the network can behave as if each spiking neuron were transmitting its subthreshold membrane potential as both the rank of the connectivity and the number of neurons tend to infinity. This result could explain how, at the network level, neurons can transmit their subthreshold membrane potential fluctuations through sparse spikes. The proof of the theorem involves the so-called thin shell phenomenon, a well-known concentration phenomenon in high-dimensional probability.
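Point (ii) of the abstract, the reduction of the N-dimensional network to a latent low-dimensional system, can be illustrated with a minimal rank-1 sketch. All names, the choice of nonlinearity, and parameter values below are our own illustration, not the speaker's model: with connectivity J = (1/N) m nᵀ and an initial state on the span of m, the full network state remains proportional to m, and its coefficient obeys a one-dimensional equation.

```python
import numpy as np

# Illustrative sketch (assumed setup, not from the talk):
# full network  dx/dt = -x + (1/N) m (n . phi(x)),  phi = tanh.
# If x(0) = kappa0 * m, then x(t) = kappa(t) * m with the 1-D latent dynamics
#   dkappa/dt = -kappa + (1/N) sum_i n_i phi(kappa * m_i).

rng = np.random.default_rng(0)
N, dt, steps = 2000, 0.01, 500
m = rng.standard_normal(N)
n = rng.standard_normal(N) + 2.0 * m   # overlap with m gives nontrivial dynamics

kappa0 = 0.5
x = kappa0 * m                          # start on the rank-1 subspace
kappa = kappa0

for _ in range(steps):
    # full N-dimensional network (Euler step)
    x = x + dt * (-x + m * (n @ np.tanh(x)) / N)
    # 1-D latent dynamics driven by the same finite-N average
    kappa = kappa + dt * (-kappa + (n @ np.tanh(kappa * m)) / N)

# the network state stays proportional to m, with coefficient kappa
kappa_emp = (m @ x) / (m @ m)
print(abs(kappa_emp - kappa))  # agreement up to floating-point error
```

Because both updates are driven by the same finite-N average, the projection of the full state onto m tracks the latent variable exactly; the N → ∞ neural-field statements in the abstract replace that average by its limit.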
{{page>.:info}}
\\ ==== Past sessions ====
\\ === Year 2025 ===
{{page>.:seminairemav2025}}
\\ === Year 2024 ===
{{page>.:seminairemav2024}}
\\ === Year 2022 ===
{{page>.:seminairemav2022}}