2024 Fall
This semester's seminar series is organized by Jingyu Liu and Yujie Chen, and co-organized by the graduate student union of the School of Mathematical Sciences at Fudan University. The series is partially sponsored by the Shanghai Key Laboratory for Contemporary Applied Mathematics.
2024-09-12 16:10:00 - 17:00:00 @ Rm 1801, Guanghua East Tower
[poster]
- Title:
SOC-MartNet: A Martingale Neural Network for the Hamilton-Jacobi-Bellman Equation without Explicit $\inf_u H$ in Stochastic Optimal Controls
- Speaker: Shuixin Fang (Chinese Academy of Sciences)
- Advisor: Tao Zhou (Chinese Academy of Sciences)
Abstract:
In this talk, we introduce a martingale-based neural network, SOC-MartNet, for solving high-dimensional Hamilton-Jacobi-Bellman (HJB) equations, where no explicit expression is needed for the infimum of the Hamiltonian, $\inf_u H(t, x, u, z, p)$, and stochastic optimal control problems (SOCPs) with controls on both drift and volatility. We reformulate the HJB equation for the value function via two neural networks, one for the value function and one for the optimal control, together with two auxiliary stochastic processes: a Hamiltonian process and a cost process. The control and value networks are trained so that the associated Hamiltonian process is minimized, satisfying the minimum principle of a feedback SOCP, and the cost process becomes a martingale, which ensures that the value network solves the corresponding HJB equation. Moreover, to enforce the martingale property of the cost process, we employ an adversarial network and construct a loss function characterizing the projection (conditional-expectation) property of martingales. Numerical results show that the proposed SOC-MartNet is effective and efficient for solving HJB-type equations and SOCPs in dimensions up to 2000, within a small number of training iterations (fewer than 2000).
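The adversarial martingale condition in the abstract can be illustrated with a toy numerical check. This is a minimal sketch, not the speakers' method: the trained adversarial network is replaced by a single fixed test function `rho` (a hypothetical choice), and the cost-process increments are simulated rather than produced by value/control networks. The idea it shows is only that $\mathbb{E}[\rho(X_t)\,(M_{t+\Delta t}-M_t)]$ vanishes for a martingale increment but not for a drifted one, which is what the adversarial loss exploits.

```python
import numpy as np

rng = np.random.default_rng(0)

def martingale_loss(x_t, dM, rho):
    """Adversarial-style martingale loss: E[rho(X_t) * (M_{t+dt} - M_t)].

    For a true martingale, the conditional expectation of the increment dM
    given the state X_t is zero, so this expectation vanishes for every
    test function rho; an adversary would maximize over rho to expose
    any violation of the martingale property.
    """
    return float(np.mean(rho(x_t) * dM))

# Toy check: increments of a driftless process (a martingale)
# versus the same increments with an added drift (not a martingale).
x_t = rng.normal(size=10_000)       # simulated states at time t
noise = rng.normal(size=10_000)     # mean-zero (martingale) increments

rho = lambda x: np.tanh(x) + 1.0    # one fixed, hypothetical test function

loss_martingale = martingale_loss(x_t, noise, rho)       # close to 0
loss_drift = martingale_loss(x_t, noise + 0.5, rho)      # clearly nonzero
```

In the actual method, `rho` would itself be a neural network trained to maximize this quantity while the value and control networks are trained to drive it to zero.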
2024-09-19 16:10:00 - 17:00:00 @ Rm 1801, Guanghua East Tower
[poster]
- Title:
A Noncomputational Strategy Modulating Biological Rhythms
- Speaker: Zhaoyue Zhong (Fudan University)
- Advisor: Wei Lin (Fudan University)
Abstract:
In this talk, we introduce a mathematically rigorous control strategy for the concurrent modulation of amplitude and frequency in nonlinear dynamical systems, with a focus on oscillatory signals. The central challenge is adjusting one of these quantities independently while holding the other fixed, a problem of theoretical importance across various complex systems. By leveraging system nonlinearity, we decouple these quantities using a noncomputational approach. This method, supported by a robust mathematical framework, has been validated in representative biophysical systems, demonstrating its potential for future applications in controlling oscillatory dynamics across a wider range of complex systems.
Past Presentations
2024-09-05 16:10:00 - 17:00:00 @ Rm 1801, Guanghua East Tower
[poster]
- Title:
Inverse Approximation Theory of Recurrent Models for Learning Sequences
- Speaker: Shida Wang (National University of Singapore)
- Advisor: Qianxiao Li (National University of Singapore)
Abstract:
Learning long-term dependencies remains a significant challenge in sequence modelling. Despite extensive empirical evidence showing the difficulties recurrent models face in capturing such dependencies, the underlying theoretical reasons are not fully understood. In this talk, we present inverse approximation theorems for nonlinear recurrent neural networks and state-space models. Our analysis reveals that appropriate reparameterizations of recurrent weights are crucial for stably approximating targets with long-term memory. We demonstrate that a broad class of stable reparameterizations allows state-space models to consistently approximate any target functional sequence with decaying memory. Additionally, these reparameterizations mitigate the vanishing and exploding gradient problems commonly encountered in training recurrent models.
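The role of a stable reparameterization of recurrent weights can be seen in a toy example. This is a generic illustrative sketch, not the speakers' construction: it uses one common reparameterization for diagonal state-space models, $\lambda = -\exp(w)$, so that the recurrent eigenvalue is negative for every real value of the trainable parameter `w`, and the discretized recurrence therefore cannot blow up. The function name and discretization step are assumptions for the example.

```python
import numpy as np

def ssm_scan(w, u, dt=0.1):
    """Run a scalar diagonal state-space recurrence x' = lambda * x + u
    with the stable reparameterization lambda = -exp(w).

    Whatever real value the trainable parameter w takes, the eigenvalue
    lambda = -exp(w) is strictly negative, so the discrete update factor
    a = exp(dt * lambda) satisfies 0 < a < 1 and the hidden state stays
    bounded; an unconstrained lambda could cross zero and diverge.
    """
    lam = -np.exp(w)         # reparameterized eigenvalue, always < 0
    a = np.exp(dt * lam)     # discrete-time decay factor, always in (0, 1)
    x = 0.0
    for u_k in u:            # simple Euler-style recurrence over the input
        x = a * x + dt * u_k
    return x, a

# Even an extreme parameter value keeps the recurrence stable.
x, a = ssm_scan(w=3.0, u=np.ones(1000))
```

Gradient-based training moves `w` freely over the real line, yet every value of `w` maps to a stable recurrence, which is the mechanism behind the vanishing/exploding-gradient mitigation mentioned in the abstract.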