TopMath Alumni Speakers Series

Andre Milzarek: Convergence properties of stochastic optimization methods

21 June 2023 16:00 – 18:00
Image: Andre Milzarek sitting at a desk and giving a talk.

Prof. Andre Milzarek (6th TopMath year) from the School of Data Science (SDS) at The Chinese University of Hong Kong, Shenzhen, will give a talk on "Convergence properties of stochastic optimization methods under the Kurdyka-Łojasiewicz inequality" within the TopMath Alumni Speakers Series. The talk will take place on Wednesday, 21 June 2023, at 4 PM in Garching, room MI 00.10.011. If you would like to attend, please send an email to topmath@ma.tum.de by Monday, 19 June, for organizational reasons.

As part of the TopMath Alumni Speakers Series, TopMath regularly invites graduates of its program to share their experiences with students and doctoral candidates. They present their current projects in research or industry, speak about their career path and are then available for an informal exchange.

For the third Alumni Talk of 2023, TopMath is pleased to welcome Prof. Andre Milzarek. Andre Milzarek, TopMath year 2009/10, was a postdoctoral researcher at the Beijing International Center for Mathematical Research at Peking University. Since 2019, he has been an Assistant Professor at the School of Data Science of The Chinese University of Hong Kong, Shenzhen.

Andre Milzarek: Convergence properties of stochastic optimization methods under the Kurdyka-Łojasiewicz inequality

We present recent applications and extensions of the Kurdyka-Łojasiewicz (KL)-based analysis framework to stochastic optimization methodologies. The KL property has been utilized extensively in past decades to study the limiting behavior of optimization algorithms and of (sub)gradient flows in dynamical systems. Despite its popularity and overall success in the deterministic setting, the applicability of KL-based analysis techniques to stochastic algorithms is significantly hindered by the general lack of a descent property and by the more intricate dynamics of stochastic errors and step sizes. In this talk, we discuss the standard KL inequality-based convergence framework as well as novel applications to random reshuffling (RR) and other SGD-type algorithms.
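For readers unfamiliar with the KL property, one standard form of the inequality, as commonly stated in the nonsmooth-analysis literature (the notation below is the conventional one, not taken from the talk itself), reads:

```latex
% Kurdyka-Łojasiewicz (KL) inequality at a critical point \bar{x} of f:
% there exist \eta > 0, a neighborhood U of \bar{x}, and a concave
% desingularizing function \varphi : [0, \eta) \to [0, \infty) with
% \varphi(0) = 0, \varphi continuous, and \varphi' > 0 on (0, \eta),
% such that for all x \in U with f(\bar{x}) < f(x) < f(\bar{x}) + \eta:
\varphi'\bigl(f(x) - f(\bar{x})\bigr) \cdot \operatorname{dist}\bigl(0, \partial f(x)\bigr) \ge 1
```

Intuitively, the inequality bounds the subgradient norm from below near a critical point after reparameterizing the function values by the desingularizing function, which is what makes limiting convergence arguments for (sub)gradient-type methods possible.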

I will also share some of my experiences as a (tenure-track) assistant professor in China at The Chinese University of Hong Kong, Shenzhen.