KSIAM-MINDS-NIMS International Conference on Machine Learning and PDEs
Plenary Talks
Marco Cuturi (Apple and ENSAE/IP Paris)
Marco Cuturi is a research scientist in the MLR team at Apple, in Paris, as well as a professor of statistics at ENSAE / IP Paris. Before that, he was at Google Brain and an associate professor at Kyoto University. His research interests revolve around optimal transport and differentiable programming.
Differentiability: Matchings, Mappings and JKO Steps
I will present in this talk a few recent results linking several aspects of optimal transport with differentiable programming. I will present in particular how regularized approaches coupled with innovative neural architectures can be used to model population dynamics, and provide a new applicative perspective on the JKO flow of probability measures.
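As background for the regularized approaches the abstract refers to, entropic regularization of optimal transport is typically solved with Sinkhorn iterations. The sketch below is a standard textbook construction, not the speaker's specific method; the function name and toy histograms are illustrative:

```python
import numpy as np

def sinkhorn(a, b, C, eps=1.0, n_iters=500):
    """Entropy-regularized optimal transport between histograms a and b.

    Alternately rescales the rows and columns of the Gibbs kernel
    K = exp(-C / eps) until the plan P = diag(u) K diag(v) (approximately)
    matches both marginals.
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)   # rescale to match column marginals
        u = a / (K @ v)     # rescale to match row marginals
    return u[:, None] * K * v[None, :]

# Toy example: two 3-bin histograms with a squared-distance cost.
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
x = np.arange(3.0)
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(a, b, C)   # transport plan with marginals close to a and b
```

In practice, small regularization strengths require log-domain stabilization, which dedicated libraries provide.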
Weinan E (Peking University)
Weinan E is a professor at the Center for Machine Learning Research (CMLR) and the School of Mathematical Sciences at Peking University. He is also a professor at the Department of Mathematics and Program in Applied and Computational Mathematics at Princeton University. His main research interests are numerical algorithms, machine learning and multiscale modeling, with applications to chemistry, materials science and fluid mechanics.
Weinan E was awarded the ICIAM Collatz Prize in 2003, the SIAM Kleinman Prize in 2009, the SIAM von Karman Prize in 2014, the SIAM/ETH Zurich Peter Henrici Prize in 2019, and the ACM Gordon Bell Prize in 2020. He is a member of the Chinese Academy of Sciences and a fellow of SIAM, AMS and IOP. Weinan E is an invited plenary speaker at ICM 2022. He has also been an invited speaker at ICM 2002 and ICIAM 2007, as well as the AMS National Meeting in 2003. In addition, he has been an invited speaker at APS, ACS and AIChE annual meetings, the World Congress of Computational Mechanics, and the American Conference of Theoretical Chemistry.
Machine Learning and PDEs
I will give an overview of machine learning-based algorithms for solving PDEs, along with a critical assessment of the success, and the lack of success, so far in this direction. I will also discuss the critical issues that need to be addressed.
Jan Hesthaven (EPFL)
After receiving his PhD in 1995 from the Technical University of Denmark, Professor Hesthaven joined Brown University, USA, where he became Professor of Applied Mathematics in 2005. In 2013 he joined EPFL as Chair of Computational Mathematics and Simulation Science, and from 2017 to 2020 he was Dean of the School of Basic Sciences. Since 2021, he has served as Provost at EPFL. His research interests focus on the development, analysis, and application of high-order accurate methods for the solution of complex time-dependent problems, often requiring high-performance computing. A particular focus of his research has been the development of computational methods for linear and nonlinear wave problems, with recent emphasis on combining traditional methods with machine learning and neural networks. He has published 4 monographs and more than 175 research papers. He serves on the editorial boards of several leading journals and served as Editor-in-Chief of SIAM Journal on Scientific Computing until the end of 2021. He is an Alfred P. Sloan Fellow, a SIAM Fellow, a member of the Royal Danish Academy of Sciences and Letters, and an ICM invited speaker (2022).
Non-intrusive Reduced Order Models Using Physics-Informed Neural Networks
The development of reduced order models for complex applications, offering the promise of rapid and accurate evaluation of the output of complex models under parameterized variation, remains a very active research area. Applications are found in problems which require many evaluations, sampled over a potentially large parameter space, such as in optimization, control, uncertainty quantification, and in applications where a near real-time response is needed.
However, many challenges remain unresolved to secure the flexibility, robustness, and efficiency needed for general largescale applications, in particular for nonlinear and/or timedependent problems.
After giving a brief general introduction to projection-based reduced order models, we discuss the use of artificial feedforward neural networks to enable the development of fast and accurate non-intrusive models for complex problems. We demonstrate that this approach offers substantial flexibility and robustness for general nonlinear problems and enables the development of fast reduced order models for complex applications.
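The non-intrusive recipe just described (compute snapshots, extract a reduced basis, learn the parameter-to-coefficient map) can be sketched on a toy 1-D parameterized field. A polynomial least-squares fit stands in for the feedforward neural network; all names and the test problem are illustrative, not from the talk:

```python
import numpy as np

# Toy non-intrusive ROM: snapshots of u(x; mu) = sin(mu * x), POD basis by
# SVD, and a learned map from the parameter mu to the POD coefficients.
x = np.linspace(0.0, np.pi, 64)
mus_train = np.linspace(1.0, 2.0, 20)
S = np.stack([np.sin(mu * x) for mu in mus_train], axis=1)  # snapshot matrix

U, _, _ = np.linalg.svd(S, full_matrices=False)
r = 5
V = U[:, :r]                         # POD basis: first r left singular vectors
A = V.T @ S                          # reduced coefficients of each snapshot

# Regress coefficients on polynomial features of mu (a stand-in for the
# feedforward network used in the talk).
P = np.vander(mus_train, 5)          # features mu^4, ..., mu, 1
W, *_ = np.linalg.lstsq(P, A.T, rcond=None)

def rom_predict(mu):
    """Evaluate the reduced model at a new parameter; no full solve needed."""
    coeffs = np.vander([float(mu)], 5) @ W
    return (V @ coeffs.T).ravel()

u_rom = rom_predict(1.55)            # online stage: basis expansion only
u_true = np.sin(1.55 * x)            # full model, for comparison
```

The offline stage (snapshots and SVD) is expensive but done once; the online stage is a small matrix-vector product, which is what makes many-query applications tractable.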
In the second part of the talk, we discuss how to use residual-based neural networks in which knowledge of the governing equations is built into the network, and show that this has advantages both for training and for the overall accuracy of the model.
Time permitting, we finally discuss the use of reduced order models in the context of prediction, i.e. to estimate solutions in regions of the parameter space beyond that of the initial training. With an emphasis on the Mori-Zwanzig formulation for time-dependent problems, we discuss how to accurately account for the effect of the unresolved and truncated scales on the long-term dynamics, and show that accounting for these through a memory term significantly improves the predictive accuracy of the reduced order model.
George Karniadakis (Brown University, MIT & PNNL)
George Karniadakis is from Crete. He is a member of the National Academy of Engineering. He received his S.M. and Ph.D. from the Massachusetts Institute of Technology (1984 and 1987). He was appointed Lecturer in the Department of Mechanical Engineering at MIT and subsequently joined the Center for Turbulence Research at Stanford/NASA Ames. He joined Princeton University as Assistant Professor in the Department of Mechanical and Aerospace Engineering and as Associate Faculty in the Program of Applied and Computational Mathematics. He was a Visiting Professor at Caltech in the Aeronautics Department in 1993 and joined Brown University as Associate Professor of Applied Mathematics in the Center for Fluid Mechanics in 1994. After becoming a full professor in 1996, he continued to be a Visiting Professor and Senior Lecturer of Ocean/Mechanical Engineering at MIT. He is an AAAS Fellow (2018), Fellow of the Society for Industrial and Applied Mathematics (SIAM, 2010), Fellow of the American Physical Society (APS, 2004), Fellow of the American Society of Mechanical Engineers (ASME, 2003) and Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA, 2006). He received the SIAM/ACM Prize in Computational Science and Engineering (2021), the Alexander von Humboldt Award (2017), the SIAM Ralph E. Kleinman Prize (2015), the J. Tinsley Oden Medal (2013), and the CFD Award (2007) from the US Association for Computational Mechanics. His h-index is 121 and he has been cited over 67,000 times.
Polymorphic Neural Operators for Fast Predictions
We will introduce new bio-inspired neural networks (NNs) that learn functionals and nonlinear operators from functions and corresponding responses for system identification. The universal approximation theorem of operators is suggestive of the potential of NNs in learning from scattered data any continuous operator or complex system. We first generalize the theorem to deep neural networks, and subsequently we apply it to design a new composite NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals, Laplace transforms and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. More generally, DeepONet can learn multiscale operators spanning across many scales and trained by diverse sources of data simultaneously. We will also demonstrate how different neural operators that have appeared in the literature recently are simply special cases of the general DeepONet, which provides a plurality of neural operators for diverse applications and real-time inference.
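The branch/trunk inner-product structure can be illustrated with a deliberately simplified, fully linear stand-in: a linear "branch" acting on sensor values and polynomial "trunk" features of the query point, fitted by least squares to the antiderivative operator G(u)(y) = ∫_0^y u(t) dt. This sketches only the architecture's structure under those assumptions; real DeepONets use deep networks for both parts and gradient-based training:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20                                     # sensors for the input function
sensors = np.linspace(0.0, 1.0, m)
ys = np.linspace(0.0, 1.0, 25)             # query points for the output

def trunk(y):
    # Polynomial trunk features [1, y, ..., y^4] (stand-in for the trunk net).
    return np.vander(np.atleast_1d(y), 5, increasing=True)

def sample_pair():
    # Random cubic input u and its exact antiderivative on the query grid.
    c = rng.normal(size=4)
    u = np.polynomial.polynomial.polyval(sensors, c)
    anti = np.concatenate([[0.0], c / np.arange(1.0, 5.0)])
    return u, np.polynomial.polynomial.polyval(ys, anti)

# Assemble a least-squares system for the bilinear model
# G(u)(y) ~ u_sensors^T W trunk(y)   (the "branch" is linear here).
feats, targets = [], []
for _ in range(200):
    u, G = sample_pair()
    T = trunk(ys)
    feats.append(np.einsum("k,jl->jkl", u, T).reshape(len(ys), -1))
    targets.append(G)
X = np.concatenate(feats)
z = np.concatenate(targets)
W = np.linalg.lstsq(X, z, rcond=None)[0].reshape(m, 5)

def deeponet_predict(u_sensors, y):
    # Inner product of the branch output (u_sensors^T W) with trunk features.
    return u_sensors @ W @ trunk(y).T

u_test, G_test = sample_pair()
pred = deeponet_predict(u_test, ys)
```

Once fitted, the operator can be queried at any (input function, output location) pair, which is the key property shared with the full DeepONet.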
Benedict Leimkuhler (University of Edinburgh)
Prof. Benedict Leimkuhler has research interests in algorithms for MCMC sampling, e.g. Langevin integration algorithms and advanced sampling methods (simulated tempering, ensemble quasi-Newton samplers, diffusion-map-based enhanced sampling, sampling on manifolds, and adaptive/generalised Langevin), as well as specialised methods for deep neural network training. He has held fellowships from the Leverhulme Trust and the Alan Turing Institute, and is a Fellow of the Royal Society of Edinburgh. He currently leads the MAC-MIGS Centre for Doctoral Training, which trains around 80 PhD students in mathematical modelling, applied analysis and computing, and is on the editorial boards of the SIAM/ASA Journal on Uncertainty Quantification, the IMA Journal of Numerical Analysis, and the AIMS journals Journal of Computational Dynamics and Foundations of Data Science.
Constrained Dynamics Algorithms for Large Scale Statistical and Machine Learning Computation
My group is developing numerical algorithms for high dimensional sampling and optimisation challenges arising in statistics and machine learning applications, including methods for sampling in compact domains, methods for constraint manifolds, multirate schemes, as well as foundational work on Langevin and generalised Langevin dynamics. Our work builds on ordinary and stochastic differential equation schemes that have been developed in the setting of molecular simulation but now find widespread application in statistical computation.
In this talk I will focus particularly on the use of constraints in statistical inference and three related recent works: (i) large-timestep Langevin algorithms on manifolds developed particularly for biomolecular configurational sampling applications [1]; (ii) a randomized-time Riemannian Hamiltonian Monte Carlo algorithm (RT-RMHMC), which improves robustness in many applications compared to RMHMC [2]; and (iii) the use of constrained dynamics as a regularization strategy in neural network training [3].
[1] Efficient molecular dynamics using geodesic integration and solvent–solute splitting, B. Leimkuhler and C. Matthews, Proc. Roy. Soc. A, 472, 20160138, 2016.
https://doi.org/10.1098/rspa.2016.0138
[2] Randomized Time Riemannian Manifold Hamiltonian Monte Carlo, P. Whalley, D. Paulin, B. Leimkuhler, arXiv preprint 2206.04554, 2022.
https://arxiv.org/abs/2206.04554
[3] Better training using weight-constrained stochastic dynamics, B. Leimkuhler, T. Vlaar, T. Pouchon and A. Storkey, Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021.
https://proceedings.mlr.press/v139/leimkuhler21a/leimkuhler21a.pdf
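The simplest way to see the role of constraints in sampling is a project-back scheme: take an unconstrained Langevin step, then map the iterate back onto the constraint manifold. The sketch below is a crude illustration under simplifying assumptions (a flat potential on the unit circle and naive normalization), far simpler than the geodesic integrators of [1]:

```python
import numpy as np

rng = np.random.default_rng(1)

def constrained_langevin_circle(n_steps=20000, h=0.05):
    """Overdamped Langevin sampling constrained to the unit circle |x| = 1.

    Each iteration takes a free Euler-Maruyama step (the potential is flat,
    so the drift vanishes) and then projects back onto the constraint
    manifold by normalization. This illustrates only the project-back idea,
    not the large-timestep geodesic schemes discussed in the talk.
    """
    x = np.array([1.0, 0.0])
    samples = np.empty((n_steps, 2))
    for i in range(n_steps):
        x = x + np.sqrt(2.0 * h) * rng.normal(size=2)  # unconstrained step
        x = x / np.linalg.norm(x)                      # project onto |x| = 1
        samples[i] = x
    return samples

samples = constrained_langevin_circle()  # every iterate satisfies |x| = 1
```

Every sample lies exactly on the constraint surface by construction, while the injected noise still lets the chain explore the whole manifold.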
Shi Jin (Shanghai Jiao Tong University)
Shi Jin is the Director of the Institute of Natural Sciences and Chair Professor of Mathematics at Shanghai Jiao Tong University. He also serves as a co-director of the Shanghai National Center for Applied Mathematics.
He received a Feng Kang Prize of Scientific Computing in 2001. He is an inaugural Fellow of the American Mathematical Society (AMS) (2012), a Fellow of the Society for Industrial and Applied Mathematics (SIAM) (2013), an inaugural Fellow of the China Society for Industrial and Applied Mathematics (CSIAM) (2020), and was an Invited Speaker at the International Congress of Mathematicians in 2018. In 2021 he was elected a Foreign Member of Academia Europaea and a Fellow of the European Academy of Sciences.
Random Batch Methods for Interacting Particle Systems and Molecular Dynamics
We first develop random batch methods for classical interacting particle systems with a large number of particles. These methods use small but random batches for particle interactions, so the computational cost is reduced from O(N^2) per time step to O(N) for a system of N particles with binary interactions. For one of the methods, we give a particle-number-independent error estimate under some special interactions.
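The batching idea can be sketched for a toy 1-D system with harmonic binary interactions. This is an illustrative force estimator only, with hypothetical names; the papers apply the batching inside the time-stepping of the full dynamics:

```python
import numpy as np

rng = np.random.default_rng(2)

def full_force(x):
    # Direct O(N^2) evaluation of mean-field binary interactions:
    # F_i = -(1/(N-1)) * sum_{j != i} (x_i - x_j)   (harmonic attraction).
    diff = x[:, None] - x[None, :]
    return -diff.sum(axis=1) / (len(x) - 1)

def random_batch_force(x, p):
    """One random-batch draw: shuffle the particles into batches of size p
    and let each particle interact only within its own batch, so one
    evaluation costs O(N p) instead of O(N^2)."""
    N = len(x)
    force = np.zeros(N)
    idx = rng.permutation(N)
    for start in range(0, N, p):
        batch = idx[start:start + p]
        diff = x[batch, None] - x[None, batch]
        force[batch] = -diff.sum(axis=1) / (len(batch) - 1)
    return force

x = rng.normal(size=8)
f_exact = full_force(x)                # O(N^2) reference forces
f_batch = random_batch_force(x, p=2)   # one cheap O(N p) draw
```

A single draw is noisy, but with this 1/(p-1) scaling it agrees with the full force in expectation, which is the property the error analysis builds on; with p = N the estimator reduces to the direct sum.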
This method is also extended to molecular dynamics with Coulomb interactions, in the framework of Ewald summation. We will show its superior performance, in terms of computational efficiency and parallelizability, compared to current state-of-the-art methods (for example, PPPM) for the corresponding problems.