2022 KSIAM-MINDS-NIMS International Conference on Machine Learning and PDEs
Plenary Talks
Marco Cuturi (Apple and ENSAE/IP Paris)
Marco Cuturi is a research scientist on the MLR team at Apple in Paris, as well as a professor of statistics at ENSAE/IP Paris. Before that, he was at Google Brain and, earlier, an associate professor at Kyoto University. His research interests revolve around optimal transport and differentiable programming.
Differentiable Matchings, Mappings and JKO Steps
In this talk I will present a few recent results linking several aspects of optimal transport with differentiable programming. In particular, I will show how regularized approaches, coupled with innovative neural architectures, can be used to model population dynamics and provide a new applicative perspective on the JKO flow of probability measures.
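As a concrete illustration of the regularized approaches mentioned in the abstract, the following is a minimal sketch of entropic-regularized optimal transport solved by Sinkhorn iterations; the cost matrix, regularization strength, and toy point clouds are illustrative assumptions, not material from the talk.

```python
# Entropic-regularized optimal transport via Sinkhorn iterations
# (illustrative sketch; eps and the toy measures are assumptions).
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iters=500):
    """Approximate min_P <P, C> - eps*H(P) s.t. P @ 1 = a, P.T @ 1 = b."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # scale to match column marginal b
        u = a / (K @ v)                  # scale to match row marginal a
    return u[:, None] * K * v[None, :]   # transport plan

# Toy example: two 1-D point clouds with uniform weights.
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(0.0, 1.0, 60) ** 2
C = (x[:, None] - y[None, :]) ** 2       # squared-distance cost matrix
a = np.full(50, 1.0 / 50)
b = np.full(60, 1.0 / 60)
P = sinkhorn(a, b, C)
print("regularized OT cost:", np.sum(P * C))
```

Because each iteration is composed of smooth elementary operations, the whole solver can be differentiated end-to-end, which is what makes such regularized layers usable inside neural architectures.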
Weinan E (Peking University)
Weinan E is a professor at the Center for Machine Learning Research (CMLR) and the School of Mathematical Sciences at Peking University. He is also a professor at the Department of Mathematics and the Program in Applied and Computational Mathematics at Princeton University. His main research interests are numerical algorithms, machine learning, and multi-scale modeling, with applications to chemistry, materials science, and fluid mechanics.
Weinan E was awarded the ICIAM Collatz Prize in 2003, the SIAM Kleinman Prize in 2009, the SIAM von Karman Prize in 2014, the SIAM-ETH Peter Henrici Prize in 2019, and the ACM Gordon Bell Prize in 2020. He is a member of the Chinese Academy of Sciences and a fellow of SIAM, the AMS, and the IOP. He was an invited plenary speaker at ICM 2022, and has also been an invited speaker at ICM 2002, ICIAM 2007, and the AMS National Meeting in 2003. In addition, he has been an invited speaker at APS, ACS, and AIChE annual meetings, the World Congress of Computational Mechanics, and the American Conference on Theoretical Chemistry.
Machine Learning and PDEs
I will give an overview of machine learning-based algorithms for solving PDEs, offer a critical assessment of the successes and failures in this direction so far, and discuss the critical issues that still need to be addressed.
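For readers unfamiliar with this class of methods, here is a minimal sketch of one representative machine learning-based PDE solver: a physics-informed network for the 1-D Poisson problem u''(x) = -pi^2 sin(pi x) with u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x). The architecture, optimizer, and problem are illustrative choices, not the specific algorithms assessed in the talk.

```python
# Minimal physics-informed network for a 1-D Poisson problem
# (illustrative sketch; all hyperparameters are assumptions).
import math
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)           # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = -math.pi ** 2 * torch.sin(math.pi * x)
    residual = (d2u - f).pow(2).mean()                    # PDE residual loss
    xb = torch.tensor([[0.0], [1.0]])
    boundary = net(xb).pow(2).mean()                      # enforce u(0) = u(1) = 0
    loss = residual + boundary
    opt.zero_grad(); loss.backward(); opt.step()

# Compare against the exact solution u(x) = sin(pi x).
xt = torch.linspace(0, 1, 5).reshape(-1, 1)
print(torch.cat([net(xt), torch.sin(math.pi * xt)], dim=1))
```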
Jan Hesthaven (EPFL)
After receiving his PhD in 1995 from the Technical University of Denmark, Professor Hesthaven joined Brown University, USA, where he became Professor of Applied Mathematics in 2005. In 2013 he joined EPFL as Chair of Computational Mathematics and Simulation Science, and from 2017 to 2020 he served as Dean of the School of Basic Sciences. Since 2021 he has served as Provost of EPFL. His research interests focus on the development, analysis, and application of high-order accurate methods for the solution of complex time-dependent problems, often requiring high-performance computing. A particular focus of his research has been the development of computational methods for linear and non-linear wave problems, with recent emphasis on combining traditional methods with machine learning and neural networks. He has published 4 monographs and more than 175 research papers. He serves on the editorial boards of several leading journals and served as Editor-in-Chief of SIAM Journal on Scientific Computing until the end of 2021. He is an Alfred P. Sloan Fellow, a SIAM Fellow, a member of the Royal Danish Academy of Sciences and Letters, and an ICM invited speaker (2022).
Nonintrusive Reduced Order Models Using Physics Informed Neural Networks
The development of reduced order models for complex applications, offering the promise of rapid and accurate evaluation of the output of complex models under parameterized variation, remains a very active research area. Applications are found in problems that require many evaluations, sampled over a potentially large parameter space, such as optimization, control, and uncertainty quantification, and in applications where a near real-time response is needed.
However, many challenges remain unresolved to secure the flexibility, robustness, and efficiency needed for general large-scale applications, in particular for nonlinear and/or time-dependent problems.
After a brief general introduction to projection-based reduced order models, we discuss the use of artificial feedforward neural networks to enable the development of fast and accurate nonintrusive models for complex problems. We demonstrate that this approach offers substantial flexibility and robustness for general nonlinear problems and enables the development of fast reduced order models for complex applications.
In the second part of the talk, we discuss how to use residual-based neural networks, in which knowledge of the governing equations is built into the network, and show that this has advantages both for training and for the overall accuracy of the model.
Time permitting, we finally discuss the use of reduced order models in the context of prediction, i.e., to estimate solutions in regions of the parameter space beyond that of the initial training. With an emphasis on the Mori-Zwanzig formulation for time-dependent problems, we discuss how to accurately account for the effect of the unresolved and truncated scales on the long-term dynamics and show that accounting for these through a memory term significantly improves the predictive accuracy of the reduced order model.
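The nonintrusive construction described above can be sketched in a few lines: build a POD basis from snapshots, then regress the reduced coefficients on the parameter with a small feedforward network, so the online stage never touches the original solver. The toy parameterized field and the network sizes below are assumptions for illustration only.

```python
# Nonintrusive POD + neural network reduced order model (illustrative sketch).
import numpy as np
import torch

# Offline stage: snapshots of a toy parameterized field u(x; mu) on a grid.
x = np.linspace(0, 1, 200)
mus = np.linspace(0.5, 2.0, 40)
S = np.stack([np.sin(mu * np.pi * x) * np.exp(-mu * x) for mu in mus], axis=1)

# POD basis via SVD of the snapshot matrix; keep r modes.
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = 5
V = U[:, :r]                         # reduced basis, shape (200, r)
A = V.T @ S                          # reduced coefficients per snapshot, (r, 40)

# Nonintrusive map mu -> reduced coefficients via a small MLP.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, r))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
mu_t = torch.tensor(mus, dtype=torch.float32).reshape(-1, 1)
A_t = torch.tensor(A.T, dtype=torch.float32)
for _ in range(2000):
    loss = (net(mu_t) - A_t).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Online stage: evaluate the ROM at a new parameter without solving a PDE.
mu_new = 1.3
coeffs = net(torch.tensor([[mu_new]])).detach().numpy().ravel()
u_rom = V @ coeffs
u_true = np.sin(mu_new * np.pi * x) * np.exp(-mu_new * x)
print("ROM relative error:", np.linalg.norm(u_rom - u_true) / np.linalg.norm(u_true))
```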
George Karniadakis (Brown University, MIT & PNNL)
George Karniadakis is from Crete. He is a member of the National Academy of Engineering. He received his S.M. and Ph.D. from the Massachusetts Institute of Technology (1984/87). He was appointed Lecturer in the Department of Mechanical Engineering at MIT and subsequently joined the Center for Turbulence Research at Stanford/NASA Ames. He joined Princeton University as Assistant Professor in the Department of Mechanical and Aerospace Engineering and as Associate Faculty in the Program in Applied and Computational Mathematics. He was a Visiting Professor at Caltech in the Aeronautics Department in 1993, and joined Brown University as Associate Professor of Applied Mathematics in the Center for Fluid Mechanics in 1994. After becoming a full professor in 1996, he continued as Visiting Professor and Senior Lecturer of Ocean/Mechanical Engineering at MIT. He is an AAAS Fellow (2018-), a Fellow of the Society for Industrial and Applied Mathematics (SIAM, 2010-), a Fellow of the American Physical Society (APS, 2004-), a Fellow of the American Society of Mechanical Engineers (ASME, 2003-), and an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA, 2006-). He received the SIAM/ACM Prize in Computational Science and Engineering (2021), the Alexander von Humboldt Award (2017), the SIAM Ralph E. Kleinman Prize (2015), the J. Tinsley Oden Medal (2013), and the Computational Fluid Dynamics Award (2007) from the US Association for Computational Mechanics. His h-index is 121, and he has been cited over 67,000 times.
Polymorphic Neural Operators for Fast Predictions
We will introduce new bio-inspired neural networks (NNs) that learn functionals and nonlinear operators from functions and corresponding responses for system identification. The universal approximation theorem for operators is suggestive of the potential of NNs in learning, from scattered data, any continuous operator or complex system. We first generalize the theorem to deep neural networks, and subsequently apply it to design a new composite NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals, Laplace transforms, and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. More generally, DeepONet can learn multiscale operators spanning many scales, trained simultaneously on diverse sources of data. We will also demonstrate how different neural operators that have appeared recently in the literature are simply special cases of the general DeepONet, which provides a plurality of neural operators for diverse applications and real-time inference.
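The branch/trunk construction described in the abstract can be sketched as follows, here trained on a toy operator, the antiderivative G(u)(y) = \int_0^y u(t) dt; the sensor count, network widths, and synthetic data are illustrative assumptions rather than the talk's actual experiments.

```python
# Minimal DeepONet sketch: branch net encodes sensor values of the input
# function, trunk net encodes the query location, output is their dot product.
import torch

m, p = 50, 40                                 # sensors, latent dimension
branch = torch.nn.Sequential(torch.nn.Linear(m, 64), torch.nn.Tanh(),
                             torch.nn.Linear(64, p))
trunk = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, p))
params = list(branch.parameters()) + list(trunk.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

t = torch.linspace(0, 1, m)                   # sensor locations

def sample_batch(n=64):
    """Random inputs u(t) = a*sin(b*t) with their exact antiderivatives."""
    a = torch.rand(n, 1) * 2 - 1
    b = torch.rand(n, 1) * 4 + 1
    u = a * torch.sin(b * t)                  # (n, m) sensor values
    y = torch.rand(n, 1)                      # query locations
    Gu = a / b * (1 - torch.cos(b * y))       # exact integral of u on [0, y]
    return u, y, Gu

for step in range(3000):
    u, y, Gu = sample_batch()
    pred = (branch(u) * trunk(y)).sum(dim=1, keepdim=True)   # dot product
    loss = (pred - Gu).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final training loss:", loss.item())
```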
Benedict Leimkuhler (University of Edinburgh)
Prof. Benedict Leimkuhler has research interests in algorithms for MCMC sampling, e.g., Langevin integration algorithms and advanced sampling methods (simulated tempering, ensemble quasi-Newton samplers, diffusion-map-based enhanced sampling, sampling on manifolds, and adaptive/generalised Langevin), as well as specialised methods for deep neural network training. He has held fellowships from the Leverhulme Trust and the Alan Turing Institute, and is a Fellow of the Royal Society of Edinburgh. He currently leads the MAC-MIGS Centre for Doctoral Training, which will train around 80 PhD students in mathematical modelling, applied analysis, and computing, and he serves on the editorial boards of the SIAM/ASA Journal on Uncertainty Quantification, the IMA Journal of Numerical Analysis, and the AIMS journals Journal of Computational Dynamics and Foundations of Data Science.
Constrained Dynamics Algorithms for Large Scale Statistical and Machine Learning Computation
My group is developing numerical algorithms for high-dimensional sampling and optimisation challenges arising in statistics and machine learning applications, including methods for sampling in compact domains, methods on constraint manifolds, multirate schemes, and foundational work on Langevin and generalised Langevin dynamics. Our work builds on ordinary and stochastic differential equation schemes that were developed in the setting of molecular simulation but now find widespread application in statistical computation.
In this talk I will focus particularly on the use of constraints in statistical inference and three related recent works: (i) large time-step Langevin algorithms on manifolds developed particularly for bio-molecular configurational sampling applications [1]; (ii) a randomized time Riemannian Hamiltonian Monte Carlo algorithm (RT-RMHMC) which improves robustness in many applications compared to RMHMC [2]; and (iii) the use of constrained dynamics as a regularization strategy in neural network training [3].
[1] Efficient molecular dynamics using geodesic integration and solvent–solute splitting, B. Leimkuhler and C. Matthews, Proc. Roy. Soc. A, 472, 20160138, 2016.
https://doi.org/10.1098/rspa.2016.0138
[2] Randomized Time Riemannian Manifold Hamiltonian Monte Carlo, P. Whalley, D. Paulin, B. Leimkuhler, arXiv preprint 2206.04554, 2022.
https://arxiv.org/abs/2206.04554
[3] Better training using weight-constrained stochastic dynamics, B. Leimkuhler, T. Vlaar, T. Pouchon and A. Storkey, Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021.
https://proceedings.mlr.press/v139/leimkuhler21a/leimkuhler21a.pdf
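As a concrete example of the Langevin integration algorithms this line of work builds on, here is a minimal sketch of the BAOAB splitting scheme sampling a one-dimensional double-well potential; the constrained and manifold variants of [1] and [2] add projection steps that are omitted here, and the potential and parameters are illustrative assumptions.

```python
# BAOAB splitting for underdamped Langevin dynamics (illustrative sketch).
import numpy as np

def grad_U(q):
    return 4 * q * (q ** 2 - 1)        # U(q) = (q^2 - 1)^2, a double well

rng = np.random.default_rng(0)
h, gamma, beta = 0.05, 1.0, 2.0        # step size, friction, inverse temperature
c1 = np.exp(-gamma * h)                # exact OU damping over one step
c2 = np.sqrt((1 - c1 ** 2) / beta)     # matching fluctuation amplitude

q, p = 0.0, 0.0
samples = []
for step in range(200_000):
    p -= 0.5 * h * grad_U(q)                   # B: half momentum kick
    q += 0.5 * h * p                           # A: half position drift
    p = c1 * p + c2 * rng.standard_normal()    # O: Ornstein-Uhlenbeck step
    q += 0.5 * h * p                           # A: half position drift
    p -= 0.5 * h * grad_U(q)                   # B: half momentum kick
    samples.append(q)

print("mean of q^2 under exp(-beta*U):", np.mean(np.array(samples) ** 2))
```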
Shi Jin (Shanghai Jiao Tong University)
Shi Jin is the Director of the Institute of Natural Sciences and a Chair Professor of Mathematics at Shanghai Jiao Tong University. He also serves as a co-director of the Shanghai National Center for Applied Mathematics.
He received the Feng Kang Prize of Scientific Computing in 2001. He is an inaugural Fellow of the American Mathematical Society (AMS, 2012), a Fellow of the Society for Industrial and Applied Mathematics (SIAM, 2013), an inaugural Fellow of the China Society for Industrial and Applied Mathematics (CSIAM, 2020), and was an Invited Speaker at the International Congress of Mathematicians in 2018. In 2021 he was elected a Foreign Member of Academia Europaea and a Fellow of the European Academy of Sciences.
Random Batch Methods for Interacting Particle Systems and Molecular Dynamics
We first develop random batch methods for classical interacting particle systems with a large number of particles. These methods use small but random batches for particle interactions, so the computational cost per time step is reduced from O(N^2) to O(N) for a system of N particles with binary interactions. For one of the methods, we give a particle-number-independent error estimate for some special interactions.
This method has also been extended to molecular dynamics with Coulomb interactions, in the framework of Ewald summation. We will show its superior computational efficiency and parallelizability compared with current state-of-the-art methods (for example, PPPM) on the corresponding problems.
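A minimal sketch of the random batch idea is given below: at each time step the N particles are shuffled into batches of size p, and the binary interaction is evaluated only within each batch, with the 1/(N-1) prefactor replaced by 1/(p-1), reducing the per-step cost from O(N^2) to O(N). The interaction kernel and parameters are illustrative assumptions, not those of the paper.

```python
# Random batch method for a 1-D interacting particle system with diffusion
# (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
N, p, h, T = 1000, 2, 0.01, 1.0
X = rng.standard_normal(N)                   # 1-D particle positions

def kernel(dx):
    return -dx * np.exp(-dx ** 2)            # a smooth binary interaction; K(0) = 0

t = 0.0
while t < T:
    idx = rng.permutation(N)                 # fresh random batches each step
    for start in range(0, N, p):
        batch = idx[start:start + p]
        dx = X[batch][:, None] - X[batch][None, :]      # (p, p) in-batch differences
        force = kernel(dx).sum(axis=1) / (p - 1)        # rescaled batch interaction
        X[batch] += h * force
    X += np.sqrt(2 * h) * rng.standard_normal(N)        # diffusion term
    t += h

print("empirical variance at T = 1:", X.var())
```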