Details of this talk are as follows:
Time: October 17, 2022, 9:00–10:00
Lecture 5: Neural Nets and Numerical PDEs
Speaker: Zhiqiang Cai
Speaker Bio
Dr. Cai is a professor in the Department of Mathematics at Purdue University. He received his B.S. in Computer Science and his M.S. in Applied Mathematics from Huazhong University of Science and Technology, China, and his Ph.D. in Applied Mathematics from the University of Colorado in 1990. After serving as a postdoctoral fellow at Brookhaven National Laboratory and the Courant Institute of New York University, and as an assistant professor at the University of Southern California, he joined Purdue as an associate professor in 1996. He has been a summer visiting faculty member at Lawrence Livermore National Laboratory since 2003. His research is on the numerical solution of partial differential equations with applications in fluid and solid mechanics. His primary interests were accuracy control of computer simulations and self-adaptive numerical methods for complex systems before he recently turned to neural networks for solving challenging partial differential equations.
Abstract
In this talk, I will present our recent work on neural networks (NNs) and their application to numerical PDEs. The first part of the talk uses NNs to numerically solve scalar linear and nonlinear hyperbolic conservation laws whose solutions are discontinuous. I will show that the NN-based method for this type of problem has an advantage over mesh-based methods in terms of the number of degrees of freedom.
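As a rough illustration of the idea behind the first part (this is a minimal sketch, not the speaker's actual formulation for conservation laws), the following Python/PyTorch snippet fits a shallow ReLU network to a step profile by least squares. The target function, network width, and optimizer settings are all assumptions chosen for the example; the point is that a handful of NN parameters can represent a sharp front that a fixed mesh would need many cells to resolve.

```python
# Illustrative sketch only: least-squares fit of a shallow ReLU network to a
# discontinuous profile (a stand-in for a shock-type solution), showing that
# few degrees of freedom suffice to capture the jump. Assumes PyTorch.
import torch

torch.manual_seed(0)

# Target: u(x) = 1 for x < 0.3, 0 otherwise (assumed example function).
def u_exact(x):
    return (x < 0.3).float()

# Two-layer ReLU network with very few hidden neurons.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)

x = torch.linspace(0.0, 1.0, 400).unsqueeze(1)   # collocation points
y = u_exact(x)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(3000):
    opt.zero_grad()
    loss = torch.mean((model(x) - y) ** 2)        # discrete least-squares loss
    loss.backward()
    opt.step()

print(f"final least-squares loss: {loss.item():.3e} with "
      f"{sum(p.numel() for p in model.parameters())} parameters")
```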
The second part of the talk is on our adaptive network enhancement (ANE) method. The ANE method is developed to address a fundamental, open question: how to automatically design an optimal NN architecture for approximating functions and solutions of PDEs within a prescribed accuracy. Moreover, for training, i.e., solving the resulting non-convex optimization problem, the ANE method provides a natural process for obtaining a good initialization.
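To make the adaptive idea concrete, the sketch below shows a generic adaptive loop in the same spirit: train a small network, compare the loss against a prescribed tolerance, and, if the tolerance is not met, widen the hidden layer while reusing the trained weights as the initialization for the enlarged network. This is not the ANE algorithm itself; the target function, tolerance, growth size, and the helper `widen` are assumptions made for illustration.

```python
# Illustrative sketch only (not the ANE algorithm): adaptively grow a shallow
# ReLU network and warm-start each enlarged network from the previous one.
# New hidden neurons get zero output weights, so the previous approximation
# is preserved at the start of the next training stage.
import torch

def widen(model, extra):
    """Return a wider copy of a 1-hidden-layer ReLU net, warm-started from `model`."""
    old_hidden = model[0].out_features
    new = torch.nn.Sequential(
        torch.nn.Linear(1, old_hidden + extra),
        torch.nn.ReLU(),
        torch.nn.Linear(old_hidden + extra, 1),
    )
    with torch.no_grad():
        new[0].weight[:old_hidden] = model[0].weight
        new[0].bias[:old_hidden] = model[0].bias
        new[2].weight[:, :old_hidden] = model[2].weight
        new[2].weight[:, old_hidden:] = 0.0          # new neurons contribute nothing yet
        new[2].bias.copy_(model[2].bias)
    return new

def train(model, x, y, steps=2000, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(x) - y) ** 2)       # discrete least-squares loss
        loss.backward()
        opt.step()
    return loss.item()

torch.manual_seed(0)
x = torch.linspace(0.0, 1.0, 400).unsqueeze(1)
y = torch.sin(4.0 * torch.pi * x)                    # assumed target function

model = torch.nn.Sequential(torch.nn.Linear(1, 4), torch.nn.ReLU(), torch.nn.Linear(4, 1))
tol = 1e-3                                           # prescribed accuracy (assumed value)
for it in range(5):
    loss = train(model, x, y)
    print(f"iteration {it}: {model[0].out_features} neurons, loss {loss:.3e}")
    if loss < tol:
        break
    model = widen(model, extra=4)                    # enhance the network and warm-start
```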