
| Date | 2024-08-06 |
|---|---|
| Speaker | 고승찬 |
| Dept. | 인하대학교 (Inha University) |
| Room | 27-220 |
| Time | 15:00-18:00 |

In recent years, modern machine learning techniques based on deep neural networks have achieved tremendous success in various fields. From a mathematical point of view, deep learning essentially amounts to approximating a target function, relying on the approximation power of deep neural networks. It is therefore important to understand the approximation and generalization properties of neural networks in high dimensions. The primary objective of this talk is to mathematically analyze neural network approximation within the classical numerical analysis framework. We will identify the regularity of target functions that is suitable for neural network approximation, and investigate how these properties are reflected in the approximation and learning complexity of neural networks. Next, I will apply these theories to my recent work on an operator learning method for solving parametric PDEs. I will analyze the intrinsic structure of the proposed method through the theory described above, deriving results that are useful both theoretically and practically. Furthermore, I will present relevant numerical experiments, confirming that these theory-guided strategies can significantly improve the performance of the method.
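As a toy illustration of the function-approximation viewpoint described above (not part of the talk; the target function, network width, and hyperparameters below are hypothetical choices), the following sketch fits a one-hidden-layer tanh network to sin(πx) on [-1, 1] by plain full-batch gradient descent:

```python
import numpy as np

# Hypothetical minimal sketch: approximate f(x) = sin(pi * x) on [-1, 1]
# with a one-hidden-layer tanh network trained by full-batch gradient descent.
rng = np.random.default_rng(0)

N, H = 200, 32                       # number of training points, hidden width
X = np.linspace(-1.0, 1.0, N).reshape(-1, 1)
y = np.sin(np.pi * X)                # target function to approximate

W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)         # hidden-layer activations
    return h, h @ W2 + b2            # network output

lr = 0.1
for _ in range(8000):
    h, pred = forward(X)
    d = 2.0 * (pred - y) / N         # d(MSE)/d(pred)
    gW2 = h.T @ d;  gb2 = d.sum(0)
    dh = (d @ W2.T) * (1.0 - h**2)   # backpropagate through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((forward(X)[1] - y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

The rate at which such a network can drive the error down, as a function of width and of the target's regularity, is exactly the kind of question the approximation theory in the talk addresses.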

TEL 02-880-5857,6530,6531 / FAX 02-887-4694