Dynamic Weight Strategy of Physics-Informed Neural Networks for the 2D Navier–Stokes Equations
<p>Navier–Stokes: A snapshot of the analytical solution (<b>top</b>), the predicted solution (<b>middle</b>), and the error (<b>bottom</b>) at <i>t</i> = 1.</p>
<p>Navier–Stokes: Diagrams of the dynamic weights <i>w</i><sub><i>f</i></sub> and <i>w</i><sub><i>u</i></sub>.</p>
<p>Navier–Stokes: The history of the relative L2 error of dwPINNs and PINNs (<b>left</b>) and the training process of the dynamic weights (<b>right</b>).</p>
<p>Flow in a lid-driven cavity: Reference solution computed with a COMSOL solver, prediction of the dynamic weight strategy for PINNs, and absolute point-wise error. The relative L2 error of <i>u</i> is 6.512 × 10<sup>−2</sup>; the relative L2 error of <i>v</i> is 8.973 × 10<sup>−2</sup>.</p>
<p>The training process of the dynamic weights (<b>left</b>) and the relative L2 error of <span class="html-italic">u</span> and <span class="html-italic">v</span> (<b>right</b>).</p>
<p>Flow past a circular cylinder: Loss history of the two methods (<b>left</b>) and dynamic weights history (<b>right</b>).</p>
<p>Flow past a circular cylinder: A snapshot of the reference solution (<b>left</b>), the approximation by dwPINNs (<b>middle</b>), and the error (<b>right</b>) at <i>t</i> = 10.</p>
Abstract
1. Introduction
2. Preliminaries
2.1. Partial Differential Equations
2.2. Fully Connected Neural Networks
2.3. Optimization Method
3. Methodology
3.1. Dynamic Weights Strategy for Physics-Informed Neural Networks
Algorithm 1: Dynamic weights strategy for PINNs
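The details of Algorithm 1 are not reproduced on this page, but the idea of a dynamic-weight training loop can be sketched on a toy problem. Everything below is an illustrative assumption, not the paper's exact update rule: a scalar parameter `c` is fit to data `u = 2x`, with `r_f = c − 2` standing in for the PDE residual and `r_u` for the data misfit, while per-point weights `w_f`, `w_u` are boosted wherever the residual is large (the "hard-to-train" points) and renormalized to keep a mean of 1.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 16)   # collocation points on (0, 1)
u_data = 2.0 * x                # synthetic data for the toy model u = c * x

c = 0.0                         # trainable parameter (true value: 2)
w_f = np.ones_like(x)           # dynamic weights on the "PDE residual" term
w_u = np.ones_like(x)           # dynamic weights on the data-misfit term

for it in range(300):
    r_f = np.full_like(x, c - 2.0)   # stand-in "PDE residual" at each point
    r_u = c * x - u_data             # data/boundary misfit at each point
    loss = np.mean(w_f * r_f**2) + np.mean(w_u * r_u**2)

    # Gradient of the weighted loss w.r.t. c, derived by hand for this toy
    # model (a real PINN would use automatic differentiation and Adam).
    grad = np.mean(2.0 * w_f * r_f) + np.mean(2.0 * w_u * r_u * x)
    c -= 0.05 * grad

    # Dynamic-weight update (hypothetical rule): boost weights in proportion
    # to the normalized residual magnitude, then renormalize to mean 1.
    for w, r in ((w_f, r_f), (w_u, r_u)):
        mag = np.abs(r)
        w += 0.1 * mag / (mag.max() + 1e-12)
        w /= w.mean()

print(f"fitted c = {c:.4f}")  # converges toward the true slope 2
```

Because the weights are renormalized every iteration, the relative emphasis shifts toward poorly fit points without inflating the overall loss scale, which mirrors the balancing behavior described in Section 3.1.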
3.2. A Brief Note on the Errors Involved in the dwPINNs Methodology
3.3. Advantages of Dynamic Weight Strategy for Physics-Informed Neural Networks
1. The dynamic weight strategy reduces the optimization error: during training, each part of the loss function decreases more evenly, so the total loss becomes smaller and converges faster.
2. The method reduces the generalization error by increasing the weights of hard-to-train points during training, which in turn shrinks the error at those points.
4. Numerical Examples
4.1. Navier–Stokes Equations with Analytic Solution
4.2. Comparison of the Different PINNs Methods for 2D Navier–Stokes Equations
4.3. Inverse Problem: Two-Dimensional Navier–Stokes Equations
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Huang, P.; Feng, X.; He, Y. Two-level defect-correction Oseen iterative stabilized finite element methods for the stationary Navier–Stokes equations. Appl. Math. Model. 2013, 37, 728–741.
- Feng, X.; Li, R.; He, Y.; Liu, D. P1-Nonconforming quadrilateral finite volume methods for the semilinear elliptic equations. J. Sci. Comput. 2012, 52, 519–545.
- Feng, X.; He, Y. H1-Super-convergence of center finite difference method based on P1-element for the elliptic equation. Appl. Math. Model. 2014, 38, 5439–5455.
- D'Elia, M.; Perego, M.; Veneziani, A. A variational data assimilation procedure for the incompressible Navier–Stokes equations in hemodynamics. J. Sci. Comput. 2012, 52, 340–359.
- Baker, N.; Alexander, F.; Bremer, T.; Hagberg, A.; Kevrekidis, Y.; Najm, H.; Parashar, M.; Patra, A.; Sethian, J.; Wild, S.; et al. Workshop Report on Basic Research Needs for Scientific Machine Learning: Core Technologies for Artificial Intelligence; USDOE Office of Science (SC): Washington, DC, USA, 2019.
- Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366.
- Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283.
- Baydin, A.G.; Pearlmutter, B.A.; Radul, A.A.; Siskind, J.M. Automatic differentiation in machine learning: A survey. J. Mach. Learn. Res. 2018, 18, 1–43.
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
- Wight, C.L.; Zhao, J. Solving Allen–Cahn and Cahn–Hilliard equations using the adaptive physics informed neural networks. arXiv 2020, arXiv:2007.04542.
- Jagtap, A.D.; Kharazmi, E.; Karniadakis, G.E. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2020, 365, 113028.
- Mao, Z.; Jagtap, A.D.; Karniadakis, G.E. Physics-informed neural network for high-speed flows. Comput. Methods Appl. Mech. Eng. 2020, 360, 112789.
- Jagtap, A.D.; Karniadakis, G.E. Extended Physics-Informed Neural Networks (XPINNs): A Generalized Space-Time Domain Decomposition Based Deep Learning Framework for Nonlinear Partial Differential Equations. Commun. Comput. Phys. 2020, 28, 2002–2041.
- Yang, L.; Zhang, D.; Karniadakis, G.E. Physics-informed generative adversarial networks for stochastic differential equations. SIAM J. Sci. Comput. 2020, 42, A292–A317.
- Raissi, M. Forward-backward stochastic neural networks: Deep learning of high-dimensional partial differential equations. arXiv 2018, arXiv:1804.07010.
- Zhang, D.; Lu, L.; Guo, L.; Karniadakis, G.E. Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems. J. Comput. Phys. 2019, 397, 108850.
- Zhang, D.; Guo, L.; Karniadakis, G.E. Learning in modal space: Solving time-dependent stochastic PDEs using physics-informed neural networks. SIAM J. Sci. Comput. 2020, 42, A639–A665.
- Wang, S.; Teng, Y.; Perdikaris, P. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM J. Sci. Comput. 2021, 43, A3055–A3081.
- Elhamod, M.; Bu, J.; Singh, C.; Redell, M.; Ghosh, A.; Podolskiy, V.; Lee, W.C.; Karpatne, A. CoPhy-PGNN: Learning physics-guided neural networks with competing loss functions for solving eigenvalue problems. ACM Trans. Intell. Syst. Technol. 2020.
- Wang, S.; Yu, X.; Perdikaris, P. When and why PINNs fail to train: A neural tangent kernel perspective. J. Comput. Phys. 2022, 449, 110768.
- McClenny, L.; Braga-Neto, U. Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism. In Proceedings of the AAAI-MLPS 2021, Stanford, CA, USA, 22–24 March 2021.
- Lu, L.; Meng, X.; Mao, Z.; Karniadakis, G.E. DeepXDE: A deep learning library for solving differential equations. SIAM Rev. 2021, 63, 208–228.
- Wang, F.; Jiang, M.; Qian, C.; Yang, S.; Li, C.; Zhang, H.; Wang, X.; Tang, X. Residual attention network for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3156–3164.
- Dissanayake, M.; Phan-Thien, N. Neural-network-based approximations for solving partial differential equations. Commun. Numer. Methods Eng. 1994, 10, 195–201.
- Barakat, A.; Bianchi, P. Convergence and dynamical behavior of the ADAM algorithm for nonconvex stochastic optimization. SIAM J. Optim. 2021, 31, 244–274.
- Li, J.; Cheng, J.H.; Shi, J.Y.; Huang, F. Brief introduction of back propagation (BP) neural network algorithm and its improvement. In Advances in Computer Science and Information Engineering; Springer: Berlin/Heidelberg, Germany, 2012; pp. 553–558.
- Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
- Liu, D.C.; Nocedal, J. On the limited memory BFGS method for large scale optimization. Math. Program. 1989, 45, 503–528.
- Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signal Syst. 1989, 2, 303–314.
| | Error u | Error v | Error p | Training Time (s) |
|---|---|---|---|---|
| dwPINNs | | | | 5412.53 |
| PINNs | | | | 5314.67 |
| | dwPINNs | PINNs | SAPINNs | Learning Rate Annealing for PINNs |
|---|---|---|---|---|
| Relative L2 error | | | | |
| | 2000 | 4000 | 8000 | 10,000 |
|---|---|---|---|---|
| 200 | | | | |
| 1000 | | | | |
| 3000 | | | | |
| | 20 | 30 | 40 | 50 |
|---|---|---|---|---|
| 2 | | | | |
| 3 | | | | |
| 4 | | | | |
| | u | v | Training Time (s) |
|---|---|---|---|
| dwPINNs (clean) | 0.06% | 0.9% | 30,574 |
| dwPINNs ( noise) | 0.23% | 2.1% | 30,575 |
| PINNs | 0.99% | 2.30% | 51,475 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, S.; Feng, X. Dynamic Weight Strategy of Physics-Informed Neural Networks for the 2D Navier–Stokes Equations. Entropy 2022, 24, 1254. https://doi.org/10.3390/e24091254