Numerical Analysis meets Machine Learning
- 1st Edition, Volume 25 - June 13, 2024
- Editors: Siddhartha Mishra, Alex Townsend
- Language: English
- Hardback ISBN: 978-0-443-23984-7
- eBook ISBN: 978-0-443-23985-4
The Numerical Analysis Meets Machine Learning series highlights new advances in the field, with this new volume presenting interesting chapters. Each chapter is written by an international board of authors.
- Provides the authority and expertise of leading contributors from an international board of authors
- Presents the latest release in the Handbook of Numerical Analysis series
- This updated release includes the latest information on the intersection of numerical analysis and machine learning
Mathematicians, Computational Scientists, Engineers, Computer Scientists, Academic Instructors and Educators, Machine Learning Researchers, Data Analysts, Scientific Programmers, and Quantitative Analysts
- Cover image
- Title page
- Table of Contents
- Copyright
- Contributors
- Preface
- Chapter 1: Learning smooth functions in high dimensions
- Abstract
- 1. Introduction
- 2. Problem statement and notation
- 3. Holomorphic functions of infinitely many variables
- 4. Best s-term polynomial approximation
- 5. Limits of learnability from data
- 6. Learning sparse polynomial approximations from data
- 7. DNN existence theory
- 8. Practical existence theory: near-optimal DL
- 9. Epilogue
- References
- Chapter 2: Weak form-based data-driven modeling
- Abstract
- 1. Introduction
- 2. Weak form-based equation discovery
- 3. Theoretical results
- 4. Weak form-based parameter estimation
- 5. Weak form-based reduced order modeling
- 6. Conclusions
- Acknowledgements
- References
- Chapter 3: A mathematical guide to operator learning
- Abstract
- 1. Introduction
- 2. From numerical linear algebra to operator learning
- 3. Neural operator architectures
- 4. Learning neural operators
- 5. Conclusions and future challenges
- Acknowledgements
- References
- Chapter 4: The multiverse of dynamic mode decomposition algorithms
- Abstract
- 1. Introduction
- 2. The basics of DMD
- 3. Variants from the regression perspective
- 4. Variants from the Galerkin perspective
- 5. Variants that preserve structure
- 6. Further topics and open problems
- Acknowledgements
- References
- Chapter 5: Deep learning variational Monte Carlo for solving the electronic Schrödinger equation
- Abstract
- 1. Introduction
- 2. Mathematical preliminaries
- 3. Introduction to variational Monte Carlo (VMC)
- 4. Deep learning VMC
- 5. Results
- References
- Chapter 6: Theoretical foundations of physics-informed neural networks and deep neural operators
- Abstract
- 1. Introduction
- 2. Neural networks
- 3. Mathematical formulations
- 4. Approximation error for PINN in strong formulations
- 5. Training/optimization methods
- 6. Approximation theory with small weights
- 7. PINN with observational data
- 8. Deep operator networks
- Acknowledgements
- Appendix 6.A. Approximation of elementary functions with ReLU NNs
- Appendix 6.B. Approximation of piecewise polynomials
- Appendix 6.C. Approximation of horizon functions
- Appendix 6.D. Proof of Theorem 6.1
- References
- Chapter 7: Computability of optimizers for AI and data science
- Abstract
- 1. Introduction
- 2. Basic notions
- 3. Deep learning as a key technique of artificial intelligence
- 4. Computability of optimal values and existence of computable optimizers
- 5. Finding the optimizer is not effectively solvable
- Acknowledgements
- References
- Chapter 8: Neural Galerkin schemes for sequential-in-time solving of partial differential equations with deep networks
- Abstract
- 1. Introduction
- 2. The need for nonlinear parametrizations in approximating solution fields of PDEs
- 3. Neural Galerkin schemes based on the Dirac-Frenkel variational principle and deep networks
- 4. Adaptive sampling in Neural Galerkin schemes
- 5. Randomized sparse Neural Galerkin schemes
- 6. Conclusions
- References
- Chapter 9: Operator learning
- Abstract
- 1. Introduction
- 2. Operator learning
- 3. Specific supervised learning architectures
- 4. Universal approximation
- 5. Quantitative error and complexity estimates
- 6. Conclusions
- Acknowledgements
- References
- Chapter 10: A structure-preserving domain decomposition method for data-driven modeling
- Abstract
- 1. Introduction
- 2. Relation to previous work
- 3. Local learning of Whitney form elements
- 4. Mortar method
- 5. Numerical results
- Appendix 10.A. Technical details
- References
- Chapter 11: Two-layer neural networks for partial differential equations: optimization and generalization theory
- Abstract
- 1. Introduction
- 2. Deep learning-based PDE solvers
- 3. Main results
- 4. Global convergence of gradient descent
- 5. A priori estimates of generalization error for two-layer neural networks
- 6. Conclusion
- Acknowledgements
- References
- Index
- No. of pages: 588
- Imprint: North Holland