
Data Assimilation for the Geosciences
From Theory to Application
- 2nd Edition - November 16, 2022
- Imprint: Elsevier
- Author: Steven J. Fletcher
- Language: English
- Paperback ISBN: 978-0-323-91720-9
- eBook ISBN: 978-0-323-97253-6
Data Assimilation for the Geosciences: From Theory to Application, Second Edition brings together in one place all of the mathematical and statistical background knowledge needed to formulate data assimilation systems. It includes practical exercises that enable readers both to apply the theory in its theoretical formulation and to code it with toy problems to verify their understanding. It also demonstrates how data assimilation systems are implemented in larger-scale fluid dynamical problems related to the land surface, the atmosphere, the ocean, and other geophysical situations. The second edition has been revised to cover current research in data assimilation and how the techniques are applied, and it introduces how machine learning and artificial intelligence interface with and aid data assimilation. In addition to appealing to students and researchers across the geosciences, this edition also appeals to new students and scientists in the field of data assimilation, as it consolidates even more information on the techniques, research, and applications into one source.
- Includes practical exercises and solutions that enable readers both to apply the theory in its theoretical formulation and to code the theory themselves
- Provides, in one place, the mathematical and statistical background knowledge needed to formulate data assimilation systems
- New to this edition: new topics such as Observing System Experiments (OSE) and Observing System Simulation Experiments (OSSE), and expanded coverage of machine learning and artificial intelligence approaches
Students at the graduate level; those starting to work on data assimilation (DA) at a research or operational prediction center who do not have a DA background; and those starting to work in industry who need to understand what DA does and how to implement a version for their requirements
- Cover image
- Title page
- Table of Contents
- Copyright
- Chapter 1: Introduction
- Chapter 2: Overview of Linear Algebra
- 2.1. Properties of Matrices
- 2.1.1. Matrix Multiplication
- 2.1.2. Transpose of a Matrix
- 2.1.3. Determinants of Matrices
- 2.1.4. Inversions of Matrices
- 2.1.5. Rank, Linear Independence and Dependence
- 2.1.6. Matrix Structures
- 2.2. Matrix and Vector Norms
- 2.2.1. Vector Norms
- 2.2.2. Matrix Norms
- 2.2.3. Conditioning of Matrices
- 2.2.4. Matrix Condition Number
- 2.3. Eigenvalues and Eigenvectors
- 2.4. Matrix Decompositions
- 2.4.1. Gaussian Elimination and the LU Decomposition
- 2.4.2. Cholesky Decomposition
- 2.4.3. The QR Decomposition
- 2.4.4. Diagonalization
- 2.4.5. Singular Value Decomposition
- 2.5. Sherman-Morrison-Woodbury Formula
- 2.6. Summary
- Chapter 3: Univariate Distribution Theory
- 3.1. Random Variables
- 3.2. Discrete Probability Theory
- 3.2.1. Discrete Random Variables
- 3.3. Continuous Probability Theory
- 3.4. Discrete Distribution Theory
- 3.4.1. Binomial Distribution
- 3.4.2. Geometric Distribution
- 3.4.3. Poisson Distribution
- 3.4.4. Discrete Uniform Distribution
- 3.5. Expectation and Variance of Discrete Random Variables
- 3.5.1. Mean of the Binomial Distribution
- 3.5.2. Mean of the Geometric Distribution
- 3.5.3. Mean of the Poisson Distribution
- 3.5.4. Mean of the Discrete Uniform Distribution
- 3.5.5. Variance of a Discrete Probability Mass Function
- 3.5.6. Variance of the Binomial Distribution
- 3.5.7. Variance of the Geometric Distribution
- 3.5.8. Variance of the Poisson Distribution
- 3.5.9. Variance of the Discrete Uniform Distribution
- 3.6. Moments and Moment-Generating Functions
- 3.6.1. Moment-Generating Functions for Probability Mass Functions
- 3.6.2. Binomial Distribution Moment-Generating Function
- 3.6.3. Geometric Distribution Moment-Generating Function
- 3.6.4. Poisson Moment-Generating Function
- 3.6.5. Discrete Uniform Distribution Moment-Generating Function
- 3.7. Continuous Distribution Theory
- 3.7.1. Gaussian (Normal) Distribution
- 3.7.2. Moments of the Gaussian Distribution
- 3.7.3. Moment-Generating Functions for Continuous Probability Density Functions
- 3.7.4. Median of the Gaussian Distribution
- 3.7.5. Mode of the Univariate Gaussian Distribution
- 3.8. Lognormal Distribution
- 3.8.1. Moments of the Lognormal Distribution
- 3.8.2. Geometric Behavior of the Lognormal
- 3.8.3. Median of the Univariate Lognormal Distribution
- 3.8.4. Mode of the Lognormal Distribution
- 3.9. Reverse Lognormal Distribution
- 3.9.1. Mean of the Reverse Lognormal Distribution
- 3.9.2. Variance of the Reverse Lognormal Distribution
- 3.9.3. Skewness of the Reverse Lognormal Distribution
- 3.9.4. Kurtosis of the Reverse Lognormal Distribution
- 3.9.5. Median of the Reverse Lognormal Distribution
- 3.9.6. Mode of the Reverse Lognormal Distribution
- 3.10. Exponential Distribution
- 3.11. Gamma Distribution
- 3.11.1. Moment-Generating Function for the Gamma Distribution
- 3.11.2. Skewness of the Gamma Distribution
- 3.11.3. Kurtosis of the Gamma Distribution
- 3.11.4. Median of the Gamma Distribution
- 3.11.5. Mode of the Gamma Distribution
- 3.11.6. Remarks About the Gamma Distribution and the Gaussian Distribution
- 3.11.7. Properties of Gamma-Distributed Random Variables
- 3.12. Inverse Gamma Distribution
- 3.12.1. Moments of the Inverse-Gamma Distribution
- 3.12.2. Skewness of the Inverse-Gamma Distribution
- 3.12.3. Kurtosis of the Inverse-Gamma Distribution
- 3.12.4. Mode of the Inverse-Gamma Distribution
- 3.13. Beta Distribution
- 3.13.1. Moments of the Beta Distribution
- 3.13.2. Median of the Beta Distribution
- 3.13.3. Mode of the Beta Distribution
- 3.14. Chi-Squared (χ2) Distribution
- 3.14.1. Moments of the Chi-Squared Distribution
- 3.14.2. Median of the Chi-Squared Distribution
- 3.14.3. Mode of the Chi-Squared Distribution
- 3.14.4. Relationships to Other Distributions
- 3.15. Rayleigh Distribution
- 3.15.1. Moment-Generating Function for the Rayleigh Distribution
- 3.15.2. Moments of the Rayleigh Distribution
- 3.15.3. Skewness of the Rayleigh Distribution
- 3.15.4. Kurtosis of the Rayleigh Distribution
- 3.15.5. Median of the Rayleigh Distribution
- 3.15.6. Mode of the Rayleigh Distribution
- 3.16. Weibull Distribution
- 3.16.1. Moments of the Weibull Distribution
- 3.16.2. Skewness and Kurtosis of the Weibull Distribution
- 3.16.3. Mode of the Weibull Distribution
- 3.17. Gumbel Distribution
- 3.17.1. Moments of the Gumbel Distribution
- 3.17.2. Differentiating Gamma Functions
- 3.17.3. Returning to the Moments of the Gumbel Distribution
- 3.17.4. Skewness of a Gumbel Distribution
- 3.17.5. Kurtosis of the Gumbel Distribution
- 3.17.6. Median of the Gumbel Distribution
- 3.17.7. Mode of the Gumbel Distribution
- 3.18. Summary of the Descriptive Statistics, Moment-Generating Functions, and Moments for the Univariate Distribution
- 3.19. Summary
- Chapter 4: Multivariate Distribution Theory
- 4.1. Descriptive Statistics for Multivariate Density Functions
- 4.1.1. Multivariate Moment-Generating Functions
- 4.1.2. Moments of Multivariate Distributions
- 4.1.3. Second-Order Moments: Variance and Covariance
- 4.1.4. Third-Order Moments: Skewness and Co-Skewness
- 4.1.5. Fourth-Order Moments: Kurtosis and Co-kurtosis
- 4.1.6. Mode of Multivariate Distribution
- 4.1.7. Median of Multivariate Distribution
- 4.2. Gaussian Distribution
- 4.2.1. Bivariate Gaussian Distribution
- 4.2.2. Medians of the Bivariate Gaussian Distribution
- 4.2.3. Mode of the Bivariate Lognormal
- 4.2.4. Multivariate Gaussian Distribution
- 4.3. Lognormal Distribution
- 4.3.1. Bivariate Lognormal Distribution
- 4.3.2. Moments of the Bivariate Lognormal Distribution
- 4.3.3. Median of the Bivariate Lognormal Distribution
- 4.3.4. Maximum Likelihood State of a Bivariate Lognormal Distribution
- 4.3.5. Multivariate Lognormal Distribution
- 4.4. Mixed Gaussian-Lognormal Distribution
- 4.4.1. Moments of the Bivariate Mixed Gaussian-Lognormal Distribution
- 4.4.2. Median of the Mixed Gaussian-Lognormal Distribution
- 4.4.3. Maximum Likelihood Estimate for the Mixed Gaussian and Lognormal Distribution
- 4.4.4. Diagrams of the Bivariate Gaussian-Lognormal Distribution
- 4.5. Multivariate Mixed Gaussian-Lognormal Distribution
- 4.5.1. Trivariate and Quadvariate Mixed Distribution
- 4.5.2. Mode of the Multivariate Mixed Distribution
- 4.6. Reverse Lognormal Distribution
- 4.6.1. Multivariate Reverse Lognormal Distribution
- 4.6.2. Combining With Gaussian Distribution
- 4.6.3. Combining With a Lognormal Distribution
- 4.6.4. Combining Multivariate Gaussian, Lognormal, and Reverse-Lognormal Distributions
- 4.7. Gamma Distribution
- 4.7.1. Bivariate Gamma Distribution
- 4.7.2. Multivariate Gamma Distribution
- 4.8. Summary
- Chapter 5: Introduction to Calculus of Variation
- 5.1. Examples of Calculus of Variation Problems
- 5.1.1. Shortest/Minimum Distance
- 5.1.2. Brachistochrone Problem
- 5.1.3. Minimum Surface Area
- 5.1.4. Dido's Problem—Maximum Enclosed Area for a Given Perimeter Length
- 5.1.5. General Form of Calculus of Variation Problems
- 5.2. Solving Calculus of Variation Problems
- 5.2.1. Special Cases for Euler's Equations
- 5.2.2. Transversality Conditions
- 5.3. Functional With Higher-Order Derivatives
- 5.4. Three-Dimensional Problems
- 5.5. Functionals With Constraints
- 5.6. Functional With Extremals That Are Functions of Two or More Variables
- 5.6.1. Three-Dimensional Problems
- 5.7. Summary
- Chapter 6: Introduction to Control Theory
- 6.1. The Control Problem
- 6.2. The Uncontrolled Problem
- 6.2.1. Fundamental Solutions
- 6.2.2. Properties of the State Transition Matrix
- 6.2.3. Time-Invariant Case
- 6.2.4. Properties of Exponential Matrices
- 6.2.5. Eigenvalues/Vectors Approach for Finding the State Transition Matrix
- 6.3. The Controlled Problem
- 6.3.1. Controllability
- 6.3.2. Equivalence
- 6.4. Observability
- 6.5. Duality
- 6.6. Stability
- 6.6.1. Algebraic Stability Conditions
- 6.7. Feedback
- 6.7.1. Observers and State Estimators
- 6.8. Summary
- Chapter 7: Optimal Control Theory
- 7.1. Optimizing Scalar Control Problems
- 7.2. Multivariate Case
- 7.3. Autonomous (Time-Invariant) Problem
- 7.4. Extension to General Boundary Conditions
- 7.4.1. Extension of Calculus of Variation Theory
- 7.4.2. Optimal Control Problems With General Boundary Conditions
- 7.5. Free End Time Optimal Control Problems
- 7.5.1. Extension of the Calculus of Variation Theory
- 7.5.2. Applying the Theory to Control Problems
- 7.6. Piecewise Smooth Calculus of Variation Problems
- 7.6.1. Extension of Calculus of Variation Techniques
- 7.6.2. Application to the Optimal Control Problem
- 7.7. Maximization of Constrained Control Problems
- 7.7.1. Constrained Control Problems
- 7.8. Two Classical Optimal Control Problems
- 7.9. Summary
- Chapter 8: Numerical Solutions to Initial Value Problems
- 8.1. Local and Truncation Errors
- 8.2. Linear Multistep Methods
- 8.3. Stability
- 8.4. Convergence
- 8.4.1. Explicit and Implicit Numerical Scheme
- 8.4.2. Dahlquist Convergence Theorem Example
- 8.5. Runge-Kutta Schemes
- 8.5.1. Explicit Runge-Kutta Methods
- 8.5.2. Consistency and Stability of Explicit Runge-Kutta Methods
- 8.5.3. Derivation of the Fourth-Order Runge-Kutta Scheme
- 8.6. Numerical Solutions to Initial Value Partial Differential Equations
- 8.6.1. Heat Equation
- 8.6.2. Numerical Approach
- 8.6.3. Norms and the Maximum Principle
- 8.6.4. Implementing and Solving the Implicit Equation
- 8.6.5. θ-Methods
- 8.6.6. More Generous Stability Condition
- 8.7. Wave Equation
- 8.7.1. Forward-Time, Centered-Space
- 8.7.2. Explicit Upwind
- 8.7.3. Implicit Upwind
- 8.7.4. Box Scheme
- 8.7.5. Lax-Wendroff Scheme
- 8.8. Courant-Friedrichs-Lewy Condition
- 8.9. Summary
- Chapter 9: Numerical Solutions to Boundary Value Problems
- 9.1. Types of Differential Equations
- 9.2. Shooting Methods
- 9.2.1. Nonlinear Problems
- 9.3. Finite Difference Methods
- 9.3.1. Truncation Error
- 9.3.2. Mixed Boundary Conditions
- 9.4. Self-Adjoint Problems
- 9.5. Error Analysis
- 9.5.1. Irreducibility
- 9.6. Partial Differential Equations
- 9.6.1. Truncation Error
- 9.6.2. General Natural Ordering
- 9.6.3. Error Bound on Numerical Solution
- 9.6.4. Mixed Boundary Conditions
- 9.7. Self-Adjoint Problem in Two Dimensions
- 9.7.1. Solution Methods for Linear Matrix Equations
- 9.7.2. Jacobi Method
- 9.7.3. Gauss-Seidel
- 9.7.4. Successive Over-Relaxation Method
- 9.8. Periodic Boundary Conditions
- 9.9. Summary
- Chapter 10: Introduction to Semi-Lagrangian Advection Methods
- 10.1. History of Semi-Lagrangian Approaches
- 10.2. Derivation of Semi-Lagrangian Approach
- 10.3. Interpolation Polynomials
- 10.3.1. Lagrange Interpolation Polynomials
- 10.3.2. Newton Divided Difference Polynomials
- 10.3.3. Hermite Interpolating Polynomials
- 10.3.4. Cubic Spline Interpolation Polynomials
- 10.3.5. Shape-Conserving Semi-Lagrangian Advection
- 10.4. Stability of Semi-Lagrangian Schemes
- 10.4.1. Stability Analysis of the Linear Lagrange Interpolation
- 10.4.2. Stability Analysis of the Quadratic Lagrange Interpolation
- 10.4.3. Stability Analysis of the Cubic Lagrange Interpolation
- 10.4.4. Stability Analysis of the Cubic Hermite Semi-Lagrangian Interpolation Scheme
- 10.4.5. Stability Analysis of the Cubic Spline Semi-Lagrangian Interpolation Scheme
- 10.5. Consistency Analysis of Semi-Lagrangian Schemes
- 10.6. Semi-Lagrangian Schemes for Non-Constant Advection Velocity
- 10.7. Semi-Lagrangian Scheme for Non-Zero Forcing
- 10.8. Example: 2D Quasi-Geostrophic Potential Vorticity (Eady Model)
- 10.8.1. Numerical Approximations for the Eady Model
- 10.8.2. Numerical Approximations to the Advection Equation
- 10.8.3. Numerical Approximation to the Laplace Equation in the Interior
- 10.8.4. Buoyancy Advection on the Boundaries: b0′=0, b1′=αsin(KΔx)
- 10.8.5. Conditioning
- 10.8.6. QGPV ≠ 0
- 10.9. Summary
- Chapter 11: Introduction to Finite Element Modeling
- 11.1. Solving the Boundary Value Problem
- 11.2. Weak Solutions of Differential Equation
- 11.2.1. Heat Development Due to Hydration of Concrete
- 11.2.2. Torsion of a Bar of Equilateral Triangle Cross Section
- 11.3. Accuracy of the Finite Element Approach
- 11.4. Pin Tong
- 11.5. Finite Element Basis Functions
- 11.5.1. One Dimension
- 11.5.2. Two Dimensions
- 11.6. Coding Finite Element Approximations for Triangle Elements
- 11.6.1. Square Elements
- 11.7. Isoparametric Elements
- 11.8. Summary
- Chapter 12: Numerical Modeling on the Sphere
- 12.1. Vector Operators in Spherical Coordinates
- 12.1.1. Spherical Unit Vectors
- 12.2. Spherical Vector Derivative Operators
- 12.3. Finite Differencing on the Sphere
- 12.3.1. Map Projections
- 12.3.2. Grid-Point Representations of the Sphere
- 12.3.3. Different Grid Configuration
- 12.3.4. Vertical Staggering Grids
- 12.4. Introduction to Fourier Analysis
- 12.4.1. Fourier Series
- 12.4.2. Fourier Transforms
- 12.4.3. Laplace Transforms
- 12.5. Spectral Modeling
- 12.5.1. Sturm-Liouville Theory
- 12.5.2. Legendre Differential Equation
- 12.5.3. Legendre Polynomials
- 12.5.4. Spherical Harmonics
- 12.5.5. Legendre Transforms
- 12.5.6. Spectral Methods on the Sphere
- 12.6. Summary
- Chapter 13: Tangent Linear Modeling and Adjoints
- 13.1. Additive Tangent Linear and Adjoint Modeling Theory
- 13.1.1. Derivation of the Linearized Model
- 13.1.2. Adjoints
- 13.1.3. Differentiating the Code to Derive the Adjoint
- 13.1.4. Test of the Tangent Linear and Adjoints Models
- 13.2. Multiplicative Tangent Linear and Adjoint Modeling Theory
- 13.3. Examples of Adjoint Derivations
- 13.3.1. Lorenz 63 Model
- 13.3.2. Eady Model
- 13.3.3. Tangent Linear Approximations to Semi-Lagrangian Schemes
- 13.3.4. Adjoint of Spectral Transforms
- 13.4. Perturbation Forecast Modeling
- 13.4.1. Example With a 1D Shallow Water Equations Model
- 13.5. Adjoint Sensitivities
- 13.6. Singular Vectors
- 13.6.1. Observational Impact
- 13.7. Summary
- Chapter 14: Observations
- 14.1. Conventional Observations
- 14.1.1. Radiosondes
- 14.1.2. Microwave Radiometer
- 14.1.3. Infrared Sky Imager
- 14.1.4. Micropulse Lidar
- 14.1.5. Photometer
- 14.1.6. SNOTEL
- 14.1.7. SCAN
- 14.1.8. Airborne Observations
- 14.1.9. Ocean
- 14.1.10. Radar
- 14.2. Remote Sensing
- 14.2.1. Radiative Transfer Modeling
- 14.2.2. Satellite Characteristics
- 14.2.3. Infrared
- 14.2.4. Microwave
- 14.2.5. Visible
- 14.2.6. Lidar
- 14.2.7. Global Positioning System
- 14.3. Quality Control
- 14.3.1. Variational Quality Control
- 14.3.2. Variational Bias Correction
- 14.4. Summary
- Chapter 15: Non-Variational Sequential Data Assimilation Methods
- 15.1. Direct Insertion
- 15.2. Nudging
- 15.3. Successive Correction
- 15.3.1. Bergthórsson and Döös [32]
- 15.3.2. Cressman [79]
- 15.3.3. Barnes [24]
- 15.4. Linear and Nonlinear Least Squares
- 15.4.1. Univariate Linear Least Squares
- 15.4.2. Multidimensional Least Squares
- 15.4.3. Nonlinear Least Squares Theory
- 15.5. Regression
- 15.5.1. Linear Regression Involving Two or More Variables
- 15.5.2. Nonlinear Regression
- 15.6. Optimal (Optimum) Interpolation/Statistical Interpolation/Analysis Correction
- 15.6.1. Derivation of the Optimum Interpolation From Alaka and Elvander [3]
- 15.6.2. Matrix Version of Optimum Interpolation
- 15.6.3. Implementation of OI
- 15.6.4. Analysis Correction (AC)
- 15.7. Summary
- Chapter 16: Variational Data Assimilation
- 16.1. Sasaki and the Strong and Weak Constraints
- 16.2. Three-Dimensional Data Assimilation
- 16.2.1. Gaussian Framework
- 16.3. Four-Dimensional Data Assimilation
- 16.4. Incremental VAR
- 16.4.1. Incremental Spatial VAR, 1D, 2D, and 3D VAR
- 16.4.2. Incremental Temporal 4D VAR
- 16.4.3. Inner and Outer Loops
- 16.4.4. Nonlinearities and Outer Loops
- 16.4.5. First Guess at Appropriate Time
- 16.5. Weak Constraint—Model Error 4D VAR
- 16.5.1. Model-Bias Control Variable
- 16.5.2. Modeling the Model Error Covariance Matrix
- 16.5.3. Model Error Forcing Control Variable
- 16.5.4. Model State Control Variable
- 16.5.5. Time Lag Model Error Modeling
- 16.6. Observational Errors
- 16.6.1. Correlated Measurement Errors
- 16.7. Forecast Sensitivity Observation Impact (FSOI)
- 16.8. Saddle Point 4D VAR
- 16.9. Rapid Update Cycling (RUC)
- 16.10. Regularization
- 16.10.1. Optimal Transport
- 16.10.2. Lp-Norm Regularization
- 16.11. 4D VAR as an Optimal Control Problem
- 16.12. Summary
- Chapter 17: Subcomponents of Variational Data Assimilation
- 17.1. Balance
- 17.1.1. Linear and Nonlinear Balances
- 17.1.2. Linear and Nonlinear Normal Mode Initialization
- 17.2. Control Variable Transforms
- 17.2.1. Kinematic Approach
- 17.2.2. Spectral-Based CVT
- 17.2.3. Wavelet
- 17.2.4. Nonlinear Balance on the Sphere
- 17.2.5. Ellipticity Conditions for Continuous PDEs
- 17.2.6. Higher-Order Balance Conditions
- 17.2.7. Geostrophic Coordinates
- 17.2.8. Linearization
- 17.3. Background Error Covariance Modeling
- 17.3.1. Error Modeling Functions
- 17.3.2. Determining Variances and Decorrelation Lengths
- 17.4. Preconditioning
- 17.4.1. Time-Parallel Preconditioning
- 17.5. Minimization Algorithms
- 17.5.1. Newton-Raphson
- 17.5.2. Quasi-Newton Methods
- 17.5.3. Steepest Descent
- 17.5.4. Conjugate Gradient
- 17.5.5. Lanczos Methods
- 17.6. Performance Metrics
- 17.6.1. Scorecard
- 17.7. Summary
- Chapter 18: Observation Space Variational Data Assimilation Methods
- 18.1. Derivation of Observation Space-Based 3D VAR
- 18.2. 4D VAR in Observation Space
- 18.2.1. Solution to the Coupled Linear Euler-Lagrange System
- 18.2.2. Representer Solution to a Coupled Linearized Euler-Lagrange System
- 18.3. Duality of the VAR and PSAS Systems
- 18.4. Summary
- Chapter 19: Kalman Filter and Smoother
- 19.1. Derivation of the Kalman Filter
- 19.2. Kalman Filter Derivation From a Statistical Approach
- 19.3. Extended Kalman Filter
- 19.4. Square Root Kalman Filter
- 19.5. Smoother
- 19.5.1. Forward Step: Kalman Filter
- 19.5.2. Backward Step: Reverse-Time Information Filter
- 19.5.3. Smoothing
- 19.6. Properties and Equivalencies of the Kalman Filter and Smoother
- 19.7. Summary
- Chapter 20: Ensemble-Based Data Assimilation
- 20.1. Stochastic Dynamical Modeling
- 20.2. Ensemble Kalman Filter
- 20.2.1. Perturbed Observations-Based EnKF
- 20.3. Ensemble Square Root Filters
- 20.3.1. Localization and Inflation
- 20.4. Ensemble and Local Ensemble Transform Kalman Filter
- 20.4.1. ETKF
- 20.4.2. LETKF
- 20.5. Maximum Likelihood Ensemble Filter
- 20.5.1. Forecast Step
- 20.5.2. Analysis Step
- 20.5.3. Lyapunov and Bred Vectors
- 20.5.4. Hybrid Lyapunov-Bred Vectors
- 20.5.5. MLEF, Information Theory, and Entropy Reduction
- 20.6. Hybrid Ensemble and Variational Data Assimilation Methods
- 20.6.1. α Control Variables
- 20.6.2. Hybrid Ensemble Transform PSAS
- 20.6.3. Ensembles of 4D VARs (EDA)
- 20.7. NDEnVAR
- 20.8. Scale Dependent Background Error Covariance Localization
- 20.9. Ensemble Kalman Smoother
- 20.10. Ensemble Sensitivity
- 20.11. Ensemble Forecast Sensitivity to Observations (EFSO)
- 20.12. Local Ensemble Tangent Linear Model (LETLM)
- 20.13. Summary
- Chapter 21: Non-Gaussian Based Data Assimilation
- 21.1. Error Definitions
- 21.2. Full Field Lognormal 3D VAR
- 21.2.1. Lognormal Observational Error
- 21.2.2. Lognormal Background Errors
- 21.3. Logarithmic Transforms
- 21.4. Mixed Gaussian-Lognormal 3D VAR
- 21.4.1. Experiments With the Lorenz 1963 Model
- 21.5. Lognormal Calculus of Variation-Based 4D VAR
- 21.5.1. Near Weighted Least Squares Functional Formulation for Non-Gaussian 4D VAR
- 21.5.2. Functional Form of a Modal Approach for Non-Gaussian Distribution-Based 4D VAR
- 21.6. Bayesian-Based 4D VAR
- 21.6.1. Bayesian Networks
- 21.6.2. Equivalence of the Weighted Least Squares and Probability Models for Multivariate Gaussian Errors
- 21.6.3. Equivalence of the Lognormal Functional Approach
- 21.6.4. Mixed Distribution Equivalency to Weighted Least Squares Approach
- 21.7. Bayesian Networks Formulation of Weak Constraint/Model Error 4D VAR
- 21.8. Results of the Lorenz 1963 Model for 4D VAR
- 21.9. Incremental Lognormal and Mixed 3D and 4D VAR
- 21.9.1. Multiplicative Incremental 3D VAR
- 21.9.2. Multiplicative Incremental 4D VAR
- 21.9.3. Mixed Additive and Multiplicative Incremental VAR
- 21.9.4. Analysis Mean of a Lognormal Data Assimilation System Not Equal to Zero
- 21.9.5. Comparison of a Mixed Incremental System With Gaussian-Only Scheme
- 21.10. Reverse Lognormal Variational Data Assimilation
- 21.10.1. 3D and 4D Mixed Gaussian-Reverse Lognormal Cost Functions
- 21.10.2. 3D and 4D Mixed Lognormal-Reverse Lognormal Cost Functions
- 21.10.3. 3D and 4D Mixed Gaussian-Lognormal-Reverse-Lognormal Cost Functions
- 21.11. Lognormal and Mixed Gaussian-Lognormal Kalman Filters
- 21.11.1. Attempted Derivation of a Lognormal-Based Kalman Filter
- 21.11.2. Lognormal Kalman Filter - Median Based Approach
- 21.11.3. Mixed Gaussian-Lognormal Kalman Filter (MXKF)
- 21.12. Gaussian Anamorphosis
- 21.13. Gamma-Inverse-Gamma-Gaussian (GIGG) Filter
- 21.14. Regions of Optimality for Lognormal Descriptive Statistics
- 21.15. Summary
- Chapter 22: Markov Chain Monte Carlo, Particle Filters, Particle Smoothers, and Sigma Point Filters
- 22.1. Markov Chain Monte Carlo Methods
- 22.1.1. MC Methods for Inverse Problems
- 22.1.2. Sample Methods
- 22.1.3. Application of MCMC in the Geosciences
- 22.2. Particle Filters
- 22.2.1. Resampling
- 22.2.2. Proposal Densities
- 22.2.3. Optimal Proposal Density
- 22.2.4. Implicit Particle Filter
- 22.2.5. Transportation Particle Filters
- 22.2.6. Tempering of the Likelihood
- 22.2.7. Particle Flow Filters
- 22.3. Local Particle Filter
- 22.4. Particle Smoother
- 22.5. Sigma Point Kalman Filters (SPKF)
- 22.5.1. Sigma-Point Unscented KF (SP-UKF)
- 22.5.2. Sigma Point Central Difference KF (SP-CDKF)
- 22.6. Summary
- Chapter 23: Lagrangian Data Assimilation
- 23.1. Extended Kalman Filter Approach
- 23.2. Variational Lagrangian Data Assimilation
- 23.2.1. Converting Lagrangian Data to Eulerian to Assimilate
- 23.2.2. Direct Assimilation of Lagrangian Observations
- 23.2.3. Direct Lagrangian Trajectory Variational Assimilation
- 23.3. Lagrangian Ensemble Kalman Filter
- 23.4. Localized Ensemble Transform Kalman Filter Lagrangian Data Assimilation (LETKF-LaDA)
- 23.5. Hybrid Particle Filters and Ensemble Kalman Filters Lagrangian Data Assimilation
- 23.6. Summary
- Chapter 24: Artificial Intelligence and Data Assimilation
- 24.1. Helpful Definitions
- 24.2. Introduction to Machine Learning Algorithms
- 24.2.1. Linear Regression
- 24.2.2. Logistic Regression
- 24.2.3. Support Vector Machine
- 24.2.4. Classification and Regression Trees (CART)
- 24.2.5. K-Nearest Neighbors
- 24.2.6. Random Forests
- 24.3. Introduction to Deep Learning
- 24.3.1. Neural Networks (NN)
- 24.3.2. Restricted Boltzmann Machine (RBM)
- 24.3.3. Training Algorithms
- 24.4. Applications of Artificial Intelligence With Data Assimilation
- 24.4.1. Detection of Non-Gaussian Signals
- 24.4.2. Deep Data Assimilation
- 24.4.3. Latent Space Data Assimilation by Using Deep Learning
- 24.4.4. Deep Learning for Fast Radiative Transfer
- 24.4.5. Using ML to Correct Model Error
- 24.4.6. k-Nearest Neighbor for Data Driven Data Assimilation (DD-DA)
- 24.4.7. Other Applications
- 24.5. Summary
- Chapter 25: Applications of Data Assimilation in the Geosciences
- 25.1. Atmospheric Science
- 25.1.1. Operational Numerical Weather Prediction Centers
- 25.1.2. Limited Area Synoptic Scale Data Assimilation
- 25.1.3. Mesoscale Data Assimilation
- 25.1.4. Cloud Resolving Data Assimilation
- 25.1.5. Retrievals
- 25.1.6. Atmospheric Chemistry and Aerosols Assimilation
- 25.2. Joint Effort for Data Assimilation Integration (JEDI)
- 25.2.1. OOPS Abstract Interfaces
- 25.2.2. Observations Space Interfaces
- 25.2.3. Error Covariances
- 25.2.4. UFO, IODA, and SABER
- 25.3. Observing-System Experiments (OSE)
- 25.4. Observing System Simulation Experiments (OSSE)
- 25.5. Oceans
- 25.5.1. Global Ocean Data Assimilation
- 25.5.2. Regional Ocean Data Assimilation
- 25.5.3. Sea Ice Data Assimilation
- 25.6. Hydrological Applications
- 25.7. Coupled Data Assimilation
- 25.7.1. Coupled Atmosphere-Ocean Data Assimilation
- 25.7.2. Coupled Land and Atmosphere Data Assimilation
- 25.7.3. Coupled Atmosphere-Land-Ocean-Sea Ice Data Assimilation
- 25.8. Reanalysis
- 25.9. Ionospheric Data Assimilation
- 25.10. Renewable Energy Data Application
- 25.11. Earthquakes
- 25.11.1. Optimal Interpolation
- 25.11.2. Green's Function Data Assimilation
- 25.12. Oil and Natural Gas
- 25.13. Biogeoscience Application of Data Assimilation
- 25.14. Other Applications of Data Assimilation
- 25.15. Summary
- Chapter 26: Solutions to Select Exercises
- Chapter 2
- Chapter 3
- Chapter 5
- Chapter 6
- Chapter 7
- Chapter 8
- Chapter 9
- Bibliography
- Index
- No. of pages (eBook): 1128
Steven J. Fletcher
Steven J. Fletcher is a Research Scientist III at the Cooperative Institute for Research in the Atmosphere (CIRA) at Colorado State University, where he is the lead scientist on the development of non-Gaussian based data assimilation theory for variational, PSAS, and hybrid systems. He has worked extensively with the Naval Research Laboratory in Monterey on the development of its data assimilation system, as well as with the data assimilation system of the National Oceanic and Atmospheric Administration (NOAA)'s Environmental Modeling Center (EMC). Dr. Fletcher is extensively involved with the American Geophysical Union (AGU)'s Fall Meeting planning committee, having served on the committee since 2013 as the representative of the Nonlinear Geophysics section. He has also been the lead organizer and science program committee member for the Joint Center for Satellite Data Assimilation Summer Colloquium on Satellite Data Assimilation since 2016. Dr. Fletcher is the author of Data Assimilation for the Geosciences: From Theory to Application (Elsevier, 2017). In 2017 he became a Fellow of the Royal Meteorological Society.
Affiliations and expertise
Research Scientist III, Cooperative Institute for Research in the Atmosphere (CIRA), Colorado State University, Fort Collins, Colorado, USA