# Dynamic Data Assimilation


### Syllabus

- **Introduction:** Data mining, data assimilation, inverse problems, and prediction; static vs. dynamic and deterministic vs. stochastic problems: formulation and classification.
- **Mathematical tools:** Finite-dimensional vector spaces: basic concepts; overview of properties of and operations on matrices; special classes of matrices, eigendecomposition, and the matrix square root; gradient, Jacobian, Hessian, quadratic forms, and their properties.
- **Static, deterministic models:** The least squares method: formulation and properties. Linear least squares (LLS): the overdetermined case, weighted and unweighted formulations, orthogonal and oblique projections; the underdetermined case and Lagrange multipliers. The nonlinear least squares (NLS) problem: formulation, approximation, and first- and second-order methods for solving NLS. Examples of LLS and NLS, including satellite retrieval.
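As a concrete illustration of the overdetermined LLS problem listed above, here is a minimal sketch (the data and variable names are illustrative, not taken from the course) that fits a straight line via the normal equations and checks the orthogonality of the residual:

```python
import numpy as np

# Overdetermined LLS: find x minimizing ||Ax - b||^2 for A (m x n), m > n.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # design matrix: fit b ~ x0 + x1 * t
b = np.array([1.0, 3.0, 5.0, 7.0])  # observations lying on b = 1 + 2 t

# Normal equations: (A^T A) x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)  # close to [1, 2]

# The residual r = b - A x is orthogonal to the columns of A
# (the orthogonal-projection property mentioned in the syllabus)
r = b - A @ x
print(A.T @ r)  # close to [0, 0]
```

The orthogonality check is exactly the geometric statement that the LLS solution projects `b` orthogonally onto the column space of `A`.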

- **Matrix methods for solving LLS:** Normal equations and symmetric positive definite (SPD) systems; multiplicative matrix decompositions: Cholesky decomposition and the matrix square root; the Gram-Schmidt orthogonalization process and QR decomposition; singular value decomposition (SVD); solution of retrieval problems.
- **Direct minimization methods for solving LLS:** LLS as a quadratic minimization problem; the gradient method and its properties; convergence and speed of convergence of the gradient method; conjugate gradient and quasi-Newton methods; practice problems and programming exercises.
- **Deterministic, dynamic models: the adjoint method.** Dynamic models, the role of observations, and the least squares objective function; estimation of the initial condition (IC) and parameters; adjoint sensitivity. A straight-line problem as a warm-up; linear model, first-order adjoint dynamics, and computation of the gradient of the least squares objective function; nonlinear model and first-order adjoint dynamics; illustrative examples, practice problems, and programming exercises.
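The matrix methods above can be sketched in a few lines: solving one LLS problem both by Cholesky factorization of the SPD normal-equations matrix and by QR decomposition, and confirming the two agree (a generic random test case, not one from the course):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))   # overdetermined system, full column rank
b = rng.standard_normal(20)

# (1) Cholesky on the SPD normal-equations matrix: A^T A = L L^T
L = np.linalg.cholesky(A.T @ A)
y = np.linalg.solve(L, A.T @ b)    # forward substitution: L y = A^T b
x_chol = np.linalg.solve(L.T, y)   # back substitution:    L^T x = y

# (2) QR: A = Q R, then solve the triangular system R x = Q^T b.
# This avoids forming A^T A, whose condition number is the square of A's.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_chol, x_qr))  # the two solutions agree
```

The QR route is the numerically preferred one for ill-conditioned `A`, which is why the syllabus treats both decompositions side by side.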

- **Deterministic, dynamic models: other methods.** The forward sensitivity method for estimation of IC and parameters; forward sensitivity dynamics; an example from carbon dynamics; the relation between adjoint and forward sensitivity; predictability and the Lyapunov index; the method of nudging and an overview of nudging methods.
- **Static, stochastic models: the Bayesian framework.** The Bayesian method in the linear, Gaussian case; linear minimum variance estimation (LMVE) as a prelude to the Kalman filter; model-space vs. observation-space formulations; duality between the Bayesian and LMVE approaches.
- **Dynamic, stochastic models: Kalman filtering.** Derivation of the Kalman filter equations; derivation of the nonlinear filter; computational requirements; ensemble Kalman filtering.
- **Dynamic, stochastic models: other methods.** Unscented Kalman filtering; particle filtering; an overview and assessment of methods.
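The forecast/analysis cycle of the Kalman filter listed above can be sketched for a scalar linear model (an illustrative random-walk example, not taken from the course; noise variances are arbitrary):

```python
import numpy as np

# Scalar linear Kalman filter for the random walk
#   x_k = x_{k-1} + w_k,  w_k ~ N(0, Q)   (model)
#   z_k = x_k + v_k,      v_k ~ N(0, R)   (observation)
rng = np.random.default_rng(1)
Q, R = 1e-3, 0.25          # process and observation noise variances
x_true = 1.0               # true (hidden) state
x_hat, P = 0.0, 1.0        # initial estimate and its error variance

for _ in range(100):
    x_true += rng.normal(0.0, np.sqrt(Q))      # true state evolves
    z = x_true + rng.normal(0.0, np.sqrt(R))   # noisy observation arrives

    # Forecast step: propagate the estimate and inflate its variance
    P = P + Q
    # Analysis step: the Kalman gain blends forecast and observation
    K = P / (P + R)
    x_hat = x_hat + K * (z - x_hat)
    P = (1.0 - K) * P

print(abs(x_hat - x_true))  # small after assimilating 100 observations
```

The same forecast/analysis structure carries over to the vector case, where `P` becomes a covariance matrix and the ensemble and unscented variants approximate its propagation.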

### Course Curriculum

- An Overview (1:16)
- Data Mining, Data Assimilation and Prediction (1:04:56)
- A Classification of Forecast Errors (27:08)
- Finite Dimensional Vector Space (48:31)
- Matrices (1:17:30)
- Matrices Continued (45:11)
- Multi-variate Calculus (50:15)
- Optimization in Finite Dimensional Vector Spaces (59:46)
- Deterministic, Static, Linear Inverse (Well-posed) Problems (1:03:27)
- Deterministic, Static, Linear Inverse (Ill-posed) Problems (31:21)
- A Geometric View – Projections (33:21)
- Deterministic, Static, Nonlinear Inverse Problems (35:22)
- On-line Least Squares (37:20)
- Examples of Static Inverse Problems (50:28)
- Interlude and a Way Forward (14:29)
- Matrix Decomposition Algorithms (1:03:01)
- Matrix Decomposition Algorithms Continued (50:51)
- Minimization Algorithms (1:10:48)
- Minimization Algorithms Continued (1:06:46)
- Inverse Problems in Deterministic (1:10:50)
- Inverse Problems in Deterministic Continued (54:25)
- Forward Sensitivity Method (1:02:07)
- Relation between FSM and 4DVAR (44:28)
- Statistical Estimation (1:26:41)
- Statistical Least Squares (52:29)
- Maximum Likelihood Method (28:59)
- Bayesian Estimation (1:17:34)
- From Gauss to Kalman – Linear Minimum Variance Estimation (1:09:41)
- Initialization: Classical Method (1:13:59)
- Optimal Interpolations (59:07)
- A Bayesian Formulation – 3D-VAR Methods (54:01)
- Linear Stochastic Dynamics – Kalman Filter (1:04:55)
- Linear Stochastic Dynamics – Kalman Filter Continued (27:52)
- Linear Stochastic Dynamics – Kalman Filter Continued (39:48)
- Covariance Square Root Filter (49:49)
- Nonlinear Filtering (2:30:54)
- Ensemble Reduced Rank Filter (1:37:02)
- Basic Nudging Methods (1:05:22)
- Deterministic Predictability (1:20:29)
- Predictability: A Stochastic View and Summary (1:17:19)
