Environmental Data Analysis with MatLab
- 2nd Edition - March 7, 2016
- Authors: William Menke, Joshua Menke
- Language: English
- Hardback ISBN: 978-0-12-804488-9
- eBook ISBN: 978-0-12-804550-3
Environmental Data Analysis with MatLab is a new edition that builds substantially on the original, with an expanded tutorial approach, new crib sheets, and problem sets providing a clear learning path for students and researchers working to analyze real data sets in the environmental sciences. Since publication of the bestselling Environmental Data Analysis with MATLAB®, many advances have been made in environmental data analysis. One only has to consider the global warming debate to realize how critically important it is to be able to derive clear conclusions from often noisy data drawn from a broad range of sources. The work teaches the basics of the underlying theory of data analysis and then reinforces that knowledge with carefully chosen, realistic scenarios.
MATLAB®, a commercial data processing environment, is used in these scenarios. Significant content is devoted to teaching how it can be effectively used in an environmental data analysis setting. This new edition, though written in a self-contained way, is supplemented with data and MATLAB® scripts that can be used as a data analysis tutorial.
New features in this edition:
- Boxed crib sheets help identify major results and important formulas, and give brief advice on how and when they should be used
- Numerical derivatives and integrals are derived and illustrated
- Log-log plots, with further examples of their use
- New datasets on precipitation and stream flow
- The chi-squared test is applied to the results of the generalized least squares method
- New coverage of cluster analysis and of approximation techniques that are widely applied in data analysis, including Taylor series and low-order polynomial approximations; nonlinear least squares with Newton's method; and precalculation and updating techniques applicable to real-time data acquisition
- Provides a clear learning path for researchers and students: data analysis techniques build upon one another, and the order of presentation is chosen to substantially aid the reader in learning the material
- Includes crib sheets that summarize the most important data analysis techniques, results, procedures, and formulas, organizing the material so that its sequence is more apparent
- Uses real-world environmental examples and case studies formulated in the readily available MATLAB® software environment
Upper-level undergraduate students, graduate students and researchers in environmental science and environmental engineering, broadly construed
1: Data analysis with MatLab
- Abstract
- 1.1 Why MatLab?
- 1.2 Getting started with MatLab
- 1.3 Getting organized
- 1.4 Navigating folders
- 1.5 Simple arithmetic and algebra
- 1.6 Vectors and matrices
- 1.7 Multiplication of vectors and matrices
- 1.8 Element access
- 1.9 Representing functions
- 1.10 To loop or not to loop
- 1.11 The matrix inverse
- 1.12 Loading data from a file
- 1.13 Plotting data
- 1.14 Saving data to a file
- 1.15 Some advice on writing scripts
- Problems
2: A first look at data
- Abstract
- 2.1 Look at your data!
- 2.2 More on MatLab graphics
- 2.3 Rate information
- 2.4 Scatter plots and their limitations
- Problems
3: Probability and what it has to do with data analysis
- Abstract
- 3.1 Random variables
- 3.2 Mean, median, and mode
- 3.3 Variance
- 3.4 Two important probability density functions
- 3.5 Functions of a random variable
- 3.6 Joint probabilities
- 3.7 Bayesian inference
- 3.8 Joint probability density functions
- 3.9 Covariance
- 3.10 Multivariate distributions
- 3.11 The multivariate Normal distributions
- 3.12 Linear functions of multivariate data
- Problems
4: The power of linear models
- Abstract
- 4.1 Quantitative models, data, and model parameters
- 4.2 The simplest of quantitative models
- 4.3 Curve fitting
- 4.4 Mixtures
- 4.5 Weighted averages
- 4.6 Examining error
- 4.7 Least squares
- 4.8 Examples
- 4.9 Covariance and the behavior of error
- Problems
5: Quantifying preconceptions
- Abstract
- 5.1 When least squares fails
- 5.2 Prior information
- 5.3 Bayesian inference
- 5.4 The product of Normal probability density functions
- 5.5 Generalized least squares
- 5.6 The role of the covariance of the data
- 5.7 Smoothness as prior information
- 5.8 Sparse matrices
- 5.9 Reorganizing grids of model parameters
- Problems
6: Detecting periodicities
- Abstract
- 6.1 Describing sinusoidal oscillations
- 6.2 Models composed only of sinusoidal functions
- 6.3 Going complex
- 6.4 Lessons learned from the integral transform
- 6.5 Normal curve
- 6.6 Spikes
- 6.7 Area under a function
- 6.8 Time-delayed function
- 6.9 Derivative of a function
- 6.10 Integral of a function
- 6.11 Convolution
- 6.12 Nontransient signals
- Problems
7: The past influences the present
- Abstract
- 7.1 Behavior sensitive to past conditions
- 7.2 Filtering as convolution
- 7.3 Solving problems with filters
- 7.4 An example of an empirically derived filter
- 7.5 Predicting the future
- 7.6 A parallel between filters and polynomials
- 7.7 Filter cascades and inverse filters
- 7.8 Making use of what you know
- Problems
8: Patterns suggested by data
- Abstract
- 8.1 Samples as mixtures
- 8.2 Determining the minimum number of factors
- 8.3 Application to the Atlantic Rocks dataset
- 8.4 Spiky factors
- 8.5 Weighting of elements
- 8.6 Q-mode factor analysis and spatial clustering
- 8.7 Time-variable functions
- Problems
9: Detecting correlations among data
- Abstract
- 9.1 Correlation is covariance
- 9.2 Computing autocorrelation by hand
- 9.3 Relationship to convolution and power spectral density
- 9.4 Cross-correlation
- 9.5 Using the cross-correlation to align time series
- 9.6 Least squares estimation of filters
- 9.7 The effect of smoothing on time series
- 9.8 Band-pass filters
- 9.9 Frequency-dependent coherence
- 9.10 Windowing before computing Fourier transforms
- 9.11 Optimal window functions
- Problems
10: Filling in missing data
- Abstract
- 10.1 Interpolation requires prior information
- 10.2 Linear interpolation
- 10.3 Cubic interpolation
- 10.4 Kriging
- 10.5 Interpolation in two dimensions
- 10.6 Fourier transforms in two dimensions
- Problems
11: “Approximate” is not a pejorative word
- Abstract
- 11.1 The value of approximation
- 11.2 Polynomial approximations and Taylor series
- 11.3 Small number approximations
- 11.4 Small number approximation applied to distance on a sphere
- 11.5 Small number approximation applied to variance
- 11.6 Taylor series in multiple dimensions
- 11.7 Small number approximation applied to covariance
- 11.8 Solving nonlinear problems with iterative least squares
- 11.9 Fitting a sinusoid of unknown frequency
- 11.10 The gradient method
- 11.11 Precomputation of a function and table lookups
- 11.12 Artificial neural networks
- 11.13 Information flow in a neural net
- 11.14 Training a neural net
- 11.15 Neural net for a nonlinear response
- Problems
12: Are my results significant?
- Abstract
- 12.1 The difference is due to random variation!
- 12.2 The distribution of the total error
- 12.3 Four important probability density functions
- 12.4 A hypothesis testing scenario
- 12.5 Chi-squared test for generalized least squares
- 12.6 Testing improvement in fit
- 12.7 Testing the significance of a spectral peak
- 12.8 Bootstrap confidence intervals
- Problems
13: Notes
- Abstract
- Note 1.1 On the persistence of MatLab variables
- Note 2.1 On time
- Note 2.2 On reading complicated text files
- Note 3.1 On the rule for error propagation
- Note 3.2 On the eda_draw() function
- Note 4.1 On complex least squares
- Note 5.1 On the derivation of generalized least squares
- Note 5.2 On MatLab functions
- Note 5.3 On reorganizing matrices
- Note 6.1 On the MatLab atan2() function
- Note 6.2 On the orthonormality of the discrete Fourier data kernel
- Note 6.3 On the expansion of a function in an orthonormal basis
- Note 8.1 On singular value decomposition
- Note 9.1 On coherence
- Note 9.2 On Lagrange multipliers
- Note 11.1 On the chain rule for partial derivatives
- No. of pages: 342
- Language: English
- Edition: 2
- Published: March 7, 2016
- Imprint: Academic Press
- Hardback ISBN: 9780128044889
- eBook ISBN: 9780128045503