
Deep Learning for Multi-Sensor Earth Observation
- 1st Edition - February 1, 2025
- Imprint: Elsevier
- Editor: Sudipan Saha
- Language: English
- Paperback ISBN: 978-0-443-26484-9
- eBook ISBN: 978-0-443-26485-6
Deep Learning for Multi-Sensor Earth Observation addresses the need for transformative Deep Learning techniques to navigate the complexity of multi-sensor data fusion. With insigh…

Structured for clarity, the book builds progressively on its own concepts, leading readers from introductory explanations and sensor-specific insights to advanced concepts and specialized applications. By bridging the gap between theory and practice, this volume equips researchers, geoscientists, and enthusiasts with the knowledge to reshape Earth observation through the dynamic lens of deep learning.
- Addresses the problem of unwieldy datasets from multi-sensor observations, applying Deep Learning to multi-sensor data integration from disparate sources with different resolution and quality
- Provides a thorough foundational reference to Deep Learning applications for handling Earth Observation multi-sensor data across a variety of geosciences
- Includes case studies and real-world data/examples allowing readers to better grasp how to put Deep Learning techniques and methods into practice
- Cover image
- Title page
- Table of Contents
- Copyright
- Contributors
- Preface
- Acknowledgments
- Part I: Introduction to multi-sensor data and artificial intelligence
- Chapter 1: Deep learning for multi-sensor Earth observation: introductory notes
- 1.1. Introduction
- 1.2. Different sensors and modalities
- 1.3. Benefits of fusion
- 1.4. Traditional multi-sensor fusion
- 1.5. Emergence of deep learning
- 1.6. Motivation and contents of the book
- Chapter 2: A basic introduction to deep learning
- 2.1. Introduction
- 2.2. Different learning paradigms
- 2.3. CNN
- 2.4. Image classification
- 2.4.1. LeNet-5
- 2.4.2. AlexNet
- 2.4.3. VGGNet
- 2.4.4. InceptionNet
- 2.4.5. ResNet
- 2.4.6. DenseNet
- 2.4.7. MobileNet
- 2.5. Semantic segmentation
- 2.6. Target detection
- 2.7. From attention to transformers
- 2.8. Autoencoder and generative models
- 2.9. Practices and tricks
- 2.9.1. Appropriate learning task
- 2.9.2. Data quality
- 2.9.3. Choice of appropriate model
- 2.9.4. Hyperparameter tuning
- 2.9.5. Regularization
- 2.9.6. Loss functions
- 2.10. Conclusion
- Part II: Artificial intelligence for sensor-specific data analysis and fusion
- Chapter 3: Deep learning processing of remotely sensed multi-spectral images
- 3.1. Introduction
- 3.1.1. Remotely sensed multi-spectral images
- 3.1.2. Deep learning methods taxonomy
- 3.1.3. Chapter overview
- 3.2. Image preprocessing techniques
- 3.2.1. Super-resolution
- 3.2.2. Image fusion
- 3.2.3. Segmentation
- 3.2.4. Image registration and enhancement
- 3.3. Image analysis
- 3.3.1. Change detection
- 3.3.2. Object detection and recognition
- 3.3.3. Scene classification
- 3.4. Conclusions and perspectives
- Chapter 4: Deep learning and hyperspectral images
- 4.1. Introduction
- 4.2. Image classification
- 4.2.1. Limited label learning for HSI classification
- 4.3. Dimensionality reduction
- 4.3.1. Band selection
- 4.3.2. Feature extraction
- 4.4. Unmixing
- 4.5. Image enhancement
- 4.5.1. Denoising
- 4.5.2. Super-resolution
- 4.5.3. Inpainting
- 4.6. Change detection
- 4.7. Future direction
- 4.8. Summary
- Chapter 5: Synthetic aperture radar image analysis in the era of deep learning
- 5.1. Introduction
- 5.2. SAR despeckling
- 5.3. SAR image classification
- 5.4. SAR semantic segmentation
- 5.5. SAR target detection
- 5.6. SAR domain adaptation
- 5.7. SAR-optical transcoding
- 5.8. PolSAR image analysis
- 5.9. InSAR data analysis
- 5.10. SAR analysis benefits from other sensors
- 5.11. Conclusion
- Chapter 6: Deep learning with lidar for Earth observation
- 6.1. Introduction
- 6.2. Deep learning for 3D object detection with lidar point cloud data
- 6.2.1. 3D object detection models
- 6.2.2. Semi-automatic annotations for individual trees
- 6.2.3. 3D detection of individual trees
- 6.3. Deep learning for 3D semantic segmentation of lidar point cloud data
- 6.3.1. 3D semantic segmentation models
- 6.3.2. Point MLP and point convolution
- 6.3.3. 3D semantic segmentation of urban structures
- 6.4. Benchmark point cloud datasets
- 6.4.1. Real versus synthetic benchmark datasets
- 6.4.2. Annotation strategy for benchmark datasets
- 6.4.3. Benchmark point cloud datasets for Earth observation
- 6.5. Conclusion
- Chapter 7: Several sensors and modalities
- 7.1. Introduction
- 7.2. Multi-sensor differences
- 7.3. Geographic differences
- 7.4. Domain adaptation and generalization
- 7.5. Fusion
- 7.6. Multi-modal analysis
- 7.7. Architectures
- 7.8. Conclusion
- Part III: Advanced concepts and architectures
- Chapter 8: Self-supervised learning for multi-modal Earth observation data
- 8.1. Motivation
- 8.2. Self-supervised learning principle
- 8.2.1. Pre-training by predicting geometric patch relations
- 8.2.2. Contrastive learning
- 8.2.3. Masked autoencoding
- 8.3. Adaptation to remote sensing
- 8.3.1. Contrastive learning for Earth observation data
- 8.3.2. Masked autoencoders for Earth observation data
- 8.3.3. Foundation models and evaluation datasets
- 8.3.4. Conclusion
- Chapter 9: Vision transformers and multi-sensor Earth observation
- 9.1. Introduction
- 9.2. Basics of transformers
- 9.3. Vision transformer
- 9.4. Applications in Earth observation
- 9.5. Applications in multi-sensor Earth observation
- 9.6. Conclusion
- Chapter 10: Graph neural networks for multi-sensor Earth observation
- 10.1. Introduction
- 10.2. Graphs in remote sensing image analysis
- 10.3. Basics of GNN
- 10.4. GNN in Earth observation
- 10.5. GNN in multi-sensor Earth observation
- 10.5.1. Multi-sensor domain adaptation
- 10.5.2. Multi-sensor classification with limited samples
- 10.5.3. Pansharpening
- 10.5.4. Multi-sensor big data analysis
- 10.5.5. Challenges
- 10.6. Conclusion
- Chapter 11: Uncertainty quantification in deep neural networks for multi-sensor Earth observation
- 11.1. Introduction
- 11.2. Foundations of uncertainty quantification
- 11.2.1. Problem formulation
- 11.2.2. Sources of uncertainty
- 11.2.3. Types of uncertainty
- 11.3. Uncertainty quantification techniques
- 11.3.1. Single deterministic methods
- 11.3.2. Bayesian methods
- 11.3.3. Ensemble methods
- 11.3.4. Test-time augmentation methods
- 11.4. Uncertainty measures
- 11.5. Applications in multi-sensor Earth observation
- 11.5.1. Distributional uncertainty
- 11.5.2. Uncertainty-aware learning with label noise
- 11.5.3. Semantic segmentation
- 11.5.4. Optical-SAR fusion
- 11.5.5. Multi-sensor input importance estimation
- 11.6. Conclusion
- Part IV: Multi-sensor deep learning applications
- Chapter 12: Multi-sensor deep learning for change detection
- 12.1. Introduction
- 12.2. Problem formulation and data scenarios
- 12.3. Traditional CD methods
- 12.3.1. Algebra-based methods
- 12.3.2. Transformation-based methods
- 12.3.3. Classification-based methods
- 12.3.4. Object-based and texture-based methods
- 12.4. Deep learning in CD
- 12.4.1. Unsupervised methods
- 12.4.2. Supervised methods
- 12.4.3. Semi-supervised methods
- 12.5. Multi-sensor CD
- 12.5.1. Without deep learning
- 12.5.2. With deep learning
- 12.6. Datasets
- 12.7. A case study on multi-sensor confident pixel selection
- 12.7.1. Objective
- 12.7.2. Scope
- 12.7.3. Audience
- 12.7.4. Rationale
- 12.7.5. Expected results
- 12.7.6. Safety considerations
- 12.7.7. Method
- 12.7.8. Results
- 12.7.9. Learning outcomes
- 12.8. A case study on unsupervised multi-modal deep learning burned-area change detection
- 12.8.1. Objective
- 12.8.2. Datasets
- 12.8.3. Experimental settings
- 12.8.4. Experimental results
- 12.9. Conclusion
- Chapter 13: Multi-sensor deep learning for glacier mapping
- 13.1. Introduction
- Motivation
- Structure
- 13.2. Glacier mapping with deep learning
- Glacier mapping
- Deep learning
- 13.3. Data modalities
- Optical (multi-spectral) imagery
- Synthetic aperture radar (SAR)
- Digital elevation model
- 13.4. Literature overview
- 13.4.1. Glacier extent mapping
- 13.4.2. Calving front detection
- 13.5. Discussion
- 13.5.1. Recommendations
- 13.5.2. Deep learning for modelling glacier mass balance and its evolution
- 13.5.3. Outlook
- 13.6. Summary
- 13.7. Resources
- Chapter 14: Deep learning in multi-sensor agriculture and crop management
- 14.1. Introduction
- 14.2. Data collection and management
- 14.2.1. Data types essential for multi-sensor agriculture
- 14.3. Traditional methods for precision agriculture
- 14.4. Deep learning in precision agriculture: concepts and applications
- 14.4.1. Deep learning with single-sensor data
- 14.4.2. Deep learning with multi-sensor data
- 14.5. Conclusion
- Chapter 15: Miscellaneous applications of deep learning-based multi-sensor Earth observation
- 15.1. Introduction
- 15.2. Urban monitoring
- 15.3. Mineral exploration
- 15.4. Marine applications
- 15.5. Forestry applications
- 15.6. Other environmental applications
- 15.7. Case study on tailings dam disaster detection
- 15.7.1. Objective
- 15.7.2. Scope
- 15.7.3. Audience
- 15.7.4. Rationale
- 15.7.5. Expected results
- 15.7.6. Method
- 15.7.7. Results
- 15.7.8. Learning outcomes
- 15.8. Case study on multi-sensor built area segmentation
- 15.8.1. Objective
- 15.8.2. Scope
- 15.8.3. Audience
- 15.8.4. Rationale
- 15.8.5. Expected results
- 15.8.6. Method
- 15.8.7. Results
- 15.8.8. Learning outcomes
- 15.9. Case study on cross-band wildfire impact detection
- 15.9.1. Objective
- 15.9.2. Scope
- 15.9.3. Audience
- 15.9.4. Rationale
- 15.9.5. Expected results
- 15.9.6. Method
- 15.9.7. Results
- 15.9.8. Learning outcomes
- 15.10. Conclusion
- Chapter 16: Multi-sensor Earth observation: outlook
- 16.1. Introduction
- 16.2. Vision foundation models
- 16.3. Segment anything
- 16.4. Case study on built area segmentation from SAR image using segment anything
- 16.4.1. Objective
- 16.4.2. Scope
- 16.4.3. Audience
- 16.4.4. Rationale
- 16.4.5. Expected results
- 16.4.6. Workflow
- 16.4.7. Results
- 16.4.8. Learning outcomes
- 16.5. CLIP
- 16.6. Generative vision via diffusion models
- 16.7. Explainable AI
- 16.8. Computationally efficient Earth observation
- 16.9. Final words
- Index
- No. of pages: 350
Sudipan Saha
Sudipan Saha is currently an Assistant Professor at the Yardi School of Artificial Intelligence, Indian Institute of Technology (IIT) Delhi, New Delhi, India. Previously, he worked as a postdoctoral researcher at the Artificial Intelligence for Earth Observation (AI4EO) Lab, Technical University of Munich, Germany (2020-2022). He received a Ph.D. degree in Information and Communication Technologies from the University of Trento and Fondazione Bruno Kessler (FBK), Trento, Italy, in 2020, working with Dr. Francesca Bovolo and Prof. Lorenzo Bruzzone, and is the recipient of the FBK Best Student Award 2020. He obtained the M.Tech. degree in Electrical Engineering from IIT Bombay, Mumbai, India, in 2014, where he was awarded the Postgraduate Color. He worked as an Engineer with TSMC Limited, Hsinchu, Taiwan, from 2015 to 2016. His research interests include multi-temporal and multi-sensor satellite image analysis, uncertainty quantification, deep learning, and climate change.