Principles and Labs for Deep Learning
- 1st Edition - June 25, 2021
- Authors: Shih-Chia Huang, Trung-Hieu Le
- Language: English
- Paperback ISBN: 978-0-323-90198-7
- eBook ISBN: 978-0-323-90199-4

Principles and Labs for Deep Learning provides the knowledge and techniques needed to help readers design and develop deep learning models. Deep Learning techniques are introduced through theory, comprehensively illustrated, explained through TensorFlow source code examples, and analyzed through the visualization of results. The structured methods and labs provided by Dr. Huang and Dr. Le enable readers to become proficient in TensorFlow, building deep Convolutional Neural Networks (CNNs) through custom APIs, high-level Keras APIs, Keras Applications, and TensorFlow Hub. Each chapter has one corresponding lab with step-by-step instructions to help the reader practice and accomplish a specific learning outcome.
Deep Learning has been successfully applied in diverse fields such as computer vision, audio processing, robotics, natural language processing, bioinformatics, and chemistry. Because Deep Learning covers such a broad body of knowledge, considerable time is required to understand it and to deploy useful, working applications; hence the importance of this resource. Each chapter combines theory lessons with experiments that introduce the techniques and provide source code examples for practicing them. All labs for the book are available on GitHub for easy download. The book assumes the reader has a basic knowledge of Python programming and Machine Learning.
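The labs are written in TensorFlow 2 throughout. As a taste of the high-level Keras route mentioned above, here is a minimal sketch (illustrative only, not the book's lab code) of a small CNN classifier:

```python
import tensorflow as tf

# Small CNN built with the high-level Keras Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),  # 10 example classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```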
- Introduces readers to the usefulness of neural networks and Deep Learning methods
- Provides readers with an in-depth understanding of the architecture and operation of Deep Convolutional Neural Networks
- Demonstrates the visualization needed for designing neural networks
- Provides readers with an in-depth understanding of regression problems, binary classification problems, multi-category classification problems, Variational Auto-Encoder, Generative Adversarial Network, and Object Detection
Chapter 0: Environment Installation
0.1. Python Installation
0.2. TensorFlow Installation
0.3. Python extension installation
0.4. Jupyter Notebook
0.5. GitHub code
0.6. PyCharm IDE
Chapter 1: Introduction to TensorFlow 2.0
1.1. What is Deep Learning
1.2. Create a new project
1.3. Introduction to TensorFlow
1.4. Eager Execution
1.4.1. Introduction to Eager Execution
1.4.2. Basic operations
1.5. Keras
1.5.1. Introduction to Keras
1.5.2. Sequential Model
1.5.3. Functional API
1.6. tf.data
1.6.1. Introduction to tf.data
1.6.2. Basic Operations
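As an illustration of two Chapter 1 building blocks, a minimal sketch (not the book's lab code) of eager execution and a tf.data input pipeline:

```python
import tensorflow as tf

# Eager execution: operations evaluate immediately, no session required.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(x, x))

# tf.data: a simple pipeline that transforms, shuffles, and batches data.
ds = tf.data.Dataset.range(10).map(lambda n: n * 2).shuffle(10).batch(4)
for batch in ds:
    print(batch.numpy())
```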
Chapter 2: Regression Problem
2.1. Deep Neural Network
2.1.1. A brief history of neural networks
2.1.2. Principle of Neural Network
2.1.3. Fully connected
2.1.4. MSE and MAE Loss functions
2.1.5. Neural network weight update
2.1.6. Neural network training steps
2.2. Introduction to Kaggle
2.3. Experiment 1: House price prediction model
2.3.1. Introduction to the dataset
2.3.2. Create Project
2.3.3. Source code
2.4. Introduction to TensorBoard
2.5. Experiment 2: Overfitting
2.5.1. Introduction to the dataset
2.5.2. Create Project
2.5.3. Source code
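A minimal sketch of the Chapter 2 ingredients, using synthetic data rather than the book's Kaggle house-price dataset: a fully connected network trained with the MSE loss:

```python
import numpy as np
import tensorflow as tf

# Synthetic regression data: 8 toy features, noisy linear target.
x = np.random.rand(512, 8).astype('float32')
y = (x.sum(axis=1, keepdims=True) + 0.1 * np.random.randn(512, 1)).astype('float32')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1),  # linear output unit for regression
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])  # MSE loss, MAE metric
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
```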
Chapter 3: Binary Classification Problem
3.1. Machine Learning Algorithms
3.2. Binary Classification Problem
3.2.1. Logistic Regression
3.2.2. Sigmoid
3.2.3. Binary Cross-Entropy
3.2.4. One-hot Encoding
3.3. Experiment: Pokémon Battle Prediction
3.3.1. Introduction to the dataset
3.3.2. Create Project
3.3.3. Source code
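A minimal sketch of the Chapter 3 recipe on toy data (the Pokémon dataset is not reproduced here): a sigmoid output trained with binary cross-entropy:

```python
import numpy as np
import tensorflow as tf

# Toy binary labels: class 1 when the feature sum exceeds a threshold.
x = np.random.rand(256, 4).astype('float32')
y = (x.sum(axis=1, keepdims=True) > 2.0).astype('float32')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # probability of class 1
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=5, verbose=0)
```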
Chapter 4: Multi-category Classification Problem
4.1. Convolutional Neural Network
4.1.1. Introduction to Convolutional Neural Network
4.1.2. The architecture of Convolutional Neural Network
4.1.3. Principle of Convolutional Neural Network
4.2. Multi-category Classification Problem
4.2.1. Softmax
4.2.2. Categorical Cross-Entropy
4.2.3. Data augmentation
4.3. Experiment: CIFAR-10 image recognition
4.3.1. Introduction to the dataset
4.3.2. TensorFlow Datasets
4.3.3. Create Project
4.3.4. Source code
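A minimal sketch of softmax and categorical cross-entropy from Sections 4.2.1 and 4.2.2, computed directly on a toy logit vector:

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])        # raw scores for 3 classes
probs = tf.nn.softmax(logits)                  # probabilities summing to 1
label = tf.constant([[0.0, 1.0, 0.0]])         # one-hot ground truth
loss = tf.keras.losses.categorical_crossentropy(label, probs)
print(probs.numpy(), loss.numpy())
```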
Chapter 5: Training Neural Network
5.1. Backpropagation
5.2. Weight Initialization
5.2.1. Normal Distribution
5.2.2. Xavier/Glorot initialization
5.2.3. He initialization
5.3. Batch Normalization
5.3.1. Introduction to Batch Normalization
5.3.2. Neural Network with Batch Normalization
5.4. Experiment 1: Using CIFAR-10 dataset to verify three weight initialization methods
5.5. Experiment 2: Using CIFAR-10 dataset to verify Batch Normalization method
5.6. Performance Comparison of Different Neural Networks
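A minimal sketch (not the book's experiment code) combining the Chapter 5 ingredients: Xavier/Glorot and He initializers plus Batch Normalization layers:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # Xavier/Glorot initialization (Section 5.2.2).
    tf.keras.layers.Dense(64, kernel_initializer='glorot_uniform',
                          input_shape=(32,)),
    tf.keras.layers.BatchNormalization(),  # normalize, then learned scale/shift
    tf.keras.layers.Activation('relu'),
    # He initialization (Section 5.2.3), suited to ReLU layers.
    tf.keras.layers.Dense(64, kernel_initializer='he_normal'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Activation('relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
```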
Chapter 6: Advanced TensorFlow 2.0
6.1. Advanced TensorFlow
6.1.1. Customized network layer
6.1.2. Customized loss function
6.1.3. Customized metric function
6.1.4. Customized callback function
6.2. Comparison of high-level Keras API and customized API
6.2.1. Network Layer
6.2.2. Loss Function
6.2.3. Metric Function
6.2.4. Callback Function
6.3. Experiment: Implementation of two network models using high-level Keras API and customized API
6.3.1. Create Project
6.3.2. Source Code
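A minimal sketch of the customization style covered in Chapter 6: a subclassed Keras layer and a hand-written loss function (illustrative names, not the book's code):

```python
import tensorflow as tf

class LinearLayer(tf.keras.layers.Layer):
    """Customized network layer computing y = Wx + b."""
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='glorot_uniform', trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer='zeros', trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

def custom_mse(y_true, y_pred):
    # Customized loss function: mean squared error written by hand.
    return tf.reduce_mean(tf.square(y_true - y_pred))

model = tf.keras.Sequential([LinearLayer(1)])
model.compile(optimizer='adam', loss=custom_mse)
```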
Chapter 7: Advanced TensorBoard
7.1. Advanced TensorBoard
7.1.1. tf.summary
7.1.2. tf.summary.scalar
7.1.3. tf.summary.image
7.1.4. tf.summary.text
7.1.5. tf.summary.audio
7.1.6. tf.summary.histogram
7.2. Experiment 1: Using tf.summary.image to visualize training results
7.2.1. Create a project
7.2.2. Source Code
7.3. Experiment 2: Using hyperparameter tuning with TensorBoard HParams to analyze multiple training models
7.3.1. Open TensorBoard
7.3.2. Source Code
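A minimal sketch of scalar logging with tf.summary (the log directory 'logs/demo' is a hypothetical path):

```python
import tensorflow as tf

# Write scalar summaries that TensorBoard can plot.
writer = tf.summary.create_file_writer('logs/demo')  # hypothetical log path
with writer.as_default():
    for step in range(100):
        tf.summary.scalar('train/loss', 1.0 / (step + 1), step=step)
writer.flush()
# Then run: tensorboard --logdir logs/demo
```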
Chapter 8: Convolutional Neural Network Architectures
8.1. Most Popular Convolutional Neural Network Architectures
8.1.1. LeNet
8.1.2. AlexNet
8.1.3. VGG
8.1.4. GoogLeNet
8.1.5. ResNet
8.1.6. Comparison of network architectures
8.2. Experiment: Implementation of InceptionV3 network architecture
8.2.1. Create a project
8.2.2. Keras Applications
8.2.3. TensorFlow Hub
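A minimal sketch of the two routes in Section 8.2 for obtaining InceptionV3; the TensorFlow Hub URL is illustrative, and the Keras Applications call downloads pretrained ImageNet weights:

```python
import tensorflow as tf

# Keras Applications: InceptionV3 with pretrained ImageNet weights.
model = tf.keras.applications.InceptionV3(weights='imagenet')
model.summary()

# TensorFlow Hub alternative (requires the tensorflow_hub package;
# the module URL below is illustrative):
# import tensorflow_hub as hub
# layer = hub.KerasLayer(
#     'https://tfhub.dev/google/imagenet/inception_v3/classification/5')
```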
Chapter 9: Transfer Learning
9.1. Transfer Learning
9.1.1. Introduction to Transfer Learning
9.1.2. Transfer learning methods
9.2. Experiment: Using Inception V3 for Transfer Learning
9.2.1. Create Project
9.2.2. Introduction to the Dataset
9.2.3. Source Code
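A minimal sketch (hypothetical 5-class task, not the book's dataset) of the transfer learning recipe: freeze a pretrained InceptionV3 base and train only a new classification head:

```python
import tensorflow as tf

# Pretrained InceptionV3 body without its classification head.
base = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet',
                                         input_shape=(299, 299, 3), pooling='avg')
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation='softmax'),  # hypothetical 5-class head
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```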
Chapter 10: Variational Auto-Encoder
10.1. Introduction to Auto-Encoder
10.2. Introduction to Variational Auto-Encoder
10.3. Variational Auto-Encoder loss function
10.4. Experiment: Implementation of Variational Auto-Encoder Model
10.4.1. Create Project
10.4.2. Introduction to Dataset
10.4.3. Building Variational Auto-Encoder model
10.4.4. Training Variational Auto-Encoder model
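A minimal sketch of the Variational Auto-Encoder loss from Section 10.3: a reconstruction term plus the closed-form KL divergence between the encoder's Gaussian and a unit Gaussian:

```python
import tensorflow as tf

def vae_loss(x, x_recon, mean, logvar):
    # Reconstruction term: per-pixel binary cross-entropy, summed per image.
    recon = tf.reduce_sum(
        tf.keras.losses.binary_crossentropy(x, x_recon), axis=[1, 2])
    # KL(N(mean, exp(logvar)) || N(0, 1)) in closed form, summed over latents.
    kl = -0.5 * tf.reduce_sum(1 + logvar - tf.square(mean) - tf.exp(logvar),
                              axis=1)
    return tf.reduce_mean(recon + kl)

# Toy check with a 16-dimensional latent space.
x = tf.random.uniform((2, 28, 28, 1))
print(vae_loss(x, x, tf.zeros((2, 16)), tf.zeros((2, 16))).numpy())
```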
Chapter 11: WGAN-GP
11.1. Generative Adversarial Network
11.1.1. Introduction to Generative Adversarial Network
11.1.2. Generative Adversarial Network Training and Loss Function
11.2. Evaluation of GAN, WGAN, and WGAN-GP
11.2.1. Problems of Generative Adversarial Network
11.2.2. Introduction to Wasserstein distance
11.2.3. WGAN-GP Loss Function
11.3. Experiment: Implementation of WGAN-GP
11.3.1. Create Project
11.3.2. Introduction to Dataset
11.3.3. Explanation of WGAN-GP Project
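A minimal sketch of the WGAN-GP gradient penalty from Section 11.2.3; `critic` stands for a hypothetical Keras critic model:

```python
import tensorflow as tf

def gradient_penalty(critic, real, fake):
    # Sample points on straight lines between real and fake images.
    eps = tf.random.uniform([tf.shape(real)[0], 1, 1, 1], 0.0, 1.0)
    inter = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(inter)
        score = critic(inter, training=True)
    grad = tape.gradient(score, inter)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grad), axis=[1, 2, 3]) + 1e-12)
    return tf.reduce_mean(tf.square(norm - 1.0))  # penalize ||grad|| far from 1
```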
Chapter 12: Object Detection
12.1. Computer Vision
12.2. Introduction to Object Detection
12.3. Historical Progress of Object Detection
12.3.1. R-CNN
12.3.2. Fast R-CNN
12.3.3. Faster R-CNN
12.3.4. YOLO v1
12.3.5. SSD
12.3.6. YOLO v2
12.3.7. FPN
12.3.8. RetinaNet
12.3.9. YOLO v3
12.3.10. CFF-SSD
12.3.11. DSNet
12.4. Experiment: Implementation of YOLO v3
12.4.1. Load project
12.4.2. Introduction to Dataset
12.4.3. Explanation of YOLO v3 project
12.4.4. YOLO v3 Training and Testing
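A minimal sketch of Intersection-over-Union (IoU), the box-overlap measure underlying the detectors surveyed in Section 12.3 (a generic building block, not the book's YOLO v3 code):

```python
import tensorflow as tf

def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2); compute the intersection rectangle.
    x1 = tf.maximum(box_a[..., 0], box_b[..., 0])
    y1 = tf.maximum(box_a[..., 1], box_b[..., 1])
    x2 = tf.minimum(box_a[..., 2], box_b[..., 2])
    y2 = tf.minimum(box_a[..., 3], box_b[..., 3])
    inter = tf.maximum(x2 - x1, 0.0) * tf.maximum(y2 - y1, 0.0)
    area_a = (box_a[..., 2] - box_a[..., 0]) * (box_a[..., 3] - box_a[..., 1])
    area_b = (box_b[..., 2] - box_b[..., 0]) * (box_b[..., 3] - box_b[..., 1])
    return inter / (area_a + area_b - inter + 1e-9)

print(iou(tf.constant([0., 0., 2., 2.]), tf.constant([1., 1., 3., 3.])).numpy())
```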
- No. of pages: 366
- Language: English
- Edition: 1
- Published: June 25, 2021
- Imprint: Academic Press
- Paperback ISBN: 9780323901987
- eBook ISBN: 9780323901994
Shih-Chia Huang
Trung-Hieu Le