Machine Learning for Low-Latency Communications
- 1st Edition - October 10, 2024
- Authors: Yong Zhou, Yinan Zou, Youlong Wu, Yuanming Shi, Jun Zhang
- Language: English
- Paperback ISBN: 978-0-443-22073-9
- eBook ISBN: 978-0-443-22074-6
Machine Learning for Low-Latency Communications presents the principles and practice of deep learning methodologies for mitigating three critical latency components: access latency, transmission latency, and processing latency. Low-latency communication attracts considerable attention from both academia and industry, given its potential to support emerging applications such as industrial automation, autonomous vehicles, augmented reality, and telesurgery. Despite this promise, achieving low latency is critically challenging: supporting massive connectivity incurs long access latency, while transmitting high-volume data leads to substantial transmission latency.
To address these challenges, the book develops learning-to-estimate methods based on algorithm unrolling and multi-armed bandits, which reduce access latency by enlarging the number of concurrent transmissions supported with the same pilot length. It then presents task-oriented learning-to-compress methods based on the information bottleneck principle, which reduce transmission latency by avoiding unnecessary data transmission. Finally, it gives three learning-to-optimize methods for processing-latency reduction that leverage graph neural networks, multi-agent reinforcement learning, and domain knowledge.
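The learning-to-estimate idea mentioned above (algorithm unrolling) treats each iteration of a classical sparse-recovery solver as one "layer" of a neural network, whose per-layer parameters can then be trained. The following is a minimal sketch of that structure using plain (untrained) ISTA iterations; the function names, hyperparameters, and the single-matrix setup are illustrative assumptions, not the book's actual group-sparse formulation.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm (shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(A, y, num_layers=200, step=None, lam=0.1):
    """Run a fixed number of ISTA iterations ("layers") to solve
    min_x 0.5*||y - A x||^2 + lam*||x||_1.

    In learned unrolling (e.g., LISTA-style networks), the per-layer
    step sizes and thresholds below would become trainable parameters
    instead of fixed values derived from A.
    """
    if step is None:
        # 1/L with L the Lipschitz constant of the gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x
```

Unrolling with a small, fixed number of layers is what yields the latency benefit: inference cost is deterministic, unlike an iterative solver run to convergence.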
- Presents the challenges and opportunities of leveraging data and model-driven machine learning methodologies for achieving low-latency communications
- Explains the principles and practices of modern machine learning algorithms (e.g., algorithm unrolling, multi-armed bandits, graph neural networks, and multi-agent reinforcement learning) for achieving low-latency communications
- Gives design, modeling, and optimization methods for low-latency communications that apply appropriate learning methods to solve longstanding problems
- Provides full details of the simulation setup and benchmarking algorithms, with downloadable code
- Outlines future research challenges and directions
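The learning-to-compress chapters are organized around the information bottleneck principle: choose a representation Z of the input X that stays informative about the task variable Y while discarding task-irrelevant detail, by minimizing I(X;Z) - beta*I(Z;Y). A minimal numerical sketch of that objective for discrete distributions follows; the function names and the joint-distribution inputs are illustrative assumptions, not the book's variational formulation.

```python
import numpy as np

def mutual_information(p_joint):
    """I(A;B) in nats for a joint distribution given as a 2-D array."""
    p_joint = p_joint / p_joint.sum()
    p_a = p_joint.sum(axis=1, keepdims=True)
    p_b = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0  # skip zero cells (0*log0 = 0 by convention)
    return float((p_joint[mask]
                  * np.log(p_joint[mask] / (p_a @ p_b)[mask])).sum())

def ib_objective(p_xz, p_zy, beta):
    # Information bottleneck Lagrangian: compress X into Z (first term)
    # while keeping Z informative about the task variable Y (second term).
    return mutual_information(p_xz) - beta * mutual_information(p_zy)
```

In the book's setting, these mutual-information terms are intractable for high-dimensional data, which is why Chapters 4-6 replace them with variational bounds parameterized by neural networks.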
- Cover image
- Title page
- Table of Contents
- Copyright
- List of figures
- List of tables
- Biographies
- Preface
- Acknowledgments
- Part 1: Introduction and overview
- Chapter 1: Introduction and overview
- 1.1. Low-latency communication
- 1.1.1. Grant-free random access
- 1.1.2. Task-oriented data compression
- 1.1.3. Large-scale resource allocation
- 1.2. Machine learning for large-scale optimization
- 1.2.1. Challenges of optimization-based algorithms
- 1.2.2. Machine learning models and algorithms
- 1.2.3. Advantages of machine learning algorithms
- 1.3. Organization
- Part 2: Learning to estimate for access latency reduction
- Chapter 2: Learning to estimate via group-sparse based algorithm unrolling
- 2.1. Background on grant-free random access
- 2.2. System model and problem formulation
- 2.2.1. System model
- 2.2.2. Problem formulation
- 2.3. Group-sparse based algorithm unrolling
- 2.3.1. Unrolled neural network architectures
- 2.3.2. Unrolled neural networks training
- 2.4. Numerical results
- 2.4.1. Simulation setting and performance metrics
- 2.4.2. Analysis validation
- 2.4.3. Performance evaluation
- 2.5. Summary
- Chapter 3: Learning to detect via multi-armed bandit
- 3.1. Background
- 3.2. System model and problem formulation
- 3.2.1. System model
- 3.2.2. Problem analysis
- 3.3. Coordinate descent with Bernoulli sampling
- 3.3.1. Reward function
- 3.3.2. Algorithm and analysis
- 3.4. Coordinate descent with Thompson sampling
- 3.4.1. Stochastic MAB for optimizing ε
- 3.4.2. Thompson sampling
- 3.4.3. CD-Thompson
- 3.5. Application to massive connectivity with low-precision ADCs
- 3.6. Simulation results
- 3.6.1. Simulation settings and performance metric
- 3.6.2. Convergence rate
- 3.6.3. Probability of error
- 3.6.4. Applications in low-precision ADCs
- 3.7. Conclusions
- Part 3: Learning to compress for transmission latency reduction
- Chapter 4: Learning to compress via information bottleneck
- 4.1. Background and motivation
- 4.2. System model and problem formulation
- 4.2.1. Task-oriented communication for edge inference
- 4.2.2. Information bottleneck principle
- 4.3. Variational feature encoding
- 4.3.1. Variational information bottleneck reformulation
- 4.3.2. Redundancy reduction and feature sparsification
- 4.3.3. Variational pruning on dimension importance
- 4.4. Variable-length variational feature encoding
- 4.4.1. Background on dynamic neural networks
- 4.4.2. Selective activation for dynamic channel conditions
- 4.4.3. Training procedure for the dynamic neural network
- 4.5. Simulation results
- 4.5.1. Experimental setup
- 4.5.2. Static channel conditions
- 4.5.3. Dynamic channel conditions
- 4.6. Summary
- Chapter 5: Learning to compress via robust information bottleneck with digital modulation
- 5.1. Background and motivation
- 5.2. System model and problem formulation
- 5.2.1. Task-oriented communication with digital modulation model
- 5.2.2. Robust information bottleneck (RIB) principle
- 5.3. Problem description
- 5.4. Robust encoding for task-oriented communication
- 5.4.1. Robust encoding with RIB
- 5.4.2. Variational upper bounds of RIB objective
- 5.5. Joint source-channel coding with digital modulation (DT-JSCC)
- 5.5.1. The DT-JSCC framework and algorithm
- 5.5.2. Practical advantages of DT-JSCC
- 5.6. Simulation results
- 5.6.1. Experimental setup
- 5.6.2. Inference performance
- 5.6.3. Robustness performance
- 5.6.4. Ablation experiments
- 5.7. Summary
- Chapter 6: Learning to compress for multi-device cooperative edge inference
- 6.1. Background and motivation
- 6.2. System model and problem formulation
- 6.2.1. Multi-device cooperative edge inference systems
- 6.2.2. Task-relevant feature extraction
- 6.2.3. Distributed feature encoding
- 6.2.4. Distributed information bottleneck principle
- 6.3. Variational approximation for feature extraction
- 6.3.1. Variational information bottleneck
- 6.3.2. DNN parameterization
- 6.4. Variational distributed feature encoding
- 6.4.1. Distributed deterministic information bottleneck (VDDIB) reformulation
- 6.4.2. Variational distributed deterministic information bottleneck
- 6.5. Distributed feature encoding with selective retransmission
- 6.5.1. Selective retransmission mechanism
- 6.5.2. The objective of VDDIB with selective retransmission
- 6.6. Simulation results
- 6.6.1. Experimental setup
- 6.6.2. Multi-view image classification
- 6.6.3. Multi-view object recognition
- 6.6.4. Ablation experiments
- 6.7. Summary
- Part 4: Learning to optimize for processing latency reduction
- Chapter 7: Learning to optimize via graph neural networks
- 7.1. Background on graph neural networks
- 7.2. Graph modeling of wireless networks
- 7.2.1. Directed graphs and permutation equivariance property
- 7.2.2. Wireless network as a graph
- 7.2.3. Graph modeling of multi-user interference channels
- 7.3. Neural network architecture design for radio resource management
- 7.3.1. Message passing graph neural networks
- 7.3.2. Key properties of MPGNNs
- 7.3.3. An effective implementation of MPGNNs
- 7.4. Theoretical analysis of MPGNN-based radio resource management
- 7.4.1. Simplifications
- 7.4.2. Equivalence of MPGNNs and distributed optimization
- 7.4.3. Performance and generalization of MPGNNs
- 7.5. Simulation results
- 7.5.1. Sum rate maximization
- 7.5.2. Weighted sum rate maximization
- 7.5.3. Beamforming design
- 7.6. Conclusions
- Chapter 8: Learning to optimize via knowledge guidance
- 8.1. Background and motivation
- 8.2. System model
- 8.2.1. Federated learning model
- 8.2.2. Over-the-air aggregation
- 8.3. Convergence analysis and problem formulation
- 8.3.1. Convergence analysis
- 8.3.2. Problem formulation
- 8.4. Alternating optimization algorithm
- 8.4.1. Receive factor optimization
- 8.4.2. Transmit power optimization
- 8.5. Knowledge-guided learning algorithm
- 8.5.1. Knowledge-guided learning for transceiver design
- 8.5.2. Deep neural network design
- 8.5.3. Deep neural network training
- 8.6. Simulation results
- 8.6.1. Simulation setup
- 8.6.2. Performance comparison
- 8.7. Conclusion
- Chapter 9: Learning to optimize via decentralized multi-agent reinforcement learning
- 9.1. Introduction
- 9.2. System model and problem formulation
- 9.2.1. System model
- 9.2.2. Problem formulation
- 9.2.3. Multi-agent system design
- 9.3. Proposed DEC-MAPC algorithm
- 9.3.1. Value decomposition technique
- 9.3.2. Network structure
- 9.4. Performance evaluation
- 9.4.1. Simulation setup
- 9.4.2. Sum-rate maximization
- 9.5. Conclusions
- Part 5: Conclusions
- Chapter 10: Conclusions
- 10.1. Conclusions
- Index
- No. of pages: 365
- Imprint: Academic Press