
Explainable AI for Communications and Networking
Toward Responsible Automation
- 1st Edition - April 8, 2025
- Imprint: Academic Press
- Editors: Hatim Chergui, Melike Erol-Kantarci, Christos Verikoukis
- Language: English
- Paperback ISBN: 978-0-443-29135-7
- eBook ISBN: 978-0-443-29136-4
Explainable AI for Communications and Networking: Toward Responsible Automation gives a tour of the realm of Explainable Artificial Intelligence (XAI) and its impact on transparency.

- Includes XAI graphical taxonomies, metrics, formulations, and code snippets.
- Provides practical examples and use-cases from a telecom industry perspective.
- Covers implementation guidelines (XAI libraries/implementation tools) tailored to a communications and networking context.
- Highlights the application of XAI in wireless communications, network management, generative AI for telecom and cybersecurity.
- Presents a thorough synthesis of the regulatory and ethical implications of XAI worldwide.
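To give a flavor of the kind of code snippet the book advertises, the following is a minimal from-scratch sketch of the Shapley-value idea behind SHAP (covered in Section 2.4.1): each feature's attribution is its average marginal contribution to the prediction over all feature orderings. The toy "throughput predictor" model, its weights, and the zero baseline are illustrative assumptions, not taken from the book.

```python
# From-scratch Shapley-value attribution for a tiny model.
# Hypothetical toy example; real deployments would use the shap library.
from itertools import permutations

def model(x):
    # Toy linear "throughput predictor"; weights are illustrative only.
    return 2.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def shapley_values(x, baseline):
    """Exact Shapley values by enumerating all feature orderings."""
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        current = list(baseline)
        prev = model(current)
        for i in order:
            current[i] = x[i]          # reveal feature i
            new = model(current)
            phi[i] += new - prev       # marginal contribution of feature i
            prev = new
    return [p / len(orders) for p in phi]

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
print(shapley_values(x, baseline))  # → [2.0, 2.0, 0.0]
```

For a linear model the attributions reduce to weight times feature deviation from the baseline, and they sum to the difference between the prediction and the baseline prediction — the "efficiency" property that makes SHAP attributions easy to sanity-check.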
- Cover image
- Title page
- Table of Contents
- Copyright
- Dedication
- List of figures
- List of tables
- Contributors
- About the editors
- Preface
- Chapter One: AI-driven network automation
- 1.1. Overview, benefits, and challenges
- 1.1.1. Major benefits
- 1.1.2. Main challenges
- 1.2. Use cases
- 1.2.1. Seamless immersive reality
- 1.2.2. Cooperative mobile robots and smart industries
- 1.2.3. Digital twins
- 1.3. Sustainability
- 1.3.1. Carbon telemetry
- 1.3.2. Sustainable AI
- 1.3.3. AI for network sustainability
- 1.4. Related standardization to network automation
- Chapter Two: Explainable AI overview
- 2.1. Why transparency in AI?
- 2.2. Concepts and definitions
- 2.3. Taxonomy of XAI techniques with examples
- 2.3.1. Model transparency
- 2.3.2. Model agnosticity
- 2.3.3. Scope
- 2.3.4. Explanation basis
- 2.4. XAI methods
- 2.4.1. SHAP
- 2.4.2. Integrated gradient (IG)
- 2.4.3. Gradient × input
- 2.4.4. Saliency maps
- 2.4.5. LRP
- 2.4.6. DeepLIFT
- 2.4.7. LIME
- 2.4.8. Rulefit
- 2.4.9. Counterfactual
- 2.4.10. GNN explainer
- 2.4.11. Reward shaping
- 2.4.12. XRL
- 2.4.13. Attention flow analysis
- 2.4.14. Machine reasoning (MR)
- 2.4.15. Structural causal models (SCM)
- 2.4.16. Caption generation
- 2.4.17. Knowledge graphs
- 2.4.18. Explanation-guided learning
- 2.5. Ranking of XAI methods in O-RAN prediction tasks
- 2.6. Key considerations in implementing XAI
- Annex: Python code for generating the results of Fig. 2.4
- Chapter Three: XAI metrics and frameworks
- 3.1. Introduction
- 3.2. Need for evaluation framework for XAI mechanisms
- 3.3. Metric ingredients
- 3.4. XAI metrics
- 3.4.1. Confidence/faithfulness
- 3.4.2. Log-odds
- 3.4.3. Comprehensiveness
- 3.4.4. Sufficiency
- 3.4.5. Robustness/sensitivity
- 3.4.6. Ambiguity
- 3.4.7. Infidelity
- 3.4.8. Relative consistency
- 3.4.9. Evaluation metrics for large language models in explainable AI
- 3.4.10. Hallucination semantic entropy
- 3.5. Application of XAI evaluation approaches
- 3.5.1. Human-centered evaluation for XAI mechanisms
- 3.5.2. Case study 1: PoC for explanation analysis using XAI metrics and properties
- 3.5.3. Case study 2: Evaluation of selected XAI metrics in a DNN-based CPU prediction task for O-RAN
- 3.5.4. Case study 3: In-hoc federated learning confidence
- 3.5.5. Case study 4: Metrics-enabled XAI pipelines
- 3.6. XAI frameworks: Python libraries
- 3.6.1. DeepExplain
- 3.6.2. LIME (Local Interpretable Model-agnostic Explanations)
- 3.6.3. SHAP (SHapley Additive exPlanations)
- 3.6.4. ELI5 (Explain Like I'm 5)
- 3.6.5. Captum
- 3.6.6. AI explainability 360 (AIX360)
- 3.6.7. Alibi
- 3.6.8. DALEX (Descriptive mAchine Learning EXplanations)
- Annex: Python code for generating the results of Table 3.1
- Chapter Four: XAI for wireless communications
- 4.1. Introduction
- 4.2. Toward XAI-based solutions
- 4.3. Signal model
- 4.4. FNN-based channel estimation schemes
- 4.4.1. STA-FNN
- 4.4.2. TRFI-FNN
- 4.5. XAI for channel estimation
- 4.6. Performance evaluation
- 4.6.1. Simulation setup
- 4.6.2. BER performance analysis
- 4.7. Conclusion
- Chapter Five: Neuro-symbolic XAI for communications
- 5.1. Introduction
- 5.2. Neuro-symbolic AI
- 5.2.1. Learning for reasoning model
- 5.2.2. Reasoning for learning model
- 5.2.3. Learning–reasoning model
- 5.3. Reinforcement learning
- 5.4. A practical guide to neuro-symbolic integration in RL
- 5.4.1. Complex symbolic rules
- 5.4.2. Symbolic state representation
- 5.4.3. Symbolic reward shaping
- 5.4.4. Symbolic policy constraints
- 5.4.5. Logical inference integration
- 5.4.6. Hybrid state-action space
- 5.5. Advancing from neuro-symbolic AI to XAI
- 5.6. Neuro-symbolic XAI opportunities in 6G networks
- 5.6.1. Enhanced data processing and network optimization
- 5.6.2. Security and privacy
- 5.6.3. User experience personalization
- 5.6.4. Sustainable network operations
- 5.7. Neuro-symbolic XAI challenges in 6G networks
- 5.7.1. Complexity of integration
- 5.7.2. Scalability issues
- 5.7.3. Data quality and availability
- 5.8. Conclusion
- Chapter Six: XAI for network management
- 6.1. Introduction
- 6.2. Motivation
- 6.3. AIChronoLens
- 6.3.1. The design
- 6.3.2. The rationale of explanations
- 6.4. Use cases of AIChronoLens
- 6.4.1. Evaluation settings
- 6.4.2. Error analysis
- 6.4.3. Model benchmarking
- 6.5. Discussion and concluding remarks
- 6.5.1. Applications of AI/ML to network management
- 6.5.2. Concluding remarks
- Chapter Seven: Responsible generative AI for telecom
- 7.1. Multimodal generative AI for telecom
- 7.1.1. Intent-based networking
- 7.1.2. Log anomaly detection
- 7.1.3. Multimodal self-network optimization
- 7.2. Generative AI vulnerabilities
- 7.3. Use case: Interpreting emerging communication via GenAI
- 7.3.1. Framework setup
- 7.3.2. Evaluation
- 7.4. Conclusions
- Annex: GPT-neo fine-tuning and perplexity evaluation code
- Chapter Eight: Explainable AI for cybersecurity decisions: challenges and opportunities
- 8.1. Introduction and motivation
- 8.2. XAI for cybersecurity: objectives, positioning, and challenges
- 8.3. Human-oriented explanations to augment SOC operations
- 8.3.1. Preliminaries
- 8.3.2. Computing candidate explanations
- 8.3.3. Generating the GR catalog
- 8.3.4. Alignment of production data with the ground truth catalog
- 8.4. Limitation and current research challenges
- 8.5. Conclusions
- Chapter Nine: Explainable AI for telecom use cases
- 9.1. Introduction
- 9.2. Remote electrical tilt optimization use case using eXplainable reinforcement learning
- 9.3. Proactive assurance of network slices using eXplainable machine learning, XAI-quantification, and counterfactuals
- 9.4. Explainable RL in collaborative robotics use case for industry 4.0
- 9.5. Auto-node provisioning for recommending configuration
- 9.6. Enhanced decision support in 6G BRICKS EU project
- 9.7. Counterfactual analysis and recourse action for energy savings in telecom networks
- Chapter Ten: XAI: regulatory and ethical aspects
- 10.1. The regulatory landscape for explainable AI
- 10.1.1. Governmental approaches
- 10.1.2. Standardization efforts for integrating explainability
- 10.2. Ethical theories, applications, and risks in XAI
- 10.2.1. Major ethical theories
- 10.2.2. Contextual ethical risks in XAI
- 10.3. Conclusion
- Chapter Eleven: Future directions and challenges
- 11.1. Performance–explainability trade-off
- 11.2. GenAI for telecom: an explainability perspective
- 11.3. Lack of standardization
- 11.4. Interoperable XAI models for complex multi-vendor 6G networks
- 11.5. Complexity of 6G network systems
- 11.6. Real-time constraints
- 11.7. Heterogeneity of target audiences in XAI
- 11.8. Privacy–explainability trade-off
- 11.9. The two facets of XAI and security
- 11.10. Enhancing XAI methods
- Chapter Twelve: Conclusions: a human-centric perspective
- Index
- No. of pages: 252
Hatim Chergui
Hatim Chergui (IEEE SM'22) received the Ph.D. degree (summa cum laude) in electrical engineering and telecommunications from IMT-Atlantique (Télécom-Bretagne), Brest, France. He is a Senior Researcher at i2CAT Foundation, Barcelona, Spain. He was the project manager of the H2020 MonB5G European project and a researcher at CTTC, Spain. He served as a RAN expert at both INWI and Huawei Technologies, Morocco. He was the recipient of the IEEE ComSoc CSIM 2021 Best Journal Paper Award and the IEEE ICC 2020 Best Paper Award. He is an Associate Editor of IEEE Networking Letters.
Melike Erol-Kantarci
Dr. Melike Erol-Kantarci is Canada Research Chair (Tier 2) in AI-Enabled Next-Generation Wireless Networks and Full Professor at the School of Electrical Engineering and Computer Science at the University of Ottawa, ON, Canada. She is also Chief Cloud RAN AI/ML Data Scientist at Ericsson.
She is the founding director of the Networked Systems and Communications Research (NETCORE) laboratory. She is a Faculty Affiliate at the Vector Institute and the Institute for Science, Society and Policy. She is also a courtesy faculty at the Department of Electrical and Computer Engineering at Clarkson University, Potsdam, NY.
Christos Verikoukis
Prof. Christos Verikoukis received his Ph.D. in broadband indoor wireless communications from the Signal Theory and Communications Department of the Technical University of Catalonia (UPC), Barcelona, in 2000. He is an Associate Professor in the CEI Department at the University of Patras.
He has published 147 journal papers, more than 220 conference papers, and 18 book chapters. He has also co-authored four books and filed four patents. His publications have received more than 7500 citations.