
Interdependent Human-Machine Teams
The Path to Autonomy
- 1st Edition - December 5, 2024
- Editors: William Lawless, Ranjeev Mittu, Donald Sofge, Hesham Fouad
- Language: English
- Paperback ISBN: 978-0-443-29246-0
- eBook ISBN: 978-0-443-29247-7
Interdependent Human-Machine Teams: The Path to Autonomy examines the foundations, metrics, and applications of human-machine systems, the legal ramifications of autonomy, trust by the public and by users, and the trust AI systems place in their users, integrating concepts from disciplines such as AI, machine learning, the social sciences, quantum mechanics, and systems engineering. In this book, world-class researchers, engineers, ethicists, and social scientists discuss what machines, humans, and systems should communicate to each other, to policymakers, and to the public.
It establishes the meaning and operation of “shared contexts” between humans and machines, policymakers, and the public, and explores how human-machine systems affect targeted audiences (researchers, machines, robots, users, regulators, etc.) and society, as well as future ecosystems composed of humans, machines, and systems.
- Investigates how interdependence is the missing ingredient necessary to produce operational autonomous systems
- Integrates concepts from a wide range of disciplines, including applied and theoretical AI, quantum mechanics, social sciences, and systems engineering
- Presents debates, models, and concepts of mutual dependency for autonomous human-machine teams, challenging assumptions across AI, systems engineering, data science, and quantum mechanics
- Cover image
- Title page
- Table of Contents
- Copyright
- List of contributors
- Preface
- 1. Introduction to “autonomous human–machine teams”
- 1.1 Introduction
- 1.1.1 Background
- 1.2 Tradeoffs between structure and performance of teams
- 1.3 Introduction to the chapters
- 1.4 Conclusion
- 2. Toward a new foundation for AI
- 2.1 Introduction
- 2.1.1 NNs are not new
- 2.2 Current position of neural networks in the ACM Computer Science Curriculum
- 2.3 Anticipated Impact of NNs
- 2.4 Engineered intelligence
- 2.4.1 IBM Watson
- 2.4.2 Cyc
- 2.4.3 Deep Blue
- 2.5 Learned Intelligence
- 2.5.1 AlphaGo Zero
- 2.5.2 AlphaZero
- 2.5.3 ChatGPT
- 2.6 Early criticism of symbol-manipulating AI
- 2.7 Reasoning and problem-solving with NN-based systems
- 2.7.1 LEGO
- 2.7.2 Simple logic puzzles
- 2.8 Simple common-sense reasoning
- 2.8.1 Reasoning with LLMs
- 2.9 Pattern recognition
- 2.9.1 Convolutional NNs (CNNs)
- 2.9.2 Transformers
- 2.9.3 Evaluation
- 2.10 Explainable AI
- 2.11 Autonomy
- 2.12 Conclusions
- 3. Human–machine teaming using large language models
- 3.1 Language: A most natural interface
- 3.2 The dream of talking machines
- 3.3 The importance of being human
- 3.3.1 The understanding of understanding
- 3.3.2 The Chinese room in the age of large language models
- 3.3.3 Living in the mental states
- 3.3.4 Who needs mental states
- 3.4 Language for human–machine teaming
- 3.5 Natural language and machines
- 3.6 Design of conversational machines
- 3.6.1 System language understanding
- 3.6.2 System vernacular
- 3.7 Risks of large language models
- 3.7.1 LLM application accuracy through better dialog
- 3.8 Concepts of LLM application design
- 3.8.1 Choosing an LLM
- 3.8.2 LLMs as components in systems
- 3.8.3 Principles of prompt engineering
- 3.9 Retrieval-augmented generation
- 3.10 Conclusion: The future of LLMs for HMI
- 4. Development of a team cohesion scale for use in human-autonomy team research
- 4.1 Introduction
- 4.1.1 Background: Team cohesion in human-autonomy teams
- 4.2 Scale development process
- 4.2.1 Phase 1: Item development
- 4.2.1.1 Function-based task cohesion
- 4.2.1.2 Structural cohesion (four dimensions)
- 4.2.1.3 Interpersonal cohesion (four dimensions)
- 4.2.1.4 Perceived team complementarity
- 4.2.1.5 Team resilience (three dimensions)
- 4.3 Phase 2: Scale development of initial item pool
- 4.4 Phase 3: Scale evaluation
- 4.5 Method
- 4.5.1 Instrumentation and facilities
- 4.5.2 Materials, tests, task, and stimuli
- 4.5.2.1 Questionnaire
- 4.5.2.2 Experimental task
- 4.5.3 Subjects
- 4.5.3.1 Sample size justification
- 4.5.4 Procedure
- 4.5.5 Experimental design
- 4.5.6 Data analysis
- 4.6 Results
- 4.6.1 Cohesion scale evaluation
- 4.6.2 Partial invariance
- 4.6.2.1 GEQ
- 4.6.2.2 Function
- 4.6.2.3 Exclusivity
- 4.6.2.4 Complementarity
- 4.6.2.5 Pride
- 4.6.2.6 Morale
- 4.6.2.7 Belongingness
- 4.6.2.8 Attraction to the group
- 4.6.2.9 Social
- 4.6.2.10 Leadership direction
- 4.6.2.11 Resilience: Team learning orientation
- 4.6.2.12 Resilience: Shared language
- 4.6.2.13 Resilience: Team flexibility
- 4.6.2.14 Resilience: Perceived efficacy of collective team action (PECTA)
- 4.6.2.15 Results summary
- 4.7 Discussion
- 4.8 Limitations and future directions
- 4.9 Conclusions and path forward
- Appendix A: Original item pool with item retention recommendations
- Function-based cohesion
- a. Task cohesion
- Structural cohesion
- a. Exclusivity
- b. Individual attraction to team
- c. Leadership direction (vertical cohesion)
- Interpersonal cohesion
- a. Team pride
- b. Social cohesion
- c. Belongingness
- d. Morale
- Perceived complementarity
- Resilience
- a. Mastery approaches
- b. Social capital: Shared language
- c. Collective efficacy: Perceived efficacy for collective team action
- 5. Enabling human–machine symbiosis: Automated establishment of common ground and estimates of the topological structures of Commander’s Intent
- 5.1 Introduction
- 5.2 Related work
- 5.2.1 Symbiotic artificial intelligence
- 5.2.2 Interdependence and joint activity theory
- 5.3 Common ground
- 5.3.1 Worked example of common ground
- 5.3.2 Estimating the underlying topology
- 5.3.3 Ontologies and ontology logs
- 5.3.4 Formally establishing common ground
- 5.4 Results and conclusions
- 6. Measuring consequential changes in human-autonomous system interactions
- 6.1 Introduction
- 6.2 Method
- 6.2.1 The baseline interface and tasks
- 6.2.2 Initial HMM development
- 6.2.3 Model comparison metric
- 6.3 Results
- 6.3.1 Same people, same interface
- 6.3.2 Different people and different interfaces
- 6.3.3 Same versus different people
- 6.3.4 Same people, modified interface
- 6.3.4.1 Observation alignment
- 6.3.5 Different people, similar interface, different tasks
- 6.4 Discussion
- 6.5 Limitations
- 6.5.1 Methodological
- 6.5.2 Practical
- 6.6 Conclusions
- 7. User affordances to engineer open-world enterprise dynamics
- 7.1 Background
- 7.2 An illustrative example
- 7.3 Categorical, situated reasoning
- 7.4 Challenges
- 7.4.1 A goal, a possibilistic network presentation
- 7.5 Three mathematical foundations
- 7.5.1 Situation and channel theories
- 7.5.2 Ordered valuation algebras
- 7.5.3 Constructivist systems and synthetic differential geometry
- 7.6 A tentative relationship of concepts with candidate visual grammars
- 7.6.1 CaNeTA, an intermediate user-facing capture model
- 7.6.2 Assigning dynamic ontology influence
- 7.6.3 Three levels of analysis
- 7.6.4 Self-organizing enterprises: Entropy and influence
- 8. Truth-O-Meter: Collaborating with LLM in fighting its hallucinations
- 8.1 Introduction
- 8.1.1 Why LLMs hallucinate
- 8.1.1.1 Introductory example
- 8.1.2 Related work
- 8.1.3 Iterative mode
- 8.1.4 Handling multiple mutually inconsistent facts obtained from authoritative sources
- 8.1.5 Correcting factual errors in syntactic and semantic spaces
- 8.1.5.1 Fact-checking by question answering against sources and syntax-semantic alignment
- 8.1.5.2 Example of alignment
- 8.1.6 Evaluation
- 8.1.7 Hallucination types
- 8.1.7.1 Token-level hallucination correction
- 8.1.7.2 Evaluation against fact extraction and verification annotation platform
- 8.1.8 Automated evaluation on QA datasets
- 8.1.8.1 Information-seeking dialogues
- 8.1.8.2 Personalized drug intake recommendation domain
- 8.1.8.3 Error analysis
- 8.1.8.4 Examples of repairs for hallucination
- 8.1.9 Discussions
- 8.1.10 Conclusions
- 9. Natural versus artificial intelligence: AI insights from the cognitive sciences
- 9.1 Introduction: Natural versus artificial intelligence
- 9.1.1 A path to machine intelligence
- 9.2 Insights from the cognitive sciences
- 9.2.1 Insight 0: Inductive learning strategies are brittle
- 9.2.2 Insight 1: “Learning” and “intelligence” are semantically overloaded terms
- 9.2.3 Insight 2: Removing technological barriers is necessary but not sufficient
- 9.2.4 Insight 3: Top-down processing offers a way forward
- 9.3 The argument for a cognitive concept of AI
- 9.4 Conclusions
- 10. Intention when humans team with AI
- 10.1 Introduction
- 10.2 State of human–AI teaming
- 10.3 Integration
- 10.3.1 Debate over intention
- 10.3.2 Challenge of collective intent
- 10.4 Conclusion
- 11. Autonomy: A family resemblance concept? An exploration of human–robot teams
- 11.1 Introduction
- 11.2 Teams and teammates
- 11.2.1 A small autonomous robot team
- 11.2.2 Human teams employ inference: Theory of mind
- 11.2.3 Gameplay between teams of humans and AI
- 11.2.4 Team conclusion
- 11.3 Autonomy
- 11.3.1 Autonomy in humans
- 11.3.2 Autonomy in groups
- 11.3.3 Autonomy in robots
- 11.3.4 Human versus robot autonomy
- 11.3.4.1 An illustration—Hanabi revisited
- 11.4 Summary and concluding remarks
- 11.4.1 Family resemblance
- 11.4.2 Conclusions
- Funding
- 12. A theoretical approach to management of limited attentional resources to support the m:N operation in advanced air mobility ecosystem
- 12.1 Introduction
- 12.2 AAM technology and ecosystem
- 12.3 Theories of attention allocation
- 12.4 Trust in human–automation interaction and human–autonomy teaming
- 12.5 A model of human–AAM technology interaction centering on human attentional limits
- 12.6 Situation assessment
- 12.7 Trust
- 12.8 Overall description of a model of human–AAM technology interaction
- 12.9 Conclusion
- 13. Predicting workload of dispatchers supervising autonomous systems
- 13.1 Introduction
- 13.2 Simulator of Humans and Automation in Dispatch Operations (SHADO)
- 13.3 Rail case study
- 13.3.1 Validation
- 13.3.1.1 Data validation
- 13.3.1.2 Black-box validation
- 13.3.2 Predicting changes in rail dispatchers' workload due to automation
- 13.4 Modeling surface transportation dispatchers that supervise autonomous vehicles
- 13.4.1 Single dispatcher operations
- 13.4.2 Two dispatcher operations
- 13.5 Limitations
- 13.6 Conclusion
- 14. The generative AI weapon of mass destruction: Evolving disinformation threats, vulnerabilities, and mitigation frameworks
- 14.1 Introduction
- 14.1.1 Overview of GenAI
- 14.1.2 Significance of AI in modern disinformation
- 14.1.3 Definitions and scope
- 14.1.4 Purpose and structure of the chapter
- 14.2 Understanding GenAI
- 14.2.1 GenAI integrity manipulation
- 14.2.2 Capabilities and applications in content generation
- 14.2.3 Deepfakes and synthetic media
- 14.3 The role of GenAI in propaganda
- 14.3.1 Historical context of disinformation techniques
- 14.3.2 Intersection of AI and propaganda: Use case
- 14.3.3 GenAI disinformation threats and vulnerabilities
- 14.3.3.1 AI hallucinations, model poisoning and deepfakes in disinformation campaigns
- 14.4 GenAI efficacy assessment frameworks
- 14.4.1 GenAI model testing and monitoring
- 14.4.1.1 Prompt integrity testing
- 14.4.1.2 GenAI Integrity Testing Methodology
- 14.4.1.3 AI Trust Framework and Maturity Model
- 14.5 Identifying GenAI risks and vulnerabilities
- 14.5.1 Identifying risks associated with GenAI in propaganda
- 14.5.2 Analysis of technological and societal vulnerabilities
- 14.5.3 Threat modeling: Assessing likelihood and severity of threats
- 14.6 Mitigation strategies
- 14.6.1 Governance, regulatory and ethical measures
- 14.6.2 Neuro-symbolic reasoning
- 14.6.3 Cyber Kill Chain
- 14.6.4 MITRE ATLAS
- 14.6.4.1 GenAI security best practices & frameworks
- 14.6.4.2 NIST AI Risk Management Framework
- 14.6.4.3 Use case: Applying MITRE ATLAS to an adversarial AI incident
- 14.7 Conclusion
- 14.7.1 Summary of key points
- 14.7.2 Outlook and challenges
- 14.7.3 Final remarks on managing GenAI in propaganda
- 15. Ethics for artificial agents: Toward commensurate capability and self-regulation
- 15.1 Introduction
- 15.2 Approach
- 15.3 Implementing ethics sensitivity in machines
- 15.3.1 Nonharmful (nonmaleficent) machines
- 15.3.2 Helpful (beneficent) machines
- 15.3.3 Dutiful (responsible) machines
- 15.3.4 Trustworthy and transparent machines
- 15.4 Scientific accounts of mJDM
- 15.4.1 Evolutionary accounts
- 15.4.2 Psychological accounts
- 15.4.3 Neuroscientific accounts
- 15.4.4 Integrating mJDM accounts to scope a version of the capacity in machines
- 15.5 Major design policy themes around artificial ethical agents
- 15.5.1 Design policy around endogeneity
- 15.5.2 Design policy around generality
- 15.5.3 Design policy around mentalization
- 15.5.4 Design policy around legibility
- 15.5.5 Design policy around duty responsiveness
- 15.6 Conclusion
- 16. Self-visualization for the human–machine mind–body problem
- 16.1 Foreword: About this paper
- 16.1.1 Relevance of cognitive entropy
- 16.2 Introduction
- 16.2.1 Computation functionalism and the polarity issue between computer science and logic
- 16.2.2 The mind-body problem
- 16.2.3 Precedence relations and m-compactness
- 16.2.4 Consistency forces us to merge Physicalism and Idealism
- 16.3 Virtual consciousness and machine consciousness
- 16.3.1 Decision-making and higher-order representations in machine consciousness
- 16.3.2 Vanishing of first-order representations, Kant's principle, and rejection of the empirical basis
- 16.3.3 Meta-consciousness
- 16.3.4 The algebraic type degeneration assumption
- 16.3.5 Trialism, local monism, and virtual thinking
- 16.3.6 Hopf algebras, trialism as oscillations on infinity
- 16.3.7 Machine consciousness and video games
- 16.4 Human–machine interactions and the mind-body problem
- 16.4.1 Consistency at Planck's length and alogism
- 16.4.2 Descartes' diagrams
- 16.4.3 Completeness of the alogical doubt state
- 16.4.4 Descartes' diagrams and quantum physics
- 16.5 Quantum physics and machine consciousness
- 16.5.1 Sense-making, quantum measurement and consciousness
- 16.5.2 Wigner's numerous friends
- 16.5.3 Consciousness and the m-property
- 16.5.4 Identities and co-identities or the problem of initial conditions
- 16.6 Conclusion
- 17. Knowledge, consciousness, and debate: advancing the science of autonomous human–machine teams
- 17.1 Introduction
- 17.2 A review of knowledge across selected disciplines
- 17.2.1 What is knowledge to systems engineering
- 17.2.2 What is knowledge to philosophers?
- 17.2.3 What is knowledge to social scientists?
- 17.2.3.1 What is knowledge to citizens making decisions to clean up wastes?
- 17.2.3.2 What is knowledge under autocrats?
- 17.2.3.3 What is knowledge to consciousness?
- 17.2.4 What is knowledge to information theorists?
- 17.2.4.1 Intuition
- 17.2.4.2 Details
- 17.2.5 What is knowledge to physicists?
- 17.3 What is knowledge to us?
- 17.4 Discussion
- 17.4.1 Systems engineering
- 17.4.2 Philosophy
- 17.4.3 Social science
- 17.4.3.1 Citizen recommendations to government agencies for the cleanup of their wastes
- 17.4.3.2 Business
- 17.4.4 Consciousness
- 17.4.4.1 Authoritarianism
- 17.4.4.2 Information theory
- 17.4.5 Physics
- 17.5 Conclusions
- Index
- No. of pages: 448
- Language: English
- Edition: 1
- Published: December 5, 2024
- Imprint: Academic Press
- Paperback ISBN: 9780443292460
- eBook ISBN: 9780443292477
William Lawless
Ranjeev Mittu
Ranjeev Mittu is the Branch Head for the Information and Decision Sciences Branch within the Information Technology Division at the U.S. Naval Research Laboratory (NRL). He leads a multidisciplinary group of scientists and engineers conducting research and advanced development in visual analytics, human performance assessment, decision support systems, and enterprise systems. Mr. Mittu’s research expertise is in multi-agent systems, human-systems integration, artificial intelligence (AI), machine learning, data mining, and pattern recognition, and he has authored or coedited eleven books on AI in collaboration with national and international scientific communities spanning academia and defense. Mr. Mittu received a Master of Science degree in Electrical Engineering in 1995 from The Johns Hopkins University in Baltimore, MD.
The views expressed in this Work do not necessarily represent the views of the Department of the Navy, the Department of Defense, or the United States.
Donald Sofge
Don Sofge is a computer scientist and roboticist at the Naval Research Laboratory (NRL) with 36 years of experience in artificial intelligence, machine learning, and control systems R&D, the last 23 years at NRL. He leads the Distributed Autonomous Systems Section in the Navy Center for Applied Research in Artificial Intelligence (NCARAI), where he applies nature-inspired computing paradigms to challenging problems in sensing, artificial intelligence, and control of autonomous robotic systems. He has more than 200 refereed publications, including 12 edited books in robotics, artificial intelligence, machine learning, planning, sensing, control, and related disciplines.
The views expressed in this Work do not necessarily represent the views of the Department of the Navy, the Department of Defense, or the United States.
Hesham Fouad
Hesham Fouad is the section head for the Intelligent Decision Support Section within the Information Technology Division at the U.S. Naval Research Laboratory (NRL). Dr. Fouad has over 35 years of experience as a computer scientist working in both industry and academia. He began his career at IBM Advanced Technologies, where he worked on the first commercially available expert system development and runtime environment, the Expert System Environment (ESE), developed through a collaborative effort between the AI team at Stanford University and IBM. Dr. Fouad worked with the Stanford team to integrate new capabilities into ESE, such as frame-based reasoning. He also worked with the voice recognition group, led by Kai-Fu Lee, at Carnegie Mellon University to transition their S&T into IBM’s ViaVoice product. Finally, Dr. Fouad worked with the MIT Media Lab’s Project Athena group on a collaborative effort to integrate a hypermedia capability into the OS/2 operating system.
Dr. Fouad left IBM to pursue a doctoral degree in Computer Science at The George Washington University, where he conducted his dissertation research on optimal real-time scheduling algorithms for imprecise computations using a fair scheduling strategy. This work was the basis of a startup company that Dr. Fouad founded and ran for 13 years, during which he developed and managed the production of a line of commercial software for synthetic spatial audio design and production. He also conducted research on, and managed, several ONR-funded BAA and SBIR awards on adaptive training in virtual environments.
Dr. Fouad maintained his involvement with academia both as an adjunct professor at The George Washington University and as chair of a newly created undergraduate degree program in Computer Science with a focus on game development at the Art Institute of Washington. Since joining NRL, Dr. Fouad has led numerous efforts with three transitions to date. He has procured millions in funding from Navy, Marine Corps, Army, and OUSD sources. He has mentored SEAP students and has taken on a variety of roles within The Technical Cooperation Program.