Putting AI in the Critical Loop
Assured Trust and Autonomy in Human-Machine Teams
- 1st Edition - February 20, 2024
- Editors: Prithviraj Dasgupta, James Llinas, Tony Gillespie, Scott Fouse, William Lawless, Ranjeev Mittu, Donald Sofge
- Language: English
- Paperback ISBN: 978-0-443-15988-6
- eBook ISBN: 978-0-443-15987-9
Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams takes on the primary challenges of bidirectional trust and performance of autonomous systems, providing readers with a review of the latest literature, the science of autonomy, and a clear path toward the autonomy of human-machine teams and systems. Throughout the book, the intersecting themes of collective intelligence, bidirectional trust, and continual assurance lay the groundwork for readers not only to bridge knowledge gaps but also to advance the science toward better solutions. The distinctly different characteristics of humans and machines are likely why the two have the potential to work well together, overcoming each other's weaknesses through cooperation, synergy, and interdependence to form a "collective intelligence." Trust is bidirectional: humans need to trust AI technology, but future AI technology may also need to trust humans.
- Assesses the latest research advances, engineering challenges, and the theoretical gaps surrounding the question of autonomy
- Reviews the challenges of autonomy (e.g., trust, ethics, legalities, etc.), including gaps in the knowledge of the science
- Offers a path forward to solutions
- Investigates the value of human trust in HMTs, as well as the bidirectionality of trust, including how machines learn to trust their human teammates
Graduate students, researchers, and professionals in computer science who are interested in artificial intelligence and autonomous human-machine teams and systems
- Cover image
- Title page
- Table of Contents
- Copyright
- Contributors
- About the editors
- 1 Introduction
- Abstract
- 1 Theme of the symposium
- 2 Teams and teamwork
- 3 Team situation awareness (TSA)
- 4 The trust dimension
- 5 Summary remarks
- References
- 2 Alternative paths to developing engineering solutions for human-machine teams
- Abstract
- 1 Introduction
- 2 Panel organization
- 3 Test vignettes
- 4 Major issues raised
- 5 Future work
- 6 Summary of presentations by panel members
- 7 Biographies of panelists and moderators
- References
- 3 Risk determination vs risk perception: From hate speech, an erroneous drone attack, and military nuclear wastes to human-machine autonomy
- Abstract
- 1 Introduction
- 2 Situation
- 3 Case studies
- 4 How to fix?
- 5 A work-in-progress: Future autonomous systems
- 6 Rationality
- 7 Deception
- 8 Innovation: A trade-off between innovation and suppression
- 9 Conclusions
- References
- 4 Appropriate context-dependent artificial trust in human-machine teamwork
- Abstract
- 1 Introduction
- 2 Trust definition
- 3 Trust models, Krypta and Manifesta
- 4 Trust as a context-dependent model
- 5 Trust as a belief of trustworthiness
- 6 Discussion and future work
- 7 Conclusion
- References
- 5 Toward a causal modeling approach for trust-based interventions in human-autonomy teams
- Abstract
- 1 Human-autonomy teams
- 2 Trust in human-autonomy teams
- 3 Trust measurement in HAT
- 4 Interventions and teaming
- 5 Our model human-autonomy teaming scenario
- 6 A brief overview of causal modeling
- 7 Causal modeling in context
- 8 Conclusions
- References
- 6 Risk management in human-in-the-loop AI-assisted attention aware systems
- Abstract
- 1 Introduction
- 2 Attention aware systems
- 3 Risk management of attention aware systems
- 4 Risk management considerations
- 5 Risk management approaches
- 6 Discussion and conclusions
- References
- 7 Enabling trustworthiness in human-swarm systems through a digital twin
- Abstract
- Acknowledgments
- 1 Introduction
- 2 Trustworthy human-swarm interaction
- 3 Industry-led trust requirements
- 4 Explainability of human-swarm systems
- 5 Use-case development
- 6 Human-swarm teaming simulation platform
- 7 Compliance with requirements
- 8 Discussion and conclusion
- References
- 8 Building trust with the ethical affordances of education technologies: A sociotechnical systems perspective
- Abstract
- 1 Introduction
- 2 Operationalizing ethics in learning engineering: From values to assessment
- 3 AI-based technologies for instruction and assessment
- 4 Knowledge management, learner records, and data lakes
- 5 Learning systems inside and outside higher education
- 6 Responsive and resilient design in learning engineering
- 7 Conclusion
- References
- 9 Perceiving a humorous robot as a social partner
- Abstract
- 1 Introduction
- 2 Background
- 3 Humor and trust
- 4 Research questions
- 5 Method
- 6 Experiment
- 7 Results
- 8 Discussion and future work
- 9 Conclusion
- References
- 10 Real-time AI: Using AI on the tactical edge
- Abstract
- Acknowledgments
- 1 Introduction
- 2 Problem definition
- 3 Related work
- 4 Multitask neural network model
- 5 Scheduling
- 6 Conclusions
- References
- 11 Building a trustworthy digital twin: A brave new world of human machine teams and autonomous biological internet of things (BIoT)
- Abstract
- 1 Introduction
- 2 Examination of the current state of biosecurity: What does assured trust in BIoT look like? What happens when it breaks down?
- 3 Security maturity of cyber-physical-biological systems in the biopharma sector
- 4 Antiquated biosafety and security net
- 5 BioSecure digital twin response
- 6 Trust between human-machine teams deploying AI driven digital twins
- 7 Zero-trust approach to biopharma cybersecurity
- 8 Trust framework for biological internet of things (BIoT)
- 9 Digital twin trust framework for human-machine teams
- 10 Digital twin opportunities and challenges to improve trust in human-machine teams
- 11 Future research and conclusion
- References
- 12 A framework of human factors methods for safe, ethical, and usable artificial intelligence in defense
- Abstract
- 1 Introduction
- 2 Method
- 3 Results
- 4 Discussion
- Appendix A Mapping of methods to the ADF capability life cycle phases.
- Appendix B Mapping of methods to each of the modified NATO principles of responsible use of AI.
- References
- 13 A schema for harms-sensitive reasoning, and an approach to populate its ontology by human annotation
- Abstract
- 1 Introduction: Chess bot incident begs for harms reasoning licensure
- 2 Generating values-driven behavior
- 3 Moral-scene assessment: Minds, and affordances to them
- 4 Injury: How physical harms come to be
- 5 Knowledge representation
- 6 Population of the ontology
- 7 Parameterizations
- 8 Handling harms of various types
- 9 Conclusion
- Acknowledgments
- References
- Index
- No. of pages: 304
- Imprint: Academic Press
Prithviraj Dasgupta
Prithviraj (Raj) Dasgupta is a computer engineer with the Distributed Intelligent Systems Section at the Naval Research Laboratory in Washington, D.C. His research interests are in machine learning, AI-based game playing, game theory, and multi-agent systems. He received his Ph.D. in 2001 from the University of California, Santa Barbara. From 2001 to 2019, he was a full professor in the computer science department at the University of Nebraska, Omaha, where he established and directed the CMANTIC Robotics Laboratory. He has authored over 150 publications in leading journals and conferences in his research area. He is a senior member of IEEE.
Affiliations and expertise
Computer Engineer, Distributed Intelligent Systems Section, Naval Research Laboratory, Washington, D.C., USA
James Llinas
James Llinas is an emeritus professor at the University at Buffalo, New York. He established and directed the Center for Multisource Information Fusion at the university, the only academic systems-centered information fusion center in the United States, leading it in carrying out well-funded multidisciplinary research for over 20 years. He was a co-author of the first book on data fusion and has co-edited and co-authored several additional books on data and information fusion. In 1998, he helped establish the International Society for Information Fusion and served as its first President.
Affiliations and expertise
Emeritus Professor, University at Buffalo, New York, NY, USA
Tony Gillespie
Tony Gillespie is a visiting professor at University College London and a fellow of the Royal Academy of Engineering. His career includes academic, industrial, and government research and research management. In recent years, he has extended his work on ensuring that highly automated weapons meet legal requirements to other autonomous systems, authoring a book and academic papers on the subject. He has acted as a technical adviser to the UN and to other meetings discussing potential bans on autonomous weapon systems.
Affiliations and expertise
Visiting Professor, University College London, UK
Scott Fouse
Scott Fouse had a 42-year career in aerospace R&D, mostly focused on exploring military applications of AI. He was the VP of the Advanced Technology Center at Lockheed Martin Space, where he led approximately 500 scientists and engineers performing research and development in space science and a variety of space systems-related technologies and capabilities. In prior appointments, Scott served as president and CEO of ISX Corporation and as a member of the Air Force Scientific Advisory Board, where he supported a number of studies and directorate reviews and chaired a study on experimentation to support disruptive innovation. Scott has a BS in physics from the University of Central Florida and an MS in electrical engineering from the University of Southern California.
Affiliations and expertise
Independent Consultant, Fouse Consulting Services, AZ, USA
William Lawless
William Lawless is professor of mathematics and psychology at Paine College, GA. For his PhD topic on group dynamics, he theorized about the causes of tragic mistakes made by large organizations staffed with world-class scientists and engineers. After his PhD in 1992, DOE invited him to join its citizens advisory board (CAB) at DOE's Savannah River Site (SRS), Aiken, SC. As a founding member, he coauthored numerous recommendations on environmental remediation of radioactive wastes (e.g., the regulated closure in 1997 of the first two high-level radioactive waste tanks in the USA). He is a member of INCOSE, IEEE, AAAI, and AAAS. His research today is on autonomous human-machine teams (A-HMT). He is the lead editor of seven published books on artificial intelligence. He was lead organizer of a special issue of AI Magazine on "human-machine teams and explainable AI" (2019). He has authored over 85 articles and book chapters and over 175 peer-reviewed proceedings. He was the lead organizer of twelve AAAI symposia at Stanford (2020). Since 2018, he has also served on the Office of Naval Research's Advisory Boards for the Science of Artificial Intelligence and Command Decision Making.
Affiliations and expertise
Department of Mathematics, Sciences and Technology, and Department of Social Sciences, School of Arts and Sciences, Paine College, Augusta, GA, USA
Ranjeev Mittu
Ranjeev Mittu is the branch head for the Information Management and Decision Architectures Branch within the Information Technology Division at the U.S. Naval Research Laboratory (NRL). He leads a multidisciplinary group of scientists and engineers who conduct research and advanced development in visual analytics, human performance assessment, decision support systems, and enterprise systems. Mr. Mittu's research expertise is in multi-agent systems, human-systems integration, artificial intelligence (AI), machine learning, data mining, and pattern recognition, and he has authored and/or coedited nine books on AI in collaboration with national and international scientific communities spanning academia and defense. Mr. Mittu received a Master of Science degree in electrical engineering in 1995 from The Johns Hopkins University in Baltimore, MD.
Affiliations and expertise
Information Management and Decision Architectures Branch, Information Technology Division, U.S. Naval Research Laboratory, Washington, DC, USA
Donald Sofge
Don Sofge is a computer scientist and roboticist at the Naval Research Laboratory (NRL) with 33 years of experience in artificial intelligence, machine learning, and control systems R&D. He leads the Distributed Autonomous Systems Group in the Navy Center for Applied Research in Artificial Intelligence (NCARAI), where he applies nature-inspired computing paradigms to challenging problems in sensing, artificial intelligence, and control of autonomous robotic systems. He has more than 180 refereed publications, including 10 books, in robotics, artificial intelligence, machine learning, planning, sensing, control, and related disciplines.
Affiliations and expertise
Navy Center for Applied Research in Artificial Intelligence, United States Naval Research Laboratory, Washington, DC, USA