
Trolley Crash
Approaching Key Metrics for Ethical AI Practitioners, Researchers, and Policy Makers
- 1st Edition - January 26, 2024
- Imprint: Academic Press
- Editors: Peggy Wu, Michael Salpukas, Hsin-Fu Wu, Shannon Ellsworth
- Language: English
- Paperback ISBN: 978-0-443-15991-6
- eBook ISBN: 978-0-443-15992-3
The prolific deployment of Artificial Intelligence (AI) across different fields has introduced novel challenges for AI developers and researchers. AI is permeating decision making for the masses, and its applications range from self-driving automobiles to financial loan approvals. With AI making decisions that have ethical implications, responsibilities are now being pushed to AI designers who may be far removed from how, where, and when these ethical decisions occur.
Trolley Crash: Approaching Key Metrics for Ethical AI Practitioners, Researchers, and Policy Makers provides audiences with a catalogue of perspectives and methodologies from the latest research in ethical computing. This work integrates philosophical and computational approaches into a unified framework for ethical reasoning in the current AI landscape, specifically focusing on approaches for developing metrics. Written for AI researchers, ethicists, computer scientists, software engineers, operations researchers, and autonomous systems designers and developers, Trolley Crash will be a welcome reference for those who wish to better understand metrics for ethical reasoning in autonomous systems and related computational applications.
- Presents a comparison between human oversight and ethical simulation in robots
- Introduces approaches for measuring, evaluating, and auditing ethical AI
- Investigates how AI and technology are changing human behavior
- Cover image
- Title page
- Table of Contents
- Copyright
- Contributors
- Foreword
- Acknowledgments
- Chapter One: Introduction
- 1.1. Ethical AI introduction
- 1.2. Why ethical AI metrics?
- 1.3. Ethical AI metric development
- References
- Chapter Two: Terms and references
- Abstract
- 2.1. Definition of terms and references
- 2.2. Discussion
- 2.3. Conclusion
- References
- Chapter Three: Boiling the frog: Ethical leniency due to prior exposure to technology
- Abstract
- Notes and acknowledgments
- 3.1. Introduction
- 3.2. Background
- 3.3. Literature review
- 3.4. Problem
- 3.5. Methods
- 3.6. Data analysis
- 3.7. Use cases
- 3.8. Applications
- 3.9. Discussion
- 3.10. Conclusions
- 3.11. Outlook and future works
- References
- Chapter Four: Automated ethical reasoners must be interpretation-capable
- Abstract
- Acknowledgements
- 4.1. Introduction: Why addressing open-texturedness matters
- 4.2. Interpretive reasoning and the MDIA position
- 4.3. Benchmark tasks to achieve interpretation-capable AI
- 4.4. Conclusion
- References
- Chapter Five: Towards unifying the descriptive and prescriptive for machine ethics
- Abstract
- Acknowledgements
- 5.1. Machine learning – A gamble with ethics
- 5.2. Definitions, background, and state of the art
- 5.3. Is machine learning safe?
- 5.4. Moral axioms – A road to safety
- 5.5. Testing luck as distinguishing between morality and convention
- 5.6. Discussion
- 5.7. Conclusion
- References
- Chapter Six: Competent moral reasoning in robot applications: Inner dialog as a step towards artificial phronesis
- Abstract
- Acknowledgements
- 6.1. Introduction and motivation
- 6.2. Background, definitions, and notations
- 6.3. Literature review and state of the art
- 6.4. Problem/system/application definition
- 6.5. Proposed solution
- 6.6. Analysis (qualitative/quantitative)
- 6.7. Use cases
- 6.8. Applications
- 6.9. Discussion
- 6.10. Conclusions
- 6.11. Outlook and future works
- References
- Chapter Seven: Autonomy compliance with doctrine and ethics by using ontological frameworks
- Abstract
- 7.1. Introduction and motivation
- 7.2. Background, definitions, and notations
- 7.3. Literature review and state of the art
- 7.4. Problem/system/application definition
- 7.5. Proposed solution
- 7.6. Qualitative and quantitative analysis
- 7.7. Use cases and applications
- 7.8. Discussion and conclusions
- 7.9. Outlook and future work
- References
- Chapter Eight: Meaningful human control and ethical neglect tolerance: Initial thoughts on how to define, model, and measure them
- Abstract
- Acknowledgements
- 8.1. Introduction and background
- 8.2. A core distinction and a challenging problem
- 8.3. An approach to MHC evaluation for non-RT control systems
- 8.4. An illustrative worked example
- 8.5. Conclusions and implications
- References
- Chapter Nine: Continuous automation approach for autonomous ethics-based audit of AI systems
- Abstract
- 9.1. Introduction and motivation
- 9.2. Background, literature, definitions, and notations
- 9.3. A proposed automation approach for ethics-based auditing
- 9.4. Use case: AI-assisted recruitment risk-based ethical audit sample
- 9.5. Conclusions
- 9.6. Outlook and future work
- References
- Chapter Ten: A tiered approach for ethical AI evaluation metrics
- Abstract
- Acknowledgements
- 10.1. Introduction
- 10.2. Background
- 10.3. Related work
- 10.4. Methodology
- 10.5. Results/discussion
- 10.6. Conclusion and future work
- References
- Chapter Eleven: Designing meaningful metrics to demonstrate ethical supervision of autonomous systems
- Abstract
- Acknowledgements
- 11.1. Introduction and motivation
- 11.2. Background, scope, and design considerations
- 11.3. Ethical principles for AI
- 11.4. Human accountability for deployed AI
- 11.5. Mission orders as common basis for tasking, testing, and control
- 11.6. Heuristics for creating testable robotic AI metrics
- 11.7. Dimensions of Autonomous Decision Making (DADM)
- 11.8. Negligence provides sharp contrast of gaps
- 11.9. Correspondences needed between virtual and physical environments
- 11.10. Test suites as qualification and certification process for TestDevOps
- 11.11. Looking forward: “You get what you measure” and trust
- References
- Chapter Twelve: Obtaining hints to understand language model-based moral decision making by generating consequences of acts
- Abstract
- Acknowledgements
- 12.1. Introduction and motivation
- 12.2. Background and definitions
- 12.3. Literature review and state of the art
- 12.4. Problem definition
- 12.5. Proposed solution
- 12.6. Experiments and results
- 12.7. Use cases and applications
- 12.8. Discussion
- 12.9. Conclusions
- 12.10. Outlook and future works
- References
- Chapter Thirteen: Emerging issues and challenges
- Appendix 13.A. Other sample interactions with AI
- References
- Acronyms
- Index
- No. of pages: 264
Peggy Wu
Peggy Wu is an award-winning scientist with over 20 years of experience combining cognitive psychology with AI in human-machine systems. She conducts research advancing knowledge representation and reasoning, human-computer etiquette, socially intelligent agents, virtual/mixed reality, and AI explainability, transparency, and trust, applied to military, commercial, and space domains. She is a speaker, author, judge for XPrize, and an associate producer of "The Longest Goodbye," a documentary on the psychology of space exploration. Her work has been covered by media outlets including National Geographic and the BBC.
Michael Salpukas
Michael Salpukas is a research and development leader with 25 years of experience transitioning advanced algorithms and robust AI into products. He leads a portfolio of government and internally funded research and development projects designed to provide human assistance to the warfighter and rapid updates to mitigate novel inputs. Michael has authored intellectual property and publications on a variety of topics and is a regular conference panelist/moderator in AI Ethics, AI Assurance, and AI for Manufacturing. He is on the advisory board for the Northeastern University Institute for the Wireless Internet of Things, and is a Judge/Mentor for startups via Activate and MassChallenge.
Hsin-Fu Wu
Hsin-Fu “Sinker” Wu performs operations research for business development and customer engagement. Sinker served a 20-year career in the U.S. Navy, in which he developed and implemented strategies for coalition, joint, naval, and undersea warfare. He co-authored a report on Ethical Control of autonomous systems for naval missions and is an AI Assurance researcher.
Shannon Ellsworth
Shannon Ellsworth currently leads several fundamental research efforts to apply Artificial Intelligence and Machine Learning to U.S. Department of Defense systems. Her research is based on over 20 years of work in the field and is focused on calculated design of algorithms to support smart command and control applications. She also serves as co-chair to the AI/ML Working Group for the Military Operations Research Society.