Computer Architecture: A Quantitative Approach, Fifth Edition, explores the ways that software and technology in the cloud are accessed by digital devices such as cell phones, computers, tablets, and other mobile hardware. The book, which became part of Intel's 2012 recommended reading list for developers, covers the revolution of mobile computing. It also highlights the two most important factors in architecture today: parallelism and memory hierarchy.
This fully updated edition comprises six chapters that follow a consistent framework: an explanation of the ideas in each chapter; a crosscutting issues section, which shows how the concepts covered in one chapter connect with those given in other chapters; a putting it all together section that links these concepts by discussing how they are applied in real machines; and detailed examples of misunderstandings and architectural traps commonly encountered by developers and architects. Formulas for energy, static and dynamic power, integrated circuit costs, reliability, and availability are included. The book also covers virtual machines, SRAM and DRAM technologies, and new material on Flash memory. Other topics include the exploitation of instruction-level parallelism in high-performance processors, superscalar execution, dynamic scheduling and multithreading, vector architectures, multicore processors, and warehouse-scale computers (WSCs). There are updated case studies and completely new exercises. Additional reference appendices are available online.
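As an example of the kind of formula covered, the power discussion in Chapter 1 rests on the standard CMOS energy and power relations (stated here from general CMOS principles rather than quoted verbatim from the text):

\[ \text{Energy}_{\text{dynamic}} \propto \text{Capacitive load} \times \text{Voltage}^2 \]

\[ \text{Power}_{\text{dynamic}} \propto \tfrac{1}{2} \times \text{Capacitive load} \times \text{Voltage}^2 \times \text{Frequency switched} \]

Because voltage enters quadratically and frequency linearly, lowering both voltage and frequency by 15% cuts dynamic power to roughly \(0.85^3 \approx 0.61\) of its original value, a reduction of nearly 40%.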
This book will be a valuable reference for computer architects, programmers, application developers, compiler and system software developers, and computer system designers.
In Praise of Computer Architecture: A Quantitative Approach, Fifth Edition
Dedication
Foreword
Preface
Why We Wrote This Book
This Edition
Topic Selection and Organization
An Overview of the Content
Navigating the Text
Chapter Structure
Case Studies with Exercises
Supplemental Materials
Helping Improve This Book
Concluding Remarks
Acknowledgments
Contributors to the Fifth Edition
Contributors to Previous Editions
1. Fundamentals of Quantitative Design and Analysis
1.1 Introduction
1.2 Classes of Computers
1.3 Defining Computer Architecture
1.4 Trends in Technology
1.5 Trends in Power and Energy in Integrated Circuits
1.6 Trends in Cost
1.7 Dependability
1.8 Measuring, Reporting, and Summarizing Performance
1.9 Quantitative Principles of Computer Design
1.10 Putting It All Together: Performance, Price, and Power
1.11 Fallacies and Pitfalls
1.12 Concluding Remarks
1.13 Historical Perspectives and References
Case Studies and Exercises by Diana Franklin
2. Memory Hierarchy Design
2.1 Introduction
2.2 Ten Advanced Optimizations of Cache Performance
2.3 Memory Technology and Optimizations
2.4 Protection: Virtual Memory and Virtual Machines
2.5 Crosscutting Issues: The Design of Memory Hierarchies
2.6 Putting It All Together: Memory Hierarchies in the ARM Cortex-A8 and Intel Core i7
2.7 Fallacies and Pitfalls
2.8 Concluding Remarks: Looking Ahead
2.9 Historical Perspective and References
Case Studies and Exercises by Norman P. Jouppi, Naveen Muralimanohar, and Sheng Li
3. Instruction-Level Parallelism and Its Exploitation
3.1 Instruction-Level Parallelism: Concepts and Challenges
3.2 Basic Compiler Techniques for Exposing ILP
3.3 Reducing Branch Costs with Advanced Branch Prediction
3.4 Overcoming Data Hazards with Dynamic Scheduling
3.5 Dynamic Scheduling: Examples and the Algorithm
3.6 Hardware-Based Speculation
3.7 Exploiting ILP Using Multiple Issue and Static Scheduling
3.8 Exploiting ILP Using Dynamic Scheduling, Multiple Issue, and Speculation
3.9 Advanced Techniques for Instruction Delivery and Speculation
3.10 Studies of the Limitations of ILP
3.11 Cross-Cutting Issues: ILP Approaches and the Memory System
3.12 Multithreading: Exploiting Thread-Level Parallelism to Improve Uniprocessor Throughput
3.13 Putting It All Together: The Intel Core i7 and ARM Cortex-A8
3.14 Fallacies and Pitfalls
3.15 Concluding Remarks: What’s Ahead?
3.16 Historical Perspective and References
Case Studies and Exercises by Jason D. Bakos and Robert P. Colwell
4. Data-Level Parallelism in Vector, SIMD, and GPU Architectures
4.1 Introduction
4.2 Vector Architecture
4.3 SIMD Instruction Set Extensions for Multimedia
4.4 Graphics Processing Units
4.5 Detecting and Enhancing Loop-Level Parallelism
4.6 Crosscutting Issues
4.7 Putting It All Together: Mobile versus Server GPUs and Tesla versus Core i7
4.8 Fallacies and Pitfalls
4.9 Concluding Remarks
4.10 Historical Perspective and References
Case Study and Exercises by Jason D. Bakos
5. Thread-Level Parallelism
5.1 Introduction
5.2 Centralized Shared-Memory Architectures
5.3 Performance of Symmetric Shared-Memory Multiprocessors
5.4 Distributed Shared-Memory and Directory-Based Coherence
5.5 Synchronization: The Basics
5.6 Models of Memory Consistency: An Introduction
5.7 Crosscutting Issues
5.8 Putting It All Together: Multicore Processors and Their Performance
5.9 Fallacies and Pitfalls
5.10 Concluding Remarks
5.11 Historical Perspectives and References
Case Studies and Exercises by Amr Zaky and David A. Wood
6. Warehouse-Scale Computers to Exploit Request-Level and Data-Level Parallelism
6.1 Introduction
6.2 Programming Models and Workloads for Warehouse-Scale Computers
6.3 Computer Architecture of Warehouse-Scale Computers
6.4 Physical Infrastructure and Costs of Warehouse-Scale Computers
6.5 Cloud Computing: The Return of Utility Computing
6.6 Crosscutting Issues
6.7 Putting It All Together: A Google Warehouse-Scale Computer
6.8 Fallacies and Pitfalls
6.9 Concluding Remarks
6.10 Historical Perspectives and References
Case Studies and Exercises by Parthasarathy Ranganathan
A. Instruction Set Principles
A.1 Introduction
A.2 Classifying Instruction Set Architectures
A.3 Memory Addressing
A.4 Type and Size of Operands
A.5 Operations in the Instruction Set
A.6 Instructions for Control Flow
A.7 Encoding an Instruction Set
A.8 Crosscutting Issues: The Role of Compilers
A.9 Putting It All Together: The MIPS Architecture
A.10 Fallacies and Pitfalls
A.11 Concluding Remarks
A.12 Historical Perspective and References
Exercises by Gregory D. Peterson
B. Review of Memory Hierarchy
B.1 Introduction
B.2 Cache Performance
B.3 Six Basic Cache Optimizations
B.4 Virtual Memory
B.5 Protection and Examples of Virtual Memory
B.6 Fallacies and Pitfalls
B.7 Concluding Remarks
B.8 Historical Perspective and References
Exercises by Amr Zaky
C. Pipelining: Basic and Intermediate Concepts
C.1 Introduction
C.2 The Major Hurdle of Pipelining—Pipeline Hazards
C.3 How Is Pipelining Implemented?
C.4 What Makes Pipelining Hard to Implement?
C.5 Extending the MIPS Pipeline to Handle Multicycle Operations
C.6 Putting It All Together: The MIPS R4000 Pipeline
C.7 Crosscutting Issues
C.8 Fallacies and Pitfalls
C.9 Concluding Remarks
C.10 Historical Perspective and References
Updated Exercises by Diana Franklin
Index
Translation between GPU terms in the book and official NVIDIA and OpenCL terms