CUDA by Example

CUDA by Example book cover


$49.99

In stock


SKU: 9780131387683
Trade Discount: 25% on orders of 5 or more copies

Description

The complete guide to developing high-performance applications with CUDA, written by members of the CUDA development team and supported by NVIDIA

  • Breakthrough techniques for using the power of graphics processors to create high-performance general purpose applications
  • Packed with realistic, C-based examples — from basic to advanced
  • Covers one of today’s most highly anticipated new technologies for software development wherever performance is crucial: finance, design automation, science, simulation, graphics, and beyond

    CUDA is a computing architecture designed to facilitate the development of parallel programs. In conjunction with a comprehensive software platform, the CUDA Architecture enables programmers to draw on the immense power of graphics processing units (GPUs) when building high-performance applications. GPUs, of course, have long been available for demanding graphics and game applications. CUDA now brings this valuable resource to programmers working on applications in other domains, including science, engineering, and finance. No knowledge of graphics programming is required; just the ability to program in a modestly extended version of C.

    CUDA by Example, written by two senior members of the CUDA software platform team, shows programmers how to employ this new technology. The authors introduce each area of CUDA development through working examples. After a concise introduction to the CUDA platform and architecture, as well as a quick-start guide to CUDA C, the book details the techniques and trade-offs associated with each key CUDA feature. You’ll discover when to use each CUDA C extension and how to write CUDA software that delivers truly outstanding performance.
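
    For readers wondering what a “modestly extended version of C” looks like in practice, here is a minimal sketch in the spirit of the book’s early examples. It is not code from the book; the kernel name, vector length, and launch configuration are illustrative choices. The two most visible CUDA C extensions are the __global__ qualifier, which marks a function that runs on the GPU, and the <<<blocks, threads>>> syntax used to launch it. The sketch can be compiled with NVIDIA’s nvcc compiler, for example nvcc add.cu -o add.

    #include <stdio.h>
    #include <cuda_runtime.h>

    #define N 1024

    // __global__ marks a kernel: a function that runs on the GPU and is
    // launched from host code.
    __global__ void add(const int *a, const int *b, int *c) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < N)
            c[i] = a[i] + b[i];
    }

    int main(void) {
        int a[N], b[N], c[N];
        int *dev_a, *dev_b, *dev_c;

        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

        // Allocate GPU memory and copy the input vectors to the device.
        cudaMalloc((void **)&dev_a, N * sizeof(int));
        cudaMalloc((void **)&dev_b, N * sizeof(int));
        cudaMalloc((void **)&dev_c, N * sizeof(int));
        cudaMemcpy(dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all N elements.
        add<<<(N + 255) / 256, 256>>>(dev_a, dev_b, dev_c);

        // Copy the result back to the host and spot-check one element.
        cudaMemcpy(c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost);
        printf("c[100] = %d (expected 300)\n", c[100]);

        cudaFree(dev_a);
        cudaFree(dev_b);
        cudaFree(dev_c);
        return 0;
    }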

    Major topics covered include

    • Parallel programming
    • Thread cooperation
    • Constant memory and events
    • Texture memory
    • Graphics interoperability
    • Atomics
    • Streams
    • CUDA C on multiple GPUs
    • Advanced atomics
    • Additional CUDA resources

    “This book is required reading for anyone working with accelerator-based computing systems.”

    –From the Foreword by Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory

    All the CUDA software tools you’ll need are freely available for download from NVIDIA.

    http://developer.nvidia.com/object/cuda-by-example.html
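
    Once the toolkit is installed, a quick way to confirm that a CUDA-capable GPU is visible is to enumerate the devices with the runtime API, much as the book does in Chapter 3 (“Querying Devices” and “Using Device Properties”). The short sketch below is illustrative rather than code taken from the book.

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;
        cudaGetDeviceCount(&count);  // number of CUDA-capable GPUs present
        printf("Found %d CUDA device(s)\n", count);

        for (int i = 0; i < count; i++) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);  // query the properties of device i
            printf("Device %d: %s, compute capability %d.%d\n",
                   i, prop.name, prop.major, prop.minor);
        }
        return 0;
    }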

    Jason Sanders is a senior software engineer in the CUDA Platform group at NVIDIA. While at NVIDIA, he helped develop early releases of CUDA system software and contributed to the OpenCL 1.0 Specification, an industry standard for heterogeneous computing. Jason received his master’s degree in computer science from the University of California, Berkeley, where he published research in GPU computing, and he holds a bachelor’s degree in electrical engineering from Princeton University. Prior to joining NVIDIA, he held positions at ATI Technologies, Apple, and Novell. When he’s not writing books, Jason is typically working out, playing soccer, or shooting photos.

    Edward Kandrot is a senior software engineer on the CUDA Algorithms team at NVIDIA. He has more than twenty years of industry experience focused on optimizing code and improving performance, including for Photoshop and Mozilla. Kandrot has worked for Adobe, Microsoft, and Google, and he has been a consultant at many companies, including Apple and Autodesk. When not coding, he can be found playing World of Warcraft or visiting Las Vegas for the amazing food.

    Table of Contents

    Foreword xiii

    Preface xv

    Acknowledgments xvii

    About the Authors xix

    Chapter 1: Why CUDA? Why Now? 1

    1.1 Chapter Objectives 2

    1.2 The Age of Parallel Processing 2

    1.3 The Rise of GPU Computing 4

    1.4 CUDA 6

    1.5 Applications of CUDA 8

    1.6 Chapter Review 11

    Chapter 2: Getting Started 13

    2.1 Chapter Objectives 14

    2.2 Development Environment 14

    2.3 Chapter Review 19

    Chapter 3: Introduction to CUDA C 21

    3.1 Chapter Objectives 22

    3.2 A First Program 22

    3.3 Querying Devices 27

    3.4 Using Device Properties 33

    3.5 Chapter Review 35

    Chapter 4: Parallel Programming in CUDA C 37

    4.1 Chapter Objectives 38

    4.2 CUDA Parallel Programming 38

    4.3 Chapter Review 57

    Chapter 5: Thread Cooperation 59

    5.1 Chapter Objectives 60

    5.2 Splitting Parallel Blocks 60

    5.3 Shared Memory and Synchronization 75

    5.4 Chapter Review 94

    Chapter 6: Constant Memory and Events 95

    6.1 Chapter Objectives 96

    6.2 Constant Memory 96

    6.3 Measuring Performance with Events 108

    6.4 Chapter Review 114

    Chapter 7: Texture Memory 115

    7.1 Chapter Objectives 116

    7.2 Texture Memory Overview 116

    7.3 Simulating Heat Transfer 117

    7.4 Chapter Review 137

    Chapter 8: Graphics Interoperability 139

    8.1 Chapter Objectives 140

    8.2 Graphics Interoperation 140

    8.3 GPU Ripple with Graphics Interoperability 147

    8.4 Heat Transfer with Graphics Interop 154

    8.5 DirectX Interoperability 160

    8.6 Chapter Review 161

    Chapter 9: Atomics 163

    9.1 Chapter Objectives 164

    9.2 Compute Capability 164

    9.3 Atomic Operations Overview 168

    9.4 Computing Histograms 170

    9.5 Chapter Review 183

    Chapter 10: Streams 185

    10.1 Chapter Objectives 186

    10.2 Page-Locked Host Memory 186

    10.3 CUDA Streams 192

    10.4 Using a Single CUDA Stream 192

    10.5 Using Multiple CUDA Streams 198

    10.6 GPU Work Scheduling 205

    10.7 Using Multiple CUDA Streams Effectively 208

    10.8 Chapter Review 211

    Chapter 11: CUDA C on Multiple GPUs 213

    11.1 Chapter Objectives 214

    11.2 Zero-Copy Host Memory 214

    11.3 Using Multiple GPUs 224

    11.4 Portable Pinned Memory 230

    11.5 Chapter Review 235

    Chapter 12: The Final Countdown 237

    12.1 Chapter Objectives 238

    12.2 CUDA Tools 238

    12.3 Written Resources 244

    12.4 Code Resources 246

    12.5 Chapter Review 248

    Appendix A: Advanced Atomics 249

    A.1 Dot Product Revisited 250

    A.2 Implementing a Hash Table 258

    A.3 Appendix Review 277

    Index 279

    Additional information

    Dimensions: 0.60 × 7.30 × 9.00 in
    Imprint:
    Format:
    ISBN-13: 9780131387683
    ISBN-10:
    Author: Jason Sanders, Edward Kandrot
    BISAC: COM051220
    Subjects: professional, higher education, Employability, IT Professional, Y-AN SOFTWARE ENGINEERING