High Performance Parallelism Pearls Volume Two

Book description

High Performance Parallelism Pearls Volume Two offers another set of examples that demonstrate how to leverage parallelism. As in the first volume, the techniques show how to target processors and coprocessors with the same programming model, illustrating the most effective ways to combine Intel Xeon Phi coprocessors with Intel Xeon and other multicore processors. The book presents successful programming efforts drawn from industries and domains such as biomedicine, genetics, finance, manufacturing, imaging, and more. Each chapter in this edited work explains the programming techniques used in detail and shows high-performance results on both Intel Xeon Phi coprocessors and multicore processors. Dozens of new examples and case studies illustrate not just the features of Xeon-powered systems, but how to leverage parallelism across these heterogeneous systems.

  • Promotes write-once, run-anywhere coding, showing how to achieve high performance on multicore processors and Xeon Phi coprocessors from a single source base (see the sketch after this list)
  • Draws examples from multiple vertical domains to illustrate real-world use of Xeon Phi coprocessors
  • Provides source code for download to facilitate further exploration
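
The write-once style the book promotes can be illustrated with a minimal sketch (our illustration, not code from the book): a single OpenMP loop that the compiler maps onto the threads and SIMD lanes of either a multicore Xeon or a Xeon Phi, with no source changes.

    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    int main(void) {
        float *a = malloc(N * sizeof *a);
        float *b = malloc(N * sizeof *b);
        float *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        /* One directive expresses both thread- and SIMD-level parallelism;
           the compiler maps it onto whatever core count and vector width
           the build target (Xeon or Xeon Phi) provides. */
        #pragma omp parallel for simd
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[42] = %f\n", c[42]);
        free(a); free(b); free(c);
        return 0;
    }

Compiled with any OpenMP 4.0 compiler (for example, icc -qopenmp or gcc -fopenmp), the same file runs on either target; only the build flags change (e.g., -mmic for a native Xeon Phi build with the Intel compiler).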

Table of contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. Contributors
  6. Acknowledgments
  7. Foreword
    1. Making a bet on many-core
    2. 2013 Stampede—Intel many-core system—a first
    3. HPC journey and revelation
    4. Stampede users discover: It’s parallel programming
    5. This book is timely and important
  8. Preface
    1. Inspired by 61 cores: A new era in programming
  9. Chapter 1: Introduction
    1. Abstract
    2. Applications and techniques
    3. SIMD and vectorization
    4. OpenMP and nested parallelism
    5. Latency optimizations
    6. Python
    7. Streams
    8. Ray tracing
    9. Tuning prefetching
    10. MPI shared memory
    11. Using every last core
    12. OpenCL vs. OpenMP
    13. Power analysis for nodes and clusters
    14. The future of many-core
    15. Downloads
  10. Chapter 2: Numerical Weather Prediction Optimization
    1. Abstract
    2. Numerical weather prediction: Background and motivation
    3. WSM6 in the NIM
    4. Shared-memory parallelism and controlling horizontal vector length
    5. Array alignment
    6. Loop restructuring
    7. Compile-time constants for loop and array bounds
    8. Performance improvements
    9. Summary
  11. Chapter 3: WRF Goddard Microphysics Scheme Optimization
    1. Abstract
    2. Acknowledgments
    3. The motivation and background
    4. WRF Goddard microphysics scheme
    5. Summary
  12. Chapter 4: Pairwise DNA Sequence Alignment Optimization
    1. Abstract
    2. Pairwise sequence alignment
    3. Parallelization on a single coprocessor
    4. Parallelization across multiple coprocessors using MPI
    5. Performance results
    6. Summary
  13. Chapter 5: Accelerated Structural Bioinformatics for Drug Discovery
    1. Abstract
    2. Parallelism enables proteome-scale structural bioinformatics
    3. Overview of eFindSite
    4. Benchmarking dataset
    5. Code profiling
    6. Porting eFindSite for coprocessor offload
    7. Parallel version for a multicore processor
    8. Task-level scheduling for processor and coprocessor
    9. Case study
    10. Summary
  14. Chapter 6: Amber PME Molecular Dynamics Optimization
    1. Abstract
    2. Theory of MD
    3. Acceleration of neighbor list building using the coprocessor
    4. Acceleration of direct space sum using the coprocessor
    5. Additional optimizations in coprocessor code
    6. Modification of load balance algorithm
    7. Compiler optimization flags
    8. Results
    9. Conclusions
  15. Chapter 7: Low-Latency Solutions for Financial Services Applications
    1. Abstract
    2. Introduction
    3. The opportunity
    4. Packet processing architecture
    5. The symmetric communication interface
    6. Optimizing packet processing on the coprocessor
    7. Results
    8. Conclusions
  16. Chapter 8: Parallel Numerical Methods in Finance
    1. Abstract
    2. Overview
    3. Introduction
    4. Pricing equation for American options
    5. Initial C/C++ implementation
    6. Scalar optimization: Your best first step
    7. SIMD parallelism—Vectorization
    8. Thread parallelization
    9. Scale from multicore to many-core
    10. Summary
    11. For more information
  17. Chapter 9: Wilson Dslash Kernel From Lattice QCD Optimization
    1. Abstract
    2. The Wilson-Dslash kernel
    3. First implementation and performance
    4. Optimized code: QPhiX and QPhiX-Codegen
    5. Code generation with QPhiX-Codegen
    6. Performance results for QPhiX
    7. The end of the road?
  18. Chapter 10: Cosmic Microwave Background Analysis: Nested Parallelism in Practice
    1. Abstract
    2. Analyzing the CMB with Modal
    3. Optimization and modernization
    4. Introducing nested parallelism
    5. Results
    6. Summary
  19. Chapter 11: Visual Search Optimization
    1. Abstract
    2. Image-matching application
    3. Image acquisition and processing
    4. Keypoint matching
    5. Applications
    6. A study of parallelism in the visual search application
    7. Database (db) level parallelism
    8. FLANN library parallelism
    9. Experimental evaluation
    10. Setup
    11. Database threads scaling
    12. FLANN threads scaling
    13. KD-tree scaling with dbthreads
    14. Summary
  20. Chapter 12: Radio Frequency Ray Tracing
    1. Abstract
    2. Acknowledgments
    3. Background
    4. StingRay system architecture
    5. Optimization examples
    6. Summary
  21. Chapter 13: Exploring Use of the Reserved Core
    1. Abstract
    2. Acknowledgments
    3. The Uintah computational framework
    4. Cross-compiling the UCF
    5. Toward demystifying the reserved core
    6. Experimental discussion
    7. Summary
  22. Chapter 14: High Performance Python Offloading
    1. Abstract
    2. Acknowledgments
    3. Background
    4. The pyMIC offload module
    5. Example: singular value decomposition
    6. GPAW
    7. PyFR
    8. Performance
    9. Summary
  23. Chapter 15: Fast Matrix Computations on Heterogeneous Streams
    1. Abstract
    2. The challenge of heterogeneous computing
    3. Matrix multiply
    4. The hStreams library and framework
    5. Cholesky factorization
    6. LU factorization
    7. Continuing work on hStreams
    8. Acknowledgments
    9. Recap
    10. Summary
    11. Tiled hStreams matrix multiplier example source
  24. Chapter 16: MPI-3 Shared Memory Programming Introduction
    1. Abstract
    2. Motivation
    3. MPI’s interprocess shared memory extension
    4. When to use MPI interprocess shared memory
    5. 1-D ring: from MPI messaging to shared memory
    6. Modifying MPPTEST halo exchange to include MPI SHM
    7. Evaluation environment and results
    8. Summary
  25. Chapter 17: Coarse-Grained OpenMP for Scalable Hybrid Parallelism
    1. Abstract
    2. Coarse-grained versus fine-grained parallelism
    3. Flesh on the bones: A Fortran “stencil-test” example
    4. Performance results with the stencil code
    5. Parallelism in numerical weather prediction models
    6. Summary
  26. Chapter 18: Exploiting Multilevel Parallelism in Quantum Simulations
    1. Abstract
    2. Science: better approximate solutions
    3. About the reference application
    4. Parallelism in ES applications
    5. Multicore and many-core architectures for quantum simulations
    6. Setting up experiments
    7. User code experiments
    8. Summary: try multilevel parallelism in your applications
  27. Chapter 19: OpenCL: There and Back Again
    1. Abstract
    2. Acknowledgments
    3. The GPU-HEOM application
    4. The Hexciton kernel
    5. Optimizing the OpenCL Hexciton kernel
    6. Performance portability in OpenCL
    7. Porting the OpenCL kernel to OpenMP 4.0
    8. Summary
  28. Chapter 20: OpenMP Versus OpenCL: Difference in Performance?
    1. Abstract
    2. Five benchmarks
    3. Experimental setup and time measurements
    4. HotSpot benchmark optimization
    5. Optimization steps for the other four benchmarks
    6. Summary
  29. Chapter 21: Prefetch Tuning Optimizations
    1. Abstract
    2. Acknowledgments
    3. The importance of prefetching for performance
    4. Prefetching on Intel Xeon Phi coprocessors
    5. Throughput applications
    6. Tuning prefetching
    7. Results—Prefetch tuning examples on a coprocessor
    8. Results—Tuning hardware prefetching on a processor
    9. Summary
  30. Chapter 22: SIMD Functions Via OpenMP
    1. Abstract
    2. SIMD vectorization overview
    3. Directive guided vectorization
    4. Targeting specific architectures
    5. Vector functions in C++
    6. Vector functions in Fortran
    7. Performance results
    8. Summary
  31. Chapter 23: Vectorization Advice
    1. Abstract
    2. The importance of vectorization
    3. About DL_MESO LBE
    4. Intel vectorization advisor and the underlying technology
    5. Analyzing the Lattice Boltzmann code
    6. Summary
  32. Chapter 24: Portable Explicit Vectorization Intrinsics
    1. Abstract
    2. Acknowledgments
    3. Related work
    4. Why vectorization?
    5. Portable vectorization with OpenVec
    6. Real-world example
    7. Performance results
    8. Developing toward the future
    9. Summary
  33. Chapter 25: Power Analysis for Applications and Data Centers
    1. Abstract
    2. Introduction to measuring and saving power
    3. Application: Power measurement and analysis
    4. Data center: Interpretation via waterfall power data charts
    5. Summary
  34. Author Index
  35. Subject Index

Product information

  • Title: High Performance Parallelism Pearls Volume Two
  • Author(s): Jim Jeffers, James Reinders
  • Release date: July 2015
  • Publisher(s): Morgan Kaufmann
  • ISBN: 9780128038901