Parallel and Concurrent Programming in Haskell

Book description

If you have a working knowledge of Haskell, this hands-on book shows you how to use the language’s many APIs and frameworks for writing both parallel and concurrent programs. You’ll learn how parallelism exploits multicore processors to speed up computation-heavy programs, and how concurrency lets you write programs whose threads handle multiple interactions at once.

Author Simon Marlow walks you through the process with lots of code examples that you can run, experiment with, and extend. Divided into separate sections on Parallel and Concurrent Haskell, this book also includes exercises to help you become familiar with the concepts presented:

  • Express parallelism in Haskell with the Eval monad and Evaluation Strategies (see the first sketch after this list)
  • Parallelize ordinary Haskell code with the Par monad
  • Build parallel array-based computations, using the Repa library
  • Use the Accelerate library to run computations directly on the GPU
  • Work with basic interfaces for writing concurrent code (see the second sketch after this list)
  • Build trees of threads for larger and more complex programs
  • Learn how to build high-speed concurrent network servers
  • Write distributed programs that run on multiple machines in a network
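
To give a flavor of the book’s two halves, here are two minimal sketches. They are not excerpts from the book; the fib workload and the one-second delay are stand-ins chosen only for illustration. The first sketch uses runEval, rpar, and rseq from Control.Parallel.Strategies to evaluate two calls in parallel; compile with -threaded and run with +RTS -N2 to use two cores.

    import Control.Parallel.Strategies (runEval, rpar, rseq)

    -- A deliberately slow stand-in workload.
    fib :: Integer -> Integer
    fib n | n < 2     = n
          | otherwise = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = do
      let (a, b) = runEval $ do
            x <- rpar (fib 33)   -- sparked: may be evaluated on another core
            y <- rseq (fib 32)   -- evaluated now, in this thread
            _ <- rseq x          -- wait for the spark before returning
            return (x, y)
      print (a + b)

The second sketch shows the basic concurrency interfaces: forkIO starts a thread, and an MVar serves as a one-place channel for passing the result back. Again, this is only an illustrative assumption of what such code looks like, not the book’s example.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    main :: IO ()
    main = do
      done <- newEmptyMVar
      _ <- forkIO $ do
        threadDelay 1000000          -- stand in for real work (1 second)
        putMVar done "worker finished"
      msg <- takeMVar done           -- blocks until the worker fills the MVar
      putStrLn msg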

Table of contents

  1. Parallel and Concurrent Programming in Haskell
  2. Preface
    1. Audience
    2. How to Read This Book
    3. Conventions Used in This Book
    4. Using Sample Code
    5. Safari® Books Online
    6. How to Contact Us
    7. Acknowledgments
  3. 1. Introduction
    1. Terminology: Parallelism and Concurrency
    2. Tools and Resources
    3. Sample Code
  4. I. Parallel Haskell
    1. 2. Basic Parallelism: The Eval Monad
      1. Lazy Evaluation and Weak Head Normal Form
      2. The Eval Monad, rpar, and rseq
      3. Example: Parallelizing a Sudoku Solver
      4. Deepseq
    2. 3. Evaluation Strategies
      1. Parameterized Strategies
      2. A Strategy for Evaluating a List in Parallel
      3. Example: The K-Means Problem
        1. Parallelizing K-Means
        2. Performance and Analysis
        3. Visualizing Spark Activity
        4. Granularity
      4. GC’d Sparks and Speculative Parallelism
      5. Parallelizing Lazy Streams with parBuffer
      6. Chunking Strategies
      7. The Identity Property
    3. 4. Dataflow Parallelism: The Par Monad
      1. Example: Shortest Paths in a Graph
      2. Pipeline Parallelism
        1. Rate-Limiting the Producer
        2. Limitations of Pipeline Parallelism
      3. Example: A Conference Timetable
        1. Adding Parallelism
      4. Example: A Parallel Type Inferencer
      5. Using Different Schedulers
      6. The Par Monad Compared to Strategies
    4. 5. Data Parallel Programming with Repa
      1. Arrays, Shapes, and Indices
      2. Operations on Arrays
      3. Example: Computing Shortest Paths
        1. Parallelizing the Program
      4. Folding and Shape-Polymorphism
      5. Example: Image Rotation
      6. Summary
    5. 6. GPU Programming with Accelerate
      1. Overview
      2. Arrays and Indices
      3. Running a Simple Accelerate Computation
      4. Scalar Arrays
      5. Indexing Arrays
      6. Creating Arrays Inside Acc
      7. Zipping Two Arrays
      8. Constants
      9. Example: Shortest Paths
        1. Running on the GPU
        2. Debugging the CUDA Backend
      10. Example: A Mandelbrot Set Generator
  5. II. Concurrent Haskell
    1. 7. Basic Concurrency: Threads and MVars
      1. A Simple Example: Reminders
      2. Communication: MVars
      3. MVar as a Simple Channel: A Logging Service
      4. MVar as a Container for Shared State
      5. MVar as a Building Block: Unbounded Channels
      6. Fairness
    2. 8. Overlapping Input/Output
      1. Exceptions in Haskell
      2. Error Handling with Async
      3. Merging
    3. 9. Cancellation and Timeouts
      1. Asynchronous Exceptions
      2. Masking Asynchronous Exceptions
      3. The bracket Operation
      4. Asynchronous Exception Safety for Channels
      5. Timeouts
      6. Catching Asynchronous Exceptions
      7. mask and forkIO
      8. Asynchronous Exceptions: Discussion
    4. 10. Software Transactional Memory
      1. Running Example: Managing Windows
      2. Blocking
      3. Blocking Until Something Changes
      4. Merging with STM
      5. Async Revisited
      6. Implementing Channels with STM
        1. More Operations Are Possible
        2. Composition of Blocking Operations
        3. Asynchronous Exception Safety
      7. An Alternative Channel Implementation
      8. Bounded Channels
      9. What Can We Not Do with STM?
      10. Performance
      11. Summary
    5. 11. Higher-Level Concurrency Abstractions
      1. Avoiding Thread Leakage
      2. Symmetric Concurrency Combinators
        1. Timeouts Using race
      3. Adding a Functor Instance
      4. Summary: The Async API
    6. 12. Concurrent Network Servers
      1. A Trivial Server
      2. Extending the Simple Server with State
        1. Design One: One Giant Lock
        2. Design Two: One Chan Per Server Thread
        3. Design Three: Use a Broadcast Chan
        4. Design Four: Use STM
        5. The Implementation
      3. A Chat Server
        1. Architecture
        2. Client Data
        3. Server Data
        4. The Server
        5. Setting Up a New Client
        6. Running the Client
        7. Recap
    7. 13. Parallel Programming Using Threads
      1. How to Achieve Parallelism with Concurrency
      2. Example: Searching for Files
        1. Sequential Version
        2. Parallel Version
        3. Performance and Scaling
        4. Limiting the Number of Threads with a Semaphore
        5. The ParIO monad
    8. 14. Distributed Programming
      1. The Distributed-Process Family of Packages
      2. Distributed Concurrency or Parallelism?
      3. A First Example: Pings
        1. Processes and the Process Monad
        2. Defining a Message Type
        3. The Ping Server Process
        4. The Master Process
        5. The main Function
        6. Summing Up the Ping Example
      4. Multi-Node Ping
        1. Running with Multiple Nodes on One Machine
        2. Running on Multiple Machines
      5. Typed Channels
        1. Merging Channels
      6. Handling Failure
        1. The Philosophy of Distributed Failure
      7. A Distributed Chat Server
        1. Data Types
        2. Sending Messages
        3. Broadcasting
        4. Distribution
        5. Testing the Server
        6. Failure and Adding/Removing Nodes
      8. Exercise: A Distributed Key-Value Store
    9. 15. Debugging, Tuning, and Interfacing with Foreign Code
      1. Debugging Concurrent Programs
        1. Inspecting the Status of a Thread
        2. Event Logging and ThreadScope
        3. Detecting Deadlock
      2. Tuning Concurrent (and Parallel) Programs
        1. Thread Creation and MVar Operations
        2. Shared Concurrent Data Structures
        3. RTS Options to Tweak
      3. Concurrency and the Foreign Function Interface
        1. Threads and Foreign Out-Calls
        2. Asynchronous Exceptions and Foreign Calls
        3. Threads and Foreign In-Calls
    10. Index
  6. About the Author
  7. Colophon
  8. Copyright

Product information

  • Title: Parallel and Concurrent Programming in Haskell
  • Author(s): Simon Marlow
  • Release date: July 2013
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781449335908