Read Chapter 1 on pages 1–20, which uses a matrix-times-vector multiplication example in section 1.3.1 and goes on to describe parallel approaches to computing the result. Section 1.3.2 describes a shared-memory and threads approach; section 1.3.3 describes a message-passing approach; section 1.3.4 describes the MPI and R language approach. Study these sections to get an overview of software approaches to parallelism.
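To make the shared-memory-and-threads idea concrete before you read section 1.3.2, here is a minimal sketch (not taken from the text, which uses its own languages and APIs) of the matrix-times-vector example: each thread computes its own block of rows of the result, and all threads read the shared matrix and vector directly.

```python
# Illustrative sketch of the shared-memory/threads approach to y = A x:
# threads share A, x, and y directly; each one fills in a block of rows.
from concurrent.futures import ThreadPoolExecutor

def matvec(A, x, nthreads=4):
    n = len(A)
    y = [0.0] * n  # shared result, written in disjoint blocks

    def worker(lo, hi):
        # Each thread handles rows lo..hi-1; no locking is needed
        # because the row blocks do not overlap.
        for i in range(lo, hi):
            y[i] = sum(A[i][j] * x[j] for j in range(len(x)))

    chunk = (n + nthreads - 1) // nthreads
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        for t in range(nthreads):
            pool.submit(worker, t * chunk, min((t + 1) * chunk, n))
    # leaving the with-block waits for all threads to finish
    return y

print(matvec([[1, 2], [3, 4], [5, 6]], [1, 1]))  # [3.0, 7.0, 11.0]
```

The key point to carry into the reading is that the threads communicate implicitly, through the shared arrays, which is exactly what distinguishes this approach from the message-passing approach of section 1.3.3.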
Read Chapter 2 on pages 21–30. This chapter presents issues that slow the performance of parallel programs.
Read Chapter 3 on pages 31–66 to learn about shared-memory parallelism. Parallel programming and parallel software are extensive topics; our intent here is to give you an overview, and more in-depth study is provided by the following chapters.
Read Chapter 4 on pages 67–100. This chapter discusses OpenMP directives and presents a variety of examples.
Read Chapter 5 on pages 101–136. This chapter presents GPUs (Graphics Processing Units) and the CUDA language, discusses parallel programming issues in the context of GPUs and CUDA, and illustrates them with various examples.
Read Chapter 7 on pages 161–166. This chapter illustrates the message passing approach using various examples.
Read Chapter 8 on pages 167–169 for a description of MPI (Message Passing Interface), which applies to networks of workstations (NOWs). The rest of the chapter illustrates this approach with various examples.
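As a companion to the MPI reading, the sketch below simulates the message-passing style using threads and queues; it is a hypothetical stand-in, not the textbook's MPI code. Workers share no results directly: each one receives its task as a message and sends its partial answer back, loosely analogous to MPI_Recv and MPI_Send.

```python
# Illustrative sketch of message passing for y = A x: tasks and results
# travel only through queues, mimicking MPI-style send/receive.
from queue import Queue
from threading import Thread

def worker(inbox, outbox):
    rank, rows, x = inbox.get()  # "receive" this worker's block of rows
    part = [sum(a * b for a, b in zip(row, x)) for row in rows]
    outbox.put((rank, part))     # "send" the partial result back

def matvec_msg(A, x, nworkers=2):
    outbox = Queue()
    chunk = (len(A) + nworkers - 1) // nworkers
    threads = []
    for r in range(nworkers):
        inbox = Queue()
        inbox.put((r, A[r * chunk:(r + 1) * chunk], x))
        t = Thread(target=worker, args=(inbox, outbox))
        t.start()
        threads.append(t)
    # collect one result message per worker, then reassemble in rank order
    parts = dict(outbox.get() for _ in threads)
    for t in threads:
        t.join()
    return [v for r in sorted(parts) for v in parts[r]]

print(matvec_msg([[1, 2], [3, 4], [5, 6]], [1, 1]))  # [3, 7, 11]
```

Comparing this with the shared-memory version highlights the trade-off the text explores: here nothing is shared, so the same pattern scales to separate machines on a network, which is precisely MPI's setting.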
Read Chapter 9 on pages 193–206 for an overview of cloud computing and the Hadoop platform, which are topics of broad interest today, not just in parallel computing.
Lastly, read section 10.1 of Chapter 10 on pages 207–208, which explains what R is.
The remaining chapters of the text and the four appendices cover other interesting topics; they are optional.