Programming on Parallel Machines

For a deeper treatment of programming on parallel machines, read this book.

Chapter 1 uses a matrix-times-vector multiplication example (Section 1.3.1) and goes on to describe parallel approaches to computing the product. Section 1.3.2 describes a shared-memory and threads approach; Section 1.3.3 describes a message-passing approach; Section 1.3.4 describes the MPI and R language approach. Study these sections to get an overview of the software approaches to parallelism.

Chapter 2 presents issues that slow the performance of parallel programs.

Chapter 3 discusses shared-memory parallelism. Parallel programming and parallel software are extensive topics; the intent here is to give you an overview, with the following chapters providing a more in-depth study.

Chapter 4 discusses OpenMP directives and presents a variety of examples.

Chapter 5 presents GPUs (Graphics Processing Units) and the CUDA language. This chapter also discusses parallel programming issues in the context of GPUs and CUDA and illustrates them with various examples.
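On a GPU the decomposition is typically much finer-grained: one thread per output element. This CUDA sketch (an illustration, not the book's example; it assumes the arrays already live in device memory via cudaMalloc/cudaMemcpy, and error checking is omitted) shows the same matrix-times-vector computation as a kernel:

```cuda
// y = A x on the GPU: one thread per output row.
__global__ void matvec(int n, const double *a, const double *x, double *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {                       // guard: grid may overshoot n
        double sum = 0.0;
        for (int j = 0; j < n; j++)
            sum += a[i * n + j] * x[j];
        y[i] = sum;
    }
}

// Host-side launch: enough blocks of 256 threads to cover n rows.
// matvec<<<(n + 255) / 256, 256>>>(n, d_a, d_x, d_y);
```

Chapter 5's examples discuss the issues this raises in practice, such as memory transfer costs and thread/block organization.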

Chapter 7 illustrates the message-passing approach using various examples.

Chapter 8 describes MPI (Message Passing Interface), which applies to networks of workstations (NOWs). The rest of the chapter illustrates this approach with various examples.
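For a taste of what MPI code looks like, here is a minimal sketch (not from the book; it requires an MPI installation, e.g. build and run with `mpicc dot.c && mpirun -np 4 ./a.out`) in which each process contributes one term of a sum and a collective operation combines them:

```c
#include <mpi.h>
#include <stdio.h>

/* Each rank computes a partial result; MPI_Reduce combines them at rank 0. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double mine = rank + 1.0;      /* toy data: one term per process */
    double total = 0.0;
    MPI_Reduce(&mine, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %g (from %d processes)\n", total, nprocs);

    MPI_Finalize();
    return 0;
}
```

The same program runs unchanged on one multicore machine or across a network of workstations; MPI hides the transport underneath the send/receive and collective calls.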

Chapter 9 gives an overview of cloud computing and the Hadoop platform, topics of broad current interest beyond parallel computing alone.

Section 10.1 in Chapter 10 explains what R is.

Click the Programming-on-Parallel-Machines.pdf link to view the file.