Threads and Concurrent Programming

CHAPTER SUMMARY

Technical Terms 

  • asynchronous 
  • blocked 
  • busy waiting 
  • concurrent 
  • critical section 
  • dispatched 
  • fetch-execute cycle 
  • lock 
  • monitor 
  • multitasking 
  • multithreaded 
  • mutual exclusion 
  • priority scheduling 
  • producer/consumer model 
  • quantum 
  • queue 
  • ready queue 
  • round-robin scheduling 
  • scheduling algorithm 
  • task 
  • thread 
  • thread life cycle 
  • time slicing


Summary of Important Points 

  • Multitasking is the technique of executing several tasks at the same time within a single program. In Java we give each task a separate thread of execution, thus resulting in a multithreaded program. 
  • A sequential computer with a single central processing unit (CPU) can execute only one machine instruction at a time. A parallel computer uses multiple CPUs operating simultaneously to execute more than one instruction at a time. 
  • Each CPU uses a fetch-execute cycle to retrieve the next machine instruction from memory and execute it. The cycle is controlled by the CPU’s internal clock, whose rate is measured in megahertz (MHz, millions of cycles per second) or, on modern processors, gigahertz (GHz, billions of cycles per second). 
  • Time slicing is the technique whereby several threads can share a single CPU over a given time period. Each thread is given a small slice of the CPU’s time under the control of some kind of scheduling algorithm.
  • In round-robin scheduling, each thread is given an equal slice of time, in first-come, first-served order. In priority scheduling, higher-priority threads are allowed to run before lower-priority threads. 
  • There are generally two ways of creating threads in a program. One is to create a subclass of Thread and implement a run() method. The other is to create a Thread instance and pass it a Runnable object, that is, an object that implements run(). (See the first sketch after this list.) 
  • The sleep() method removes a thread from the CPU for a specified length of time, giving other threads a chance to run. 
  • The setPriority() method sets a thread’s priority. Higher-priority threads get more frequent and longer access to the CPU.
  • Threads are asynchronous. Their timing and duration on the CPU are highly sporadic and unpredictable. In designing threaded programs, you must be careful not to base your algorithm on any assumptions about the threads’ timing. 
  • To improve the responsiveness of interactive programs, you could give compute-intensive tasks, such as drawing lots of dots, to a lower-priority thread or to a thread that sleeps periodically. 
  • A thread’s life cycle consists of ready, running, waiting, sleeping, and blocked states. Threads start in the ready state and are dispatched to the CPU by the scheduler, an operating system program. If a thread performs an I/O operation, it blocks until the I/O is completed. If it voluntarily sleeps, it gives up the CPU. 
  • According to the producer/consumer model, two threads share a resource; one produces the resource and the other consumes it. Their cooperation must be carefully synchronized. 
  • An object that contains synchronized methods is known as a monitor. Such objects ensure that only one thread at a time can execute a synchronized method. The object remains locked until the thread finishes the method (or calls wait() on the object). This is one way to ensure mutually exclusive access to a resource by a collection of cooperating threads. (See the second sketch after this list.)
  • The synchronized keyword can also be used to designate a method as a critical section, whose execution by one thread must not be interleaved with its execution by any of the other cooperating threads.
  • In designing multithreaded programs, it is useful to assume that if a thread can be interrupted at a certain point, it will be interrupted there. Thread coordination should never be left to chance.
  • One way of coordinating two or more cooperating threads is to use the wait/notify combination. One thread waits for a resource to become available, and the other thread notifies waiting threads when the resource becomes available. (See the final sketch after this list.)
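
The first sketch below illustrates the two thread-creation styles, along with sleep() and setPriority(). It is a minimal example; the class names (CounterThread, GreetingTask, CreateThreads) are illustrative, not code from the chapter.

// Way 1: subclass Thread and override run().
class CounterThread extends Thread {
    public void run() {
        for (int k = 0; k < 3; k++) {
            System.out.println("CounterThread: " + k);
            try {
                Thread.sleep(100);          // leave the CPU for about 100 ms so other threads can run
            } catch (InterruptedException e) {
                return;                     // stop if interrupted while sleeping
            }
        }
    }
}

// Way 2: implement Runnable and hand an instance to a Thread.
class GreetingTask implements Runnable {
    public void run() {
        System.out.println("GreetingTask running");
    }
}

public class CreateThreads {
    public static void main(String[] args) {
        Thread t1 = new CounterThread();
        Thread t2 = new Thread(new GreetingTask());

        t1.setPriority(Thread.MIN_PRIORITY);  // priorities are only hints to the scheduler
        t1.start();                           // start() makes a thread ready; the scheduler dispatches it
        t2.start();                           // calling run() directly would not create a new thread
    }
}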
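
The second sketch shows an object acting as a monitor: its synchronized increment() method is a critical section that only one thread can execute at a time. The SharedCounter and MonitorDemo names, and the choice of a counter as the shared resource, are illustrative assumptions.

// A minimal sketch of a monitor (class names are illustrative).
class SharedCounter {
    private int count = 0;

    // Critical section: the read-add-write sequence must not be interleaved
    // with another thread's increment, so the method is synchronized.
    public synchronized void increment() {
        count = count + 1;
    }

    public synchronized int getCount() {
        return count;
    }
}

public class MonitorDemo {
    public static void main(String[] args) throws InterruptedException {
        final SharedCounter counter = new SharedCounter();

        Runnable task = new Runnable() {
            public void run() {
                for (int k = 0; k < 10000; k++) {
                    counter.increment();
                }
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();   // wait for both threads to finish before reading the count
        t2.join();

        System.out.println("Count = " + counter.getCount()); // 20000 when access is synchronized
    }
}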
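
The final sketch shows producer/consumer coordination with wait/notify. The one-slot buffer design and the class names (OneSlotBuffer, ProducerConsumerDemo) are illustrative assumptions; the chapter's own example may differ.

// A minimal producer/consumer sketch using wait/notifyAll on a one-slot buffer.
class OneSlotBuffer {
    private int value;
    private boolean empty = true;

    // Producer side: wait until the slot is empty, fill it, then notify.
    public synchronized void put(int v) throws InterruptedException {
        while (!empty) {
            wait();              // wait() releases the lock while waiting
        }
        value = v;
        empty = false;
        notifyAll();             // wake any thread waiting to consume
    }

    // Consumer side: wait until the slot is full, take the value, then notify.
    public synchronized int take() throws InterruptedException {
        while (empty) {
            wait();
        }
        empty = true;
        notifyAll();             // wake any thread waiting to produce
        return value;
    }
}

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        final OneSlotBuffer buffer = new OneSlotBuffer();

        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int k = 1; k <= 5; k++) {
                        buffer.put(k);
                        System.out.println("Produced " + k);
                    }
                } catch (InterruptedException e) { /* exit quietly */ }
            }
        });

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int k = 1; k <= 5; k++) {
                        System.out.println("Consumed " + buffer.take());
                    }
                } catch (InterruptedException e) { /* exit quietly */ }
            }
        });

        producer.start();
        consumer.start();
    }
}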