Threads and Concurrent Programming
A thread is an independent path of execution within a program, allowing one method to run at "the same time" as other methods. Normally, we think sequentially when writing a computer program, so that only one thing executes at a time. However, with today's multicore processors, it is possible to literally have several computations going on at the very same time, all sharing the same memory. This chapter introduces the main techniques for creating and coordinating threads in Java and shows how to apply them in your own programs.
14.4 Thread States and Life Cycle
Each thread has a life cycle that consists of several different states, which are summarized in Figure 14.6 and Table 14.1. In the figure, thread states are represented by labeled ovals, and the transitions between states are represented by labeled arrows. Much of a thread’s life cycle is under the control of the operating system and the Java Virtual Machine. Those transitions labeled with method names, such as start(), stop(), wait(), sleep(), and notify(), can be controlled by the program. Of these methods, stop() was deprecated in JDK 1.2 because it is inherently unsafe to stop a thread in the middle of its execution. Other transitions, such as dispatch, I/O request, I/O done, time expired, and done sleeping, are under the control of the CPU scheduler. When first created, a thread is in the ready state, which means that it is ready to run. In the ready state, a thread waits, perhaps with other threads, in the ready queue for its turn on the CPU. A queue is like a waiting line. When the CPU becomes available, the first thread in the ready queue will be dispatched, that is, it will be given the CPU. It will then be in the running state.
Figure 14.6: A depiction of a thread’s life cycle.
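Java makes these states visible through the Thread.getState() method, though the names in the Thread.State enumeration differ slightly from those in the figure: the ready and running states both appear as RUNNABLE, and a sleeping thread appears as TIMED_WAITING. The following is a minimal sketch (the class name is our own) that observes a thread before it starts and after it finishes:

```java
// Observing two ends of a thread's life cycle with Thread.getState().
// Note: Java reports both ready and running as RUNNABLE, and sleeping
// as TIMED_WAITING, so the enum names differ from the figure's labels.
public class LifeCycleDemo {
    public static String states() throws InterruptedException {
        Thread t = new Thread(() -> { /* do nothing and finish */ });
        String created = t.getState().toString();  // NEW: created, not started
        t.start();                                 // now ready/running (RUNNABLE)
        t.join();                                  // wait until it is done
        String finished = t.getState().toString(); // TERMINATED
        return created + "," + finished;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(states()); // prints NEW,TERMINATED
    }
}
```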
Transitions between the ready and running states happen under the control of the CPU scheduler, a fundamental part of the Java runtime system. The job of scheduling many threads in a fair and efficient manner is a little like sharing a single bicycle among several children. Children who are ready to ride the bike wait in line for their turn. The grown-up (scheduler) lets the first child (thread) ride for a period of time before the bike is taken away and given to the next child in line. In round-robin scheduling, each child (thread) gets an equal amount of time on the bike.
When a thread calls the sleep() method, it voluntarily gives up the CPU, and when the sleep period is over, it goes back into the ready queue. This would be like one of the children deciding to rest for a moment during his or her turn. When the rest was over, the child would get back in line.
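This transition can be observed directly. In the sketch below (class and variable names are our own), a worker thread does nothing but sleep; while it is asleep, getState() reports TIMED_WAITING, and once it finishes, TERMINATED:

```java
// A worker thread voluntarily gives up the CPU with Thread.sleep();
// while asleep it is in the TIMED_WAITING state.
public class SleepDemo {
    public static String observe() throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(200);  // give up the CPU for 200 ms
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        // Poll until the worker has actually entered its sleep.
        while (worker.getState() != Thread.State.TIMED_WAITING) {
            Thread.sleep(5);
        }
        String during = worker.getState().toString(); // TIMED_WAITING
        worker.join();                                // wait for it to finish
        String after = worker.getState().toString();  // TERMINATED
        return during + "," + after;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(observe()); // prints TIMED_WAITING,TERMINATED
    }
}
```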
When a thread calls the wait() method, it also voluntarily gives up the CPU, but this time it won’t be ready to run again until it is notified by some other thread. This would be like one child giving his or her turn to another child. When the second child’s turn is up, he or she would notify the first child, who would then get back in line.
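The bicycle hand-off might be sketched as follows (the class, field, and message names are our own). Note that wait() and notify() must be called while holding the object’s lock, inside a synchronized block, and that wait() is called in a loop so the thread rechecks its condition when it wakes up:

```java
// One thread waits for its turn; another takes a turn, then notifies it.
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean turnOver = false;

    public static String run() throws InterruptedException {
        StringBuilder log = new StringBuilder();
        Thread first = new Thread(() -> {
            synchronized (lock) {
                while (!turnOver) {      // recheck the condition after waking
                    try {
                        lock.wait();     // give up the CPU and the lock
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
                log.append("first rides");
            }
        });
        first.start();
        // The "second child" takes a turn, then notifies the waiting thread.
        synchronized (lock) {
            log.append("second rides,");
            turnOver = true;
            lock.notify();               // wake the waiting thread
        }
        first.join();
        return log.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints second rides,first rides
    }
}
```

Because the waiting thread checks the turnOver flag in a loop, the program behaves correctly no matter which thread acquires the lock first.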
The system also manages transitions between the blocked and ready
states. A thread is put into a blocked state when it does some kind of I/O
operation. I/O devices, such as disk drives, modems, and keyboards, are very slow compared to the CPU. Therefore, I/O operations are handled
by separate processors known as controllers. For example, when a thread
wants to read data from a disk drive, the system will give this task to the
disk controller, telling it where to place the data. Because the thread can’t
do anything until the data are read, it is blocked, and another thread is
allowed to run. When the disk controller completes the I/O operation,
the blocked thread is unblocked and placed back in the ready queue.
In terms of the bicycle analogy, blocking a thread would be like giving
the bicycle to another child when the rider has to stop to tie his or her
shoe. Instead of letting the bicycle just sit there, we let another child ride
it. When the shoe is tied, the child is ready to ride again and goes back into the ready line. Letting other threads run while one thread is waiting for an I/O operation to complete improves the overall utilization of the CPU.
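The effect can be sketched with a pair of piped streams standing in for a slow I/O device (the class and message names are our own). The reader thread blocks in read() until data arrives, and in the meantime the main thread keeps doing useful work:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.UncheckedIOException;

// One thread blocks waiting for input while another thread keeps running.
public class BlockedDemo {
    public static String run() throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);
        StringBuilder log = new StringBuilder();
        Thread reader = new Thread(() -> {
            try {
                int b = in.read();  // blocks until a byte arrives ("I/O request")
                synchronized (log) { log.append("read ").append((char) b); }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        reader.start();
        // While the reader is blocked, this thread keeps running.
        synchronized (log) { log.append("working,"); }
        out.write('x');             // "I/O done": unblocks the reader
        out.flush();
        reader.join();
        synchronized (log) { return log.toString(); }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // prints working,read x
    }
}
```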
EXERCISE 14.5 Round-robin scheduling isn’t always the best idea. Sometimes priority scheduling leads to a better system. Can you think of ways that priority scheduling, in which higher-priority threads go to the head of the line, can be used to improve the responsiveness of an interactive program?