CS401 Study Guide

This study guide will help you get ready for the final exam. It discusses the key topics in each unit, walks through the learning outcomes, and lists important vocabulary terms. It is not meant to replace the course materials!

Unit 3: Synchronization

3a. Describe synchronization

  • Define synchronization.
  • What is another definition of synchronization?
  • What is the most important idea in correct synchronization?
  • Define interleaving.

Synchronization means that threads are scheduled preemptively or cooperatively. On multi-user operating systems, preemptive multithreading is the more widely used approach because it allows finer-grained control over execution time via context switching. However, preemptive scheduling may context-switch threads at moments programmers fail to anticipate, which can cause subtle bugs such as race conditions.

In contrast, cooperative multithreading requires threads to relinquish control of execution to ensure all of the threads run to completion. This can create problems if a cooperatively multitasked thread blocks because it is waiting on a resource, or if it starves out the other threads by not yielding control of execution during an intensive computation.

In multithreaded computing, a queue controls the timing for threads to run. Computer engineers use synchronization to control how threads enter and leave this queue and to avoid conflicts among the threads.

Another definition of synchronization is the use of atomic operations to ensure cooperation among threads. The most important idea in correct synchronization is making competing threads wait until an earlier thread has finished; this eliminates harmful interleavings.

Let's return to our bus analogy. If you have a limited resource, like our fleet of buses, only so many buses arrive per unit of time, and information needs to get on and off each bus. Timing relative to the applications is critical, too. You could make each application wait until its turn at a bus arrives, but that is wasteful: if that were the case, you would type a key on the keyboard and then have to wait for the character to appear.

Interleaving offers a solution in the face of limited resources. Interleaving cuts each "thread" of work into pieces so that pieces from as many threads as possible fit on one bus at a time. This is not a perfect or ideal system, but it at least allows everything to move forward, somewhat.

Ideally, there would be no limit on resources. However, this would drastically increase the price of computers. We ask our computers to do more and more, but software must work in an environment of hardware restrictions. Demand exceeds supply and everyone wants to do more with what they have. User interface functions tend to take priority so you do not have to wait until the character you just typed appears.

Synchronization is an important part of OS control because it allows multi-demand (multi-process or multi-user) computing systems to operate using threads.

Review this material in Synchronization and Chapter 1 of Little Book of Semaphores.

 

3b. Explain a race condition

  • Define a race condition.
  • How can you prevent a race condition?
  • Define a critical section.
  • Define an atomic action or instruction.
  • Are race conditions easy to fix?

A race condition is a synchronization error: an undesirable situation that occurs when a device or system tries to perform two or more operations at the same time, even though the nature of the device or system requires the operations to follow a proper sequence to be performed correctly. A race condition results from unsynchronized multithreaded execution: the outcome of the critical section differs according to the order in which the threads execute.

The critical section is a code segment where shared variables can be accessed. Atomic action (an indivisible sequence of operations that must complete without interruption) is required in a critical section. In other words, only one process can execute in its critical section at a time. All the other processes have to wait to execute in their critical sections.

We can avoid race conditions in critical sections by treating the critical section as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
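
To make this concrete, here is a minimal sketch in C using POSIX threads (the counter, loop count, and names are illustrative, not from the course materials). Two threads each perform a read-modify-write on a shared counter; the mutex makes each increment effectively atomic, and removing it exposes the race:

    #include <pthread.h>
    #include <stdio.h>

    /* Shared state: two threads increment this counter one million times each. */
    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);   /* comment out the lock calls to see the race */
            counter++;                   /* read-modify-write: not atomic on its own */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);
        return 0;
    }

Built with gcc -pthread, the locked version always prints 2000000; without the mutex it usually prints less, because interleaved updates are lost.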

A race condition can be difficult to reproduce and debug because the end result is nondeterministic and depends on the relative timing between interfering threads.

Review this material in Introduction to Race Conditions and Avoiding Race Conditions in SharedArrayBuffers with Atomics.

 

3c. Discuss interprocess communication

  • Define interprocess communication (IPC).

Interprocess communication (IPC) refers to the mechanisms an operating system provides so that processes can manage shared data. These mechanisms or constructs are necessary because resources are limited in practical computing; memory size, in particular, is a common concern.

In practical computing, where the resources the operating system schedules are limited, it is desirable, and often necessary, for different processes to share memory. Since memory space is shared, it is critically important for interprocess communication to occur in the intended fashion, following strict logic.
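
One concrete IPC mechanism is the pipe. The following minimal POSIX sketch (illustrative, not from the course materials) has a parent process write a message that a child process reads:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                      /* fd[0] = read end, fd[1] = write end */
        char buf[64];

        if (pipe(fd) == -1) return 1;

        if (fork() == 0) {              /* child process */
            close(fd[1]);               /* child only reads */
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            close(fd[0]);
            return 0;
        }

        close(fd[0]);                   /* parent only writes */
        const char *msg = "hello from the parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                     /* reap the child */
        return 0;
    }

Pipes, shared memory, message queues, and sockets are all IPC constructs; which one is appropriate depends on how tightly the processes must cooperate.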

Review this material in Mutual Exclusion, Semaphores, Monitors, and Condition Variables.

 

3d. Describe how semaphores can be used in an operating system

  • Define a semaphore.
  • Where does the name semaphore come from?

Semaphores are important conditional programming constructs that ensure proper process synchronization. The term comes from the railway system, where a semaphore was an early form of fixed railway signal: tracks had these warning signals built in to keep trains from colliding. Think of a semaphore as a computer-based signaling mechanism: a thread can signal another thread that is waiting on a semaphore.

A semaphore uses two atomic operations for process synchronization: wait and signal. In other words, a semaphore is a high-level "lock" construct in the OS which prevents synchronization problems. In the language of computers, a semaphore is a special type of integer, a non-negative value.
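
Here is a minimal wait/signal sketch using POSIX unnamed semaphores (sem_wait and sem_post; the names are illustrative). The main thread blocks on the semaphore until the worker thread signals it:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t ready;                   /* non-negative counter, starts at 0 */

    static void *worker(void *arg) {
        puts("worker: done preparing");
        sem_post(&ready);                 /* signal: increment the counter, wake a waiter */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        sem_init(&ready, 0, 0);           /* middle argument 0 = shared among threads */
        pthread_create(&t, NULL, worker, NULL);
        sem_wait(&ready);                 /* wait: block until the counter is positive */
        puts("main: received the signal");
        pthread_join(t, NULL);
        sem_destroy(&ready);
        return 0;
    }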

Review this material in:

 

3e. Discuss three of the classic synchronization problems

  • Name four classic synchronization problems with semaphores.
  • Describe the bounded-buffer (producer-consumer, or vendor/customer) problem. Describe a method you can use to prevent this problem from occurring.
  • Describe the dining philosophers problem.
  • Describe the readers and writers problem.
  • Describe the sleeping barber problem.

Classic synchronization problems using semaphores are essential to understanding synchronization and what correct concurrent code looks like. Failure to understand these classic problems causes coders to make their code more complex than necessary and to repeat the same mistakes in certain scenarios over and over again.

Four classic synchronization problems with semaphores include:

  1. Bounded-buffer (also producer-consumer or vendor-customer) problem;
  2. Dining philosophers problem;
  3. Readers and writers problem;
  4. Sleeping barber problem.

1. The Bounded-Buffer (Producer-Consumer or Vendor-Customer) Problem

The bounded-buffer (also called producer-consumer or vendor-customer) problem describes two processes: the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer's job is to generate data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing data from the buffer), one piece at a time. The challenge is to make sure the producer does not try to add data into the buffer if it is full, and the consumer does not try to remove data from an empty buffer.

The solution to this problem is to create two counting semaphores, full and empty, to keep track of the current number of full and empty buffer slots, respectively. The producer waits on empty before inserting an item and signals full afterward; the consumer waits on full before removing an item and signals empty afterward, so both coordinate through the buffer slots each time.
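
A sketch of that solution in C with POSIX counting semaphores might look like the following (buffer size, item counts, and names are illustrative; a mutex additionally protects the buffer indices):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 4                                    /* buffer capacity */

    static int buffer[N], in = 0, out = 0;
    static sem_t empty_slots, full_slots;          /* the two counting semaphores */
    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg) {
        for (int item = 1; item <= 10; item++) {
            sem_wait(&empty_slots);                /* block while the buffer is full */
            pthread_mutex_lock(&mtx);
            buffer[in] = item; in = (in + 1) % N;
            pthread_mutex_unlock(&mtx);
            sem_post(&full_slots);                 /* one more item available */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < 10; i++) {
            sem_wait(&full_slots);                 /* block while the buffer is empty */
            pthread_mutex_lock(&mtx);
            int item = buffer[out]; out = (out + 1) % N;
            pthread_mutex_unlock(&mtx);
            sem_post(&empty_slots);                /* one more free slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty_slots, 0, N);              /* all N slots start empty */
        sem_init(&full_slots, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }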

2. The Dining Philosophers Problem

In this computer systems analogy, the dining philosophers problem describes a situation where a certain number of diners (philosophers) are seated around a circular table with one chopstick between each pair of philosophers. A diner may eat if they can pick up the two chopsticks adjacent to them. Each chopstick may be picked up by either of its adjacent philosophers, but not by both at once.

In computing, this challenge pertains to the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.
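
One well-known way to make the allocation deadlock-free is to impose a global order on resource acquisition. In the sketch below (illustrative; the chopsticks are modeled as mutexes and the meal counts are arbitrary), each philosopher picks up the lower-numbered chopstick first, which breaks the circular wait that causes deadlock:

    #include <pthread.h>
    #include <stdio.h>

    #define N 5
    static pthread_mutex_t chopstick[N];

    static void *philosopher(void *arg) {
        int i = *(int *)arg;
        int first = i, second = (i + 1) % N;
        /* Always acquire the lower-numbered chopstick first. */
        if (first > second) { int tmp = first; first = second; second = tmp; }
        for (int meal = 0; meal < 3; meal++) {
            pthread_mutex_lock(&chopstick[first]);
            pthread_mutex_lock(&chopstick[second]);
            printf("philosopher %d eats\n", i);
            pthread_mutex_unlock(&chopstick[second]);
            pthread_mutex_unlock(&chopstick[first]);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        int id[N];
        for (int i = 0; i < N; i++) pthread_mutex_init(&chopstick[i], NULL);
        for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }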

3. The Readers and Writers Problem

Suppose a database needs to be shared among several concurrent processes. Some of these processes may only want to read the database, whereas others may want to update (that is, read and write) the database. We distinguish between these two types of processes by referring to the former as readers and the latter as writers. In operating systems, we call this situation the readers-writers problem.

Here are the parameters of this challenge (a semaphore-based sketch follows the list):

  • One set of data is shared among several processes.
  • Once a writer is ready, it performs its write. Only one writer may write at a time.
  • If a process is writing, no other process can read the data.
  • If at least one reader is reading, no other process can write.
  • Readers may not write; they only read.
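
The classic semaphore-based sketch of these rules (the "first" readers-writers solution; names and thread counts are illustrative) lets the first reader lock writers out and the last reader let them back in:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t rw;                         /* held by a writer, or by the group of readers */
    static pthread_mutex_t count_mtx = PTHREAD_MUTEX_INITIALIZER;
    static int readers = 0;                  /* number of active readers */
    static int shared_data = 0;

    static void *reader(void *arg) {
        pthread_mutex_lock(&count_mtx);
        if (++readers == 1) sem_wait(&rw);   /* first reader locks out writers */
        pthread_mutex_unlock(&count_mtx);

        printf("read %d\n", shared_data);    /* many readers may be here at once */

        pthread_mutex_lock(&count_mtx);
        if (--readers == 0) sem_post(&rw);   /* last reader lets writers in */
        pthread_mutex_unlock(&count_mtx);
        return NULL;
    }

    static void *writer(void *arg) {
        sem_wait(&rw);                       /* exclusive access */
        shared_data++;                       /* only one writer may write at a time */
        sem_post(&rw);
        return NULL;
    }

    int main(void) {
        pthread_t r1, r2, w;
        sem_init(&rw, 0, 1);
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r1, NULL);
        pthread_join(r2, NULL);
        return 0;
    }
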
4. The Sleeping Barber Problem

In this computer systems analogy, think of a barbershop with only one barber: one barber chair and a fixed number of chairs where customers wait. When there are no customers, the barber goes to sleep in the barber chair and must be woken when a new customer comes in. While the barber is cutting hair, new customers either take an empty waiting seat or leave if there is no vacancy.

In computing, this challenge pertains to how a service thread sleeps until work arrives and how waiting threads are queued for their turn without deadlock or missed wakeups.
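
A semaphore-based sketch of the shop (chair and customer counts are illustrative) uses one semaphore to count waiting customers, one for the barber's readiness, and a mutex around the waiting count:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define CHAIRS 3                      /* waiting-room chairs */

    static sem_t customers;               /* counts waiting customers; barber sleeps on it */
    static sem_t barber_ready;            /* barber signals a customer into the chair */
    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    static int waiting = 0;

    static void *barber(void *arg) {
        for (;;) {
            sem_wait(&customers);         /* sleep until a customer arrives */
            pthread_mutex_lock(&mtx);
            waiting--;
            pthread_mutex_unlock(&mtx);
            sem_post(&barber_ready);      /* invite one customer to the chair */
            printf("barber: cutting hair\n");
        }
    }

    static void *customer(void *arg) {
        pthread_mutex_lock(&mtx);
        if (waiting < CHAIRS) {
            waiting++;
            pthread_mutex_unlock(&mtx);
            sem_post(&customers);         /* wake the barber if asleep */
            sem_wait(&barber_ready);      /* wait for a turn in the chair */
        } else {
            pthread_mutex_unlock(&mtx);
            printf("customer: shop full, leaving\n");
        }
        return NULL;
    }

    int main(void) {
        pthread_t b, c[5];
        sem_init(&customers, 0, 0);
        sem_init(&barber_ready, 0, 0);
        pthread_create(&b, NULL, barber, NULL);
        for (int i = 0; i < 5; i++) pthread_create(&c[i], NULL, customer, NULL);
        for (int i = 0; i < 5; i++) pthread_join(c[i], NULL);
        return 0;                         /* exiting the process ends the barber's loop */
    }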

Review this material in Mutual Exclusion, Semaphores, Monitors, and Condition Variables and Little Book of Semaphores.

 

3f. Explain the alternatives to semaphores

  • Name and describe some alternatives to semaphores.
  • Define and describe the function of a mutex.
  • Define and describe the function of a recursive mutex.
  • Define and describe the function of a reader/writer mutex.
  • Define and describe the function of a spinlock.

A semaphore is a "relaxed" type of lock. A challenge with semaphores is that any thread can signal a semaphore at any time, regardless of whether that thread has previously waited on it. Consequently, more specific types of programming constructs are necessary, such as the alternatives discussed below.

Mutexes

A mutex is a mutual exclusion object that synchronizes access to a resource. It is created with a unique name at the start of a program. The mutex is a locking mechanism that makes sure only one thread can acquire the mutex at a time and enter the critical section. This thread only releases the mutex when it exits the critical section.

A mutex is different from a semaphore because it is a locking mechanism, while a semaphore is a signaling mechanism. A binary semaphore can be used as a mutex, but a mutex can never be used as a semaphore.
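
To illustrate the second point, this sketch (names and counts are illustrative) uses a POSIX binary semaphore as a lock. Unlike a true mutex, nothing records which thread "owns" the semaphore, so any thread could post it; that missing ownership check is why a mutex is the safer locking construct:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t bin;                     /* binary semaphore used as a lock */
    static int shared = 0;

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&bin);               /* "lock": value goes 1 -> 0 */
            shared++;                     /* critical section */
            sem_post(&bin);               /* "unlock": value goes 0 -> 1 */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&bin, 0, 1);             /* start at 1 so one thread may enter */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);  /* always 200000 */
        sem_destroy(&bin);
        return 0;
    }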

Recursive Mutexes

A recursive mutex is similar to a plain mutex, but one thread may own multiple locks on it at the same time. For example, if Thread A acquires a lock on a recursive mutex, then Thread A can acquire further locks on the recursive mutex without releasing the locks already held. However, Thread B cannot acquire any locks on the recursive mutex until Thread A has released all of the locks it holds.
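
A short sketch with a POSIX recursive mutex (the logging scenario is illustrative) shows one thread re-acquiring a lock it already holds; with a plain mutex, the inner lock would deadlock against the outer one:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t rmtx;

    static void log_event(const char *msg) {
        pthread_mutex_lock(&rmtx);        /* second lock by the same thread: allowed */
        printf("log: %s\n", msg);
        pthread_mutex_unlock(&rmtx);
    }

    static void log_and_update(void) {
        pthread_mutex_lock(&rmtx);        /* first lock */
        log_event("updating");            /* re-enters the mutex */
        pthread_mutex_unlock(&rmtx);
    }

    int main(void) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&rmtx, &attr);
        log_and_update();
        pthread_mutex_destroy(&rmtx);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }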

In most cases, a recursive mutex is undesirable, since it makes it harder to reason correctly about the code. With a plain mutex, if you ensure the invariants on the protected resource are valid before you release ownership, then you know these invariants will be valid when you acquire ownership.

With a recursive mutex, this is not the case. Being able to acquire the lock does not mean the current thread did not already hold the lock and therefore does not imply the invariants are valid.

Reader/Writer Mutexes

Multiple-reader/single-writer mutexes (also called read/write mutexes or shared mutexes) offer two distinct types of ownership:

  • Exclusive ownership (also called write ownership or write lock);
  • Shared ownership (also called read ownership or a read lock).

Exclusive ownership works just like ownership of a plain mutex: only one thread may hold an exclusive lock on the mutex, and only that thread can release the lock. No other thread may hold any type of lock on the mutex while that thread holds its lock.

Shared ownership is laxer. Any number of threads may take shared ownership of a mutex at the same time. No thread may take an exclusive lock on the mutex while any thread holds a shared lock.

These mutexes are typically used to protect shared data that is seldom updated but cannot be safely updated while any thread is reading it. Consequently, the reading threads take shared ownership while they are reading the data. When the data needs to be modified, the modifying thread first takes exclusive ownership of the mutex, which ensures that no other thread is reading it, and then releases the exclusive lock after the modification is done.
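
POSIX exposes this construct as pthread_rwlock_t. A minimal sketch (the shared variable is illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t rwl = PTHREAD_RWLOCK_INITIALIZER;
    static int config_value = 42;        /* seldom-updated shared data */

    static void *reader(void *arg) {
        pthread_rwlock_rdlock(&rwl);     /* shared (read) lock: many threads at once */
        printf("read %d\n", config_value);
        pthread_rwlock_unlock(&rwl);
        return NULL;
    }

    static void *writer(void *arg) {
        pthread_rwlock_wrlock(&rwl);     /* exclusive (write) lock: one thread only */
        config_value++;
        pthread_rwlock_unlock(&rwl);
        return NULL;
    }

    int main(void) {
        pthread_t r1, r2, w;
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_create(&w, NULL, writer, NULL);
        pthread_join(r1, NULL);
        pthread_join(r2, NULL);
        pthread_join(w, NULL);
        return 0;
    }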

Spinlocks

A spinlock is a special type of mutex that does not use OS synchronization functions when a lock operation has to wait. Instead, it just keeps trying to update the mutex data structure to take the lock in a loop.

If the lock is not held often or is only held for short periods, this can be more efficient than calling heavyweight thread synchronization functions. However, if the processor has to loop too many times, it is just wasting time doing nothing, and the system would do better if the OS scheduled another thread with active work in place of the thread that is failing to acquire the spinlock.
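
For illustration, here is a minimal user-space spinlock built on the C11 atomic test-and-set primitive (a sketch, not production code; real spinlocks add backoff and scheduler hints):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_flag flag = ATOMIC_FLAG_INIT;
    static long counter = 0;

    static void spin_lock(void) {
        /* Retry the atomic test-and-set until we observe "was clear". */
        while (atomic_flag_test_and_set(&flag))
            ;                             /* busy-wait: burns CPU instead of blocking */
    }

    static void spin_unlock(void) {
        atomic_flag_clear(&flag);
    }

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            spin_lock();
            counter++;                    /* keep the critical section short */
            spin_unlock();
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 200000 */
        return 0;
    }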

Review this material in the lecture Mutual Exclusion, Semaphores, Monitors, and Condition Variables.

 

Unit 3 Vocabulary

  • Atomic operation
  • Bounded-buffer (also producer-consumer or vendor-customer) problem
  • Condition variable
  • Critical section
  • Dining philosophers problem
  • Interleaving
  • Lock
  • Monitor
  • Multithreading
  • Mutual exclusion (mutex)
  • Preemptive and cooperative multithreading
  • Race condition
  • Reader/writer mutex
  • Readers and writers problem
  • Recursive mutex
  • Semaphore
  • Sleeping barber problem
  • Software primitive
  • Spinlock
  • Starvation – a thread never gets to run due to a scheduling algorithm
  • String copy operation
  • Synchronization
  • Thread interleaving