CS401 Study Guide

Site: Saylor Academy
Course: CS401: Operating Systems
Book: CS401 Study Guide

Description

This study guide will help you get ready for the final exam. It discusses the key topics in each unit, walks through the learning outcomes, and lists important vocabulary terms. It is not meant to replace the course materials!

Navigating the Study Guide


Study Guide Structure

In this study guide, the sections in each unit (1a., 1b., etc.) are the learning outcomes of that unit. 

Beneath each learning outcome are:

  • questions for you to answer independently;
  • a brief summary of the learning outcome topic;
  • and resources related to the learning outcome. 

At the end of each unit, there is also a list of suggested vocabulary words.

 

How to Use the Study Guide

  1. Review the entire course by reading the learning outcome summaries and suggested resources.
  2. Test your understanding of the course information by answering questions related to each unit learning outcome and defining and memorizing the vocabulary words at the end of each unit.

By clicking on the gear button on the top right of the screen, you can print the study guide. Then you can make notes, highlight, and underline as you work.

Through reviewing and completing the study guide, you should gain a deeper understanding of each learning outcome in the course and be better prepared for the final exam!

Unit 1: Introduction to Operating Systems

1a. Explain what an operating system does and how it is used

  • What is a computer operating system (OS) and what does it do in general?
  • What particular functions does a computer operating system (OS) do?
  • Describe some common types of computer operating systems. Explain how they differ.
  • Define open source.
  • Define an application (app).
  • Summarize the history of computer operating systems.

A computer operating system (OS) is a kind of abstract virtual machine that manages diverse computer hardware. It serves as a platform that exchanges information and code between your computer's hardware and the applications running on it. There is no single, universal operating system standard: an operating system is basically whatever the vendor ships and updates.

The operating system coordinates and manages computer information. It settles conflicting requests, facilitates and manages machine facilities, prevents errors and improper use, provides a file system, and operates a windowing system in which the graphical user interface (GUI) presents data to the user in a window format.

The operating system does many, many things simultaneously, and that is its real value. The list below is not exhaustive, but it gives an idea of some of the many things an operating system does more or less simultaneously:

  • I/O (input/output) management
  • Software standard library management
  • CPU (central processing unit) scheduling
  • Multitasking
  • Multiprogramming
  • Address translation
  • Dual-mode operation
  • Simplify applications
  • Fault containment, recovery, tolerance
  • Multimedia
  • Internet browser
  • Communications/email
  • Loading the object code
  • Full coordination and protection
  • Manage interactions between different users
  • Application by application crash, not entire environment
  • Multiplex programs running simultaneously
  • Multiplex and protect hardware resources
    • CPU
    • Memory
    • I/O devices
      • Disks
      • Printers
      • Optical drives
      • Flash drives, etc.
  • Good debugging info allowed/provided
  • Full coordination and protection of multiple apps from one another

Two popular proprietary operating systems are the Microsoft Windows operating system family (Windows 10, XP, and Vista), and the Apple suite of operating systems (Catalina, Mojave, and Sierra).

UNIX is an open-source operating system that is deployed all over the world in personal and commercial systems. It is free, non-proprietary, and open-to-all which means that any user is able to see and change the source code as they would like.

Linux, a popular open-source operating system, is a close relative to Unix. Several versions of Linux exist. Open source means that the source code is free and open. This means that users have the permission to reuse, revise, remix, and redistribute it. In practical terms, this means computer programmers are free to fix bugs, improve functions, or adapt the software to suit their own needs.

Operating systems are massive. For example, Microsoft Windows has 50 million lines of code and is terribly complex. Large scale coding is also like an interlocking puzzle: moving or changing one piece often affects other pieces in significant and unpredictable ways.

For example, technicians soon learn that when they upgrade a computer operating system, an element that worked previously is subject to breaking down, even when it was not the main focus of their initial upgrade. You may need to stamp out or correct the bugs, including new and unintentional ones you created when you upgraded the old version of the OS.

Resiliency and variety are great features of computer operating systems, and they amplify each other. These same features also add to the complexity of the OS.

Reliability, by contrast, often requires monotony, since a system that repeats one, and only one, task is less likely to fail or break down. This is especially important in medical equipment, where incredible reliability comes from repeating the same operation over and over with a simple computer program.

An application, or application program, is simply a software program that runs on your computer or mobile device. Web browsers, email programs, word processors, games, and utilities are all applications.

Note that the computer operating system (OS) can crash an application (app), and an app can crash an OS. Programmers try to make sure applications cannot read or write the memory of other applications or of the OS itself. This matters because operating systems have certain definitive duties, and one of those core functions is to keep computations separate from one another so they do not affect or crash each other.

Review this material in What Is an Operating System? and Operating Systems History, Services, and Structure.

 

1b. Identify the various components of a computer system and how they interact with an operating system

  • Describe the various components of a computer system that the OS controls and regulates so they can work together harmoniously and not interfere with each other when serving the user.
  • Why is it important to use operating systems when several users want to share and use the same computing resources simultaneously?
  • Explain the phrase "the network is the operating system" in terms of late and current model networks.
  • Define distributed computing.

Different components of a computer system include:

  1. The software that resides on the computer, in addition to the operating system, including the drivers for various components of hardware.
  2. The computer hardware including the CPU, memory, and storage. The operating system typically provides the graphical user interface or GUI (pronounced gooey). The GUI allows us to use a mouse to click icons, buttons, and menus. Everything is clearly displayed on the screen using a combination of graphics and text.

The computer operating system:

  1. Coordinates the running of compilers, which translate human-readable source code into the computer language of binary ones and zeroes;
  2. Provides text editing applications;
  3. Controls and operates the assembler. The assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer's processor can use to perform its basic operations.

When computers first came on the market, manufacturers focused on the single computing machine, which was not connected to other computers (if there were any others). When computers became smaller and less expensive, the focus shifted to how to connect with other users and share information. The end device became small, cheap, and easy to replace, while the network became the unique and irreplaceable part; the importance of any individual end device became negligible.

Distributed computing is a model where we share software system components among multiple computers to improve efficiency and performance. Distributed computing, using the narrowest of definitions, is limited to programs with components shared among computers within a limited geographic area.

This model could mean incorporating multiple servers in a single location data center, or multiple servers geographically dispersed, so the time for communication (latency) among communicating machines is negligible.

Understanding the value and effectiveness of an OS helps us appreciate what this critically important piece of software does for us on our desktop machines and mobile devices. Different devices require different operating systems, according to their operation, computing resources, hardware, and software.

For example, a mobile device requires a different operating system than a desktop or larger machine because it is more constrained in terms of power, battery, memory, storage, and computing power. All of these components must operate together effectively within the device's size, performance, and capacity constraints.

Rapid changes in hardware, such as the advances in computing technology from 1950–1980 and today, compel rapid change in operating systems.

Review this material in What Is an Operating System?, Operating Systems History, Services, and Structure, and Computer System Overview.

 

1c. Describe the differences between a 32-bit and 64-bit operating system

  • What is the difference between a 32-bit and 64-bit operating system?
  • How does it affect computer performance?

Computers do not use the base ten numbering system that humans do. They use a base two system which means they have two states: on or off, one or zero, as opposed to base ten which uses 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Like a lightbulb that only switches on or off, computers only understand two things: one and zero. 

The computer engineers and mathematicians who first invented computers could not replicate the base ten system we use. They realized you can represent any number by using a number as a base (two in computer terms or ten for humans) and raising that number to an exponent, written as 2^x. If you lack a font that shows exponents in superscript, you can write these numbers as 2^x or 2exp(x).

Because computers use, and have always used, a base 2 representation, counting in computing always uses powers of two (an exponent of 2, written 2^x). For example, 32 = 2^5, 64 = 2^6, 128 = 2^7. This is why you always see powers of two with computers, rather than numbers like 5, 10, 15, 20, 25, or the powers of ten we use in daily life: 10 = 10^1, 100 = 10^2, or 1,000 = 10^3.

A computer that uses a 64-bit processor can handle twice as many bits at a time as one that uses a 32-bit processor (64 instead of 32) and can address a far larger memory space (2^64 possible addresses instead of 2^32). This greater capacity enables more demanding computing for applications such as gaming, graphics, HD (high definition), and video.
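To make the arithmetic concrete, here is a minimal C sketch (C is an assumption; the course readings do not provide code) that reports the pointer width of the build it is compiled under and the size of the address space that width implies: 2^32 bytes (4 GiB) for a 32-bit build versus 2^64 bytes for a 64-bit build.

    #include <stdio.h>

    int main(void) {
        /* The pointer width determines how many distinct addresses a program can form. */
        unsigned bits = (unsigned)(sizeof(void *) * 8);

        if (bits < 64) {
            /* 2^bits addressable bytes, computed with a shift to stay in integer math. */
            printf("%u-bit build: 2^%u = %llu addressable bytes\n",
                   bits, bits, (unsigned long long)1 << bits);
        } else {
            /* 2^64 does not fit in a 64-bit variable, so just report the exponent. */
            printf("%u-bit build: 2^%u addressable bytes (about 1.8 x 10^19)\n", bits, bits);
        }
        return 0;
    }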

It is important to understand the difference between 32-bit and 64-bit computing environments because an application you want to run may not work on a 32-bit system: it may simply require more memory or computing power than a 32-bit processor can provide.

Review this material in Operating Systems History, Services, and Structure, 32-bit and 64-bit Explained, and 32-bit and 64-bit Frequently Asked Questions.

 

1d. Explain the different types of operating systems and the major systems in use today

  • Name some different types of operating systems.
  • Why are there different types of operating systems?

Some different operating systems include iOS, Linux, macOS, Solaris, Unix, and Windows.

Different software producers created different systems to achieve different goals with different types of hardware. For example, Apple built macOS for its Mac desktop and laptop computers, iPadOS for its iPad, and iOS for its iPhone smartphones.

Each hardware environment has different capabilities, greater or lesser, and different goals, such as the need to fit a computing device into a certain size or shape (a form factor) or to perform certain functions. These differences call for a unique operating system.

Review this material in Android, iOS (Apple), Microsoft Windows, Solaris 8, Windows Phone, and What Is Linux?.

 

Unit 1 Vocabulary

  • 32-bit
  • 64-bit
  • API, application programming interface
  • ARPANET
  • Assembler
  • BIOS, basic input/output system
  • Command line
  • Compiler
  • Computing abstraction
  • Control register
  • CPU (central processing unit)
  • Device driver
  • DOS, disk operating system
  • File
  • GUI (graphical user interface)
  • HD (high definition)
  • I/O (input/output)
  • iOS
  • iPadOS
  • Kernel
  • Latency
  • Linux
  • Linux distribution
  • macOS
  • Object code
  • Operating system
  • Packet switching
  • Program crash
  • Queueing
  • ROM (read only memory)
  • Source code
  • Swapping
  • Thrashing
  • Virtual machine
  • Windows

Unit 2: Processes and Threads

2a. Discuss the importance and use of threads and processes in an operating system

  • Define and compare uniprogramming with multiprogramming.
  • Define a virtual machine.

Uniprogramming refers to computer operating systems that run one thread at a time. Multiprogramming runs more than one thread at a time, by interleaving them on one processor or by using multicore processors.

Uniprogramming allows a relatively simple OS design, such as Microsoft's MS-DOS and Apple's early Macintosh, which processed work in batches (one thing at a time). Since this kind of system gets rid of concurrency by defining it away, there are no concurrency issues.

In order to allow several activities to occur at the same time, such as playing music, sending an email, talking on the phone, and using Instagram, Facebook, or other social media, multiple threads of computing activity need to run concurrently. Each of these applications has no awareness of the other computing activity sharing the machine with it.

The only way to accomplish all of this activity is to make each operation function as if it is the only operation in existence. In other words, it operates as a virtual machine, which is a program on a computer that works like it is a separate computer inside the main computer. The OS manages these operations and everything else in the computing environment.

Review this material in Concurrency: Processes, Threads, and Address Spaces.

 

2b. Describe concurrency

  • Define concurrency and parallelism.
  • Describe four challenges of concurrency.
  • Define a virtual machine.

Concurrency refers to when many operations may be running at the same time, or interleaved in any arbitrary way. You should always assume several operations are running at the same time to ensure you understand the process correctly. Parallelism refers to when operations are definitely running at the same time.

Four challenges of concurrency include:

  1. Many operations may need a single resource that was designed for use by one operation alone. These operations are not aware that other operations may want to use that resource.
  2. The OS has to coordinate ALL of the activity.
  3. The virtual machine functions as if it is the only one running that needs resources.
  4. Concurrency introduces synchronization problems via shared data.

As we discussed above, a virtual machine is a program on a computer that works like it is a separate computer inside the main computer.

Wikipedia notes that a virtual machine:

  1. offers a simple way to run more than one operating system on the same computer;
  2. lets a very powerful server be split into several smaller virtual machines to use its resources better;
  3. can help with security: if the virtual machine is affected by a virus, the host operating system is unaffected;
  4. can be completely emulated, as with Java. This lets a program run on different types of computers without having to be converted into code specific to each; the programs can communicate even though the programming languages are not the same.

As with many things in life, computing requires managing many objectives with limited resources. If resources, such as cost and space, were unlimited we would always use parallelism. We adopt concurrency because resource constraints make parallelism impractical. In other words, we must share resources among activities in a single computing environment.

Review this material in Concurrency: Processes, Threads, and Address Spaces and Concurrency.

 

2c. Explain the difference between a thread and a process

  • Define thread and explain why we need to protect them.
  • How do we protect threads?
  • Define a process. How many parts does a process have?
  • Define a process state. Name and define the process states.
  • Define a heavyweight process.
  • Define a lightweight process.
  • Define a parent process.

A thread is an independent fetch/decode/execute loop: a single, unique stream of computing activity. A thread runs within a single address space.

We need to protect threads because a faulty thread, depending on where it runs in the OS, has the potential to crash the entire OS while all of the computer's programs are running. We protect threads by protecting memory, input and output (I/O), and access to the processor, such as by using a timing system.

A computer operates by way of what is called a "bus". Think of this as a continuous row of buses circling a block, with no gaps. Information gets on and off, and the whole thing is controlled by a clock within the bus. This is how the computer moves information to and from everywhere. Everything has to fit on the bus and is controlled by the bus.

A process is a momentary creation that arises when hardware and software work together. It may be helpful to think about a computer process the way you think about your brain holding a single thought. A process is an abstraction in computing: when the hardware and software stop operating or working together, the abstraction disappears. Think of a process as a conception that does not persist in reality.

In other words, a process represents what is needed to run a program: it is a momentary creation in computing occasioned by the effective and productive interaction between hardware and software. We create a process to perform what we want to accomplish.

A process has two parts or goals: 1. To run computer software, and 2. To protect resources.

A process state refers to the stages a process goes through in order to execute.

Process states include:

  1. New – no resources have been allocated.
  2. Ready – resources are allocated.
  3. Running – executing.
  4. Waiting – waiting for I/O or another resource to be allocated.
  5. Terminated – the process has ended.

A heavyweight process is an older understanding and use of computing OS processes where each process contains one thread and one address space. Computer engineers used this concept before they created the concept of threads.

A lightweight process is a more recent understanding of computing OS processes that introduced the concept of threads. It came about when more advanced hardware design and manufacturing allowed more code to execute during the same time period. A lightweight process allows more than one thread, but still only consumes one address space. This is a good thing.

A process is similar to a program. Think of a program as software that is not being run. Think of a process as software that is being run. So, the colloquial expression, "run the program" is somewhat inaccurate. "Run the process" is more technically accurate.

A process invokes or creates many threads. A thread is a particular, singular activity created to perform one of the things the process is meant to do. Running a process can create or invoke many threads; for example, a parent thread can generate other threads.

While you should try not to give computers human qualities, it may be helpful to think of a thread as if it were a thought.
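As a minimal sketch of these ideas, using POSIX threads (an assumption about the toolchain, not part of the course materials), the program below is one process whose main (parent) thread creates two worker threads; all three share the same address space and run concurrently.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function independently: its own fetch/decode/execute
       stream, but sharing the process's single address space. */
    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d running in the same process\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;

        pthread_create(&t1, NULL, worker, &id1);   /* parent thread spawns children */
        pthread_create(&t2, NULL, worker, &id2);

        pthread_join(t1, NULL);                    /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }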

Review this material in:

 

2d. Discuss context switching and how it is used in an operating system

  • Define context switching and multiplexing.
  • Why do we need to switch processes?
  • Name something to be avoided during context switching.
  • Define overhead.
  • Do modern operating systems have more or less overhead? Explain why or why not.

Context switching occurs when the OS switches between processes. For example, we often need to switch processes because different virtual machines are running and require access to limited resources. For an optimal computing environment, the OS acts as the traffic cop that tells each process when to run and when not to run.

Every time a context switch occurs, the OS must run and no processes will be running. Since processes produce the useful output we need and want, you can think about the OS operations between switching processes as overhead, which is wasted computing and time that allows processes to switch back and forth.

Modern OSs have more overhead because they are doing more. For example, they are running more lines of code to make the OS robust and not crash.

When computing resources are limited, such as time, money, or another practicality, single resources, such as a CPU or chip, must be shared. Context switching is a BIG part of what the OS does. Context switching is the basic idea behind multiplexing or scheduling how to share limited resources in the computing environment.

Review this material in:

 

Unit 2 Vocabulary

  • Concurrency
  • Context switching
  • Context switching overhead
  • Crash (computing)
  • Exception (computing)
  • Interrupt
  • Multiprogramming
  • Parallelism
  • Process
  • Process state
  • Register
  • Scheduler
  • Shared Memory
  • Stack
  • Stack overflow
  • Thread
  • Uniprogramming
  • Virtual machine

Unit 3: Synchronization

3a. Describe synchronization

  • Define synchronization.
  • What is another definition of synchronization?
  • What is the most important idea in correct synchronization?
  • Define interleaving.

Synchronization means that threads are scheduled preemptively or cooperatively. With multi-user operating systems, preemptive multithreading is the more widely-used approach because it allows for finer-grained control over execution time via context switching. However, preemptive scheduling may context-switch threads at moments programmers fail to anticipate, which can cause subtle problems or even crash the OS.

In contrast, cooperative multithreading requires threads to relinquish control of execution to ensure all of the threads run to completion. This can create problems if a cooperatively-multitasked thread blocks because it is waiting on a resource, or if it starves out the other threads by not yielding control of execution during an intensive computation.

In multithreaded computing, a queue controls the timing of when threads run. Computer engineers use synchronization to control how threads enter and leave this queue and to avoid conflicts among the threads.

Another definition of synchronization is using atomic operations to ensure cooperation among threads. The most important idea in correct synchronization is making competing threads wait until an earlier thread is finished; this way, problematic interleaving goes away.

Let's return to our bus analogy. If you have a limited resource, like our slate of buses, only so many buses arrive per unit of time, and information needs to get on and off each bus. Timing according to the applications is critical, too. You could make applications wait until their turn at a bus arrives, but that is considered wasteful. Also, if that were the case, you would type a key on the keyboard and have to wait for it to appear.

Interleaving offers a solution in the face of limited resources. Interleaving cuts each thread's work into pieces so that pieces of as many threads as possible can fit on one bus at a time. This is not a preferred, perfect, or ideal system, but it at least allows everything to move forward, somewhat.

Ideally, there would be no limit on resources. However, this would drastically increase the price of computers. We ask our computers to do more and more, but software must work in an environment of hardware restrictions. Demand exceeds supply and everyone wants to do more with what they have. User interface functions tend to take priority so you do not have to wait until the character you just typed appears.

Synchronization is an important part of OS control because it allows multi-demand (or process/user) computing systems to operate using threads. 

Review this material in Synchronization and Chapter 1 of Little Book of Semaphores.

 

3b. Explain a race condition

  • Define a race condition.
  • How can you prevent a race condition?
  • Define a critical section.
  • Define an atomic action or instruction.
  • Are race conditions easy to fix?

A race condition is an error in synchronization caused by errors in software. It is an undesirable situation that occurs when a device or system tries to perform two or more operations at the same time, even though the nature of the device or system requires the operations to follow a proper sequence to perform correctly. A race condition results from multiple threads executing together, where the outcome of the critical section differs according to the order in which the threads execute.

The critical section is a code segment where shared variables can be accessed. Atomic action (an indivisible sequence of operations that must complete without interruption) is required in a critical section. In other words, only one process can execute in its critical section at a time. All the other processes have to wait to execute in their critical sections.

We can avoid race conditions in critical sections by treating the critical section as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.

A race condition can be difficult to reproduce and debug because the end result is nondeterministic and depends on the relative timing between interfering threads.
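Here is a minimal sketch in C with POSIX threads (an assumption; the course readings use their own examples). Two threads each increment a shared counter one million times. The increment is a read-modify-write, so it is the critical section; with the mutex calls removed, the final count is nondeterministic and usually less than two million.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                       /* shared data protected by the lock */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);             /* remove these two lines to observe the race */
            counter++;                             /* read-modify-write: the critical section */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, increment, NULL);
        pthread_create(&b, NULL, increment, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }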

Review this material in Introduction to Race Conditions and Avoiding Race Conditions in SharedArrayBuffers with Atomics.

 

3c. Discuss interprocess communication

  • Define an interprocess communication (IPC).

Interprocess communication (IPC) refers to the way operating systems allow processes to manage shared data. These mechanisms or constructs are necessary since resources are limited in practical computing. These days, size is of particular concern.

In practical computing, where the resources the operating system schedules are limited, it is desirable, and often necessary, for different processes to share memory allocation. Since memory space is shared, it is terribly important for interprocess communication to occur in the intended fashion, with extreme logic.

Review this material in Mutual Exclusion, Semaphores, Monitors, and Condition Variables.

 

3d. Describe how semaphores can be used in an operating system

  • Define a semaphore.
  • What does the name semaphore allude to?

Semaphores are important conditional programming constructs that ensure proper process synchronization. The term comes from the railway system, where a semaphore was an early form of fixed railway signal: warning signals built along the tracks to keep trains from colliding. Think of a semaphore as a computer-based signaling mechanism: a thread can signal another thread that is waiting on a semaphore.

A semaphore uses two atomic operations for process synchronization: wait and signal. In other words, a semaphore is a high-level "lock" construct in the OS which prevents synchronization problems. In the language of computers, a semaphore is a special type of integer, a non-negative value.
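A minimal sketch using POSIX unnamed semaphores (an assumption about the platform; some systems, such as macOS, deprecate sem_init): one thread waits on a semaphore that starts at zero, and the main thread signals it.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t ready;                            /* semaphore: a non-negative counter */

    static void *waiter(void *arg) {
        (void)arg;
        sem_wait(&ready);                          /* "wait": block until the count is > 0, then decrement */
        printf("waiter: got the signal\n");
        return NULL;
    }

    int main(void) {
        pthread_t t;
        sem_init(&ready, 0, 0);                    /* initial value 0: nothing available yet */
        pthread_create(&t, NULL, waiter, NULL);

        printf("main: signaling\n");
        sem_post(&ready);                          /* "signal": increment, waking the waiter */

        pthread_join(t, NULL);
        sem_destroy(&ready);
        return 0;
    }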

Review this material in:

 

3e. Discuss three of the classic synchronization problems

  • Name four classic synchronization problems with semaphores.
  • Describe the bounded-buffer (producer-consumer, or vendor/customer) problem. Describe a method you can use to prevent this problem from occurring.
  • Describe the dining philosophers problem.
  • Describe the readers and writers problem.
  • Describe the sleeping barber problem.

Classic synchronization problems using semaphores are terribly important for understanding synchronization and how coding works when it is correct. Failure to understand these classic problems leads coders to make their code more complex than necessary and to repeat the same problems in certain scenarios over and over again.

Four classic synchronization problems with semaphores include:

  1. Bounded–buffer (also producer-consumer or vendor-customer) problem;
  2. Dining philosophers problem;
  3. Readers and writers problem;
  4. Sleeping barber problem.

1. The Bounded-Buffer (Producer-Consumer or Vendor-Customer) Problem

The bounded-buffer (also called producer-consumer or vendor-customer) problem describes two processes: the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer's job is to generate data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing data from the buffer), one piece at a time. The challenge is to make sure the producer does not try to add data into the buffer if it is full, and the consumer does not try to remove data from an empty buffer.

The solution to this problem is to create two counting semaphores (full and empty) to keep track of the current number of full and empty buffer slots, respectively. Producers produce a product while consumers consume the product, but both use one of the buffer slots each time.
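The following C sketch (POSIX threads and semaphores are an assumption; the buffer size and item count are made up for illustration) implements that solution: empty_slots starts at the buffer size, full_slots starts at zero, and a mutex protects the buffer indices.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 4                                    /* fixed buffer size (the "bound") */

    static int buffer[N];
    static int in = 0, out = 0;                    /* next slot to fill / to empty */

    static sem_t empty_slots;                      /* counts free slots, starts at N   */
    static sem_t full_slots;                       /* counts filled slots, starts at 0 */
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg) {
        (void)arg;
        for (int item = 1; item <= 10; item++) {
            sem_wait(&empty_slots);                /* block if the buffer is full  */
            pthread_mutex_lock(&mutex);
            buffer[in] = item;
            in = (in + 1) % N;
            pthread_mutex_unlock(&mutex);
            sem_post(&full_slots);                 /* announce one more filled slot */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 10; i++) {
            sem_wait(&full_slots);                 /* block if the buffer is empty */
            pthread_mutex_lock(&mutex);
            int item = buffer[out];
            out = (out + 1) % N;
            pthread_mutex_unlock(&mutex);
            sem_post(&empty_slots);                /* announce one more free slot  */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty_slots, 0, N);
        sem_init(&full_slots, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }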

2. The Dining Philosophers Problem

In this computer systems analogy, the dining philosophers problem describes a situation where a certain number of diners (philosophers) are seated around a circular table with one chopstick between each pair of philosophers. A diner may eat if they can pick up the two chopsticks adjacent to them. One chopstick may be picked up by either of its adjacent philosophers, but not both.

In computing, this challenge pertains to the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.

3. The Readers and Writers Problem

Suppose a database needs to be shared among several concurrent processes. Some of these processes may only want to read the database, whereas others may want to update (that is, to read and write) the database. We distinguish between these two types of processes by referring to the former as readers and to the latter as writers. In OS we call this situation the readers-writers problem.

Here are the parameters of this challenge:

  • One set of data is shared among several processes.
  • Once a writer is ready, it performs its write. Only one writer may write at a time.
  • If a process is writing, no other process can read it.
  • If at least one reader is reading, no other process can write.
  • Readers may only read; they may not write.
4. The Sleeping Barber Problem

In this computer systems analogy, think about when a barbershop only has one barber: one barber chair and any number of chairs where customers wait. When there are no customers, the barber goes to sleep in their barber chair and must be woken when a new customer comes in. While the barber is cutting hair, new customers can take the empty seats to wait, or leave if there is no vacancy.

In computing, this challenge pertains to how threads await use in the computing process.

Review this material in Mutual Exclusion, Semaphores, Monitors, and Condition Variables and Little Book of Semaphores.

 

3f. Explain the alternatives to semaphores

  • Name and describe some alternatives to semaphores.
  • Define and describe the function of a mutex.
  • Define and describe the function of a recursive mutex.
  • Define and describe the function of a reader/writer mutex.
  • Define and describe the function of a spinlock.

A semaphore is a "relaxed" type of lock. A challenge to overcome with semaphores is that any thread can signal a semaphore at any time, regardless of whether that thread has previously waited for the semaphore. Consequently, more specific types of programming constructs are necessary, such as the alternatives we discussed above.

Mutexes

A mutex is a mutual exclusion object that synchronizes access to a resource. It is created with a unique name at the start of a program. The mutex is a locking mechanism that makes sure only one thread can acquire the mutex at a time and enter the critical section. This thread only releases the mutex when it exits the critical section.

A mutex is different from a semaphore because it is a locking mechanism, while a semaphore is a signaling mechanism. A binary semaphore can be used as a mutex, but a mutex can never be used as a semaphore.

Recursive Mutexes

A recursive mutex is similar to a plain mutex, but one thread may own multiple locks on it at the same time. For example, if Thread A acquires a lock on a recursive mutex, then Thread A can acquire further locks on the recursive mutex without releasing the locks already held. However, Thread B cannot acquire any locks on the recursive mutex until Thread A has released all of the locks it held.

In most cases, a recursive mutex is undesirable, since it makes it harder to reason correctly about the code. With a plain mutex, if you ensure the invariants on the protected resource are valid before you release ownership, then you know these invariants will be valid when you acquire ownership.

With a recursive mutex, this is not the case. Being able to acquire the lock does not mean the current thread did not already hold the lock and therefore does not imply the invariants are valid.

Reader/Writer Mutexes

Multiple-reader/single-writer mutexes (also called read/write mutexes or shared mutexes) offer two distinct types of ownership:

  • Exclusive ownership (also called write ownership or write lock);
  • Shared ownership (also called read ownership or a read lock).

Exclusive ownership works just like ownership of a plain mutex: only one thread may hold an exclusive lock on the mutex, only that thread can release the lock. No other thread may hold any type of lock on the mutex while that thread holds its lock.

Shared ownership is laxer. Any number of threads may take shared ownership of a mutex at the same time. No thread may take an exclusive lock on the mutex while any thread holds a shared lock.

These mutexes are typically used to protect shared data that is seldom updated, but they cannot be safely updated if any thread is reading it. Consequently, the reading threads take shared ownership while they are reading the data. When the data needs to be modified, the modifying thread first takes exclusive ownership of the mutex, which ensures that no other thread is reading it, then releases the exclusive lock after the modification has been done.
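A minimal sketch using the POSIX pthread_rwlock API (an assumption; the lecture may use a different construct): readers take shared ownership and may overlap, while the writer takes exclusive ownership.

    #include <pthread.h>
    #include <stdio.h>

    static int shared_value = 0;                   /* seldom-updated shared data */
    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

    static void *reader(void *arg) {
        (void)arg;
        pthread_rwlock_rdlock(&rw);                /* shared ownership: other readers may enter too */
        printf("reader sees %d\n", shared_value);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    static void *writer(void *arg) {
        (void)arg;
        pthread_rwlock_wrlock(&rw);                /* exclusive ownership: no readers or writers inside */
        shared_value++;
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    int main(void) {
        pthread_t r1, r2, w;
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&w,  NULL, writer, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(r1, NULL);
        pthread_join(w, NULL);
        pthread_join(r2, NULL);
        return 0;
    }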

Spinlocks

A spinlock is a special type of mutex that does not use OS synchronization functions when a lock operation has to wait. Instead, it just keeps trying to update the mutex data structure to take the lock in a loop.

If the lock is not held often or is only held for short periods, then this can be more efficient than calling heavyweight thread synchronization functions. However, if the processor has to loop too many times then it is just wasting time doing nothing, and the system would do better if the OS scheduled another thread with active work to do, instead of the thread failing to acquire the spinlock.

Review this material in the lecture Mutual Exclusion, Semaphores, Monitors, and Condition Variables.

 

Unit 3 Vocabulary

  • Atomic operation
  • Bounded–buffer (also producer-consumer or vendor-customer) problem
  • Condition variable
  • Critical section
  • Dining philosophers problem
  • Interleaving
  • Lock
  • Monitor
  • Multithreading
  • Mutual exclusion (mutex)
  • Preemptive and cooperative multithreading
  • Race condition
  • Reader/writer mutex
  • Readers and writers problem
  • Recursive mutex
  • Semaphore
  • Sleeping barber problem
  • Software primitive
  • Spinlock
  • Starvation – a thread never gets to run due to a scheduling algorithm
  • String copy operation
  • Synchronization
  • Thread interleaving "starvation"

Unit 4: CPU Scheduling

4a. Discuss CPU scheduling and its relevance to operating systems

  • Define scheduling and the concept of fair scheduling.
  • What is a computing cycle?
  • Define a bus.

Scheduling refers to how the CPU decides which tasks it should retrieve first from the Ready queue. Fair scheduling refers to how scheduling is judged from the user's point of view, in the context of their particular computing situation.

Fair scheduling does not imply moral reasoning, which may be confusing to someone who is unfamiliar with the terminology of computer science. Rather, it applies to the optimal use of resources and computer performance.

According to fair scheduling, should the CPU take a long time to perform an action (a long computation time) while keeping the wait time short for the user? Or should there be short CPU computation time and a long user wait time? The answer is: it should be short on user time and long on computation time.

Most users would probably argue that CPU scheduling tasks should prioritize the user, regardless of the length of time it takes to perform a computation. We do not want to annoy users by making them wait!

However, some would consider prioritizing the user unfair in terms of CPU scheduling, because minimizing average user response time means longer computational jobs must wait their turn. In practice, we accept that trade-off: let's be practical and make the long jobs wait!

The computing cycle refers to the entire cycle the bus takes to complete one iteration around the computer. As we discussed in section 3a on synchronization, the bus is a scheduling paradigm similar to a subway line with different stops (computing resources, I/O): data must be placed on the bus and taken to the resource that is meant to manipulate it, and results are put back on the bus and carried to the intended computing resource, such as I/O, memory, or the screen.

Computer engineers strive to develop the most efficient and reliable techniques for organizing and preventing conflict among different operations and their resources, which are always limited in the real world. Whether it is vehicle traffic, queueing theory of any kind, or making sure operating systems perform robustly and efficiently, these challenges remain standard.

While the conditions and parameters of computing have changed over the years, the lessons learned from doing things well persist into the modern age, in addition to the benefits.

Review this material in Thread Scheduling (the first from 54:00 to the end and the second until 31:00) and CPU Scheduling.

 

4b. Explain the general goals of CPU scheduling

  • Why is fair scheduling an important concept in operating systems?
  • What does it mean to maximize throughput?
  • Define overhead. Name two parts of maximizing throughput.

Fair scheduling refers to CPU scheduling controlled by the OS, which optimizes the use of computing time and resources according to the goals intended for the job.

Maximizing throughput means completing the most important meaningful work in the shortest amount of time. Two parts of maximizing throughput include 1. minimizing overhead and 2. optimizing the efficient use of computing resources, often by minimizing response time.

Programming errors are the primary cause of overhead in CPU scheduling. Minimizing overhead usually means eliminating useless or unnecessary computing cycles that create little, if any, benefit. We reduce overhead costs by promoting brilliant programming, which tends to incorporate practices that are less than intuitive.

Goals for minimizing overhead include: 

  1. Minimize context switching.
  2. Minimize accessing CPU.
  3. Minimize accessing memory.
  4. Minimize accessing I/O.

The second main goal of CPU scheduling is to minimize response time. Unfortunately, our two goals – minimizing overhead and response time – are often at loggerheads. Short user requests are easy to handle, but maximizing throughput may preclude user activity. Meanwhile, computational cycles tend to be long but improve throughput.

The concept of fair scheduling originated as a way to balance limited and expensive resources. But even today, when resources are much cheaper, the multiplying effect of modern computing has demonstrated the wisdom of these techniques which have minimized overhead and maximized throughput.

So, maximizing throughput means achieving the most useful "bits" in the shortest amount of time, transmitting the greatest percentage of data possible. Computing will always involve limited data transmission, even though its magnitude has grown significantly. Brilliant software engineers have created computing functions that perform more efficiently, achieving more benefit and speeding processing along with fewer lines of code and less total data transmission.

Review this material in Thread Scheduling (the first from 54:00 to the end and the second until 31:00), CPU Scheduling, and CPU Scheduling.

 

4c. Describe the differences between preemptive and non-preemptive scheduling

  • Describe a preemptive and a non-preemptive CPU scheduling technique.
  • What is a computing preemptive resource and a computing non-preemptive service?

Here's another analogy that is similar to computing. Let's say, you work in a call center and your job is to respond to inquiries from users. Should you respond to each call, first come, first serve, which means you could get bogged down responding to one question that requires an hour of research? Or is it better to respond to 20 inquiries that are easy to answer first, so you can make 20 people happy and deal with the one long response later?

A preemptive scheduling technique is one that can interrupt (preempt) a longer CPU processing job in order to run a shorter one first. Which job runs is determined by the priority of the process as assigned by the OS.

  • Shortest Remaining Job First (SRJF) is an example of a preemptive scheduling technique. Note that Shortest Remaining Time First (SRTF) is simply another way of saying Shortest Remaining Job First.

A non-preemptive scheduling technique is one that processes computing jobs as they are presented, i.e. it does not consider the length of time the job will take before it begins working on what is next in the queue. Non-preemptive doesn't care about length. It just runs to completion whatever is presented to it. No interruptions.

  • Shortest Job First is an example of a non-preemptive scheduling technique.

Shortest Remaining Time First (SRTF), which is the same as Shortest Remaining Job First (SRJF), requires a type of psychic ability to judge which job will actually have the shortest remaining time and finish first, since the OS only schedules; it does not read or investigate the jobs prior to scheduling.

There are several possible techniques for simulating this psychic ability.

Estimating SRTF requires techniques such as:

  1. Adaptive scheduling – changing the scheduling policy based on the past behavior of certain scheduling jobs and functions.
  2. Estimator function – taking the average of past job lengths.
  3. Exponential averaging – similar to taking an average, but weighting recent history more heavily than older history for that particular type of job (see the sketch after this list).
  4. Multi-level feedback scheduler – requires multiple ready queues that provide feedback to each other.

Every ready queue has a distinct and hierarchical priority. No two queues may have the same priority and each queue has a particular scheduling algorithm. Each higher-level, ready queue has a smaller time clock.

As jobs exceed the time clock of the higher-level queues, they descend to lower-level ready queues until the time to run the job does not exceed the clock of that level's queue, regardless of priority. If a job does not use up the time clock of its ready queue, it is promoted to the next higher-level ready queue.
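Here is a small C sketch of the exponential-averaging estimator mentioned in item 3 above (the burst times and the weight alpha = 0.5 are hypothetical): the next predicted CPU burst is alpha times the last actual burst plus (1 - alpha) times the previous prediction.

    #include <stdio.h>

    /* Exponential averaging: predict the next CPU burst from past bursts.
       predicted_next = alpha * actual_last + (1 - alpha) * predicted_last
       alpha near 1 trusts recent history; alpha near 0 trusts the long-run average. */
    static double next_burst_estimate(double actual_last, double predicted_last, double alpha) {
        return alpha * actual_last + (1.0 - alpha) * predicted_last;
    }

    int main(void) {
        double predicted = 10.0;                    /* initial guess, in milliseconds */
        double actual[] = { 6.0, 4.0, 13.0, 13.0 }; /* observed CPU bursts (hypothetical) */

        for (int i = 0; i < 4; i++) {
            predicted = next_burst_estimate(actual[i], predicted, 0.5);
            printf("after burst %d (%.1f ms), next estimate = %.2f ms\n",
                   i + 1, actual[i], predicted);
        }
        return 0;
    }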

You can take a computing preemptive resource away from a thread without suffering any severe consequences. Taking a computing non-preemptive resource away from a thread, on the other hand, would have noticeable consequences.

Preemption is an attractive proposition for scheduling. Preemption considers the length of time each operation needs to execute. Since most user interface operations are preemptive, many programmers believe preemption is the solution for every scheduling problem. However, that would be foolish. One simple solution for all scheduling issues does not exist; otherwise, scheduling would be trivial and we would not spend any time or research discussing or learning about it.

Review this material in Thread Scheduling (first from 54:00 to the end, second until 31:00) and Scheduling.

 

4d. Discuss four CPU scheduling algorithms

  • What are four CPU scheduling algorithms?
  • Describe a negative aspect of FIFO and Round Robin.

Let's look at four CPU scheduling algorithms:

  1. First In First Out (FIFO) – the first item in the docket is done until completion, followed by the second, third, and so on.
  2. Round Robin – a clock controls access to the CPU. The clock goes off, you obtain access to the CPU for a small, specified period of time. If the job is not finished, you have to wait until your next turn is up to continue.
  3. Shortest Job First – the CPU processes whatever job is smallest.
  4. Shortest Remaining Time First – the CPU processes whatever job is closest to completion first.

A negative aspect of FIFO is that users become annoyed when they have to wait for the CPU to finish a longer job that arrived in the docket before theirs. A negative aspect of Round Robin is that so much overhead goes into continually switching; its average response time can be much worse than the other options!
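The following C sketch (the burst times 24, 3, and 3 are hypothetical, in the spirit of textbook examples) computes the average waiting time for the same three jobs under FIFO order and under Shortest Job First, showing how one long job at the front of the docket hurts FIFO's average.

    #include <stdio.h>
    #include <stdlib.h>

    /* Average waiting time for jobs served in the given order (all arrive at time 0). */
    static double avg_wait(const int *burst, int n) {
        double total_wait = 0.0, clock = 0.0;
        for (int i = 0; i < n; i++) {
            total_wait += clock;                   /* job i waits until everything before it finishes */
            clock += burst[i];
        }
        return total_wait / n;
    }

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int fifo[] = { 24, 3, 3 };                 /* arrival order: one long job first */
        int sjf[]  = { 24, 3, 3 };

        qsort(sjf, 3, sizeof(int), cmp);           /* SJF = serve the shortest bursts first */

        printf("FIFO average wait: %.1f\n", avg_wait(fifo, 3));   /* (0 + 24 + 27) / 3 = 17.0 */
        printf("SJF  average wait: %.1f\n", avg_wait(sjf, 3));    /* (0 + 3 + 6)   / 3 = 3.0  */
        return 0;
    }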

As with all queue theory, classical approaches are available. However, each option has its own drawbacks. You should not implement them without considering the positive and negative consequences. Also, each option, even when implemented correctly, may have unintended consequences. Considering all of the scenarios that may occur, and the consequences that may result, is a critical part of robust and best-practice software design.

The most robust software is typically the simplest. But, an application, particularly involving interfacing with humans (involving a user experience, UX, and user interface, UI), usually requires more complex software to provide a graphical user interface (GUI) today's user expects. Simplicity is often lost and challenges that need to be resolved often ensue. 

Review this material in Thread Scheduling (first from 54:00 to the end and second until 31:00) and the first seven slides of CPU Scheduling.

 

Unit 4 Vocabulary

  • Computer bus
  • Computing cycles
  • CPU bus
  • CPU overhead
  • CPU scheduling
  • Fair Scheduling
  • FIFO (first in, first out)
  • Maximize throughput
  • Minimize overhead
  • Non-preemptive resource
  • Non-preemptive scheduling technique
  • Preemptive resource
  • Preemptive scheduling technique
  • Ready queue
  • Round Robin

Unit 5: Deadlock

5a. Explain what deadlock is in relation to operating systems

  • Define deadlock.
  • Describe the conditions for deadlock.

Deadlock is a specific type of computer scheduling starvation. Ordinary starvation delays the execution of a job for an inordinate amount of time; deadlock occurs when the process is never allowed to execute because it is caught in a circular starvation pattern.

Five conditions for deadlock include:

  1. Mutual Exclusion – only one thread at a time can use a resource.
  2. Hold and Wait – a thread holding at least one resource is waiting to acquire additional resources held by other threads.
  3. No Preemption – threads only release resources voluntarily.
  4. Circular Wait – there is a set of threads with a cyclic waiting pattern.
  5. Requesting Resources in the Wrong Order.
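As a minimal sketch with POSIX threads (an assumption; the two mutexes and the sleep are contrived to force the timing), the program below shows conditions 2, 4, and 5 in action: each thread holds one mutex and waits for the other's, so neither can ever proceed. Locking A and B in the same order in both threads would remove the circular wait and the deadlock.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *thread1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&A);                    /* holds A ... */
        sleep(1);
        pthread_mutex_lock(&B);                    /* ... and waits for B (held by thread2): circular wait */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *thread2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&B);                    /* holds B ... */
        sleep(1);
        pthread_mutex_lock(&A);                    /* ... and waits for A (held by thread1) */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);                    /* never returns: both threads are blocked forever */
        pthread_join(t2, NULL);
        return 0;
    }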

Review this material in Deadlock.

 

5b. Discuss deadlock prevention, avoidance, and the differences between each

  • Describe some techniques for preventing deadlock.
  • Define the Banker's Algorithm.

Brilliant, elegant thinking and computer-based logic may be required to avoid or overcome deadlock.

Some techniques for avoiding deadlock include:

  1. Providing excess resources.
  2. Never sharing resources.
  3. Eliminating wait time.
  4. Pre-requesting resources.
  5. Forcing resources to be requested in order.

One technique for avoiding deadlock is Banker's Algorithm. This technique involves incorporating a dynamic allocation of resources. Think about how a banker needs to calculate whether the bank should lend money to a customer based on how much money the individual has deposited at the bank.

In the same way, all resources that might be required, while not immediately seized, must be declared in advance by the thread. This process ensures each thread releases its resources after execution. Banker's Algorithm allows the sum of the maximum resource needs of all current threads to be greater than the total resources, as long as there is some order in which every thread can still obtain what it needs and finish.
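Here is a simplified C sketch of the safety check at the heart of Banker's Algorithm, reduced to a single resource type with hypothetical allocations and maximum claims (the real algorithm handles multiple resource types with the same idea): the state is safe only if some order lets every thread reach its declared maximum and finish.

    #include <stdio.h>
    #include <stdbool.h>

    #define T 3                                    /* number of threads */

    /* Safety check for a single resource type: grant nothing unless some order
       lets every thread obtain its maximum claim and then return what it holds. */
    static bool is_safe(int available, const int alloc[T], const int max_claim[T]) {
        bool finished[T] = { false, false, false };
        int free_units = available;

        for (int pass = 0; pass < T; pass++) {
            for (int i = 0; i < T; i++) {
                int need = max_claim[i] - alloc[i];
                if (!finished[i] && need <= free_units) {
                    free_units += alloc[i];        /* thread i can finish and release its resources */
                    finished[i] = true;
                }
            }
        }
        for (int i = 0; i < T; i++)
            if (!finished[i]) return false;        /* some thread can never finish: unsafe */
        return true;
    }

    int main(void) {
        int alloc[T]     = { 1, 2, 1 };            /* units each thread currently holds (hypothetical) */
        int max_claim[T] = { 4, 3, 7 };            /* maximum each thread declared it may ever need   */
        int available    = 3;                      /* units the OS still has free                      */

        printf("state is %s\n", is_safe(available, alloc, max_claim) ? "safe" : "unsafe");
        return 0;
    }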

Note that we are using a fluid, expanded definition of "program". The more formal understanding is that a program is the written software, dormant on a shut-down computer, ready to run. Although we often say "run the program", computer purists would dismiss this.

A process means the program is running. You might consider a thread to be a "program" since it executes one particular function the program does or is supposed to do. Each process must request resources from the OS, such as CPU time to run. The OS commands everything and will grant threads resources based on how the OS is written. Different results will ensue depending on how that OS is written and handles the allocation of resources to requesting threads. You can think about a thread as a single keystroke, when it shows up on the keyboard and when it is recorded in memory. Each of those could be its own thread.

Review this material in Deadlock.

 

5c. Describe deadlock detection and recovery

  • Describe three ways to discover deadlock.
  • Name some ways to recover from deadlock.

Deadlock describes a specific case of starvation in which a thread will NEVER be executed: its progress is logically impossible, not just operationally unlikely, although the two may look alike.

Deadlock cannot be allowed and must be anticipated if possible. If it is ignored, a reboot is required. If a reboot is unacceptable, the computer programmer must find a better and more elegant design that definitively avoids deadlock; otherwise, the software is assured to fail at some unspecified time, and most likely at the worst possible time.

Three ways to discover deadlock include:

  1. Infinite testing – this option is often impractical.
  2. Analysis – examine for no cycles in "lock" acquisition.
  3. Random timing testing.

Some ways to recover from deadlock include:

  1. Add more resources.
  2. Interrupt thread participating in deadlock.
  3. Prioritize non-competing threads.
  4. Prevent deadlock by design, i.e. proper ordering of resource acquisition (also known as dimension ordering).

Review these materials in:

 

Unit 5 Vocabulary

  • Banker's Algorithm
  • Circular wait
  • Deadlock
  • Hold and wait
  • Mutual exclusion
  • No preemption
  • Starvation

Unit 6: Memory Management

6a. Explain the memory hierarchy

  • Define computing memory hierarchy.
  • Explain why response time is important.
  • Explain why memory hierarchy is important.

Proper memory management is essential for computer environments to function correctly. A poorly designed memory hierarchy leads to poor performance in architectural design, in the predictions necessary for scheduling and computing, and in lower-level programming constructs.

Response time, complexity, and capacity are all related. A poor memory hierarchy leads to longer response times and poor computer performance across the system.

Review this material in Overview of Memory Hierarchy.

 

6b. Discuss how the operating system interacts with memory

  • What is memory management and why is it important?

Memory management refers to the OS function responsible for managing the computer's primary memory. The memory management function keeps track of the status of each memory location, whether it is allocated or free. It determines how to allocate memory among competing processes by deciding which process will get memory, when they will receive it, and how much they are allowed. When memory is allocated, it determines which memory locations will be assigned. It tracks when memory is freed or unallocated and updates the status.

Without proper memory management, the computer does not know where it is in processing a function. It is immediately lost and the programs will crash.

Review memory management in Paging.

 

6c. Describe how virtual memory works

  • Define virtual memory.
  • Explain how virtual memory works.

Virtual memory is an essential part of modern computing. It describes the hardware and software-based memory management capability that allows computers to compensate for physical memory shortages.

The OS temporarily transfers data from random access memory (RAM) to disk storage. Virtual address space is increased using active memory in RAM and inactive memory in hard disk drives (HDDs) to form contiguous addresses that hold the application and its data.

Review this material in Virtual Memory and More on Virtual Memory.

 

6d. Discuss three algorithms for dynamic memory allocation

  • What is dynamic memory allocation?
  • When should dynamic memory allocation be used?
  • What are three dynamic memory allocation techniques?

Dynamic memory allocation refers to the process of assigning memory space during execution, or run time. Elegant dynamic memory allocation techniques are critical: these allocation decisions happen while computer operations are in progress, so they must be handled efficiently (in milliseconds) before a computing operation outgrows the memory available for it to continue and the system crashes.

You should use dynamic memory allocation during the following conditions:

  1. When you do not know how much memory a program will need.
  2. When you want to use data structures without any upper limit of memory space.
  3. When you want to use memory space more efficiently. For example, let's say you have allocated memory space for a 1D array as an array [20], and you only use 10 memory spaces. Without dynamic memory allocation, the remaining 10 memory spaces will be wasted because other program variables will not be able to use them.
  4. Insertions and deletions in dynamically created lists are easy because we only manipulate addresses; insertions and deletions in statically allocated memory lead to more data movement and wasted memory.
  5. Dynamic memory allocation is also needed for structures and linked lists in programming.

Three techniques for dynamic memory allocation include:

  1. Stack allocation;
  2. Heap allocation;
  3. Fibonacci allocation.
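As a minimal sketch of heap allocation, the second technique above (standard C malloc and free; the prompt-and-squares example is made up for illustration), the array size is unknown until run time, so the memory is requested dynamically and returned when it is no longer needed.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int n;
        printf("How many values? ");
        if (scanf("%d", &n) != 1 || n <= 0) return 1;

        /* Heap allocation: the size is only known at run time, so the memory
           is requested dynamically instead of being declared as a fixed array. */
        int *values = malloc((size_t)n * sizeof *values);
        if (values == NULL) return 1;              /* allocation can fail; always check */

        for (int i = 0; i < n; i++)
            values[i] = i * i;

        printf("last value: %d\n", values[n - 1]);
        free(values);                              /* return the memory so it can be reused */
        return 0;
    }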

Review this material in Memory Management.

 

6e. Explain methods of memory access

  • Name three methods of memory access.

Computer engineers created different methods of memory access because no one method provides a viable solution for all situations, such as the need for different response times.

  1. Random Access Memory – each memory location has a unique address. You can use this unique address to reach any memory location, in the same amount of time, in any order.
  2. Sequential Access Memory – allows memory access in sequence (in order).
  3. Direct Access Memory – information is stored in tracks. Each track has a separate read/write head.

Review this material in Segmentation, Paging, An Introduction to Intel Memory Management, and The GDT.

 

6f. Describe paging and page replacement algorithms

  • Define paging. What is a page in an OS?
  • Define a page frame. What is a page transfer?
  • How does paging work? When does paging occur?

Paging is a memory management scheme in which the operating system stores and retrieves data from secondary storage in fixed-size blocks; computer operating systems use paging for virtual memory management.

A page is a unit of memory whose location is held in a page table. It is a fixed-length, contiguous block of virtual memory, described by a single entry in the page table, and is the smallest unit of data for memory management in a virtual memory operating system.

A page frame is the smallest fixed-length contiguous block of physical memory into which memory pages are mapped by the operating system.

The transfer of pages between main memory and an auxiliary store, such as a hard disk drive, is referred to as paging or swapping. Page replacement algorithms decide which memory pages to page out (swap out, or write to disk) when a page of memory needs to be allocated.

Page replacement happens when a requested page is not in memory (page fault) and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.
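
One simple page replacement policy is first-in, first-out (FIFO), which evicts the page that has been resident the longest. The C sketch below simulates FIFO for a made-up reference string and three frames; the numbers are illustrative only, and real operating systems typically use more sophisticated policies (such as approximations of LRU).

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3};  /* hypothetical page reference string */
        int nrefs = sizeof refs / sizeof refs[0];
        int frames[FRAMES] = {-1, -1, -1};            /* -1 means the frame is empty */
        int next_victim = 0, faults = 0;

        for (int i = 0; i < nrefs; i++) {
            int hit = 0;
            for (int j = 0; j < FRAMES; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }

            if (!hit) {                               /* page fault: replace the oldest page */
                frames[next_victim] = refs[i];
                next_victim = (next_victim + 1) % FRAMES;
                faults++;
            }
        }
        printf("page faults: %d\n", faults);
        return 0;
    }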

Review this material in Paging.

 

Unit 6 Vocabulary

  • Computer memory
  • Direct-access memory management
  • Dynamic memory allocation
  • Memory access
  • Memory hierarchy
  • Memory management
  • Page
  • Page frame
  • Page replacement
  • Page swapping
  • Paging
  • Random-access memory management
  • Response time
  • Sequential access memory management
  • Virtual memory

Unit 7: File System

7a. Describe a file system and its purpose

  • What is a computer file system?
  • What is a common and well-known file system in use today?

A computing file system controls how data is stored and retrieved. File systems usually consist of files separated into groups called directories. Directories can contain files or additional directories. Today, the most commonly used file system with Windows is NTFS (new technology file system).

Organizing computing information is critical to the proper operation of computers. A file system tells the computer where all the information it needs to run is located, or should be located. NTFS is a common Windows file system.

Review this material in File Systems and Disk Management, File Systems, and File System Management.

 

7b. Discuss various file allocation methods

  • Why do we need a file system?
  • What methods exist for creating a file system?

Without a file system, data placed in a storage medium would simply be one large body of data with no way to tell where one piece of data stops and the next begins. This would make modern computing impossible. By separating data and giving each piece its own address within the file system, necessary files can be isolated and identified quickly and efficiently.

There are many different kinds of file systems. Each one has a different structure and logic, and different properties of speed, flexibility, security, size, and more.

Methods of storing data include storing it on a local data storage device or providing file access via a network protocol. Some file systems are "virtual", which means files are computed on request or mapped in a different file system.
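
Because a file system organizes files into named directories, a program can ask the operating system to enumerate a directory's contents. Below is a minimal, hedged POSIX C sketch using opendir and readdir; the directory path "." is only an example.

    #include <stdio.h>
    #include <dirent.h>

    int main(void) {
        DIR *d = opendir(".");                 /* open the current directory (example path) */
        if (d == NULL) {
            perror("opendir");
            return 1;
        }
        struct dirent *entry;
        while ((entry = readdir(d)) != NULL)   /* iterate over directory entries */
            printf("%s\n", entry->d_name);     /* print each file or subdirectory name */
        closedir(d);
        return 0;
    }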

Review file systems in File Management and The File System.

 

7c. Explain disk allocation and associated algorithms

  • What is disk allocation?
  • What are methods for disk allocation?

Disk allocation determines how disk blocks are allocated for files. Three methods of disk allocation include contiguous allocation, linked allocation, and indexed allocation.

Effective disk allocation algorithms are important for speed, efficiency, and response time. Effective algorithms, such as the three mentioned here, allow for the efficient use of disk space and fast access to file blocks required for computing operation.
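
As a rough, hedged sketch of one of these methods, the C fragment below models linked allocation: each disk block records the index of the next block in the file, so a file's blocks need not be contiguous. The block table and starting block are invented for illustration only.

    #include <stdio.h>

    #define NBLOCKS 16
    #define END_OF_FILE -1

    /* next[i] holds the index of the block that follows block i in its file. */
    /* Hypothetical file: blocks 2 -> 7 -> 5 -> end.                          */
    int next[NBLOCKS] = { [2] = 7, [7] = 5, [5] = END_OF_FILE };

    /* Walk the chain of blocks that make up one file, starting at its first block. */
    void print_file_blocks(int start) {
        for (int b = start; b != END_OF_FILE; b = next[b])
            printf("block %d\n", b);
    }

    int main(void) {
        print_file_blocks(2);   /* a directory entry would record that this file starts at block 2 */
        return 0;
    }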

Review disk allocation in File Systems and Disk Management, Queueing Models, and Disk Scheduling Lecture Notes.

 

Unit 7 Vocabulary

  • Contiguous allocation
  • Directory
  • Disk allocation
  • File blocks
  • File system
  • Indexed allocation
  • Linked allocation
  • NTFS

Unit 8: Security

8a. Discuss and identify types of security threats

  • Describe some security threats operating systems face.

Here is a list of some common security threats to operating systems:

Denial of Service (DOS) – A DOS attack prevents authorized users from accessing a computing system they legitimately need. The attack floods the system with unnecessary requests, which prevents legitimate requests from reaching it.

Logic Bomb – A logic bomb is malicious code that appears normal, but only executes for nefarious purposes when certain conditions are met. 

Port Scanning – Port scanning allows hackers to detect system vulnerabilities so they can attack the system, such as via an open port that should not be open, i.e. an unlocked door. 

Trap Door – A trap door is a defect in the computer code that allows malicious actors to exploit the flaw and gain access to valuable information.

Trojan Horse – A Trojan Horse traps and stores user login credentials, to send to malicious hackers who can use them to log in by impersonating the real and authorized user. 

Virus – A computer virus, like a biological virus, replicates itself over and over again to spread to other computers. Computer viruses can disrupt computing operations by modifying or deleting user files and crashing the system.

Worm – A computer worm is a standalone malware computer program that replicates itself over and over again to spread to other computers. A worm will use the machine as a host to scan and infect other computers. Like a virus, computer worms can disrupt computing operations by modifying or deleting user files and crashing the system.

The difference between a worm and a virus is that a worm operates more or less independently of other files, whereas a virus depends on a host program to spread itself.

Review these security threats in Security Introduction, Protection and Security, and Security.

 

8b. Describe various types of malware

  • What are some types of malware that can affect operating systems?

Malware is software designed to infiltrate or damage a computer system without the owner's informed consent. Different types of malware exist, are written by different coders, execute differently, and may be variants of other existing malware. These days, the primary incentives for this type of fraud are financial gain and disruption. Professional criminals continue to extort money from companies, organizations, and individuals, large and small.

In addition to the security threats described above, some additional types of malware include:

Adware – Software that automatically displays or downloads advertising material when the user is online.

Backdoor – a feature or defect of a computer system that allows fraudsters to access data surreptitiously.

Bots and Botnets – a computer that is compromised so fraudsters can control it remotely. Cybercriminals use bots to launch attacks on other computers and often create networks of controlled computers, known as botnets.

Browser Hijacker – a type of malware that modifies a web browser's settings without the user's permission, allowing fraudsters to display unwanted advertising in the user's browser.

Code Design Bug – a mistake or error in a program's source code, design, or operating system.

Crimeware – malware designed to carry out or facilitate illegal online activity.

Cryptojacking – malware that mines a user's computer for cryptocurrencies, such as Bitcoin.

Fileless Malware – malware that exists exclusively in computer memory, such as RAM.

Grayware – software that resides in a "gray area" between malware and legitimate conventional software. For example, grayware might track your online behavior or send a barrage of pop-up windows. Grayware is not only annoying, but it can affect computer performance and expose it to security risks.

Hybrid or Combo Malware – a combination of two or more different types of malware, such as a Trojan horse or worm attached to adware or malware.

Keylogger – malware that records every keystroke the user makes, so fraudsters can access passwords and other confidential information.

Malicious Mobile Apps – malware designed to target mobile devices, such as smartphones and tablets, to access private data.

Malvertising – the use of online advertising to spread malware. Malvertising typically injects malicious or malware-laden ads into legitimate online advertising networks and webpages.

RAM Scraper – malware that scans the memory of digital devices, such as point-of-sale systems, so fraudsters can collect personal data, credit card numbers, and personal identification numbers.

Ransomware – malware that threatens to publish the victim's data or perpetually block access to it until a ransom is paid.

Rogue Security Software – malware that misleads users into believing a virus exists on their computer and tries to convince them to pay for a fake malware removal tool that actually installs malware onto their computer.

Rootkits – a collection of computer software designed to allow fraudsters to access a user's computer by masking itself as another type of software.

Social Engineering or Phishing – a range of techniques designed to trick people into giving fraudsters their personal data, such as usernames, passwords, and credit card numbers, by disguising themselves as a trustworthy entity.

Spyware – software that enables fraudsters to obtain data from a computer user's activities by transmitting data covertly from their hard drive.

Review these types of malware in Malware: Viruses and Worms and Bots and Botnets.

 

8c. Explain basic security techniques

  • What are some security techniques used to protect operating systems?

OS security refers to specified steps or measures used to protect the OS from malware, threats, viruses, worms, and other remote hacker intrusions. OS security encompasses all preventive-control techniques, which safeguard any computer assets capable of being stolen, edited, or deleted if OS security is compromised.

Some common security techniques include:

  1. Performing regular OS patch updates;
  2. Installing updated antivirus engines and software;
  3. Scrutinizing all incoming and outgoing network traffic through a firewall;
  4. Creating secure accounts with required privileges only (i.e., user management).

Since security requires additional time and effort, it is frequently seen as a burden. It is not a one-and-done task; rather, it is an attitude. Good, basic security is like good hygiene – you could even call the basics of information security "security hygiene". Physical security and information security are bound together inseparably, since both are critical.

For example, it is terribly important to take the time to update software, which typically includes the latest security software fixes. Installing antivirus software that scans incoming bits for patterns known to be malware signatures is key. In addition, users should only allow computing activity that is necessary and close all "open doors" that do not need to be open. Good identity and access management is critical.
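
As a very simplified, hedged illustration of the signature-scanning idea mentioned above (real antivirus engines are far more sophisticated), the C sketch below searches a buffer of incoming bytes for one hard-coded, made-up signature.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical malware signature, used only for this illustration. */
    static const unsigned char signature[] = { 0xDE, 0xAD, 0xBE, 0xEF };

    /* Return 1 if the signature appears anywhere in the buffer, 0 otherwise. */
    int contains_signature(const unsigned char *buf, size_t len) {
        size_t sig_len = sizeof signature;
        if (len < sig_len) return 0;
        for (size_t i = 0; i + sig_len <= len; i++)
            if (memcmp(buf + i, signature, sig_len) == 0)
                return 1;
        return 0;
    }

    int main(void) {
        unsigned char incoming[] = { 0x00, 0xDE, 0xAD, 0xBE, 0xEF, 0x10 };
        printf("suspicious: %s\n",
               contains_signature(incoming, sizeof incoming) ? "yes" : "no");
        return 0;
    }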

Review this material in Protection, Security Strategies, Security Techniques, and Guide to Intrusion Detection and Prevention Systems.

 

Unit 8 Vocabulary

  • Adware
  • Backdoor
  • Bots and botnets
  • Browser hijacker
  • Code design bug
  • Crimeware
  • Cryptojacking
  • Fileless malware
  • Grayware
  • Hacker
  • Hybrid malware
  • Keylogger
  • Malicious mobile apps
  • Malvertising
  • Malware
  • Phishing
  • RAM scraper
  • Ransomware
  • Rogue security software

Unit 9: Networking

9a. Explain basic networking principles

  • Describe some of the basic elements involved in networking.

Computer networking refers to the ability of different computers, peripherals, and things we might not consider to be computers, to talk to each other. For example, networking allows computer users to share printers, make phone calls, use social media tools, and send and receive email.

Make sure you are familiar with the definitions for the following terms related to computer networking:

Client – a desktop computer or workstation that can obtain information and applications from a server.

Hub – a basic networking device that connects multiple computers with other network devices.

Local Operating System – the operating system that manages a desktop computer or workstation.

Network Interface Card (NIC) – the computer hardware that connects computers and devices to the computer network.

Network Operating System – the operating system that manages network resources and performs the special functions that connect computers and devices, such as to a local area network (LAN) or a wide area network (WAN) over the Internet.

Router – a networking device that exchanges data packets among computer networks. Routers direct the traffic functions on the Internet.

Server – a computer that provides data to other computers. It may serve data to systems on a local area network (LAN) or a wide area network (WAN) over the Internet. Many types of servers exist, including web servers, mail servers, and file servers.

Shared Printers and Other Peripherals – hardware resources provided to network users by servers. Resources include data files, printers, software, and other items clients share on the network.

Switch – a computer network device that connects other devices together.

Review these basic elements of networking in Introduction to Networks, Computer Networking, and Networking II.

 

9b. Discuss protocols and how they are used

  • What are some common networking protocols? Include their acronyms, if typically used.

Computer networking requires different tools, in the form of common software programs or protocols, so computers can communicate with each other even if they were produced by different manufacturers. This is where networking protocol standards come into play. Any manufacturer can refer to the networking protocol standard and know what their product needs to do to talk with other equipment or computers made by other manufacturers.

Here are some common networking protocols:

  1. Domain Name System (DNS)
  2. File Transfer Protocol (FTP)
  3. Hypertext Transfer Protocol (HTTP)
  4. Hypertext Transfer Protocol over SSL/TLS (HTTPS)
  5. Internet Message Access Protocol (IMAP)
  6. Internet Protocol (IP)
  7. Post Office Protocol version 3 (POP 3)
  8. Secure Shell (SSH)
  9. Simple Mail Transfer Protocol (SMTP)
  10. Simple Network Management Protocol (SNMP)
  11. Telnet
  12. Transmission Control Protocol (TCP)
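
Several of the protocols listed above come into play whenever a program opens a network connection. As a hedged sketch only, the POSIX C fragment below uses DNS (via getaddrinfo), TCP, and IP to send a minimal HTTP request; the host example.com is simply a placeholder, and error handling is abbreviated.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_family   = AF_UNSPEC;      /* IPv4 or IPv6 (IP layer)   */
        hints.ai_socktype = SOCK_STREAM;    /* TCP (transport layer)     */

        /* DNS: resolve the placeholder host name to an address. */
        if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
            fprintf(stderr, "name resolution failed\n");
            return 1;
        }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
            fprintf(stderr, "connect failed\n");
            return 1;
        }

        /* HTTP (application layer): send a minimal request over the TCP connection. */
        const char *request =
            "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
        send(fd, request, strlen(request), 0);

        char buf[512];
        ssize_t n = recv(fd, buf, sizeof buf - 1, 0);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);            /* print the start of the server's reply */
        }

        close(fd);
        freeaddrinfo(res);
        return 0;
    }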

Review this material in Layering and Link Layer.

 

9c. Explain reference models, particularly TCP/IP and OSI

  • Define the OSI 7 Layer model. What do computer programmers use it for?
  • What is the TCP/IP model? What is it used for?

The OSI/ISO 7 Layer Model

The International Organization for Standardization (ISO), an international standard-setting body, created the OSI (Open Systems Interconnection) reference model which describes the functions of a communication system. The OSI model provides a framework for creating and implementing networking standards and devices and describes how network applications on different computers can communicate through network media.

Engineers use the OSI/ISO 7 Layer model as a teaching model to explain the different functions computers have to be able to complete to communicate. They do not use this model to manufacture or build equipment.

The TCP/IP Model

The United States Defense Advanced Research Project Agency (DARPA) created the TCP/IP model in the 1970s as an open, vendor-neutral, public networking model. Just like the OSI model, it describes general guidelines for designing and implementing computer protocols. 

These computer protocols consist of four layers: network access, Internet, transport, and application.

The four TCP/IP layers can be mapped alongside the OSI 7 Layer model, showing where the two models are logically equivalent even though they are distinct: the TCP/IP application layer roughly covers OSI's application, presentation, and session layers; the transport layers correspond directly; the Internet layer corresponds to OSI's network layer; and the network access layer covers OSI's data link and physical layers.

Engineers use the TCP/IP model to manufacture and build equipment and as a practical model for internetworking communication between devices.

The OSI/ISO 7 Layer model and the TCP/IP networking protocol suite are foundational to modern computer networking. Computer engineers use the OSI model to teach and discuss networking; although they also use the TCP/IP model in discussion, they tend to fall back on the OSI 7 Layer model's terminology in professional conversations. However, engineers use the TCP/IP model to build equipment and to communicate between computing devices.

Review this material in:

 

Unit 9 Vocabulary

  • Domain Name System (DNS)
  • File Transfer Protocol (FTP)
  • Hypertext Transfer Protocol (HTTP)
  • Hypertext Transfer Protocol over SSL/TLS (HTTPS)
  • Internet Message Access Protocol (IMAP)
  • Internet Protocol (IP)
  • Networking
  • Networking protocol
  • OSI/ISO 7 Layer Model
  • Post Office Protocol version 3 (POP 3)
  • Protocol
  • Secure Shell (SSH)
  • Simple Mail Transfer Protocol (SMTP)
  • Simple Network Management Protocol (SNMP)
  • Telnet
  • Transmission Control Protocol (TCP)