CS402 Study Guide

Site: Saylor Academy
Course: CS402: Computer Communications and Networks
Book: CS402 Study Guide

Navigating this Study Guide

Study Guide Structure

In this study guide, the sections in each unit (1a., 1b., etc.) are the learning outcomes of that unit. 

Beneath each learning outcome are:

  • questions for you to answer independently;
  • a brief summary of the learning outcome topic; and
  • resources related to the learning outcome. 

At the end of each unit, there is also a list of suggested vocabulary words.

 

How to Use this Study Guide

  1. Review the entire course by reading the learning outcome summaries and suggested resources.
  2. Test your understanding of the course information by answering questions related to each unit learning outcome and defining and memorizing the vocabulary words at the end of each unit.

By clicking on the gear button on the top right of the screen, you can print the study guide. Then you can make notes, highlight, and underline as you work.

Through reviewing and completing the study guide, you should gain a deeper understanding of each learning outcome in the course and be better prepared for the final exam!

Unit 1: Networking Fundamentals

1a. describe the evolution of computer networks and the internet

  • What are the major milestones that highlight the invention and progress of computer networks as we know them today?

In the early days of computing, through the 1950s, work was done by "batch processing": users physically carried their jobs, such as decks of punched cards, to the computer, which ran them one at a time. These systems supported only a single user at a time, but they avoided the headache of running wires to each of the many users of the mainframe.

In the late 50s and 60s, batch processing gave way to interactive time-sharing, the earliest ancestor of today's "networks". Users' terminals were connected directly to a mainframe computer, and they shared the processing time of that computer.

The biggest move toward today's networks started in 1964, when Paul Baran wrote reports outlining packet-switched networks. It was not until 1969 that the first nodes of a true network, ARPANET, became operational. ARPANET featured two especially important applications: Telnet, which allowed a user on one computer to access a remote computer and run programs and commands as if they were local, and FTP, which allowed high-speed file transfer between computers.

In 1972, the first networked email system was developed by Ray Tomlinson. Interestingly, email was invented before TCP/IP became operational – that giant step did not occur until 1980. By 1983, ARPANET had fully adopted the TCP/IP protocol. ARPANET was retired in 1990, and the World Wide Web was introduced in 1991. Early access to online information was largely text-based, through tools such as the Gopher menu system; the first popular graphical web browser, Mosaic, followed in 1993.

To review, see Introduction to Networking Fundamentals. For more detail about computing milestones, see this suggested additional article on Computer Networks.

 

1b. describe the difference between a computer network and a distributed system

  • What are the main differences between a computer network and a distributed system?
  • What applications are better suited for each one of the systems?

A network is a system that consists of many computers connected to each other but operating independently. Users are aware of each individual computer and resource.

A distributed system consists of many computers working together as a single unit. It automatically allocates jobs among the available processors, making the allocation completely transparent: users see the whole system as one big machine. A distributed system depends on a layer of software called middleware, while a network is simply a group of computers connected together. Networks are best when you need to share resources and make them available to everyone, such as a printer.

To review, see Introduction to Networking Fundamentals. For a detailed description of distributed computing and several examples for distributed systems, see this suggested additional article on Distributed Computing.

 

1c. explain the use of layers in networking

  • What are the 7 layers of the OSI reference model and the counterpart layers of the TCP/IP model?
  • What is the function of each layer?

The challenges involved with networking computers together could be overwhelming if they were considered as one single system. At the beginning of networking, designers decided to use a "divide and conquer" approach to divide network functions into logical layers. This split bigger problems into smaller, more manageable problems.

Each of these layers is composed of software and/or hardware modules that perform related network services. Each layer uses the services provided by the layer immediately underneath it, and provides services to the layer above it. Data to be transmitted must pass down through the layers of the source node to the communication medium (that is, the physical link). The data travels across the physical link and up through the layers of the destination node to the user. This is called end-to-end communication. As long as the interface between the layers is unchanged, implementers are free to change how a given function is accomplished.

The interface, as defined by the protocol, cannot change. Each layer in the stack deals with messages, which are normally limited to a maximum size. Each layer in the stack adds a header to its messages. This header is used to synchronize the data with the same layer in the remote peer. The header contains information that will let the remote peer decide what to do with the data. Data flows down the stack on the sending host. Starting with the raw data in the application, each layer will add a header and pass it down to the next layer below it. When the remote peer receives the message, each layer reads the header, makes a decision on what to do with the message, removes the header, and passes it to the layer above it, which will repeat the same process. The process looks like this:


Applications running on both machines need to exchange data. We will use the term Application Data Unit (ADU) to refer to the data units exchanged by the two applications. The transport layer receives data from the application, divides it into manageable units, and attaches a transport header (TH), forming what is called a "segment". The TCP header has a minimum length of 20 bytes. The transport layer then gives the segment to the network layer. The network layer attaches a 20-byte header for IPv4 (NH) to form a "packet" and gives it to the data link layer. The data link layer forms a frame by adding a header, denoted here as DLH, and optionally a trailer, shown here as DLT. Not all data link protocols add a trailer, but Ethernet adds a 14-byte header and a 4-byte trailer, not counting the 7-byte preamble and the 1-byte Start of Frame Delimiter. The data portion of an Ethernet frame has a maximum of 1500 bytes.

Notice that by now a lot of overhead has been added before the actual data reaches the physical layer. For TCP/IP with Ethernet in the data link layer, that is 20 (TH) + 20 (NH) + 14 (DLH) + 4 (DLT) + 7 (Pr) + 1 (SFD) = 66 bytes. A full-size frame carrying 1500 bytes of data will have a minimum length of 1500 + 66 = 1566 bytes. That represents an overhead of 66/1566 = 4.2%. For smaller amounts of data, such as 100 bytes, the overhead is a whopping 66/166 = 39.7%, and it goes even higher for smaller payloads. Needless to say, larger frame sizes are preferred.
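To make the arithmetic concrete, here is a minimal Python sketch (not part of the course materials) that reproduces the calculation above, using the TCP/IPv4/Ethernet header and trailer sizes just given:

# Header/trailer sizes (bytes) for TCP/IPv4 over Ethernet, from the text.
TH, NH, DLH, DLT, PREAMBLE, SFD = 20, 20, 14, 4, 7, 1
OVERHEAD = TH + NH + DLH + DLT + PREAMBLE + SFD  # 66 bytes in total

def overhead_pct(data_bytes):
    """Percentage of the transmitted frame taken up by headers/trailers."""
    return 100 * OVERHEAD / (data_bytes + OVERHEAD)

print(round(overhead_pct(1500), 1))  # ~4.2  -> full-size frame
print(round(overhead_pct(100), 1))   # ~39.8 -> small payload, heavy overhead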

One final point: notice in the figure that we show all the layers on the source host as talking with the corresponding layer in the destination host. This is, of course, a "logical" connection. Each layer uses the header to send information to its counterpart layer on the other side. We think of this as each layer having a logical connection to the counterpart layer on the other side.

To review, see The Reference Models.

 

1d. explain the difference between Local Area Networks (LANs), Metropolitan Area Networks (MANs), and Wide Area Networks (WANs)

  • What is the difference between a LAN, MAN, and WAN?
  • 4 different topologies for a LAN include ring, bus, star, and mesh. Of these, which one:
    • is more effective under low load conditions?
    • is more effective under high load conditions?
    • yields the highest reliability?
    • is more deterministic?

A local area network (LAN) is a network where all the nodes share the same physical medium and have a common broadcast domain. Traditionally, LANs were described as networks of computers privately owned and in close proximity, like a home network. However, the total size of a LAN can extend for miles – the primary requirement for a network to be classified as a LAN is that all nodes share the same broadcast domain and physical medium.

A MAN, or Metropolitan Area Network, covers a city. A great example of a MAN is the cable television network that is ubiquitous in cities all around the country.

A WAN, or Wide Area Network, spans a large geographical area that could be a country, a continent, or even the entire planet. It is composed of a combination of hosts and routers that span large areas.

In a LAN with a bus topology, multiple nodes connect to a single bus and use one of the typical bus access methods, like CSMA, to access the medium. A bus topology provides excellent performance under low load, since there are few collisions and stations have access to the medium when they need it. At high load, however, the medium is busy most of the time, which results in multiple collisions and low throughput. In a bus configuration, a single node failure has no effect on the operation of the other nodes in the network, which makes it reliable. However, a link failure will split the network into two or more isolated networks, which could leave some members of the network unable to access essential services.

In a LAN with a ring topology, the nodes are physically connected in a ring. Each node must wait for a token to arrive before it can send data. One drawback of this configuration is that if any host in the ring fails, the ring breaks and the network goes down. Contrast that with Ethernet using CSMA – every station works independently, and one failure will not affect the operation of the others. But a token ring has the advantage of being essentially deterministic: once a station releases the token, it can determine when the medium will next be available to it, based on the number of stations in the ring. This topology is also relatively good under heavy loads. Some other features of the token ring topology will be considered later in this study guide.

In a star topology, all nodes connect to a central physical interface. This is an effective communication strategy, but it suffers from the fact that the central interface is a single point of failure: if it fails, the whole network could go down. The failure of a single node, however, has no effect on the operation of the rest of the system.

In theory, a full mesh topology would provide the highest performance and redundancy for a small network. However, this topology also has the highest cost, and is virtually impossible if the number of hosts exceeds the number of available network interfaces per host.

To review, see Services and Protocols and The Reference Models. For more detailed information, see this suggested additional article on Local Area Networks.

 

1e. explain the role of the Network Request for Comments (RFC) as a mechanism to develop, review, and incorporate standard changes in a network protocol

  • When would you use an RFC as opposed to a manufacturer's specification or other forms of documentation, like ISO standards or IEEE standards?

RFCs were originally developed as part of the ARPANET project as a means to disclose, share, and generate discussions about a particular protocol. Today, they have become the official publication channel for the Internet Engineering Task Force (IETF). They are the de facto standards that define and describe networking protocols and algorithms.

You use RFCs to learn about the operation of a generic protocol like DNS or OSPF. RFCs give the complete specifications and algorithms used by the open protocols that are common in networks. RFCs are tightly controlled by the IETF; you would not turn to them when troubleshooting a specific network issue or a failing component – for that, a manufacturer's documentation is the better source.

The Institute of Electrical and Electronics Engineers (IEEE) and the International Organization for Standardization (ISO) are two independent non-governmental organizations that have created a number of standards common in networking. IEEE 802.11, for example, defines the basic operation of wireless networks. ISO produces many types of standards, not only for networking; it was heavily involved in the early days of networking, and its OSI standard described the 7-layer model that has been used ever since. Still, RFCs remain the most prominent resource when trying to learn about networking protocols.

Read the full history and uses of RFCs in The Role of RFC in Computer Networks.

 

1f. describe different switching techniques, such as packet, circuit, and virtual calls

  • What switching technique requires the lowest overhead, but offers the worst resiliency in case of router failures?
  • What switching technique provides the best reliability in case of router failures, but has a high overhead?

Circuit switching, datagram packet switching, and virtual call switching are the three switching techniques used in computer networks.

With circuit switching, the path that all packets will follow is established at the beginning and kept the same throughout the whole exchange of data. This model was followed by telephone networks, where a circuit from caller to recipient is established at the beginning and kept open for the duration of the call. Before data flows, a path must be established from source to destination, and all packets follow exactly the same route. Establishing the circuit takes time, which can be considered a disadvantage. Circuit switching also performs the worst when a router fails: the initial "circuit establishment" phase must be repeated to set up a new circuit, leading to packet losses and transmission delays. Circuit switching has advantages, however, like low overhead – once the circuit is established, all that is needed in the header to route the packets is a small circuit ID number. QoS is also easy to implement with circuit switching, since quality parameters can be negotiated during circuit establishment. Virtual call (virtual circuit) switching is a variation of circuit switching in which packets from different calls can be interleaved during transmission, because the routing decision is based on their virtual circuit ID.

Datagram packet switching was modeled after the postal system, where no two packets need to follow the same route. Each packet carries a header with full address information. Routers in the path make routing decisions based on the final destination using routing tables obtained by routing protocols. This gives the best flexibility and reliability, even in the case of router failures or congestion. Another advantage is that there are no delays for data transmission, since packets can be sent immediately without waiting to establish a circuit. However, datagram packet switching requires the highest overhead, since the full addressing information is needed on each packet. IP is a classic example of a datagram packet switching technique.

A real-life implementation of virtual circuit switching was a technique called asynchronous transfer mode (ATM). Data was sent in 53-byte cells, of which only 5 bytes were used for the header. This is a small header, since it was primarily used to indicate the virtual path and circuit ID. The percentage of overhead for ATM was 5/53 = 9.4%.

To review, see Introduction to Networking Fundamentals. For more detailed information, see this suggested additional article on Circuit Switching.

 

1g. differentiate between connection-oriented and connectionless services

  • If you were designing a registered electronic mail system, would you want a connection-oriented service or a connectionless service?
  • If you were designing a network to allow users to log in remotely, would you want a connection-oriented or connectionless service?
  • UDP is a connectionless service. What kinds of real-life scenarios are best suited for UDP?

In a connection-oriented service, a connection is established ahead of time, before data is sent through the network. In a connectionless service, by contrast, each data packet must carry the full address of its destination and is sent independently of the others. The universally recognized example of a connection-oriented service is TCP. Note that TCP provides connection-oriented communication while using the connectionless infrastructure provided by IP. You would want a connection-oriented service for applications that require user login, or for data-sensitive applications that cannot tolerate missing information, such as money transfers.

UDP is the most common example of a connectionless service, where data flows without any initial agreement being established and occasional data loss is tolerated. Examples of connectionless applications are video streaming, outward data dissemination, and even electronic mail systems. TCP would be a poor fit for live video streaming, because TCP's reliability guarantee requires that any lost segment be retransmitted: transmission would stall while the source retransmitted that particular segment, even if the data it carried was insignificant. That would create continuous interruptions in the stream. It is better to see a small blip in a video than to have the video interrupted every few seconds.

Viewers of sporting events, for example, would rather lose a bit of color definition and still see a winning play than have the video stop during the play. At the opposite extreme, if you are logging in remotely to your bank account to retrieve money, you would much prefer a connection-oriented protocol, where every single cent of your money is accounted for. Most protocols in the TCP/IP suite have been designed to use either TCP or UDP. One big exception is DNS, which can use either UDP or TCP depending on the type of data being handled.

To read more about connection-oriented and connectionless services, see Services and Protocols.

 

1h. describe the differences between wireless, fiber, and copper media for the transmission of data in a computer network

  • What maximum data rates can be obtained with cat-6 STP cable?
  • What type of cable would you recommend using if high data rates are expected while spanning long distances?

The capacity for transmitting data has improved dramatically over time across a variety of media types, including UTP, STP, fiber, and even wireless media. In the early days of networking, copper cable was the standard. With the invention of Ethernet in 1973, and its standardization and commercial introduction in 1980, coaxial cable became the standard. You would often see a thick orange coaxial cable running end-to-end through the buildings of organizations that used this technology. Connecting stations to this cable was not an easy task, so "vampire taps" were introduced to make connections. Also known as "piercing taps", these devices clamped onto the coaxial cable and pierced into it, like a "vampire", to introduce a pair of probes that contacted the internal copper conductors.

As networks evolved, it became obvious that this type of cable was too thick, too heavy, and too difficult to handle if the transition to "personal computers" was going to happen smoothly. That led to the creation of UTP, followed by STP cable, in the mid-80s. These cables all suffered from one big drawback: low bandwidth. At the time there were no bandwidth-hungry applications, so regular UTP cables worked fine, but with the many bandwidth-hungry applications we have today, copper wire required extreme improvements if it was going to remain in use.

Over the years, STP cables have improved by leaps and bounds. For example, cat-6 cable can support data rates of up to 10 Gbps at a maximum length of 55 meters, or 1 Gbps if the distance increases to around 100 meters. For short distances and rates like these, they are a good choice if cost is a concern. However, copper is not a good choice for long-distance, high data-rate hauls – for that, you need to step up to fiber.

A comparison between different transmission media options can be found in Transmission Media.

 

Unit 1 Vocabulary

This vocabulary list includes terms and acronyms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course.

Try to think of the reason why each term is included.

  • ARPANET
  • Telnet
  • FTP
  • Batch processing
  • Distributed System
  • LAN
  • MAN
  • WAN
  • Physical Layer
  • Data Link Layer
  • Ethernet
  • Network Layer
  • IP
  • Transport Layer
  • TCP
  • UDP
  • Ring topology
  • Bus topology
  • Star topology
  • Mesh topology
  • RFC
  • OSI
  • IEEE
  • Circuit Switching
  • Virtual Circuit Switching
  • Datagram Packet Switching
  • Broadcast
  • Connection-oriented service
  • Connectionless service
  • UTP
  • STP
  • Fiber

Unit 2: The Basics of Protocols

2a. list and describe each of the layers in the Open Systems Interconnection (OSI) model and the TCP/IP model

  • What layer in the TCP/IP model combines the application, presentation, and session layers from the OSI model?
  • What layer of the TCP/IP model has the same function as the physical and data link layers in the OSI model?

When you look at the OSI model, you will see (from bottom to top) the physical, data link, network, transport, session, presentation, and application layers. The TCP/IP model includes the network access (link), internet, transport, and application layers.

As you can see, the application, presentation, and session layers are all combined into a single application layer in the TCP/IP model, and some elements of the OSI session layer are also absorbed into the TCP/IP transport layer. Similarly, the link layer of the TCP/IP model covers what the OSI model divides into the data link and physical layers. This diagram shows both models side-by-side and should help you visualize each model. Note that the internet layer in one model and the network layer in the other have the same function.


All the functions offered by the physical layer and data link layer are covered by what the TCP/IP model calls the link layer, though note that some literature refers to the link layer as the network access layer or the host-to-network layer. TCP/IP was originally developed for the military and was used for its first network, ARPANET. From there, TCP/IP grew into the dominant technology used in industry today.

Even so, elements of the original OSI data link layer are still in wide use. Ethernet packets are encapsulated in a frame that starts with a data link layer header containing the data link layer address, otherwise known as the MAC address. Some authors use a hybrid, 5-layer model that includes elements of both the OSI and TCP/IP models: Application-Transport-Internet-Data Link-Physical. Because this is a common way to refer to things, here is an overview of the function of each of the layers in this 5-layer model:

  • Layer 5 – Application: Includes the high-level protocols that run applications, such as Telnet, FTP, SMTP, DNS, HTTP, and SNMP. The TCP/IP model gets rid of the OSI presentation and session layers and treats all three as a single application layer.
  • Layer 4 – Transport: Referred to as the "end-to-end" layer; this is where communication between a source process and a destination process occurs.
  • Layer 3 – Internet: Referred to as the "Network" layer in the OSI model. This is the "routing" layer, where addressing information is carried by a header that allows packets to be routed all the way from the source to the destination regardless of geographical location.
  • Layer 2 – Data Link: Ensures error-free transmission of packets between two machines in the same physical network. The only way for two machines to talk is by exchanging data at the data link layer.
  • Layer 1 – Physical: Defines how data bits are transmitted across the network. It deals with media type, connectors, voltage levels, and so on.

To review, see The Internetworking Problem.

 

2b. differentiate between all the protocols in the TCP/IP reference model

  • What common features do the SIP, SMTP, TCP, and UDP protocols share?
  • Internet Group Management Protocol is a multicast protocol used by TCP/IP systems. What protocol is used to carry IGMP data?

UDP and TCP are the two basic transport protocols of the TCP/IP model. UDP is the classical example of a connectionless, unreliable service, and is used extensively by applications and other protocols. UDP has no handshaking phase; it sends data as soon as the data is available. Because of this, it is generally used for applications or protocols that are time-sensitive, where dropping packets is preferable to stopping and waiting for a dropped packet to be retransmitted. An example of an application where this matters is video streaming. In many cases, UDP depends on the application for error checking. This does not mean that UDP is error-prone, however – the UDP header has a checksum that detects corrupted data. The header also contains port numbers to address the packet to the correct process.

TCP is the "reliable" counterpart of the TCP/IP model. TCP is connection-oriented and provides reliability for applications running over IP, which itself is a connectionless protocol by definition. In IP, packets are put into the network and expected to arrive. By itself, IP does not keep track of packets to request retransmissions in the event of packet loss. Since many applications cannot tolerate packet loss, those applications typically run over TCP. If a packet is lost by IP, the resulting TCP segment will not be complete, an error check will fail, and TCP will request retransmission.

It is important to distinguish protocols that run over UDP from those that run over TCP. Typical protocols that run over UDP are SNMP, TFTP, BOOTP, and RIP. Other protocols, like FTP, BGP, Telnet, and HTTP, run over TCP. There are a few cases where these rules are broken. DNS can use UDP or TCP: normal DNS queries are sent over UDP, but TCP is used for large exchanges like zone transfers. The ICMP, IGMP, and OSPF protocols run over IP, but they do not have a TCP or UDP header as would be expected of any other protocol running over IP. How these protocols run directly over IP is covered in later units of this course.

To review, see The OSI Reference Model.

 

Unit 2 Vocabulary 

This vocabulary list includes terms and acronyms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course. Although most of the protocols listed in the acronyms have not been covered yet, at this point you need to understand what the acronyms stand for and where they fall in the TCP/IP protocol stack.

Try to think of the reason why each term is included.

  • OSI
  • UDP
  • TCP
  • SNMP
  • TFTP
  • BOOTP
  • RIP
  • ICMP
  • IGMP
  • OSPF
  • DNS
  • Protocol Stack
  • Application Layer
  • Presentation Layer
  • Session Layer
  • Transport Layer
  • Network Layer
  • Internet Layer
  • Data Link
  • Link layer
  • Network Access layer
  • Host to Network Layer

Unit 3: The Application Layer

3a. Use the Domain Name System (DNS) protocol to map hostnames to IP addresses

  • What is the use of resource record type A in a DNS table?
  • What DNS record allows you to create aliases for a particular domain?

The domain name system is the mechanism used to translate host or domain names into IP addresses. Every domain can contain a set of "resource records". The most common and most important record is, of course, the IP address associated with the domain, identified as record type A. However, DNS has evolved to include much more information than just IP addresses. Examples of record types are:

  • CNAME (Canonical Name) maps an alias to its canonical domain name, allowing aliases to be created
  • HINFO (Host Information) provides a description of the host, allowing people to find out what kind of machine and operating system a domain corresponds to
  • NS (Name Server) specifies name servers
  • SOA (Start of Authority) provides the name and information of the primary source of information for the name server's zone, such as the email address of its administrator
  • A (Address) is the IP address of a host
  • MX (Mail Exchange) specifies the name of a host that will accept email for the specified domain
  • PTR (Pointer) points to another domain name; it is commonly used for reverse lookups
  • TXT (Text) is used by a domain to identify itself in a way that provides whatever information the administrator wants to be known

Each entry in a DNS table contains the domain name, time to live, class, type, and value. Time to live is the time that the record will remain cached before being removed; a typical value is 86,400, which means the record is removed after 86,400 seconds (one day). The class provides information about the record family. If you are dealing with the internet, the only class you will see in these tables is IN. Other, non-internet classes have been defined, but they are almost never used.

Consider this example of a DNS table:

Study the table in detail and understand what each record is telling you. What will happen if someone sends an email with the address xyz@cs.uprm.pr.edu?
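If you want to see A-record resolution in action, here is a minimal sketch using Python's standard library; the hostname is a placeholder for illustration, not one from the table above:

import socket

# Resolve the IPv4 (type A) addresses registered for a host.
for family, type_, proto, canonname, sockaddr in socket.getaddrinfo(
        "example.com", None, family=socket.AF_INET):
    print(sockaddr[0])  # an IPv4 address from the domain's A records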

To review, see The Domain Name System and Domain Name System (DNS).

 

3b. Compare and contrast the Simple Mail Transfer Protocol (SMTP), Post Office Protocol (POP), and Internet Message Access (IMAP) protocols to send and retrieve email

  • What email retrieval protocol is designed to leave the email on the server for easy access from anywhere later?
  • Is SMTP based on a peer-to-peer architecture?

SMTP is the protocol used to send email messages from an email client to an email server. SMTP operates over TCP port 25, or port 465 for secure transmission; for the SMTP connection to be established, the server must be listening on one of those ports. Once the email arrives at the server, it is the job of a protocol like POP or IMAP to retrieve it and make it available to the user. POP and IMAP are both solid protocols, but each has advantages and disadvantages. POP normally downloads messages to the client and then removes them from the server, which quickly frees up space. However, POP requires discipline from users, since the email is no longer available on the server once it is retrieved. IMAP, on the other hand, is designed to leave messages on the server, so they can be retrieved from anywhere using different devices. If you need access to your email anytime and anywhere with multiple devices, you should use IMAP – though you will need more storage space on your server. IMAP runs on the default TCP port 143 for unsecured connections and port 993 for secure connections. POP uses the default TCP port 110, or port 995 for secure connections. IMAP allows multiple users to be connected and manipulating email simultaneously, while POP allows only one client to be connected to the server.
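As a rough illustration of how the protocols and port numbers fit together, here is a Python sketch using the standard smtplib and imaplib modules; the server name, addresses, and credentials are placeholders, not values from the course:

import smtplib, imaplib

# Send a message with SMTP on port 25 (smtplib.SMTP_SSL with 465 would be
# the secure variant). "mail.example.com" is a placeholder server.
with smtplib.SMTP("mail.example.com", 25) as smtp:
    smtp.sendmail("alice@example.com", ["bob@example.com"],
                  "Subject: test\r\n\r\nHello over SMTP.")

# Retrieve mail with IMAP over SSL on port 993; messages stay on the server.
imap = imaplib.IMAP4_SSL("mail.example.com", 993)
imap.login("bob@example.com", "app-password")   # placeholder credentials
imap.select("INBOX")
status, message_ids = imap.search(None, "ALL")  # IDs of all stored messages
imap.logout()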

To review, see Electronic Mail.

 

3c. Describe the use of Hypertext Transfer Protocol (HTTP) for the generation and management of web-based applications

  • What is the basic difference between HTTP and HTML?

The world wide web was developed in the early 90s as a mechanism to share documents anywhere on earth through the use of hyperlinks. It rests on three important components: a standard addressing scheme, a standard document format, and a standardized protocol for efficient retrieval of documents. These are:

  • URI, or Uniform Resource Identifier, is a character string that uniquely identifies a resource on the internet. The best-known type of URI is the URL, which identifies the address of a web page, but it is not the only type: the SIP protocol, for example, uses a SIP-specific URI to identify the addresses of sending and receiving nodes.
  • HTML, or HyperText Markup Language, defines the format of the documents that are exchanged on the web. It uses tags or markup text to set fonts, colors, and other effects on the text displayed on webpages. The browser does not display the markup text or tags, but uses them to interpret the content of the page.
  • HTTP, or HyperText Transfer Protocol, is a client-server protocol in which the client sends a request and the server returns a response. HTTP servers listen by default on TCP port 80, although 8080 is also used. HTTPS is the secure version, and operates by default on port 443.
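The request/response exchange is easy to observe with Python's standard http.client module; a minimal sketch, with a placeholder host:

import http.client

# Open a TCP connection to the default HTTP port and issue a GET request.
conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/")                 # the client's HTTP request
response = conn.getresponse()            # the server's HTTP response
print(response.status, response.reason)  # e.g. 200 OK
print(response.read()[:120])             # first bytes of the HTML document
conn.close()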

To review message format and HTTP request types, see HyperText Transfer Protocol.

 

3d. Use the telnet and File Transfer Protocol (FTP) applications to open remote connections and transfer files between hosts in a network

  • What is the best way to securely and quickly transfer a file between two hosts in your network?

The File Transfer Protocol, FTP, was popular in the late 1990s and early 2000s. FTP has lost some popularity today, although it is still in use and should be understood. TFTP was developed in parallel with FTP as an option for "quick and dirty" file transfer between two systems. The main difference between the two is that FTP requires a user login, while TFTP does not. TFTP is still used, but typically only for quick, insecure transfers.
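The login requirement is visible in code. Here is a minimal sketch of a legacy FTP download using Python's standard ftplib; the server, credentials, and filename are placeholders, and remember that plain FTP sends all of them unencrypted:

from ftplib import FTP

ftp = FTP("ftp.example.com")     # placeholder server
ftp.login("user", "password")    # FTP requires a login; TFTP does not
with open("report.txt", "wb") as f:
    ftp.retrbinary("RETR report.txt", f.write)  # download the file
ftp.quit()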

When security became an important requirement, FTP ceased to be the protocol of choice: it provided authentication, but files were transferred in plaintext. Network developers quickly came up with FTPS to provide security by running FTP through an SSL or TLS tunnel. Although FTPS accomplished the task, it was hard to deploy and cumbersome to use. It is now rarely used, and many consider it an obsolete protocol.

Instead, users moved to SFTP, which proved much more popular and easier to deploy and implement. SFTP was developed as an extension of SSH with full file-transfer capabilities, and with FTPS now considered deprecated, SFTP is the one to choose for your environment. The main complication with SFTP arises when crossing a firewall to transfer files outside of your organization; in situations like that, it is appropriate to secure the transfer with encryption such as SSL/TLS. HTTP, of course, can always be an option, although HTTP is designed to transfer files from server to client.

To review, see SSH Protocols.

 

3e. Improve system reliability by using client-server and peer-to-peer models

  • Which model, between peer-to-peer or client-server, provides better connectivity and availability instead of focusing on sharing data?

In a client-server architecture, a set of clients connect to a server. In a peer-to-peer model, clients and servers are not distinct: each node can act as a client or a server. Peer-to-peer is the model normally used by distributed computing applications. Peer-to-peer networks are typically less secure than client-server networks, since security is handled by the individual computers. However, since data is distributed among many systems, there is no single point of failure, so peer-to-peer networks can provide higher connectivity and availability than their client-server counterparts. In a system where sensitive information needs to be protected, the client-server model is preferred, but for content delivery organizations, peer-to-peer models should probably be considered first. If your company has a single printer shared by many users, a client-server architecture is the way to go.

Review Client-Server to see the main differences between the peer-to-peer model and the client-server model. Peer-to-Peer will give you a better understanding of applications where peer-to-peer is a better fit than client-server.

 

3f. Illustrate the use of Session Initiation Protocol (SIP) to initiate and control multimedia sessions

  • What is the INFO SIP request used for?
  • What information is found in the SDP portion of a SIP request?

The Session Initiation Protocol (SIP) is a text-based, application-level protocol used for setting up, changing, and terminating multimedia sessions between participants in a TCP/IP setting. Typical uses of SIP include IP telephony, instant messaging (voice, video, chat), interactive games, and virtual reality.

Described in RFC 3261, SIP handles the setup, modification, and tear-down of multimedia sessions. It is based on an HTTP-like request/response transaction model. SIP normally runs on top of UDP but can optionally run over TCP or TLS.

SIP is a powerful protocol that provides for all of the following:

  • User Location: Finds the location of the end-user wanted for communication; supports address resolution, name mapping, and call redirection
  • User Availability: Finds out if the end-user is available and willing to start a session, and informs the requester if the endpoint was unavailable and why (already on the phone, didn't answer, and so on)
  • User Capabilities: Determines the appropriate media and media parameters to be used (via SDP), determines the lowest common level of services, and defaults to the capabilities that can be handled by everyone
  • Session Setup: Establishes the session and "rings" the user, and supports mid-session changes like adding another endpoint or changing the media characteristics
  • Session Management: Keeps an eye on the session and indicates when session parameters are being modified
  • Session transfer and termination: Transfers a call from one end user to a different one, and terminates the sessions between all parties at the end of a session

SIP is based on a client-server architecture. A User Agent is a piece of software present in every SIP end station. A User Agent Client (UAC) sends requests, while a User Agent Server (UAS) receives requests and sends responses.

Clients send requests and receive responses; examples of clients are phones and PSTN gateways. Servers receive requests and send back responses. The types of servers are:

  • Proxy Server: acts as an intermediate device by relaying call signaling, or by providing other functions like authentication, authorization, network access control, and security
  • Registrar Server: accepts registration requests from users and maintains information on user whereabouts at a Location Server
  • Redirect Server: provides clients with information about the next hop to which they should send their messages; the clients then contact the next hop or server directly
  • Location Server: used by redirect or proxy servers to obtain information about a user's possible whereabouts; maintains a database of SIP/IP address mappings

The SIP protocol is based on a request/response transaction model. Its request methods are:

  • INVITE: invites a user to a session
  • ACK: confirms session establishment
  • BYE: terminates a session
  • CANCEL: cancels a pending invite
  • OPTIONS: queries the server or other devices, such as checking media capabilities before sending an invite
  • REGISTER: binds a user address to a SIP registrar
  • SUBSCRIBE: subscribes a user to certain events, so the user is notified if the event occurs
  • NOTIFY: notifies a subscribed user that an event has occurred
  • MESSAGE: sends an instant message
  • INFO: transfers information during a session (such as typing on the keyboard or a change of status)
  • NEGOTIATE: negotiates various kinds of parameters, such as security mechanisms
  • REFER: tells the receiver to contact a different user using the contact information provided (as in a call transfer)

Responses contain a Status Code and a Reason Phrase; for example, status code 200 means OK. The response classes are:

  • 1xx: Provisional – request received, processing, ringing (180), trying (100)
  • 2xx: Success – ok (200), accepted (202)
  • 3xx: Redirection – moved temporarily (302)
  • 4xx: Client Error – unauthorized (401), busy here (486)
  • 5xx: Server error – timeout (504)
  • 6xx: Global Failure – busy everywhere (600)

Addresses in SIP are expressed as a Uniform Resource Identifier, URI. The URI identifies the user with a unique address. An example of a SIP URI would be sip:bob@xyz.org or sip:9781112222@xyz.org.

The Session Description Protocol (SDP) portion of a SIP message describes:

  • Media streams (sessions can include multiple streams of differing content like audio, video, data, control, and application)
  • Addresses
  • Ports used for each stream
  • Payload types for each media stream type
  • Start and stop times, useful for broadcast sessions like television or radio
  • Originator, for broadcast sessions

This is an example of a SIP INVITE request. The SIP agent for sip:9781118484@192.168.215.50 is trying to establish a session with sip:5081113434@192.168.215.50. The SDP portion includes the call type, audio using port 49756, and the codecs that can be used, listed in the a= attribute lines. Make sure you understand all lines of this request.

INVITE sip:5081113434@192.168.215.50 SIP/2.0
Via: SIP/2.0/UDP 192.168.215.66:34522;branch=z9hG4b
From: <sip:9781118484@192.168.215.50>;tag=f33c8a7c
To: <sip:5081113434@192.168.215.50;user=phone>
Call-ID: 3c2670a47ef4-t6ik7rk30zgf@snom190
CSeq: 1 INVITE
Max-Forwards: 70
Contact: <sip:9781118484@192.168.215.66:34522>
User-Agent: snom190/3.60s
Allow: INVITE, ACK, CANCEL, BYE, REFER, OPTIONS, NOTIFY, SUBSCRIBE, PRACK, MESSAGE, INFO
Session-Expires: 3600
Content-Type: application/sdp
Content-Length: 275
v=0
o=root 1747056259 1747056259 IN IP4 192.168.215.66
s=call
c=IN IP4 192.168.215.66
t=0 0
m=audio 49756 RTP/AVP 2 4 0 101
a=rtpmap:0 PCMU/8000

This request is followed by the following response:

SIP/2.0 200 OK
Via: SIP/2.0/UDP 192.168.216.50:5060;branch=z9hG4bK
From: <sip:5081113434@nfl.com>;tag=32d7a8c0-13c4
To: <sip:5081113434@192.168.216.50:5060>;tag=003094c
Call-ID: 6c3328642c0833b761bf838d51bfbaed@nfl.com
Date: Mon, 27 Nov 2006 23:45:30 GMT
CSeq: 1 INVITE
Server: CSCO/7
Contact: <sip:5081113434@192.168.216.98:5060>
Content-Type: application/sdp
Content-Length: 199
v=0
o=Cisco-SIPUA 128 23815 IN IP4 192.168.216.98
s=SIP Call
c=IN IP4 192.168.216.98
t=0 0
m=audio 19888 RTP/AVP 0 101
a=rtpmap:0 PCMU/8000
a=rtpmap:101 telephone-event/8000

The call has been accepted and the appropriate parameters agreed upon, including the use of codec PCMU/8000.

This is an example of a typical SIP Session where John establishes a session with Dan:


The call goes through a proxy server. Once Dan's agent accepts the call, John acknowledges it, and the media session using RTP is opened. Study this figure in detail and look carefully at the requests and responses. Notice that once the call has started, SIP hands off to the real-time protocol RTP to provide end-to-end delivery services; RTP is tightly coupled with RTCP, which monitors the quality of service and conveys information about the participants.

To review SIP, see SIP and RTP, which also gives an overview of the full SIP process including the RTP and RTCP protocols.

 

3g. Describe Secure Shell (SSH)-based applications

  • Why would you use an SSH-based application instead of a Telnet-based application?
  • What TCP port number is used for SSH-based applications?

SSH, or Secure Shell, was developed as a replacement for the Telnet protocol from the early days of networking. Telnet was developed in the late 60s as a way to connect to remote machines, but in the early days of the internet security was not a big requirement, and those early protocols sent information in plaintext, including sensitive information such as passwords. SSH was developed with built-in encryption to provide data integrity and confidentiality for such operations on the network.

Telnet operates over TCP port 23, while SSH operates over TCP port 22. SSH is often compared with, or confused with, TLS. Although both are security protocols, there is a subtle difference between them: SSH encrypts data to allow for secure remote login, while TLS (like its predecessor, SSL) creates an encrypted tunnel through which files can be securely transferred between two hosts. SSH provides secure data transmission, while TLS ensures the integrity and privacy of the message. TLS is used on port 443 every time you use an encrypted HTTPS application. One popular application that uses SSH is PuTTY. SSH uses public-key encryption.
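For example, running a remote command over SSH might look like the sketch below, assuming the third-party paramiko library is available; the host and credentials are placeholders, not values from the course:

import paramiko  # third-party SSH library; an assumption, not course material

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("host.example.com", port=22,           # SSH's TCP port
               username="user", password="secret")    # placeholder credentials
stdin, stdout, stderr = client.exec_command("uptime")  # runs remotely, encrypted
print(stdout.read().decode())
client.close()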

To review the SSH protocol, including a comparison of FTPS with SFTP, see SSH Protocols.

 

3h. Describe and use the Simple Network Management Protocol (SNMP)

  • What is the name of the collection of all managed devices in an SNMP system?
  • What does this SNMP Wireshark capture mean?


SNMP is the network management protocol used by TCP/IP systems. SNMP, described in RFC 1157, is layered on top of UDP. The basic elements of an SNMP system are:

  • Network elements, or nodes, contain a processing entity called an agent that is responsible for performing the management functions requested by a management station
  • A Network Management Station (NMS) runs the software that executes management applications to monitor and control managed elements
  • A management protocol is used to communicate management information between the management stations and the agents in the network elements
  • Management information is carried in variables; a collection of these variables is called a Management Information Base (MIB)

An SNMP NMS monitors and controls a managed node. It does that by issuing requests directed to the Agent residing in the managed node. Managed nodes could be routers, switches, modems, printers, and many more.

The agent is software that resides in the managed node. It interprets requests from the manager and performs functions according to what is asked. As an example of a managed object, consider the variable that SNMP calls SwPortState. This variable describes the actual state of a port on a switch, which can be up (value 1) or down (value 0). It is a read-write variable, so the manager can read its value and, if needed, change it from up to down or vice versa. Another example of a MIB variable is SysUpTime, which describes the time since the network management system was last re-initialized. This is, of course, a read-only variable that can be read but not changed. Each SNMP transaction is carried in a PDU (Protocol Data Unit). The four types of request PDUs are GET, GETNEXT, SET, and TRAP. A single PDU can GET or SET one or multiple variables.

Every variable in a MIB has two names: the textual name, such as SysDescr, and the proper name or Object ID (OID), such as 1.3.6.1.2.1.1.1. This will be followed by an instance number, like 0. To read or write a given MIB variable, you perform a GET or SET with the variable OID sent as part of the PDU. All OIDs in the universe are globally unique and fit into a hierarchical tree. For example, the OID hierarchy for TCP/IP will be:


Vendor-defined variables follow the path iso.org.dod.internet.private; since all private variables fall under that node, their OIDs all begin with 1.3.6.1.4. The OID for the SysUpTime variable is 1.3.6.1.2.1.1.3. Notice that this variable falls under the mgmt node: every standard management variable has a 1.3.6.1.2 prefix. In the sample Wireshark trace above, you can see an SNMP GET with an OID of 1.3.6.1.2.1.1.3.0, so the NMS is trying to get instance 0 of SysUpTime. Also notice from this actual trace that the SNMP request is sent in UDP with port number 161.

All MIB variables' OIDs follow a specific ordering in the tree. This ordering is used by GET-NEXT, which simply says to get the variable in the tree that follows the given OID. You provide an OID with a GET-NEXT, and rather than getting the value for that OID, you get the next variable in the tree. The response can then be used in a new GET-NEXT, which returns the variable after that. Continuing this recursively traverses the agent's whole tree – a procedure called a MIB walk.
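The walk is easy to mimic in a few lines of Python. In this conceptual sketch, a mock agent's MIB is a plain dictionary keyed by OID tuples (the OIDs and values are illustrative only), and GET-NEXT simply returns the lexicographically next entry in the tree:

# Mock MIB: OID tuple -> value. A real agent holds these in a tree.
mib = {
    (1, 3, 6, 1, 2, 1, 1, 1, 0): "Linux router",     # sysDescr.0
    (1, 3, 6, 1, 2, 1, 1, 3, 0): 123456,             # sysUpTime.0
    (1, 3, 6, 1, 2, 1, 1, 5, 0): "gw1.example.com",  # sysName.0
}

def get_next(oid):
    """Return (oid, value) for the first variable after the given OID."""
    for candidate in sorted(mib):
        if candidate > oid:       # tuple comparison matches tree ordering
            return candidate, mib[candidate]
    return None                   # walked past the end of the MIB

oid = ()                          # start the MIB walk at the root
while (entry := get_next(oid)) is not None:
    oid, value = entry
    print(".".join(map(str, oid)), "=", value)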

TRAP PDUs are used by the agent to alert the manager that something out of the ordinary has happened to it. The available TRAPs are:

  • coldStart(0): The device has powered up
  • warmStart(1): The device has rebooted
  • linkDown(2): A link has gone down
  • linkUp(3): A link has transitioned to up
  • authenticationFailure(4): A wrong community string was used
  • egpNeighborLoss(5): A neighbor has gone down
  • enterpriseSpecific: Vendor-specific; as many as a vendor wants to define

To review, see Simple Network Management Protocol.

 

3i. Explain the role of socket programming in application processing

  • What primitive in network socket programming is used to assign a socket to a port number and IP address?

The basis for network I/O in BSD UNIX centers on an abstraction known as the socket API. A socket can be thought of as a generalization of the UNIX file access mechanism that provides an endpoint for communication. As with file access, application programs request the operating system to create a socket when one is needed. The system returns a small integer that the application program uses to reference the newly created socket. The application can choose to supply a destination address each time it uses the socket (such as when sending UDP datagrams). Otherwise, it can bind the destination address to the socket and avoid specifying the destination repeatedly (such as when making a TCP connection). Socket communications perform like UNIX files or devices, so they can be used with traditional operations like read and write.

When writing a socket program, you start by defining a socket type and address format. You will also want to assign the socket to a port number and IP address using the BIND primitive. When you open a socket and establish a connection, you are connecting two processes together. Notice that, the way sockets are defined, the connection can be between processes on two different computers or between processes within the same computer. For all practical purposes, though, sockets are normally created to connect processes on two different machines.
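Here is a minimal Python sketch of these primitives: the server half creates a socket, binds it to an IP address and port, and accepts a connection, while the client half (run in a separate process) connects and writes. The loopback address and port number are placeholders:

import socket

# --- server process ---
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5000))   # the BIND primitive: IP address + port
server.listen(1)
conn, addr = server.accept()       # blocks until a client connects
print(conn.recv(1024))             # read from the socket like a file
conn.close(); server.close()

# --- client process (run separately) ---
# client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# client.connect(("127.0.0.1", 5000))   # connects the two processes
# client.sendall(b"hello over a socket")
# client.close()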

To review, see SocketServer.

 

Unit 3 Vocabulary 

This vocabulary list includes terms and acronyms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course.

Try to think of the reason why each term is included.

  • DNS
  • SMTP
  • IMAP
  • POP
  • HTTP
  • HTTPS
  • HTML
  • URI
  • FTP
  • TFTP
  • SFTP
  • FTPS
  • Telnet
  • Client-Server Architecture
  • Peer-to-peer Architecture
  • SIP
  • SIP REQUEST/SIP RESPONSE
  • SSH
  • SNMP
  • NMS
  • Agent
  • PDU
  • MIB
  • MIB variable
  • OID
  • MIB Walk
  • TRAP PDU
  • Socket

Unit 4: The Transport Layer (TCP/UDP)

4a. Describe the use of the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) to transfer data segments

  • Which protocol, UDP or TCP, would you select for an application that requires extreme reliability and guaranteed delivery?
  • Which protocol, UDP or TCP, would you select for an application that requires efficient data transmission at the cost of lacking flow control?

UDP and TCP are the two Transport Layer protocols used in TCP/IP networks. Both protocols run on top of IP, which is an unreliable system. TCP itself is a reliable protocol, while UDP is an unreliable protocol.

Port numbers are needed for data to be delivered to the appropriate final destination, and both UDP and TCP carry port number information in their headers. Both protocols also provide a checksum field to assure data integrity, although the checksum is optional in UDP. Finally, the UDP header carries a length field, and the TCP header a header-length field, which help keep incorrect or runt segments from circulating through the network.

The essential difference between the two protocols is that TCP is a connection-oriented, reliable protocol, while UDP is a connectionless, unreliable protocol. TCP was first developed in 1973, but in the 1980s it became clear that its stringent requirements were not needed in some cases, such as inward data collection, outward data dissemination, or real-time video streaming. In these cases, pressing on when a segment is lost is better than stopping until the lost segment is retransmitted. UDP was therefore developed with a minimal header containing the source and destination port numbers, plus an optional checksum and a length field for basic data integrity. Although there is no way for a lost segment to be retransmitted, the destination system at least will not use a segment that is obviously corrupted.
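The contrast shows up directly in code. Here is a minimal Python sketch of UDP's "send and hope" model; the destination address and payload are placeholders:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)     # UDP socket
sock.sendto(b"sensor reading: 23.4C", ("127.0.0.1", 9999))  # no handshake
sock.close()
# If this datagram is lost, nothing retransmits it: acceptable for inward
# data collection or a video frame, unacceptable for a money transfer.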

To review, see Principles of a Reliable Transport Protocol, The User Datagram Protocol, and The Transmission Control Protocol. Note that section 4.1.1 describes a reliable transport protocol on top of a "perfect" network layer.

 

4b. Explain the use of the TCP and UDP header fields

  • What is the difference between the RST and FIN flags of the TCP header?
  • What field in the TCP header is used for flow control?

The following figures show the TCP and UDP header fields.


The UDP header includes the port numbers used by the sender and receiver (16 bits each), a segment length field, and an optional checksum (also 16 bits each). The port information assures that the segment goes to the correct process, the checksum field assures header and data integrity, and the length field prevents incorrect or runt segments from circulating in the network.

TCP, on the other hand, is a complex protocol with many header fields. These are the fields in the TCP header:

  • Source and destination port (16 bits each): The TCP ports of the sender and receiver. They identify the process that sent the segment and the process to which it should be delivered.
  • Sequence number (32 bits): The sequence number of the first byte of data in the data portion of the segment.
  • Acknowledgment number (32 bits): The next byte expected. The receiver has received every byte prior to the acknowledgment number. Notice that the acknowledgment field is sent as part of a data segment; this is how TCP "piggybacks" the acknowledgment on data. In the old days, a separate acknowledgment segment was sent, consuming unnecessary resources and bandwidth.
  • Header Length or Data Offset (4 bits): The number of 32-bit words in the TCP header, used to locate the start of the data section.
  • Reserved (6 bits): Reserved for future use. Should be set to 0.
  • Flags (6 bits): six 1-bit flags:
    • Urgent pointer (URG): If set, the urgent pointer field contains a valid pointer, used as shown below.
    • Acknowledgment valid (ACK bit): Set when the acknowledgment field is valid.
    • Reset (RST): There are occasions where problems or invalid segments call for the immediate termination of a connection. The RST flag terminates the connection immediately, instead of going through the FIN exchange described below.
    • Push (PSH): Used by the sender to tell the destination that data must be passed to the receiving process immediately. Normally, data is buffered by the receiving host until a full buffer can be delivered; if the PSH flag is set, the receiver should deliver the data immediately without waiting for a full buffer. There is a subtle difference between the URG and PSH flags: with PSH, all data in the buffer is passed to the process and everything stays in order, while with URG, only the urgent data is given to the process, which might result in data being delivered out of order.
    • Synchronization (SYN): Used in the first step of the 3-way handshake for connection establishment. The flag alerts the receiver that a connection needs to be established, and provides synchronization information to the other side – that is, what sequence number they should start with.
    • Finish (FIN): Used to close a connection. The FIN terminates a connection gracefully. This contrasts with the RST flag, which abruptly terminates the connection. An abrupt termination using RST can result in data loss, which will never happen if the connection is terminated gracefully with the FIN process.
  • Window Size (16 bits): The size of the receive window relative to the acknowledgment field, also known as the "advertised" window. Used to tell the other side the maximum amount of data that it can send with the next segment. This is TCP's built-in feature for flow control.
  • Checksum (16 bits): Protection against bit errors in the TCP header, the payload, and a pseudo-header, which consists of the source IP address, the destination IP address, the protocol number for TCP (6), and the length of the TCP header and payload.
  • Urgent pointer (16 bits): When urgent data is present, this field indicates the byte position relative to the sequence number. This data is given to the process immediately, even if there is more data in the queue to be delivered.
  • Options (variable): Optional parameters can be used here, such as the maximum segment size, the window scale factor F (which multiplies the value of the window size field by 2^F), and the timestamp.
  • Padding (variable): Contains as many bits as necessary to ensure that the length of the TCP header is a multiple of 32 bits.
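To make the layout concrete, here is a hedged Python sketch that unpacks the fixed 20-byte portion of a TCP header; the sample bytes are invented for the example:

import struct

# An invented 20-byte TCP header: ports 59139 -> 80, SYN set, window 8256.
hdr = bytes.fromhex('e70300500000000000000000500220400abc0000')
src, dst, seq, ack, off_res, flags, window, cksum, urg = struct.unpack('!HHIIBBHHH', hdr)
data_offset = (off_res >> 4) * 4               # header length in bytes
syn = bool(flags & 0x02)
ack_valid = bool(flags & 0x10)
fin = bool(flags & 0x01)
rst = bool(flags & 0x04)
print(src, dst, data_offset, hex(flags), syn)  # 59139 80 20 0x2 True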

To review, see The User Datagram Protocol and The Transmission Control Protocol.

 

4c. Explain the transport layer port addressing scheme and port address assignments

  • What is the well-known port used for SSL encrypted frames?
  • What is the well-known port used for HTTP connections?

A TCP connection is uniquely identified by a combination of IP address and port number, which point to a unique process in a unique host. Knowing common port numbers is essential to troubleshoot and understand network behavior. Consider this trace captured by a network sniffer tool like Wireshark:

Trace 1:

Internet Protocol Version 4, Src: 192.168.0.14, Dst: 23.186.196.8
Transmission Control Protocol, Src Port: 59139, Dst Port: 80, Seq: 0, Len: 0
Source Port: 59139
Destination Port: 80
[Stream index: 0]
Sequence number: 0
Header Length: 24 bytes
Flags: 0x002 (SYN)
Window Size: 4128
Checksum: 0x9b4 [valid]

In an example like this one, we immediately know that a local client HTTP process is using port 59139 as its source port on a TCP connection to an HTTP server. We know that the established TCP connection goes to an HTTP process in the server because its destination port is 80. Another common port number for HTTP is 8080. Why is the source not using a source port of 80 if the originating process is HTTP? This is because the source port is a random number selected by the system to uniquely identify that process in the system. That allows for multiple web browsers to connect to the same web server simultaneously. Each one will receive a randomly chosen port number, and the server will respond to that particular client's HTTP process using that port number. Because of this, you will not always see common port numbers like 80, 22, 25, 443, and so on; in most cases, one of the sides will be a randomly generated number.
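If you want to check which service conventionally sits behind a port number, Python's standard library can query the local services database (the output depends on your system's services file):

import socket

print(socket.getservbyname('http', 'tcp'))    # typically 80
print(socket.getservbyname('https', 'tcp'))   # typically 443
print(socket.getservbyport(22, 'tcp'))        # typically 'ssh'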

Now consider this trace:

Trace 2:

Internet Protocol Version 4, Src: 192.168.0.14, Dst: 23.186.196.8
Transmission Control Protocol, Src Port: 63566, Dst Port: 443, Seq: 55, Ack: 54, Len: 0
Source port: 63566
Destination port: 443
[Stream index: 4]
[TCP Segment Len: 0]
Sequence number: 55
Acknowledgement number: 54
Header Length: 20 bytes
Flags: 0x014 (RST, ACK)
[Calculated window Size: 0]
[Window size scaling factor: -1 (unknown)]
Checksum: 0x9b4 [valid]
Urgent pointer: 0

A quick look tells you that this packet is destined to port 443 at the destination. This means that it is intended for a website that uses SSL encryption, HTTPS. For a server to respond and for you to be able to establish a connection, the server must be listening to that port.

To review, see The Transmission Control Protocol. For a complete list of TCP and UDP port numbers, see this suggested article on a List of TCP and UDP Port Numbers.

 

4d. Describe the Stream Control Transmission Protocol (SCTP) and Real-time Transport Protocol (RTP) and the applications based on these protocols

  • When would you need to use SCTP instead of TCP or UDP?

SCTP was developed to provide multi-streaming capabilities, providing reliable service to multiple streams: if one stream gets blocked, another stream can still deliver data. SCTP is message-oriented like UDP, but it is also connection-oriented like TCP. Multihoming allows both ends of a connection to define multiple IP addresses for communication; one is used as the primary, and the rest serve as alternatives. Not all systems support SCTP natively, but it can still be used: in the absence of native support, SCTP can be tunneled over UDP.
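Where the operating system and Python build expose SCTP, opening a one-to-one style SCTP socket is short; this hedged sketch raises OSError on systems without SCTP support:

import socket

# Requires kernel SCTP support (for example, a Linux host with the SCTP module).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
s.close()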

To review, see Stream Control Transmission Protocol (SCTP).

 

4e. Explain the mechanics of a TCP connection establishment (3-way handshake) and release

  • What is the TCP 3-way handshake used for?

A 3-way handshake is how TCP establishes a connection. It starts with the initiator setting the SYN (synchronize) flag to alert the other end that a connection is going to be established. The receiving end accepts the connection by responding with a SYN-ACK. To finalize the connection, the side that initiated the connection acknowledges the SYN-ACK with an ACK of its own. The 3-way handshake not only alerts the receiver that the sender wants to establish a connection, but also provides the information needed to synchronize the sequence numbers on both sides. Early conceptions of TCP used a 2-way handshake, which carries a real possibility of deadlock.

See the real Wireshark trace below, which clearly shows a 3-way handshake. Line 22 is the first leg of the 3-way handshake with the SYN flag set. Host 192.168.1.3 is establishing a connection with host 151.101.116.153. Line 23 is the response from 151.101.116.153 accepting the connection with a SYN-ACK. Finally, in line 24, 192.168.1.3 finalizes the process by sending the final ACK. 


In contrast to connection establishment, TCP uses a 4-way handshake to terminate a connection. To understand the process, it helps to think of the connection as a pair of unidirectional connections, each released independently. The FIN bit is used by either side to release its direction of the connection. When a FIN from one side is acknowledged, data flow in that direction is shut down, but data may continue to flow in the other direction until that side sends its own FIN. Because each side sends a segment with a FIN and receives a segment with an ACK, four segments are needed in total. The first ACK can be piggybacked on the second FIN, so only three segments may actually cross the wire, but the protocol still requires two FINs and two ACKs. This is also known as a symmetric connection release. For comparison, here is a real Wireshark trace of a connection release:


Host 192.168.1.3 is closing the connection with host 104.94.115.9 using four segments, as explained above.
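You can generate both exchanges yourself and watch them in a capture. A minimal Python sketch (the server name and port are just examples):

import socket

# connect() performs the SYN, SYN-ACK, ACK exchange;
# close() begins the FIN/ACK release for our direction.
s = socket.create_connection(('example.com', 80), timeout=5)
s.close()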

To review, see TCP Connection Establishment and TCP Connection Release. These explain how systems recover when one of the handshake segments is lost. Page 94 illustrates how hackers used the connection establishment SYN flag to generate denial-of-service attacks, and what was done to remedy the problem.

 

4f. Illustrate the TCP transmission policy and window management

  • A host with an established TCP connection receives a segment with Seq=1024, Ack=2048, W=4096. What is the sequence number of the first segment that the host can send in response, and how many bytes can it send?

Sequence (Seq), acknowledgment (Ack), and window size (W) are the header fields TCP uses for flow control and window management. The window size announces how much data the receiver is ready to accept. (This discussion assumes a steady state with no segment loss or retransmissions; otherwise, the system might be in the middle of a congestion avoidance process where the advertised window and the allowed congestion window differ, which changes the picture considerably.) Ack announces the next byte number the receiver expects. By design, if Ack = x and W = w, every byte up to and including x - 1 is being acknowledged, and permission is granted to send w more bytes, starting with byte x and ending with byte x + w - 1. Notice how flexible the scheme is. If the receiver wants to increase the credit from w to z, where z > w, when no new data has arrived, it issues Ack = x, W = z. To acknowledge a segment containing n bytes, where n < w, without granting additional credit, it issues Ack = x + n, W = w - n.
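Applying this to the review question above (a segment arrives with Ack=2048 and W=4096), here is a small sketch with an invented function name:

def allowed_send(ack, window):
    """First and last byte numbers the credit permits the sender to transmit."""
    first = ack                  # bytes up to ack - 1 are acknowledged
    last = ack + window - 1      # the credit covers 'window' bytes
    return first, last

print(allowed_send(2048, 4096))  # (2048, 6143): 4096 bytes starting at byte 2048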

To review, see TCP Reliable Data Transfer.

 

4g. Illustrate congestion control protocols used by TCP such as Slow Start, Fast Retransmit, Fast Recovery

  • According to the Slow Start protocol, what value will the congestion window be set to after a segment is lost and there is a timeout?
  • When is the Fast Retransmit rule invoked?
  • What is the purpose of the Fast Recovery algorithm?

The slow start algorithm was developed to keep systems from becoming fully congested and to try to avoid further packet loss after a timeout occurs in steady-state. Three different windows are defined for a running system: the allowed window, referred to as awnd, the congestion window, cwnd, and credit, which is the advertised window sent in the most recent acknowledgment.

When a system is brought up, Slow Start sets the congestion window to a size of 1 (one maximum segment size), so awnd = 1 to start. The congestion window then grows exponentially toward the advertised window, which is the maximum the sender is allowed to transmit at any given moment. In other words, awnd starts at 1 and follows cwnd's exponential growth until it reaches the advertised credit. At that moment, awnd, cwnd, and credit are all the same.

Assume that at a certain point, a packet is lost and a retransmission is needed. Packet loss means congestion. Sending more data is unacceptable, since it will only add to the congestion and aggravate the situation. That is where Slow Start comes back into play. It specifies that, right after a timeout occurs, a threshold is set with a size of half the congestion window at the time of the timeout. The congestion window is then set to 1 – the maximum segment size. At that point, it will increase exponentially up to the value of the threshold. After the threshold is reached, the congestion window will continue to increase linearly until the full size of the credit window is reached, as demonstrated in this image.


What determines when a timeout occurs? A timer is associated with each segment as it is sent. If the timer expires before the segment is acknowledged, the sender must retransmit. We call the value of that timer the retransmission timeout, or RTO. This timer must be higher than the expected time it will take for the segment to arrive and the ACK to arrive back. We refer to that time as the round trip time, or RTT. In a TCP system, the value of that time will be variable, since the system is dynamic and packets take different lengths of time to make a round trip.

What is the appropriate size for RTO? One simple solution would be to observe the pattern of delays for recent segments and set the RTO based on that observation. This could cause complications in the real world because you want to give more weight to recent RTT observations, since they are a better representation of the current state of the network. Several solutions were suggested, like using exponential averaging or smoothed round trip time (SRTT). To review the different ways of calculating RTO, see TCP's Retransmission Timeout.

If a segment is lost, the sending host has to wait until the RTO expires to retransmit. RTO is significantly higher than RTT, so stopping to wait for the RTO to expire wastes significant time. Because of this, the sender saves a copy of each segment and continues sending new segments while waiting. That is where Fast Retransmit comes into play. In TCP, if a segment is received out of order, an ACK must be issued immediately for the last in-order segment. Fast Retransmit uses this to avoid wasting time: if four ACKs are received for the same data (the original plus three duplicates), the segment that followed has likely been lost. The Fast Retransmit rule then requires the host to retransmit immediately after the fourth ACK is received, rather than waiting for the RTO to expire.

Since a lost segment means there was congestion, congestion avoidance measures are appropriate. One possible response is to cut the congestion window to 1 and invoke the Slow Start/congestion avoidance procedure. This may be overly conservative, since the duplicate ACKs indicate that segments are still getting through. The Fast Recovery algorithm was developed to overcome this limitation. It can be summarized as follows (see the sketch after this list):

  • When the third duplicate Ack arrives (the fourth ACK for the same data):
    • Set the slow-start threshold ssthresh to cwnd/2.
    • Retransmit the missing segment.
    • Set cwnd to ssthresh + 3 to account for the segments that have left the network and are cached at the other side.
    • Each time an additional duplicate Ack arrives, increase cwnd by 1 and transmit a segment if possible.
    • When the next Ack arrives that acknowledges new data (that is, a cumulative ack), set cwnd to ssthresh.
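As a study aid only, this sketch steps the congestion window through the slow start and linear phases that follow a timeout; it counts in whole segments and is a simplification, not a faithful TCP implementation:

def cwnd_after_timeout(cwnd_at_timeout, credit, rounds):
    """Trace cwnd per round trip: exponential up to ssthresh, then linear."""
    ssthresh = max(cwnd_at_timeout // 2, 2)   # threshold = half the old window
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)    # slow start: exponential growth
        else:
            cwnd = min(cwnd + 1, credit)      # congestion avoidance: linear growth
    return trace

print(cwnd_after_timeout(32, 24, 10))  # [1, 2, 4, 8, 16, 17, 18, 19, 20, 21]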

To review, read The Transmission Control Protocol. For more on TCP congestion control and avoidance, see End-to-End Congestion Control.

 

Unit 4 Vocabulary

This vocabulary and acronym list includes terms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course.

Try to think of the reason why each term is included.

  • TCP
  • UDP
  • SCTP
  • 3-way handshake
  • SYN
  • RST
  • URG
  • PSH
  • FIN
  • ACK
  • RTT
  • RTO
  • SRTT
  • Slow Start
  • Fast Retransmit
  • Fast Recovery

Unit 5: The Network Layer

 

5a. Explain the correct network layer protocol to perform packet forwarding using both IPv4 and IPv6

  • What is the key function of the Network, or IP, Layer in the TCP/IP architecture?
  • Name some of the differences between an IPv4 address and an IPv6 address.

The network layer is the part of the TCP/IP architecture that is responsible for establishing, maintaining, and ending a communications path between nodes. It provides the functions for routing information to the destination node. Key functions of the network layer are:

  • determining the best routes to use according to the protocol being used;
  • ensuring that packets are directed toward their destination;
  • recording errors and notifying the transport layer of unrecoverable errors; and
  • creating control messages for line connection and termination requests.

The network layer is present in both routers and end systems across the network, though routers do not need to implement the layers above it. The network layer takes the segments handed down by the transport layer, splits them into packets as needed, and adds a network layer header. The network layer header for IPv4 is:


Where:

  • Version (4 bits): set to 4
  • Internet header length (IHL) (4 bits): length of the header in 32-bit (4-byte) words. For example, a value of 5 here indicates that the header length is 5 x 4 = 20 bytes, which happens to be the minimum value you will find here. The maximum is 15, which means that the header is 60 bytes long.
  • Type of Service (TOS) (8 bits): provides guidance to end systems IP modules and to routers along the datagram's path
  • Total Length (16 bits): total data unit length, including header, in octets
  • Identifier (16 bits): together with source address, destination address, and user protocol, intended to uniquely identify a datagram
  • Flags (3 bits): the More Fragment bit is used to indicate that fragmentation has occurred and reassembly is needed; the Don't Fragment bit is used to prohibit fragmentation; the third bit is not currently used
  • Fragment Offset (13 bits): indicates where in the datagram this fragment belongs, measured in 64-bit units
  • Time to live (8 bits): measured in router hops
  • Protocol (8 bits): indicates the next level protocol which is to receive the data field at the destination
  • Header checksum (16 bits): frame check sequence on the header only; since some header fields may change, this is re-verified and recomputed at each router
  • Source address (32 bits): indicate the source network and host number
  • Destination address (32 bits): indicate the destination network and host number
  • Options (variable): encodes the options requested by the sender, such as security, strict source routing, loose source routing, record route, and timestamp
  • Padding (variable): used to ensure that the internet header ends on a 32 bit boundary
  • Data (variable): must be a multiple of eight bits in length – the total length of the data field plus header is a maximum of 65,535 octets
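A hedged sketch of unpacking the fixed 20-byte header with Python's standard library; the header bytes are invented for the example:

import socket
import struct

# Invented IPv4 header: IHL 5, total length 40, TTL 64, protocol 6 (TCP),
# source 192.168.0.14, destination 93.184.216.34.
hdr = bytes.fromhex('450000280000400040060000c0a8000e5db8d822')
ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
    struct.unpack('!BBHHHBBH4s4s', hdr)
print(ver_ihl >> 4, (ver_ihl & 0x0f) * 4)            # version 4, 20-byte header
print(ttl, proto)                                    # 64 6
print(socket.inet_ntoa(src), socket.inet_ntoa(dst))  # 192.168.0.14 93.184.216.34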

Although techniques like subnetting, supernetting, and NAT improved the efficiency of the IPv4 address space and increased the number of addressable entities on the internet, in the mid-1990s the IETF concluded that it was only a matter of years until the IPv4 address space would be exhausted. This is especially relevant today, since we are quickly moving toward the IoT ("Internet of Things"), where every single item in your house could theoretically have its own IP address. In July of 1993, the IETF created the IPng ("new generation") directorate. In 1994, the directorate selected the IPng architecture as defined in RFC 1752. They took advantage of that change to make IPv6 more efficient in the following ways:

  • Increase IP address from 32 bits to 128
  • Add flexibility by using fixed-size 40-octet header, followed by optional extension headers
    • Longer header but fewer fields (8 vs 12), so routers should have less processing
  • Accommodate higher network speeds, mix of data streams (graphics, video, audio)
  • Support QoS and flow labeling

The IPv6 header looks like this:


Where:

  • Version (4 bits): value is 6
  • Traffic class (8 bits): available for use by originating nodes and/or forwarding routers to identify and distinguish between different classes of priorities of IPv6 packets
  • Flow label (20 bits): may be used by a host to label those packets for which it is requesting special handling by routers within a network
  • Payload length (16 bits): the total length of all of the extension headers plus the transport-level PDU
  • Next header (8 bits): identifies the type of header immediately following the IPv6 header; this will either be an IPv6 extension header or a higher-layer header, such as TCP or UDP
  • Hop limit (8 bits): the remaining number of allowable hops for this packet; it decreases by one at each node that forwards the packet, and the packet is discarded if it reaches zero
  • Source address (128 bits): the address of the source machine
  • Destination address (128 bits): the intended recipient – this may not be the ultimate destination if a routing header is present

To have a better idea of how immense that address space is, consider this: if a block of 1 million addresses is allocated every picosecond (10^-12 seconds, or 1 trillionth of a second), it would take 10^13 years to use every address in the address space. That is 1000 times the age of the universe. If the entire Earth, both land and water, were covered with computers, IPv6 would still allow 7 × 10^23 IP addresses per square meter: almost one for every molecule on Earth.

The IPv6 datagram format, when no extension headers are added, resembles the old IPv4 format, where the "Next" field becomes the "Protocol" field from IPv4:


The "Next" field contains the value 06, which is the value for TCP. Following the IPv6 header, we will just see the TCP header. The value for UDP is 11.

The situation will change if there are extension headers, such as when the packet requires routing information. In that case, a routing extension header will be added and the format will look like this:


The "Next" field in the base header now points to the Routing extension header. The value for routing is 43 in decimal, but you will see it as 2B (in hex). The routing extension header will also have a "Next" field, and it will point directly to the TCP header with a value of 06. This is more effective than IPv4, when the routing information was always included, whether or not there was a need to route the packet. In IPv6, extension headers are only included if they are actually used.

Possible extensions header values (shown here in decimal) are:

  • 00: hop by hop options
  • 43: Routing
  • 44: Fragmentation
  • 50: ESP header as defined by IPsec
  • 51: AH header as defined by IPsec
  • 60: Destination options
  • 59: Used by an extension header to indicate that there are no more headers to come

IPv4 and IPv6 are covered in Internet Protocol. If you would like to go into more detail, read RFC 1752, which is available on the IETF website. If you would like to see a full list of extension header values, read List of IP Protocol Numbers.

 

5b. Configure and illustrate IP addressing and explain its purpose, on both IPv4 and IPv6 networks

  • Translate IPv4 address 192.168.1.100 to an IPv6 address
  • What network does the IP address 103.122.136.100 with a mask of 255.255.0.0 belong to?

IPv4 addresses are 4 bytes long and are normally accompanied by a mask. In the question above, the mask 255.255.0.0 means the address belongs to the 103.122.0.0 network.

The established format for representing an IPv6 address is to arrange the address into groups of 2 bytes written in hexadecimal, as opposed to the dotted decimal notation of an IPv4 address (4 bytes in decimal separated by dots). To translate an IPv4 address to IPv6, you add a prefix of 10 bytes of 0s, then 2 bytes of all 1s (ffff in hex), followed by the 4 bytes of the IPv4 address, also expressed in hex.

As an example, the IPv4 address 5.5.5.5 translates to 0:0:0:0:0:ffff:505:505 in IPv6, using the "compression" technique. The "compression" technique means that bytes of 0s are represented by a single 0 enclosed by colons. Also, leading 0s are always compressed. That is why in this example, :505 means two bytes, 05 and 05. The ffff section is for two bytes of hex ff, which is all 1s in binary.

An IPv6 system that receives an address with that prefix knows that it is an IPv4 being tunneled through an IPv6 network. IPv6 supports unicast, multicast and anycast addresses.
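Python's ipaddress module can perform the translation asked about in the first review question:

import ipaddress

v6 = ipaddress.IPv6Address('::ffff:192.168.1.100')
print(v6)               # ::ffff:c0a8:164
print(v6.ipv4_mapped)   # 192.168.1.100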

To review, see IP version 6.

 

5c. Compare and contrast Classless Interdomain Routing (CIDR) with subnetting activities within the network layer

  • A customer needs at least 4000 addresses for their network. A block of addresses is available starting with 155.162.8.0. Could you use that block of addresses by applying CIDR? Why or why not?
  • What is the subnet for the address 139.182.252.147/19? What is the first available host address, last available host address, and broadcast address for that subnet?

The subnetting and supernetting techniques were developed early on in the evolution of networking. Subnetting is the process of taking a big, classful network and dividing it into smaller subnetworks by increasing the number of bits representing the network. Supernetting is the opposite process, where the number of hosts in a network is increased by increasing the number of bits representing the host, regardless of the class of the network. That is why supernetting is referred to as Classless Interdomain Routing, or CIDR.

To use CIDR, you take as many contiguous bits as necessary to provide the required number of addresses. If you need 1000 addresses, you need 10 contiguous bits, since 2^10 = 1024. This gives you a little more than the 1000 you need, but it is as close as you can get: 9 bits would only provide 512 addresses, and 11 bits would provide 2048.

You can take bits away from the host portion of the address to create subnets. A /19 mask applied to a class B address takes 3 bits from the third byte to generate additional subnets. The first 2 bytes of a class B address normally represent the network, followed by the host number in the third and fourth bytes, so a classful class B network provides a total of 65534 (2^16 - 2) host addresses. Taking 3 bits from the third byte produces 8 different subnets, each with only 8190 (2^13 - 2) addresses available for hosts, which is plenty for most users. You subtract 2 in each case because the all-0s host address is reserved for the network address and the all-1s host address for broadcast. The /19 mask generates the 0, 32, 64, 96, 128, 160, 192, and 224 subnets. For 139.182.252.147/19, the network is 139.182.224.0, the first host address is 139.182.224.1, the last host address is 139.182.255.254, and the broadcast address is 139.182.255.255.
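You can check every one of these numbers with Python's ipaddress module:

import ipaddress

net = ipaddress.ip_network('139.182.252.147/19', strict=False)
print(net.network_address)     # 139.182.224.0
print(net.broadcast_address)   # 139.182.255.255
hosts = list(net.hosts())      # excludes network and broadcast addresses
print(hosts[0], hosts[-1])     # 139.182.224.1 139.182.255.254
print(net.num_addresses - 2)   # 8190 usable host addresses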

Remember that you can freely talk with members of the same subnet as you. However, to connect with a host in a different subnet, you need to go through a router. When you can't get a connection between a host and a remote host, always ask yourself if they are both in the same network. If not, check to see if you have a default gateway (router) configured, and if so, whether your machine and the default gateway are configured in the same subnet.

This is explained in detail in Supernetting and Subnetting. Additional information can be found in IP Version 4.

 

5d. Use protocols like Dynamic Host Configuration Protocol (DHCP), Address Resolution Protocol (ARP), and Network Address Translation (NAT) to manage IP address assignment, re-assignment, and resolution

  • You never configured your PC with any IP address or default gateway, but the "ipconfig" command shows an IP address of 192.168.100.1, a netmask of 255.255.255.0 and a default gateway of 192.168.100.100. You issue an "arp -a" command, and see that, except for the default gateway, the ARP table is empty. How did your PC obtain its IP address, mask, and default gateway? If you issue a command like "ping 192.168.200.1", what would your computer do first?
  • What is NAT overload?

In the early days of networking, protocols like the bootstrap protocol (BOOTP) were developed to automatically assign IP addresses to a machine from a database of available IP addresses of the organization. When a PC was first installed into the network, it would automatically connect with the BOOTP server to dynamically obtain its network configuration details. That meant that the end-user did not need to know or find their address or other configuration details. The Dynamic Host Configuration Protocol replaced BOOTP for dynamically assigning IP addresses, masks, and default gateways to hosts, and works by connecting to a DHCP server reachable by the PC. DHCP is a big time saver, since it means that end-users or network administrators do not need to manually configure that information.

PCs always talk at the data link level. In other words, regardless of its IP address, the PC needs to encapsulate any frame destined to any other PC within a data link layer header with a destination MAC address (which is on layer 2, the data link layer) of the intended recipient. If the intended recipient lies in a different network or subnet, the packet must be encapsulated with the MAC address of the router (or default gateway) and sent to it to be routed appropriately.

How do source machines know the MAC address of the intended destination? An obvious answer would be to manually configure those addresses in the PC, but that would be time-consuming and error-prone. The Address Resolution Protocol, or ARP, handles the process of obtaining the MAC addresses of intended destinations automatically. When you ping 192.168.200.1, the PC realizes, based on the mask, that the destination is in a different subnet, so the first thing it does is issue an ARP request for the default gateway. The default gateway responds with its own MAC address. Once the sending PC knows that, it encapsulates the ping (ICMP) packet in a data link layer header with the destination MAC of the default gateway. If you instead pinged 192.168.100.25, the sending machine would send an ARP request to find the MAC address of 192.168.100.25 directly, since both addresses are in the same subnet. A sketch of this decision follows.
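A minimal sketch of that decision, with invented names:

import ipaddress

def next_hop(src_ip, mask, dest_ip, gateway):
    """Return whose MAC address to ARP for: the destination or the gateway."""
    net = ipaddress.ip_network(f'{src_ip}/{mask}', strict=False)
    return dest_ip if ipaddress.ip_address(dest_ip) in net else gateway

print(next_hop('192.168.100.1', '255.255.255.0', '192.168.200.1', '192.168.100.100'))
# 192.168.100.100: different subnet, so ARP for the default gateway
print(next_hop('192.168.100.1', '255.255.255.0', '192.168.100.25', '192.168.100.100'))
# 192.168.100.25: same subnet, so ARP for the destination directly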

Network Address Translation, or NAT, on the other hand, is a router function designed to allow a big organization to use multiple internal, non-public addresses while still connecting to outside, public addresses on the internet. Simple one-to-one NAT translation is accomplished using a simple table that maps single internal addresses to single external addresses, such as:
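The table image is not reproduced in this printout, but an invented one-to-one mapping of the same shape might look like this (all addresses are made up for illustration):

Inside (private) address      Outside (public) address     Destination server
192.168.10.11:3341        ->  200.100.10.1:3341            203.0.113.5:80
192.168.10.12:4520        ->  200.100.10.2:4520            198.51.100.7:80
192.168.10.13:5992        ->  200.100.10.3:5992            192.0.2.9:80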


Notice that there are three internal non-public addresses, each connected to a different HTTP server, and each mapped to its own external address.

For many-to-one mapping, port address translation (or PAT) is used, also known as NAT Overload. This is a dynamic NAT that allows for multiple private addresses to be translated with a single public address by using a port/IP address combination to identify the particular address mapping. An example could be something like this:
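As before, the table image is not reproduced here, but an invented NAT overload mapping might look like this (addresses made up; note the shared public address, distinguished only by port):

Inside (private) address      Outside (public) address     Destination server
192.168.10.11:3341        ->  200.100.10.1:3341            203.0.113.5:80
192.168.10.12:4520        ->  200.100.10.1:4520            203.0.113.5:80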


In this table, at least two inside (private) addresses map to the same outside (public) address. Two inside hosts are connected to the same HTTP server and the router uses NAT overload (PAT) to uniquely map the traffic of each one with the server.

These are two simple examples. In the real world, inside local, inside global, outside local, and outside global maps like this are common, especially with Cisco Routers.

NAT is explained in detail in NAT.

 

5e. Illustrate the use of Interior Routing Protocols based on shortest path, distance vector, and link-state routing models

  • What routing protocol suffers from the "count to infinity" problem?
  • What routing protocol requires the flooding of short messages with neighbor information throughout the network?

Distance vector routing is the oldest routing algorithm and is not used often today. In this method, each router keeps a table with one entry for every other router in the subnet. Each entry contains two parts: the preferred outgoing line and the distance to that destination. Periodically, each router sends each of its neighbors an updated list of its estimated distances to every destination in the whole network, and receives a similar list from each of its neighbors. This information is used to update its internal table with the preferred route to any given destination, following the Bellman-Ford (Ford-Fulkerson) algorithm.

When using link-state routing, each router determines who its neighbors are and the cost to get to them. Once that information is available, the router builds a link-state advertisement (LSA). At predetermined intervals, these LSAs are flooded through the whole network. Each router in the network grabs the LSA and uses Dijkstra's algorithm to construct the shortest path to all possible destinations.

Distance vector lost popularity because it suffers from the "count to infinity" problem. Depending on network topology, routers can fall into an endless loop where distance information is exchanged back and forth indefinitely, as if "counting to infinity". Solutions have been suggested for this problem, like split horizon and split horizon with poison reverse, but split horizon does not always work as expected, so most networks today use link-state routing, which is more reliable. Link state has the disadvantage that LSAs must be flooded through the whole network, which can cause high traffic and congestion; however, several techniques deal with this problem, such as splitting the network into hierarchical areas. A sketch of the shortest-path computation follows.
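Here is a compact sketch of the shortest-path computation each link-state router runs once it has the full topology; the graph is invented for the example:

import heapq

def dijkstra(graph, source):
    """Shortest distance from source to every node; graph maps node -> {neighbor: cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue                          # stale queue entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

graph = {'A': {'B': 1, 'C': 4}, 'B': {'A': 1, 'C': 2}, 'C': {'A': 4, 'B': 2}}
print(dijkstra(graph, 'A'))   # {'A': 0, 'B': 1, 'C': 3}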

To review, see Routing in IP Networks.

 

5f. Compare and contrast interior routing protocols like Routing Information Protocol (RIP) and Open Shortest Path First (OSPF) with exterior routing protocols like Border Gateway Protocol (BGP)

  • What distance metric is used by routing protocols like RIP, OSPF, and BGP?
  • What routing technique is used by RIP, OSPF, and BGP?

An autonomous system (AS) is a group of routers that exchange information via a common routing protocol. An AS is composed of a set of routers and networks managed by a single organization. The protocol used to pass routing information between routers within the same AS is called an interior routing protocol (IRP). Examples of IRPs are open protocols like RIP and OSPF and proprietary protocols like Cisco's EIGRP. The protocols used to pass information between routers in different ASes are called exterior routing protocols (ERP). One example of an ERP is the border gateway protocol, or BGP.

The routing information protocol, or RIP, was developed in the 1980s to support the growing number of networks. It uses the distance vector algorithm. It had inherent problems when used for larger networks: since it used the number of hops required to reach a host as its metric, problems like count to infinity became common as networks grew. RIP needed to adapt, and the first "solution" to the count to infinity problem was to define 16 as the infinity metric. In a RIP exchange, a distance metric that grew to 16 was considered infinity, and that destination was assumed to be unreachable. Of course, this limited the diameter of networks to no more than 15 hops. The current version of RIP is RIPng, an extension of RIPv2 that supports IPv6.

Open shortest path first, or OSPF, was introduced in 1990 as a more efficient protocol that would avoid the inherent limitations of RIP. OSPF uses a link-state routing algorithm that does not suffer from the count to infinity problem. OSPF has the additional advantage of offering a flexible routing metric based on type of service (ToS), which allows routes to be selected to maximize reliability, maximize throughput, minimize monetary cost, or minimize delay. Up to 5 different routing tables can be created by each router. Link-state routing requires the periodic flooding of LSAs throughout the whole network, which can contribute to congestion. OSPF overcomes that limitation by dividing the full network into "areas" and flooding LSAs only within those areas.

The BGP protocol uses path-vector routing, which does not use routing metrics. Instead, routers provide information about which networks they can reach and the list of ASes that must be crossed to get there. The use of path-vector routing by BGP allows it to perform "policy routing". If a hostile AS is in a path, the router can decide to avoid that path and select a different one based on the information exchanged with its neighboring routers.

To review these protocols, see Routing in IP Networks.

 

5g. Use multicasting principles including addressing schemes and associated protocols

  • What address range is reserved for IP Multicast?
  • What are some of the differences between IGMPv1 and IGMPv2?

Multicasting is when a single host needs to send traffic to many, but not all, hosts in the network. This differs from broadcast, where one host sends to ALL hosts in the network. Theoretically, you could multicast by sending multiple copies of a unicast stream, but this would put a large burden on network resources, since every frame would have to be duplicated. The group of class D addresses, those starting with 224 through 239, is reserved for multicasting applications. For multicasting to work, a mechanism called the Internet group management protocol (IGMP) was created so that individual hosts could ask to participate in, or be excluded from, a multicast group. A routing protocol is needed to collect data about networks containing group members, paired with a routing algorithm to find the shortest path to each network containing members. Protocols like DVMRP and PIM, paired with the Reverse Path Forwarding (RPF) algorithm, were developed for this. Another requirement is a way to translate a multicast IP address into an Ethernet address for transit through the L2 network.

Translation of an IP Multicast address to a L2 multicast address is a simple process. A block of MAC addresses starting with 01-00-5E has been reserved for IP multicast to Ethernet address mapping. The mapping mechanism involves placing the low order 23 bits of the Class D address into the low-order 23 bits of the reserved address block.

As an example, consider mapping the IP Multicast address 224.1.1.1 to its MAC address counterpart.

To do that we first translate each part to its binary form:

First, the prefix:

01-00-5E = 0000 0001 0000 0000 0101 1110

Here we are translating each byte from hex to binary. This 24-bit prefix is the same for every mapping.

Now, the address to map is:

224.1.1.1 = 1110 0000 0000 0001 0000 0001 0000 0001

Here, we are translating each byte from decimal to binary. The low-order 23 bits of the multicast address are the last 23 bits shown.

Mapping simply involves taking those low-order 23 bits of the IP multicast address and appending them to the reserved prefix (the prefix's 24 bits plus one 0 bit fill the high-order 25 bits of the MAC address). The result looks like this:

0000 0001 0000 0000 0101 1110 0000 0001 0000 0001 0000 0001 = 01-00-5E-01-01-01

224.1.1.1 translates (or maps) to 01-00-5E-01-01-01. Five bits of the class D address, the ones between the fixed 1110 prefix and the low-order 23 bits, are not copied into the MAC address; their significance is explained below.

Let's now consider the address 239.129.1.1. Following a similar procedure (again keeping only the low-order 23 bits):

239.129.1.1 = 1110 1111 1000 0001 0000 0001 0000 0001

Append the low-order 23 bits to the prefix and you will get:

0000 0001 0000 0000 0101 1110 0000 0001 0000 0001 0000 0001 = 01-00-5E-01-01-01

Interestingly enough, both IP multicast addresses translate to the same L2 address. Because only the last 23 bits are used, 5 bits of the IP multicast address are lost in the mapping, so a total of 32 different IP multicast addresses translate to the same MAC address. In this example, all addresses in the series 224.1.1.1, 224.129.1.1, 225.1.1.1, 225.129.1.1 … 238.1.1.1, 238.129.1.1, 239.1.1.1, 239.129.1.1 map to the same L2 address of 01-00-5E-01-01-01. One MAC address always corresponds to 32 different IP multicast addresses, something that must be kept in mind.
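The whole mapping fits in a few lines of Python; the function name is invented for this sketch:

def multicast_mac(ip):
    """Map an IP multicast (class D) address to its Ethernet MAC address."""
    octets = [int(o) for o in ip.split('.')]
    # Keep only the low-order 23 bits: 7 bits of the second octet plus
    # all of the third and fourth octets.
    low23 = ((octets[1] & 0x7f) << 16) | (octets[2] << 8) | octets[3]
    mac = (0x01005e << 24) | low23          # 25-bit prefix + 23 address bits
    return '-'.join(f'{(mac >> s) & 0xff:02X}' for s in range(40, -1, -8))

print(multicast_mac('224.1.1.1'))     # 01-00-5E-01-01-01
print(multicast_mac('239.129.1.1'))   # 01-00-5E-01-01-01 (same MAC)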

IGMP is the protocol used by routers to exchange multicast group membership over a LAN. All IGMP messages are transmitted in IP datagrams. A protocol value of 2 in the IP header means an IGMP message. The IGMP message will have a header with the format:


Where:

  • Type could be:
    • Membership Query (0x11): sent by multicast router, and has two types of queries differentiated by the group address:
      • General: to learn which groups have members on an attached network
      • Group specific: to learn if a particular group has any members on an attached network
    • V2 Membership Report (0x16): sent by host to declare membership in group
    • V1 Membership Report (0x12): for backward compatibility with V1
    • Leave Group (0x17): Sent by host to leave a group
  • Max Response Time: specifies the maximum allowed time before sending a responding report, in units of 1/10 of a second (only meaningful in a membership query)
  • Checksum: same checksum algorithm used by IPv4
  • Group Address:
    • In Membership Query Message: set to zero when sending a General Query, and set to the group address being queried when sending a Group-Specific Query
    • In Membership Report or Leave Group message: valid IP multicast group address of the group being reported or group being left
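As an illustration, here is a hedged Python sketch that unpacks an IGMPv2 message laid out as above; the bytes, including the checksum, are invented for the example:

import struct

# An invented 8-byte IGMPv2 membership report for group 224.0.1.60.
msg = bytes.fromhex('16000afae000013c')
mtype, max_resp, cksum = struct.unpack('!BBH', msg[:4])
group = '.'.join(str(b) for b in msg[4:8])
print(hex(mtype))        # 0x16: v2 membership report
print(max_resp / 10)     # max response time in seconds (0.0 for a report)
print(group)             # 224.0.1.60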

During operation, each host uses the IGMP protocol to make itself known as a member of a group with a given multicast address. To join a group, the host sends an IGMP membership report message with the group multicast address in the group address field and in the destination IP address of the packet. A multicast router periodically issues general membership query messages to maintain a valid list of active group addresses. The query is sent to the "all systems on a LAN" address, 224.0.0.1.

Each time a router issues a general membership query, hosts must respond with a report message if they wish to maintain membership in the group. Hosts use a delayed-response strategy. They do not reply immediately, but after a random timer expires. The timer is started after the query is received. The host whose timer expires first will respond for a specific group. The other hosts on the LAN will suppress sending their own report after seeing the report already sent. Only one member needs to declare membership for all others to continue receiving the stream.

This is an example of an IGMP message:


Some interesting points to highlight here are:

  1. The protocol type in the IP header is set to 2, which means that the next header to expect is an IGMP message. Wireshark has already translated it for us, but even if it had not, you would have known this is an IGMP message just by looking at the protocol field in the IPv4 header. As a review: if this field had been 1, it would be an ICMP message; a 6 means the payload is TCP and that is the next header to expect; a 17 (0x11) means a UDP payload; and so forth.
  2. This is an IGMP version 1 report (type 0x12).
  3. The source machine, 10.60.0.132, is reporting membership in multicast group 224.0.1.60.

The IGMP portion of another actual IGMP message looks like this:


Some interesting points to consider:

  1. The source, 10.60.0.189, is sending this message to the multicast destination 224.0.0.1, which means it is sending it to all systems on the LAN.
  2. Type 0x11 means a query. This is sent by the multicast router to find out what group members are present.
  3. The Max Response Time is now present. Notice that the actual value seen in the header is hex 64. A quick translation shows that hex 64 is 100 decimal. As explained above, the number in this field is given in units of 1/10 of a second: 100 × 1/10 = 10 seconds. Fortunately, Wireshark has made this calculation for us, but that will not always be the case when you read a trace. Here, the inquiring router expects a response from members within 10 seconds of receiving this query.
  4. The multicast address shown here is 0.0.0.0 because this is a general query: the router is discovering which members are present, and members of all groups will send a membership report. Had it been a group-specific query, this field would carry the address of the group the router is asking about. For example, if the address here had been 224.1.1.60, the router would be asking whether any members of that group are present, and only members of 224.1.1.60 would respond.

For leaving a group, IGMPv1 and IGMPv2 behave differently. With v1, the host quietly leaves the group without alerting the router. The router proceeds to send 3 general queries, 60 seconds apart. If there are still members of that multicast group present, they will respond as explained above. If no IGMP report for the group is received, the group times out. That means that the worst-case delay will be around 3 minutes. Traffic will continue flowing during that time, even though no members of that group are present.

With IGMPv2, a host that wants to leave the group must specifically advertise its intentions to the router. It does that by sending a leave message to 224.0.0.2. Upon receipt of the leave message, the router sends a group-specific query to find out if there are still members in the group that require traffic to be sent. If no IGMP report is received within 3 seconds, the group times out. This is much more effective, since traffic stops flowing shortly after the last member leaves the group.

Notice that we have been talking about addresses like 224.0.0.1 and 224.0.0.2. Those are special cases of multicast addresses defined specifically for this task. All routers supporting IP multicast have been configured to listen and be a group member to those addresses and react accordingly.

The Reverse Path Forwarding (RPF) algorithm requires that, when a router receives a multicast message, it check its unicast routing table to determine which interface offers the shortest distance back to the source. If the message arrived on that interface, the router enters the information into its routing table and forwards the message to adjacent routers on all interfaces except the one it arrived on. Otherwise, the message is discarded. This mechanism ensures a loop-free tree with the shortest distance from the source to all recipients.

Two other important multicast concepts are pruning and grafting. Pruning means that if a router determines that there are no group members on its directly attached "leaf" networks (that is, it has no participants in a multicast), it sends a prune message to the upstream router to let it know that it should not forward the multicast down. This results in a smaller, more efficient spanning tree, in which all "leaves" have group members. Grafting is the reverse process: if a router that previously sent a prune message determines that it needs to start receiving the multicast again, it immediately asks the group's previous-hop router by sending a graft message. This mechanism assures quick re-establishment of previously pruned branches.

To review, see Multicasting.

 

5h. Use quality of service (QoS) principles and associated protocols like Multiprotocol Label Switching (MPLS)

  • What are the mechanisms available for IP to provide Quality of Service, QoS?
  • How is QoS measured?

In the early days of the internet, traffic was very different than it is today. The internet was a "best-effort" service with no traffic isolation, which meant that all packets were serviced FIFO (first in, first out) with no guarantee of service. Even though TCP provided end-to-end flow control, it did not guarantee fair sharing of the network, which encouraged greedy behavior: some non-compliant TCP implementations took advantage of this flaw to obtain a larger share of the network.

The traffic on today's internet can be classified into one of two classes: elastic or inelastic. Elastic traffic can adjust over wide ranges to changes in delay and throughput and still meet the needs of its applications. That is the type of traffic the early networks were designed to carry, and its QoS requirements are not stringent. Inelastic traffic, on the other hand, does not easily adapt to such changes and brings a new set of requirements on throughput, delay, jitter, and packet loss. This is, of course, the case with today's internet, with video streaming and other real-time applications. Traffic with these demanding requirements needs preferential treatment over less demanding traffic; that is exactly what inelastic traffic requires. Elastic traffic is still always present, however, and must be supported.

IP QoS refers to the performance of IP packets flowing through one or more networks. It is characterized by:

  • Service availability – the reliability of user's connection
  • Delay – the interval between transmitting and receiving a packet (also called latency)
  • Delay variation – the change in duration between all packets in a stream (also known as jitter)
  • Throughput – the rate packets are transmitted at
  • Packet loss rate – the maximum rate packets can be discarded at

The huge growth in traffic has put tremendous burdens on the internet. It was not enough to increase capacity; a traffic management framework was needed. Integrated services (IS) and differentiated services (DS) were developed to deal with this.

The Integrated Services (IS) architecture, defined in RFC 1633, is concerned with providing an integrated or collective service to the set of traffic demands placed on a given domain. IS providers view the totality of the current traffic demand and limit the demand that is satisfied to that which can be handled by the current capacity of the network. Resources are reserved within the domain to provide the QoS that a particular portion of the network requires. A popular reservation protocol is the resource reservation protocol, RSVP.

RSVP is specified in RFC 2205 and is characterized by:

  • Reserves for both unicast and multicast, adapting dynamically to changes in group membership and changing routes
  • Unidirectional reservations for simplex
  • Receiver-initiated reservation
  • Maintaining soft state in the internet, since state information expires unless regularly refreshed from the entity that requested the state
  • Different reservation styles
  • Transparent operation through non-RSVP routers
  • Support for IPv4 and IPv6

The Differentiated Services (DS) framework, described in RFC 2475, does not attempt to view the total traffic demand in an overall or integrated sense, but is instead a class-based model. A DS framework does not attempt to reserve network capacity in advance. Instead, packets are marked according to the service requirement, or group, to which they belong. The mark, referred to as a DS codepoint, is added to the packet in what used to be the type of service (ToS) field of the IPv4 header or the traffic class field of the IPv6 header. The service provided by network elements depends on group membership: packets belonging to different groups are handled differently. A service level agreement (SLA) is established between the service provider and the customer prior to the use of DS. The architecture provides an aggregation mechanism whereby traffic with the same DS field is treated the same. QoS is implemented in individual routers by queuing and forwarding packets based on the DS field.
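On many hosts you can experiment with this marking yourself. A hedged sketch for a Linux machine, where the IP_TOS socket option writes the old ToS byte (the destination address and port below are placeholders):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# DSCP 46 (Expedited Forwarding) occupies the upper 6 bits of the ToS byte.
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)
s.sendto(b'probe', ('198.51.100.1', 9))   # placeholder test destination
s.close()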

MPLS is a popular protocol to achieve a system that is similar to a DS architecture. MPLS provides traffic management and connection-oriented QoS support while speeding up the IP packet forwarding process and retaining the flexibility of an IP-based networking approach. MPLS creates tunnels known as label-switched paths (LSPs) across the network. Label-edge routers (LERs) map different classes of traffic, known as forwarding equivalence classes (FECs), to LSPs. The LER adds a label to the packet; the label indicates an LSP. Label-switching routers (LSRs) along the path forward the packets based just on the MPLS label. The LSR swaps the incoming label with an outgoing label. MPLS is characterized by the following principles:

  • Imposition of a connection-oriented framework on an IP-based internet, providing a foundation for sophisticated and reliable QoS traffic contracts
  • Simplifying the process for committing resources in such a way as to balance the load in the face of a given demand (traffic engineering)
  • Providing an efficient mechanism for supporting VPNs
  • Ability to be used with several networking technologies

Important points to consider with MPLS are:

  • LSPs can follow different paths between endpoints
  • All links can be utilized, not just shortest path
  • Traffic engineering reserves and guarantees bandwidth
  • LSPs can be set up for different levels of service

To review, see Quality of Service.

 

5i. Use protocols like Internet Control Message Protocol (ICMP) to configure and troubleshoot a network in both IPv4 and IPv6

  • What are the 4 variations of the "Destination Unreachable" ICMP message type?
  • Under what conditions will a router emit a "Redirect" ICMP message type?

ICMP is a supporting protocol that provides a useful troubleshooting and error-reporting tool for systems in a network. ICMP sits on top of IP: the ICMP header rides on top of the IP header, just like TCP or UDP. However, it is not a transport protocol. A couple of ICMP applications, like the ping and traceroute commands, are used by end users; the majority of ICMP messages, however, are exchanged by routers and hosts and are transparent to the end user.
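To make "sits on top of IP" concrete, this hedged sketch hand-builds an ICMP Echo Request (type 8) the way ping does; actually sending it would require a raw socket and administrator privileges, so only the construction is shown:

import struct

def internet_checksum(data):
    """One's-complement sum of 16-bit words, as used by ICMP."""
    if len(data) % 2:
        data += b'\x00'
    total = sum(struct.unpack(f'!{len(data) // 2}H', data))
    while total >> 16:
        total = (total & 0xffff) + (total >> 16)
    return (~total) & 0xffff

# Echo Request: type 8, code 0, checksum 0 for now, identifier 1, sequence 1.
payload = b'ping'
header = struct.pack('!BBHHH', 8, 0, 0, 1, 1)
cksum = internet_checksum(header + payload)
packet = struct.pack('!BBHHH', 8, 0, cksum, 1, 1) + payload
print(packet.hex())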

To review, see Internet Control Message Protocol, which gives a full list of all ICMP message types with their corresponding code numbers.

 

Unit 5 Vocabulary

This vocabulary list includes terms and acronyms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course.

Try to think of the reason why each term is included.

  • Class A, B, C, and D IP address
  • Supernetting and Subnetting
  • CIDR
  • ARP
  • DHCP
  • Autonomous System
  • Interior Routing Protocol
  • Exterior Routing Protocol
  • Distance Vector Routing
  • Link State Routing
  • Hierarchical Routing
  • Path Vector Routing
  • RIP
  • OSPF
  • BGP
  • IP Multicasting
  • RPF
  • IGMP
  • DVMRP
  • PIM
  • QoS
  • Integrated Services
  • Differentiated Services
  • RSVP
  • MPLS
  • LSP
  • FEC
  • LER
  • LSR
  • ICMP

Unit 6: The Link Layer

 

6a. Explain how physical addressing resolves IP addresses at the link layer

  • What ARP opcode is used to request the mapping of an IP address into a MAC address?
  • What is gratuitous ARP used for?

The Address Resolution Protocol is widely used as a means for a host to map and resolve an IP address into the MAC address of the host assigned to that IP address. Consider this Wireshark ARP Capture:


The Opcode is 1, meaning this is a request. The destination MAC is ff:ff:ff:ff:ff:ff, a broadcast meant to reach every host in the network. The host with the IP address 192.168.1.3 is asking for the MAC address of the target host with IP address 192.168.1.102. The target MAC field is set to all 0s, since this is exactly the address being asked for. Only the host configured with IP 192.168.1.102 will respond to this request.

Now, consider this capture:


The Opcode is 2, which means it is the response to the request shown above. The host with the IP address 192.168.1.102 is responding, letting the requester know that its MAC address is 20:f1:9e:d8:18:e8. The frame is sent to the actual MAC address of the host that made the request, 60:6c:66:0e:b0:19.

An interesting case of the ARP protocol is "gratuitous ARP". When a host attaches to the network and remains silent, all other hosts must issue ARP requests when they need to contact it. Some hosts do not want to remain silent, however, such as a freshly attached router. In this case, they issue a gratuitous ARP. Consider this trace:


This is a type 2 message (that is, a response), even though nobody issued a request. The source and destination IP are both the same: 10.0.0.6. No response is expected. This is a gratuitous ARP in which 10.0.0.6 is simply advertising itself. Each host in the network will add 10.0.0.6 with MAC 00:00:0c:07:ac:01 to its ARP table. The next time any host needs to send traffic to that destination, it can do so directly, without the original ARP request, since its ARP table already contains the MAC address.

Also, notice that the Ethernet frame in all these examples shows a type of 0x806. That is the type field for an Ethernet frame carrying an ARP message. If this value had been 0x800, it would have meant the frame was carrying an IP packet. The place where the IP header normally sits is occupied by an ARP header instead. A decoding sketch follows.
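Here is a hedged Python sketch that decodes the fixed portion of an ARP payload; the bytes are invented to resemble the request above:

import struct

# Invented 28-byte ARP request: who-has 192.168.1.102, tell 192.168.1.3.
pkt = bytes.fromhex(
    '0001080006040001'            # htype=1, ptype=0x0800, hlen=6, plen=4, opcode=1
    '606c660eb019c0a80103'        # sender MAC 60:6c:66:0e:b0:19 and IP 192.168.1.3
    '000000000000c0a80166')       # target MAC (all 0s) and IP 192.168.1.102
htype, ptype, hlen, plen, opcode = struct.unpack('!HHBBH', pkt[:8])
sender_ip = '.'.join(str(b) for b in pkt[14:18])
target_ip = '.'.join(str(b) for b in pkt[24:28])
print(opcode, sender_ip, target_ip)   # 1 192.168.1.3 192.168.1.102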

 

6b. Illustrate the methods for error control and flow control at the data link layer (DLL), including error detection codes and the sliding window protocol

  • Can you attain 100% utilization when using a go-back-n sliding window protocol?
  • What is the difference between a go-back-n and a selective repeat sliding window protocol?

There are three techniques at the link level for flow control: stop-and-wait, go-back-N, and selective-reject (also known as selective-repeat).

With stop-and-wait, the source transmits a frame. After it is received, the destination indicates its willingness to accept another frame in an acknowledgment, so the source must wait for the acknowledgment before sending the next frame. If the propagation time is very large compared to the frame time (the time that it takes for one full frame to be transmitted), throughput suffers: the source spends a short time sending the frame, then sits and waits for the frame to arrive at the other side and for the ACK to come back, leaving the medium mostly idle. The time to process the frame and the time required for the ACK to be generated must also be considered. A buffer must be present to save the transmitted frame in case it needs to be retransmitted; once the ACK is received, the buffered copy can be discarded, since the frame is already where it needs to be. The big problem is, of course, that there is only one frame in transit at a time. Stop-and-wait is rarely used because of this inefficiency.

What is the link utilization if we are using a stop-and-wait algorithm? We need to calculate the total time based on the above explanation, using two propagation times to account for the frame getting from source to destination and for the ACK to come back from destination to source:

T = Tframe + Tprop + Tproc + Tack + Tprop + Tproc

Where:

Tframe = time to transmit frame

Tprop = propagation time

Tproc = processing time at station

Tack = time to transmit ack

Suppose a 1 Mbps satellite channel with 1000 bit frames, a 270 ms propagation delay (Tprop), and a 100 bit ack frame.

Notice that:

Tframe = 1000 bits / 1 Mbps = 1 ms

Tack = 100 bits / 1 Mbps = 0.1 ms

Based on the above, and assuming that the processing times are negligible, the total time will be:

T = Tframe + 2 × Tprop + Tack = 1 ms + 2 × 270 ms + 0.1 ms = 541.1 ms

This means that the channel is being utilized for 1 ms out of a total of 541.1 ms. Utilization is therefore:

U = Tframe / T = 1/541.1 = 0.185%.

In this case, the rate is 1 Mbps, so the actual throughput will be

Throughput = 0.00185 × 1 Mbps ≈ 1.85 kbps

This is a big drop in the total possible throughput. This happens because the frame time is very small in comparison to the propagation time. As the frame time and propagation time become closer, utilization improves. Consider a short 1 km link with a propagation delay of 5 µs, a rate of 10 Mbps, frames of 1000 bits, and very small processing and ack times that can be disregarded:

Tprop = 5 µs

Tframe = 1000 bits / 10 Mbps = 100 µs

U = 100 µs / (100 µs + 2 × 5 µs) = 100/110 ≈ 91%

As the frame time grows relative to the round-trip propagation delay, utilization approaches 100%; if instead the frame time shrinks to twice the one-way propagation delay (10 µs here, a 100-bit frame), utilization drops to 50%, and so on. These calculations assume that there are no errors and frames do not have to be retransmitted – that would introduce new complexity that would need to be considered.
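To replay these numbers, here is a minimal sketch (the function name and structure are ours, not from the course):

```python
def stop_and_wait_utilization(rate_bps, frame_bits, ack_bits, t_prop, t_proc=0.0):
    """U = Tframe / T, with T = Tframe + 2*Tprop + 2*Tproc + Tack."""
    t_frame = frame_bits / rate_bps
    t_ack = ack_bits / rate_bps
    return t_frame / (t_frame + 2 * t_prop + 2 * t_proc + t_ack)

# Satellite example: 1 Mbps, 1000-bit frames, 270 ms propagation, 100-bit ACK.
u = stop_and_wait_utilization(1_000_000, 1000, 100, 0.270)
print(f"U = {u:.3%}")                           # ~0.185%
print(f"throughput = {u * 1_000_000:.0f} bps")  # ~1848 bps (~1.85 kbps)
```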

The sliding window technique offers much better utilization. Sliding window techniques allow multiple frames to be in transit at the same time: a source can send frames without waiting for acknowledgments, and the destination can accept, and buffer, n frames. The destination acknowledges a frame by sending an acknowledgment with the sequence number of the next frame expected (implicitly announcing that it is ready for the next n frames). This is the same technique used by the transport layer.

The most common form of error control based on sliding windows is the Go-Back-N technique. This technique uses a sender with a buffer to save unacknowledged frames, but a receiver with a window size of one frame. The number of unacknowledged frames that can be in transit is determined by the sender's window size. Upon receiving a frame in error, the destination discards that frame and all subsequent frames until the damaged frame is finally received correctly. All subsequent frames need to be discarded, since the size of the receiver buffer is only 1 frame. The sender resends the frame and all subsequent frames either when it receives a Reject message or when its timer expires.

Alternatively, selective-reject (or selective-repeat) can be used, where both the sender and receiver windows are greater than 1. Here, the sender sends multiple frames, and the receiver can save frames that are not received in the correct order. When a failure is detected, the sender resends only the missing frame, then resumes regular transmission from the last sent frame. The utilization, in this case, can be calculated from the size of the window in terms of the number of unacknowledged frames that it can hold:

U = (N × Tframe) / (2 × Tprop + Tframe)

where N is the number of frames that the window can hold, and U is capped at 100%. Any channel utilization can be achieved by increasing or decreasing the number of frames that the buffer window can hold. This must be handled with care, since buffer space is expensive: the design engineer must trade off channel utilization against buffer space and their associated costs.
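A companion sketch (same caveats) shows how the window size N drives utilization on the satellite link, and how large N must be to keep the channel fully busy:

```python
def windowed_utilization(n, t_frame, t_prop):
    """U = N*Tframe / (2*Tprop + Tframe), capped at 100%."""
    return min(1.0, n * t_frame / (2 * t_prop + t_frame))

t_frame, t_prop = 0.001, 0.270        # satellite example, in seconds
for n in (1, 10, 100, 541):
    print(f"N = {n:4d}: U = {windowed_utilization(n, t_frame, t_prop):.1%}")
# Full utilization needs N >= (2*Tprop + Tframe)/Tframe = 541 frames.
```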

Flow control techniques like these should not be confused with error control. Bit errors are sometimes introduced into frames. A mechanism is needed to detect those errors so that corrective action can be taken. Detecting errors requires redundancy. With error detection codes, enough redundancy bits are included to detect errors. One very popular method for error detection is called the check-digit method, which adds additional digits to the number to be sent to make it evenly divisible by a third number. As a simple example, assume that the number 645 is going to be sent. Assume that we decide to use 7 as the divisor.

  1. Step 1: Left shift number: 6450
  2. Step 2: Divide by 7 (921) and find remainder: 3
  3. Step 3: Subtract the result of step 2 from step 1: 6450 – 3 = 6447
  4. Step 4: Check that result is divisible by 7: 6447/7 = 921
  5. Step 5: Transmit 6447.
  6. Step 6: If the received number is evenly divisible by 7, it is assumed to be error-free. If is not evenly divisible by 7, it is assumed to have arrived with errors.

You can see that this method detects single-digit errors. Single-digit errors for the number 6447 will look like 5447, 6347, 6457, 6446. All of these errors are detected as none of the numbers is divisible by 7. Even multiple digit errors like 6567 or 5356 will be detected. However, the method does not detect some errors, like 5047 or 6587.
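The steps above translate directly into a few lines of Python (the names are ours):

```python
def checkdigit_encode(number, divisor=7):
    """Left-shift the number (x10), then subtract the remainder so the
    transmitted value is evenly divisible by the divisor."""
    shifted = number * 10
    return shifted - shifted % divisor

def checkdigit_ok(received, divisor=7):
    return received % divisor == 0

print(checkdigit_encode(645))   # 6447 (6450 - 3)
print(checkdigit_ok(6447))      # True  -> assumed error-free
print(checkdigit_ok(5447))      # False -> single-digit error detected
print(checkdigit_ok(5047))      # True  -> one of the undetected errors
```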

In networks, the most common technique for error detection is the cyclic redundancy check (CRC). CRC is a binary check-digit method, where r redundant bits are sent for m bits of data (we send a total of n = m + r bits). A large r causes large overhead, so we try to keep r much smaller than m. With ethernet, for example, we can check a frame of up to 12,000 bits (1500 bytes) with a 32-bit CRC code, or as it is commonly expressed, ethernet uses CRC-32. The technique involves defining a divisor polynomial G(x) and using a technique similar to the one described above, but of course, with binary numbers.

To very briefly explain the process, an n-bit message can be represented by a polynomial of degree n-1, where the value of each bit (0 or 1) is the coefficient for each term in the polynomial. Example: a message containing the bits 10011010 corresponds to the polynomial M(x) = x^7 + x^4 + x^3 + x. Let r be the degree of some divisor polynomial, G(x). What we need to do is transmit a polynomial that is exactly divisible by G(x). The transmitted polynomial will be P(x). If some error occurs during transmission, it will be as if an error term E(x) has been added to P(x); the received message will be:

R(x)= P(x) + E(x)

The received message R(x) is exactly divisible by G(x) only if 1) E(x) is 0 (there were no errors) or 2) E(x) is exactly divisible by G(x). By carefully selecting G(x) we can make sure that case 2) is extremely rare, so that we can safely conclude that a 0 result means that the message is error-free.
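The binary version is the same long division carried out in GF(2), where subtraction is XOR. A toy sketch using the example message above and an assumed degree-3 divisor (not one of the standard CRC polynomials):

```python
def mod2_div(bits, poly):
    """Divide a bit string by a generator polynomial in GF(2);
    return the remainder of len(poly)-1 bits."""
    work = [int(b) for b in bits]
    for i in range(len(bits) - len(poly) + 1):
        if work[i]:                       # subtract (XOR) the divisor
            for j, p in enumerate(poly):
                work[i + j] ^= int(p)
    return "".join(map(str, work[-(len(poly) - 1):]))

def crc_encode(msg, poly):
    r = len(poly) - 1
    return msg + mod2_div(msg + "0" * r, poly)   # P(x): divisible by G(x)

msg = "10011010"     # M(x) = x^7 + x^4 + x^3 + x
gen = "1101"         # assumed toy G(x) = x^3 + x^2 + 1
sent = crc_encode(msg, gen)
print(sent)                           # 10011010101
print(mod2_div(sent, gen) == "000")   # True: error-free arrival
```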

 

6c. Illustrate different framing techniques used at the DLL, such as length count, bit stuffing, and flag delineation

  • What actual frame will be sent on the wire for the frame 0110111111101111001110010 when using bit stuffing with a frame delineation flag of 01111110?
  • For the same frame, what actual frame will be sent when using flag delineation with character stuffing?

One of the functions of the data link layer is to divide data into frames. Frames must be clearly delineated for the receiving side to extract them from all the data being received. One approach is flag delineation with bit stuffing: a flag, a reserved bit pattern, marks the start and end of a frame, and a frame consists of everything between two delimiters. The obvious problem is that the data might contain a bit pattern exactly the same as the frame delimiter pattern. Bit stuffing removes that problem: extra bits are inserted into the data to break any pattern that resembles the frame delimiter. For example, assume that the 4-bit sequence 0111 is used as the frame delimiter. Insert a zero bit after each pair of consecutive 1s in the data, so that any instance of 11 becomes 110. That way, the data will never contain three 1s in sequence, and thus the flag pattern can never appear inside the actual frame. Of course, the extra inserted bits introduce overhead and wasted bandwidth; that problem is present with any method. The sketch below applies the same idea to the 8-bit flag used in the review question above.
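The review question uses the classic flag 01111110, where the stuffing rule is to insert a 0 after any run of five consecutive 1s. A minimal sketch of that rule (the names are ours):

```python
FLAG = "01111110"

def bit_stuff(data):
    """Insert a 0 after every run of five consecutive 1s so the flag
    pattern can never appear inside the frame body."""
    out, run = [], 0
    for b in data:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

frame = "0110111111101111001110010"    # frame from the review question
print(FLAG + bit_stuff(frame) + FLAG)  # what actually goes on the wire
```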

Sometimes, we might want to work with characters rather than individual bits. Character stuffing is a very similar process, where a full character is used as the delineation tool. Whenever the flag (or the escape character itself) appears inside the frame to be transmitted, it is prefixed with the appropriate ESC (or DLE) character. For example, if Flag is the character used to delineate the frame, and the frame to be sent contains something like 12FlagEsc34, then the system will stuff Esc characters to neutralize the Flag and Esc that are part of the data. What will be sent is Flag12EscFlagEscEsc34Flag. Notice that the Esc character is stuffed to break any flag or any other Esc character present in the frame.
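Character stuffing is even shorter to sketch; here single characters stand in for the Flag and Esc symbols:

```python
FLAG, ESC = "F", "E"   # single-character stand-ins for Flag and Esc

def char_stuff(payload):
    """Escape any Flag or Esc inside the payload, then frame it."""
    body = "".join(ESC + c if c in (FLAG, ESC) else c for c in payload)
    return FLAG + body + FLAG

print(char_stuff("12FE34"))   # F12EFEE34F ~ Flag12EscFlagEscEsc34Flag
```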

One final frame delineation technique is character count, though it is not used often today. In this technique, each frame is preceded by a character that specifies the number of characters in the frame, including the count character itself. For example, if there are two frames, one with three characters, c1-c2-c3, and the other with four characters, c1-c2-c3-c4, then this will be sent on the line:

4-c1-c2-c3-5-c1-c2-c3-c4

 

6d. Describe the difference between data link technologies, such as the Point-to-Point (PPP), Ethernet V2, and 802.3 protocols

  • What is the difference between ethernet V2 and 802.3? What do they have in common?
  • Are ethernet V2 and 802.3 backward compatible?

Several data link technologies have been proposed throughout the years. Ethernet came first, and then IEEE created the 802.3 protocol. They are similar, and both use CSMA/CD access mechanisms. The main difference is the way the ethernet frame is encapsulated (that is, the way the DLL header is constructed). Let's take a look at both an Ethernet V2 header and an 802.3 header.


They each begin with a 7-byte preamble consisting of alternating 1s and 0s, followed by the start frame delimiter (SFD), which is the binary sequence 10101011. The 6-byte destination address and 6-byte source address follow. Then comes a 2-byte field that determines what type the packet is. If this field is less than hex 05DC, it is a length field and the frame is an 802.3 frame. The next 3 bytes of an 802.3 frame will be the destination service access point (DSAP), source service access point (SSAP), and control fields. These form the logical link control header, which is described in the IEEE 802.2 specification. If the DSAP/SSAP/control fields happen to be AA/AA/03, then the frame is a SNAP frame, and they are followed by org-id and type fields; SNAP is used for backward compatibility between ethernet V2 and 802.3. Finally, both ethernet V2 and 802.3 frames end with a 4-byte frame check sequence that uses CRC to detect damaged or corrupted frames.

05DC is hex for 1500, the maximum payload length of an ethernet (or 802.3) frame. The ethernet V2 frame does not have a length field; rather, the field is used to indicate the type of data that follows. A value of 0800 means that this header is encapsulating an IPv4 packet. 86DD means that this header is encapsulating an IPv6 payload. The type value 8100 means that the frame is a VLAN-tagged frame. 0806 is the code for an ARP frame. Other type codes are available in the open literature.

As an example, consider the following captured, raw frame:

02 60 8C 67 69 74 02 60 8C 74 11 78 00 81 F0 F0
DA 3A 0E 00 FF EF 16 00 00 00 00 00 6b 16 19 01
FF 53 4D 2D 00 00 00 00 08 00 00 00 00 00 00 00
00 00 18 08 05 00 00 00 94 07 0F 2E 00 58 00 01
00 40 00 16 00 20 20 00 00 00 00 00 00 00 00 00

The first 6 bytes (02 60 8C 67 69 74) are the destination address for this frame, and the next 6 (02 60 8C 74 11 78) are the source address. The next 2 bytes (00 81) are either length or type; since this value is less than 05DC, we immediately know that it is a length and, as such, that this is an 802.3 frame. F0 F0 DA are the DSAP, SSAP, and Control values, respectively. Notice that the preamble and SFD are not normally shown.
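The length-versus-type decision is easy to automate; this sketch applies it to the first bytes of the capture above:

```python
raw = "02 60 8C 67 69 74 02 60 8C 74 11 78 00 81 F0 F0 DA"
frame = bytes(int(b, 16) for b in raw.split())

dst, src = frame[0:6], frame[6:12]
type_or_len = int.from_bytes(frame[12:14], "big")
if type_or_len <= 0x05DC:               # <= 1500: a length -> 802.3 frame
    dsap, ssap, ctrl = frame[14], frame[15], frame[16]
    print(f"802.3, length={type_or_len}, "
          f"DSAP={dsap:02X} SSAP={ssap:02X} CTRL={ctrl:02X}")
else:                                   # otherwise an ethernet V2 type code
    print(f"Ethernet V2, type=0x{type_or_len:04X}")
```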

One former DLL protocol was high-level data link control (HDLC). It was based on IBM's SDLC protocol, but it never caught on like the open standards ethernet and 802.3. Its framing was drastically different from ethernet: HDLC frames always start and end with the flag sequence 01111110 (hex 7E), and bit stuffing is used in case a sequence like that appears in the data portion of the frame. The complete structure of the frame is flag-address-control-data-fcs-flag. Another older data link protocol, PPP, was used as the main communication protocol between two routers. This contrasts with ethernet V2 and 802.3, which are meant as communication protocols between two hosts in the same LAN or between a host and a router. PPP was loosely based on HDLC, and its frame structure is similar.

 

6e. Describe how packet collisions in a network are controlled using carrier-sense multiple access with collision detection (CSMA/CD)

  • What improvements did collision detection bring to the CSMA protocol?
  • Under what conditions is CSMA a really effective protocol?

The way access to media is controlled in a data link layer network is called medium access control (MAC). In the real world, many stations contend for the same medium in an ethernet system, since one medium is shared among many hosts. Deciding who can access the medium first is the job of MAC protocols. Ethernet, of course, uses the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol.

When using CSMA/CD, the host senses the medium to determine if it is available for transmission:

  1. If the medium is idle, transmit
  2. If the medium is busy, continue to listen until the channel is idle, then transmit immediately
  3. If a collision is detected during transmission, immediately cease transmitting
  4. After a collision, wait a random amount of time, then attempt to transmit again (repeat from step 1)

The CSMA protocol has its roots in ALOHA, which was not efficient but accomplished its original goal. In an ALOHA system, stations simply transmitted without checking if anyone else was using the channel. Collisions could occur at any time, which meant that a machine could be nearly finished with a transmission when a second machine started to transmit, rendering the transmission in progress useless and all that time wasted. Channel utilization for pure ALOHA was only about 18%.

Slotted ALOHA came next, and was designed to try and improve channel utilization. Time was divided into slots that corresponded to the time it took for a standard-size frame to be transmitted. Stations were required to be a little more patient and wait until the beginning of the next slot before transmitting. This assured that a station that successfully grabbed the medium would see its full frame transmitted before a new frame transmission was attempted by any other machine. That improved channel utilization to about 36%. This was still relatively low, and a station with a frame ready had to wait for the next slot boundary even when nobody else was transmitting, leaving the medium idle and wasting time.
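Those percentages come from the classic throughput formulas S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, where G is the offered load; a quick check of the peaks:

```python
import math

def pure_aloha(g):    return g * math.exp(-2 * g)   # S = G*e^(-2G)
def slotted_aloha(g): return g * math.exp(-g)       # S = G*e^(-G)

print(f"pure ALOHA peak (G = 0.5):  {pure_aloha(0.5):.1%}")    # ~18.4%
print(f"slotted ALOHA peak (G = 1): {slotted_aloha(1.0):.1%}") # ~36.8%
```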

That led to CSMA, where stations would constantly sense the channel and transmit only if they sensed that it was not busy. This is now referred to as "persistent" CSMA, since the stations sense the medium constantly. This was a great improvement over ALOHA, and it is effective under low-load conditions where collisions are limited. The next step came with non-persistent CSMA, which improved upon persistent CSMA. Using this method, stations only sense the channel when they need to transmit. If they sense the channel is busy, they wait a random amount of time before trying again. In persistent CSMA, many stations sensing the channel could all start transmission at the same time, resulting in many collisions. With non-persistent CSMA, each station senses and tries again at a random, different time. This reduces the rate of collisions and improves channel utilization.

 

6f. Build and troubleshoot a variety of L2 networks using bridges, L2 switches, and repeaters

  • What are the main differences between a bridge, an L2 switch, and a hub?

Repeaters and bridges are the main building blocks of a simple L2 network, but they are very different devices. A repeater implements the OSI physical layer and extends the collision domain of the network. Repeaters, commonly known as hubs, simply take incoming traffic and retransmit it on all outgoing lines. That means that only one transmission is possible at a time, since multiple transmissions will result in collisions. The repeater is designed to extend the physical length of a network segment by digital signal regeneration. Basically, it repeats the data from its input to all of its outputs.

The basic idea of a bridge is to transparently interconnect LANs. A bridge is a smart device that learns the presence of hosts. Incoming frames are switched to one outgoing line accordingly. This, as opposed to a repeater, allows for many transmissions to happen at the same time.

The basic operation of a bridge can be described as follows:

  • Forwarding by relaying data frames
  • Discarding damaged frames through CRC
  • Listening and learning the actual location of stations attached to it
  • Filtering to prevent unnecessary (or unwanted) traffic from flowing through a particular LAN segment if there is no host to receive it
  • Maintenance of the forwarding (and filtering) database (FDB), which is used to determine if a frame will be forwarded or filtered, and where dynamically learned entries will age out

Consider this figure:


The bridge will learn that host C is located on Ethernet 2 and forward the traffic. It will also learn that B is on Ethernet 1 and filter the traffic, keeping it local to Ethernet 1. Finally, there are two very important concepts to remember. For the bridge to know that B is on Ethernet 1, B must have produced some traffic; upon listening to that traffic, the switch saves that information in the forwarding database. If A sends traffic to a host D, which does not exist, the switch treats it as an unknown address. The problem is that the switch has no way of knowing whether host D simply has not produced any traffic yet or is nonexistent. The bridge errs on the side of caution by flooding (broadcasting) the traffic to all of its ports, just in case host D is present. Every time a switch receives traffic destined to an unknown address, it treats it as broadcast traffic and floods it out of all of its ports. Think about this in relation to gratuitous ARP. Of course, if the traffic is destined to the broadcast address (ff ff ff ff ff ff), the switch will comply and treat it as a broadcast to all ports belonging to the same VLAN.

When networks started to get more dynamic, companies began following an "ethernet to the desk" strategy: each user's computer would now connect to its own port on a bridge, and the older "vampire taps" were no longer needed. At that time, bridges started to give way to layer 2 switches. The basic principle of the two devices is the same, but multiple users could now attach directly to the same switch. Frame handling began to be done in hardware most of the time, and multiple data paths could handle multiple frames at a time. Layer 2 switches could also do "cut through": as soon as the switch reads the destination address in the frame header, it queues the frame for immediate transmission without checking the frame's integrity. CRC checks could make store-and-forward bridges comparatively slow, and cut through dramatically improved the speed of switches. It had a cost, though, since corrupted frames could circulate around the network unnecessarily.

These concepts can be summarized in two figures:



 

6g. Use Virtual LANs (VLANs) to create multiple LANs in the same physical environment

  • What is the difference between a trunk (802.1q) and an access link in a VLAN environment?
  • What is the difference between dividing the network into VLANs and using subnetting to group traffic?

In its simplest form, a LAN is a broadcast (or flood) domain – a section of the network where any data link layer broadcast traffic is delivered to all end stations. Beyond those boundaries, broadcast traffic does not flow. The LAN boundaries are determined by cabling. Bridges receive and forward broadcast traffic, and devices on different LANs cannot see each other unless a device with ports in each LAN, like a router, routes the traffic between the two. Because broadcast traffic is delivered to every station in the LAN, LANs cannot be allowed to grow too large.

With multiple users in a single LAN, traffic can grow to unmanageable proportions. That is where VLANs become useful. A VLAN is an administratively configured broadcast domain. The network administrator determines which end station belongs to which VLAN. Broadcast traffic for one VLAN is only seen by members of that VLAN.

Normally, VLAN assignment is based on the physical port of the switch, but other methods can be used, like MAC-based or application-based VLANs. In the past, LANs were small and spanned only a single bridge. However, as networks grew and switches and routers were added, simple grouping became obsolete, especially if there were members of the same VLAN on different bridges. To deal with that problem, IEEE developed the 802.1q standard. 802.1q established a method for tagging ethernet frames with VLAN membership information. It works in conjunction with 802.1p, which is a layer 2 standard for prioritizing traffic at the data link layer. 802.3ac combines both and defines a frame format that implements both priority and VLAN information. An 802.3 frame with a value of 8100 in the type field is a tagged frame. The next 3 bits carry the priority, the next bit the canonical indicator, and the following 12 bits the VLAN tag. This diagram shows a regular, untagged 802.3 frame followed by a tagged one.


The 802.3ac tag section is "shimmed" into the original frame starting where the original Type/Length field was. The original place of the Type/Length field is now filled with type 8100, which means "this is a tagged frame". That is followed by a three-bit priority, a one-bit "canonical indicator" always set to 0 for ethernet, and finally, 12 bits of VLAN ID. 12 bits of VLAN ID allow for a total of 2^12 = 4096 different VLAN IDs. After that, the original frame continues normally with the original Type/Length field and the rest of the frame.
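The tag is simple enough to pack by hand. This sketch builds and re-parses the 4-byte 802.1Q shim (the field values are made up for illustration):

```python
def build_8021q_tag(priority, vlan_id, dei=0):
    """TPID 0x8100, then 3-bit priority, 1-bit canonical/DEI bit,
    and a 12-bit VLAN ID."""
    tci = (priority << 13) | (dei << 12) | (vlan_id & 0x0FFF)
    return (0x8100).to_bytes(2, "big") + tci.to_bytes(2, "big")

tag = build_8021q_tag(priority=5, vlan_id=40)
print(tag.hex())                                   # 8100a028
tci = int.from_bytes(tag[2:4], "big")
print(tci >> 13, (tci >> 12) & 1, tci & 0x0FFF)    # 5 0 40
```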

To use the protocol, you have to implement a VLAN registration protocol. The VLAN registration is propagated across the network. Incoming frames are tagged with a VLAN ID. Outgoing frames are untagged if needed. Tagged frames are sent between VLAN switches. The following terms are very important for VLANs:

  • Tagged frames are frames with a VLAN tag
  • Trunk links allow more than one VLAN frame through them, and can attach two VLAN-aware switches to carry frames with different tags
  • Access links reside at the edge of the network where legacy devices attach. They carry untagged traffic for VLAN-unaware devices; VLAN-aware switches add a tag to frames received on an access link and remove it before transmitting out of one
  • Hybrid links carry tagged and untagged traffic, and allow VLAN-unaware hosts to reside in the same VLAN

A clear port is similar to an access port. It will always accept clear frames, and also accept tagged frames, but only if the tagged frame belongs to the native VLAN or a VLAN statically configured to that port. All other frames are dropped. It also removes any configured tags before transmitting frames.

An 802.1q port is the same as a trunk port. It transmits traffic with any configured tags. However, it only accepts clear (non-tagged) frames, or tagged frames that belong to native VLANs or those statically bound to the port.

That brings us to the concept of port binding. The native VLAN is the VLAN whose VLAN tag is inserted into non-tagged traffic received on the port. MAC addresses are learned as belonging to the native VLAN of the port only. A port can also be statically bound to any other VLAN. As such, it is configured to accept traffic with a VLAN tag different from the native VLAN. Multiple VLANs can be statically configured on a port. The port will forward traffic belonging to the statically configured VLANs and drop any traffic with a different tag.

This figure clarifies these concepts.


  • Incoming traffic: Clear
    • Outgoing traffic: Tagged with VLAN 20
  • Incoming traffic: Tagged with VLAN 40
    • Outgoing traffic: Tagged with VLAN 40
  • Incoming traffic: Tagged with VLAN 50
    • Outgoing traffic: Dropped

 

6h. Illustrate the Spanning Tree Protocol (STP), why it is needed, and how it breaks loops in an L2 network

  • Why is the spanning tree protocol essential in multi-switch networks?
  • How is the root bridge in a spanning tree selected?

As networks grow, the possibility of involuntarily (or voluntarily) creating loops increases. Loops can wreak havoc on a network that is based only on transparent bridging, especially in the presence of broadcast traffic. Because of the loop, traffic that was already forwarded by the bridge will come back to the input and be sent out again. This duplication of packets causes network storms that degrade performance and, in most cases, render the network basically useless.

Spanning Tree Protocol (STP) was developed to solve the active loop problem. STP configures an active topology of bridged LANs into a single spanning tree, so there is at most one logical path between any two LAN segments. This eliminates network loops and provides a fault-tolerant path by automatically reconfiguring the spanning-tree topology as a result of a bridge failure or breakdown in the path.

STP operates transparently to the end nodes; it is implemented by the switches in the network, and end hosts are unaware of its operation. IEEE standard 802.1d describes a spanning tree algorithm that has been implemented by most bridge manufacturers. This standard defines each bridge to have a bridge ID. The ID is a 64-bit value composed of a 16-bit priority followed by the 48-bit MAC address of the bridge.

The creation of a spanning tree starts with the selection of a root bridge. The root bridge provides a point of reference within the bridge LAN that makes the process of creating the spanning-tree faster. When the network is first brought up, all bridges participating in the spanning tree process talk to each other. The root bridge is selected based on its bridge ID. The bridge with the lowest valued bridge ID will become the root bridge. The bridge ID consists of the bridge priority followed by the bridge MAC address. If a network administrator wants a particular bridge to become the root, all they need to do is set the priority to a low value. If the priority is the same for each bridge, then the lowest MAC becomes the root. Reselection of a root bridge happens again in the event of a network reconfiguration, such as when a new bridge is connected to the network or an active bridge or link fails.
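Since the bridge ID is simply a priority followed by a MAC address, the election reduces to picking the minimum; a sketch with made-up bridge IDs:

```python
# Bridge ID = (priority, MAC address); the lowest ID wins the election,
# with the MAC address acting as the tie-breaker. Values are made up.
bridges = [
    (32768, "00:1a:2b:3c:4d:5e"),
    (32768, "00:1a:2b:3c:4d:01"),   # would win on MAC if priorities tied
    (4096,  "00:ff:ee:dd:cc:bb"),   # lowered priority -> wins outright
]

root = min(bridges)   # tuple comparison: priority first, then MAC string
print("root bridge:", root)         # (4096, '00:ff:ee:dd:cc:bb')
```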

After a root bridge has been selected, a Designated Bridge is selected for each LAN segment in the network. This happens when the network is brought up, or when there is a topology change (when a new bridge is added or when an active link or bridge fails). The designated bridge for each LAN is selected as the bridge with the port with lowest root path cost. In the event of equal path costs, the bridge with the lowest bridge ID is selected as the designated bridge. To exchange all the information required for the selection of the root and designated bridges, bridges use a unique packet called a Bridge Protocol Data Unit or BPDU. BPDUs carry all the information needed by all switches to determine that a loop-free network exists. Bridges use a special multicast address in order to communicate amongst themselves using BPDUs. 802.1d defines the address 01-80-C2-00-00-00 as the multicast address for BPDUs. All 802.1d compliant bridges must use this address. The BPDU looks like this:


At any given time, a bridge's port will be in any of the following states:

  • The Blocking State: Port does not relay frames between LAN segments. Received BPDUs are still processed in order to maintain the spanning tree. The learning process does not add station information from a blocked port to the filtering database. This state is entered upon bridge initialization, or from the Disabled state if enabled via bridge management. This state can also be entered from the Listening, Learning, or Forwarding states if a BPDU indicates that another bridge is a better designated bridge for the attached LAN.
  • The Listening State: Port is preparing to participate in relaying of frames. No frames are relayed (to prevent loops during spanning-tree topology changes). BPDUs are both received and transmitted for participation in STP. This state is typically entered from the Blocking state after STP protocol has determined that the port should participate in frame relay. It is typically left upon expiration of the protocol timer and entering into the Learning state.
  • The Learning State: Port is preparing to participate in relaying of frames. No frames are relayed while in this state. Learning process is enabled to prevent unnecessary relay frames once the Forwarding state is entered. BPDUs are both received and transmitted for participation in STP. This state is entered from the Listening state upon expiration of the protocol timer.
  • The Forwarding State: Port actively relays frames between LAN segments and the learning process is active. This state is always entered from the Learning state and it may be left to go to the Blocking or Disabled states by either spanning tree or management action.
  • The Disabled State: This state is entered from any other state via management directive. No frames are relayed and no BPDU is examined or transmitted. It is left by management directive.

 

6i. Describe different allocation methods used by the data link layer

  • What access allocation mechanism provides a more deterministic behavior in an L2 network?

In a token ring topology, all hosts are attached to a ring. In order to access the channel, they must wait for a circulating token to arrive. When the token arrives, they grab it, and data flows for a certain amount of time. The time that the token can be kept, and that data can be sent, is selected when the ring is initializing.

The token ring has fallen out of use and is barely seen, if at all, in networks today. What are the typical disadvantages of a token ring configuration? To start, every station in a token ring is a single point of failure: since all stations are connected in a physical (or logical) ring, any host that fails breaks the ring, and the network goes down. Contrast that with ethernet using CSMA, where every station works independently and the failure of one does not affect the operation of the others. Later improvements in token ring technology addressed this issue with bypass switches that bypass a host if it fails. However, the single point of failure simply moved from the host to the bypass switch. In addition, every time a host fails and the bypass switch takes over, a full reconfiguration and convergence of the network must take place. Stations must stop all traffic forwarding and enter a reconfiguration state to determine things like the maximum time to hold the token and the new time to wait for the token to come back before announcing a lost-token condition and a possible ring breakage.

The token ring technology did have some good properties after all. For example, token ring can be relatively deterministic. Once the token is released, it is relatively easy to calculate when the next frame with data will arrive, based on the number of stations in the ring and the pre-selected token hold time for each station. Ten stations down the ring with a hold time of 5 msec means that you should expect to see the token back in 50 msec. If it has not arrived by then, the station initiates a "lost token" recovery process.

Token ring also allows the use of large or small frames, as opposed to CSMA, whose minimum frame size depends on the round-trip time between the two farthest stations in the network. The minimum frame size on CSMA must therefore be large if the network is large. With a token ring, all you need to do is possess the token to transmit; your frames can be small without any effect on the operation of the network.

Another good feature of the token ring is its relatively steady performance under heavy loads as compared to CSMA. Heavy load means that the medium is busy most of the time, so a CSMA host will often be unable to transmit, or if it does, multiple collisions can occur. Token ring, on the other hand, performs the same under low and high traffic load, since your use of the network depends on the round-trip time of the token. Under low load, on the contrary, CSMA performs much better: low load equals fewer collisions, and that translates into better throughput. A token ring host must wait for the token to arrive before transmitting new data, subjecting it to, at times, substantial delays even if the medium is relatively open.

The final relative advantage of token ring is its support for priority assignment. A token can be labeled as high priority, and hosts with lower-priority traffic cannot send until the priority of the circulating token is lowered to a value at or below theirs. A doctor's office requesting vital information during surgery is a classic example of where this could be useful.

Token ring was originally greatly favored by IBM and standardized as IEEE 802.5. Token bus, standardized as IEEE 802.4, was a variation of the traditional token ring where the physical ring was replaced by a "logical" ring, even though machines were using a bus rather than an actual ring. Then there is CSMA with collision avoidance (CSMA/CA), developed for wireless networks, which is considered next.

 

6j. Use the 802.11 protocol to build and use wireless networks

  • What is the difference between DSSS and FHSS transmission methods in a wireless network?
  • Under what conditions would DSSS work better than FHSS?
  • What conditions in your network would move you to replace 802.11ac with 802.11ax?

802.11 uses a CSMA/CA access mechanism, where collisions are avoided before they can happen. This is accomplished by having devices reserve the channel before transmitting. The intended sender sends a short message called a request to send (RTS). The other side responds with a clear to send (CTS). The channel is then available for use during that reservation period.

WiFi networks are classified based on their complexity, as follows:

  • Independent basic service set (IBSS), also known as ad hoc, is a system of peer-to-peer hosts that communicate with each other and is not intended to be attached to the internet.
  • Basic service set (BSS) is composed of users that wirelessly attach to an access point (AP). The AP is attached to a wired LAN with access to the internet. There is only one AP, and access to the internet is limited to users within a short radius of the AP.
  • Extended service set (ESS), otherwise known as an infrastructure network, contains more than one AP that multiple users can connect to. The APs are attached to a wired LAN with access to the internet. Users can freely move from one AP to the next and enjoy uninterrupted access.

To provide privacy during communications, two methods are normally used: direct sequence spread spectrum (DSSS) and frequency-hopping spread spectrum (FHSS).

DSSS systems generate a redundant bit pattern (chip) known by both sides for each bit to be transmitted. To a third-party user, the DSSS appears as low-power wideband noise. No message can be recovered without the knowledge of the chip pattern.

FHSS uses a narrow carrier that changes frequency in a pattern known to both the sender and the receiver only. If both are properly synchronized, it appears as a single channel. To the unintended user, it appears to be a short pulse of noise.

DSSS operates with a lower signal-to-noise ratio and can operate over longer distances than FHSS. However, due to the way it spreads energy across the frequency spectrum, DSSS is more prone to interference than FHSS. For that reason, FHSS should be considered in places with high interference and electromagnetic noise.

The original 802.11 release operated at a frequency of 2.4 GHz and achieved a maximum bandwidth of 2 Mbps. Multiple releases have followed, starting with 802.11a and continuing up to the current 802.11ax. 802.11ax offers many features that align it with the growth of the internet of things (IoT), whose goal is for every device to be attached to the internet. The 802.16 wireless standards, also known as WiMax, use a connection-oriented architecture, with each connection getting one of four different classes of service: constant bit rate, real-time variable bit rate, non-real-time variable bit rate, and best effort. Constant bit rate was developed for the transmission of uncompressed voice. Real-time variable bit rate is intended for compressed multimedia in a real-time environment. Non-real-time variable bit rate is intended for the transfer of large files that are not real-time.

 

Unit 6 Vocabulary

This vocabulary list includes terms and acronyms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course.

Try to think of the reason why each term is included.

  • ARP
  • ARP opcode
  • Gratuitous ARP
  • Sliding Window
  • Go-Back-N
  • Selective Repeat
  • Selective Reject
  • CRC
  • Frame delineation
  • Bit stuffing
  • Character stuffing
  • Character Count
  • Ethernet V2
  • ALOHA
  • 802.3
  • 802.4
  • 802.5
  • CSMA
  • CSMA/CD
  • CSMA/CA
  • Token Ring
  • Token Bus
  • HDLC
  • PPP
  • Repeater/hub
  • Bridge/Switch
  • Bridge FDB
  • Store and Forward
  • Cut Through
  • VLAN
  • 802.1Q
  • Tagged frame
  • Access, Hybrid, and Trunk port
  • Port Native VLAN
  • Port VLAN binding
  • STP
  • BPDU
  • Blocking, Forwarding, Learning, Disabled port state
  • WiFi
  • WiMax
  • 802.11
  • 802.16
  • DSSS
  • FHSS
  • BSS
  • IBSS
  • ESS
  • Ad Hoc WiFi

Unit 7: Multimedia, Security, and Cloud Computation over the internet

 

7a. Compare application protocols, such as Voice over Internet Protocol (VoIP) and Internet Protocol television (IPTV)

  • What protocols are involved for an IPTV system to provide full video streaming?
  • What are some of the codecs used by media streaming companies today?

With the emergence of digital networks and the internet, Voice over IP (VoIP), or IP Telephony, began to be used to deliver voice and multimedia over IP. IPTV is closely related and refers to the delivery of television content over IP.

Audio and video are analog signals. Before sending these signals through digital infrastructure, the sender must transform them into a digital bitstream, and the receiving side must decode the bitstream to recover the analog video or audio content. A codec, short for coder/decoder, is the computer program used to do that. Both sides must agree ahead of time on which codec will be used, so that both use the same mechanisms to encode and decode; the agreement is made when the session is established using a protocol like SIP. Codecs like MPEG combine both audio and video in one. Another big advantage of codecs is that they can also compress the data to reduce transmission bandwidth; sending uncompressed video results in huge demands for bandwidth. Some popular codecs include AAC-LD (FaceTime), Opus (WhatsApp), SILK (Skype), and G.722, G.711, and MPEG-4 for video on demand. Once the signal is in digital form, several protocols are needed to establish and maintain the connection and to deliver the content. Protocols like SIP, RTP, and RTCP help with that task.

Let's now consider bandwidth requirements if uncompressed multimedia streams are to be sent. The introduction of multimedia streaming created many challenges for TV service providers. As an isolated example, consider the case where a service provider wants to offer high-definition TV (HDTV) with a resolution of 1080p to its customers. What bandwidth would be required to do this?

Resolution is normally expressed as horizontal pixels by vertical pixels. So, an HDTV of 1920 × 1080 means that the screen will be 1920 pixels in the horizontal direction and 1080 pixels in the vertical direction. In other words, one single frame has 1920 × 1080 = 2,073,600 pixels. We also need to consider the number of frames per unit of time. To make it simple, assume that 30 frames are sent every second. That means that we would need to send 30 × 2,073,600 = 62,208,000 pixels per second. Finally, each pixel is represented by bits, since information is being sent digitally. How many bits are there per pixel? A rule of thumb is that for true color depth, you need 3 bytes per pixel, which is equal to 24 bits per pixel. 

Therefore, the bandwidth is:

Bandwidth = (62,208,000 pixels/second) × (24 bits/pixel) = 1,492,992,000 bits per second.
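As a quick sanity check of that arithmetic (a throwaway sketch; the variable names are ours):

```python
width, height = 1920, 1080     # 1080p resolution
fps = 30                       # assumed frames per second
bits_per_pixel = 24            # true color: 3 bytes per pixel

bandwidth = width * height * fps * bits_per_pixel
print(f"{bandwidth:,} bps = {bandwidth / 1e9:.2f} Gbps")   # ~1.49 Gbps
```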

This is almost 1.5 Gbps. You can see why compression techniques and high-bandwidth capabilities like those provided by fiber optics are so important. To review, see SIP and RTP and IPTV.

 

7b. Describe some typical challenges for TCP/IP security and their solutions

  • How are symmetric and asymmetric key encryption different?
  • How are symmetric and asymmetric key encryption similar?

Today's business climate requires companies to establish and maintain:

  • extranets to create links with suppliers and business partners;
  • intranets, the costly wide area networks that link facilities across the world; and
  • the ability to support remote users when employees need to telecommute or access the company network while traveling.

Before the internet became the preferred tool, companies relied on leased private lines. However, the internet is much more convenient, and allows for vastly increased productivity. Needless to say, security issues can be a problem, such as the privacy risks of confidential data being intercepted as it is accessed remotely. There is also the danger of losing data integrity, which involves the modification of confidential or non-confidential data. Finally, identity spoofing is a risk, where intruders impersonate users and gain access to confidential information.

The problem with the internet is that IP routing methods make large IP networks vulnerable to issues like spoofing, where one machine in the network imitates another, sniffing, where one machine eavesdrops on the transmissions between two other machines, or session hijacking, where an attacker employs both of these techniques to take over an established communication and masquerade as one of the communicating parties.

There are many methods for providing security over the internet. The simplest is basic data encryption. Data encryption simply means to take the original data and apply a key to encrypt it so that it appears garbled or otherwise unreadable to any unintended users.

Encryption is accomplished by:

  • taking an unencrypted message (plaintext);
  • applying an encryption algorithm using a secret "key" to generate encrypted text, called "ciphertext";
  • transmitting the encrypted ciphertext message;
  • having the recipient apply the same algorithm to the ciphertext using the same key to recover the original plaintext message.

Most encryption and hashing algorithms are complex, but we can see how they work by considering a simple XOR function. XOR encoding works by taking a bit pattern for the message, M, and performing an XOR function with bit pattern K (the encryption key) to get bit pattern C, the ciphertext that is sent through the network. The receiver will perform a similar procedure by applying XOR to C with K to get back M, the original message.

On the sending side:

M = Original Message   = 00111010 11110110 00001111
K = Encryption Key     = 11100011 01010101 11110000    (XOR)
C = Ciphertext         = 11011001 10100011 11111111

As you can see, C is a completely different message that is sent through the internet.

On the receiving side:

C = Ciphertext         = 11011001 10100011 11111111
K = Encryption Key     = 11100011 01010101 11110000    (XOR)
M = Recovered Message  = 00111010 11110110 00001111

The recovered message is the same as the original message.
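The whole round trip fits in a few lines of Python (using the exact bit patterns above):

```python
M = 0b00111010_11110110_00001111    # original message
K = 0b11100011_01010101_11110000    # shared secret key

C = M ^ K            # ciphertext sent through the network
R = C ^ K            # receiver XORs with the same key
print(f"C = {C:024b}")   # 110110011010001111111111
print(R == M)            # True: plaintext recovered
```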

If the encryption key can be kept secret between both parties, this method can work. However, a skilled hacker can, within a few seconds, break this kind of code and recover the key. In the real world, several more complex encryption algorithms based on mathematical transforms are used for encryption, such as:

  • Data Encryption Standard (DES, 56-bit key)
  • Triple Data Encryption Standard (3DES, 168-bit key)
  • Advanced Encryption Standard (AES, 128, 192, and 256-bit keys)

Even with these, high-tech hackers can find ways to crack keys that are used repeatedly. The "key" to having a safe key is to change keys often. However, by doing this, making sure that both parties have the most recent key becomes a big challenge.

Consider symmetric key encryption, where both sides use the same key to encrypt and decrypt; that key is called a "shared secret". A better approach can be asymmetric encryption, which uses private and public keys. The public key encrypts data, while the private key decrypts it. If A wants to send data to B, A will use B's public key to encrypt, and B will decrypt using its own secret key. The secret key is never shared, which provides a level of security not present with symmetric key encryption. This method uses complex math based on modular arithmetic, and key distribution is easier, since the public key can simply be broadcast or stored in a public shared database like a certificate authority. The private key always stays with the owner. How does an algorithm whose key everyone knows work? By knowing the public key, you only know half of what you need to know:

Let

m = original message

K+b = B's public encryption key

K-b = B's private decryption key

K+b (m) = encrypted ciphertext using B's public key

K-b (m) = decrypted message using B's private key

For public key cryptography, you encrypt with the public key and decrypt with the private key, such that:

K-b (K+b (m)) = m

An intruder, C, can't decrypt the message, since C does not have K-b, and

K-c (K+b (m)) ≠ m

The decrypted message by the intruder C will not be the right one.
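To see the public/private pattern concretely, here is a toy demonstration using textbook RSA numbers (RSA is one well-known such algorithm, though this section does not name one; these values are deliberately tiny and far too small for real security):

```python
p, q = 61, 53       # small primes, illustration only
n = p * q           # 3233, shared by both keys
e = 17              # public exponent:  K+b = (e, n)
d = 2753            # private exponent: K-b = (d, n); e*d = 1 mod (p-1)(q-1)

m = 65                       # message, encoded as a number < n
c = pow(m, e, n)             # anyone can encrypt with B's public key
print(c)                     # 2790
print(pow(c, d, n) == m)     # True: only the holder of d recovers m
```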

One potential issue is that asymmetric algorithms can be much slower (up to a thousand times slower, in fact) than symmetric algorithms. There are ways to overcome this limitation, however.

 

7c. Improve TCP/IP security by using security protocols

  • How does the Diffie-Hellman algorithm work?
  • What is IPsec used for?

One way of using some of the advantages of symmetric key cryptography without the burden of asymmetric key cryptography is via a method that allows hosts to dynamically create and share secret keys using a public network. This is like having the best of both worlds.

The best-known and most-used algorithm to accomplish this is Diffie-Hellman. When using Diffie-Hellman, there is no need to store secret keys for long periods of time, which reduces risk. With Diffie-Hellman, the nodes agree on two values ahead of time: P (a prime number larger than 2) and G (an integer smaller than P). These values can be made public. Each node also selects its own private value X, which is less than P-1. After that, each node calculates a new value Y = G^X mod P. Y is a public key and can be exchanged through the internet.

However, the public key Y is useless without its other "half", the private (secret) key X. When receiving the other side's public key Y, each node calculates a common secret key Z = Y^X mod P. This uses modulo mathematics: modulo (or mod) is an operation that divides two numbers and returns the remainder. For example, 64 mod 11 = 9, because 64/11 = 5 with a remainder of 9. Z is derived from the host's own secret key X and the other host's public key Y, and both sides arrive at the same number. Because of this, Z can be used as the key for a symmetric method.
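A complete exchange with toy numbers (real deployments use primes hundreds of digits long):

```python
P, G = 23, 5                 # public parameters, agreed ahead of time

x_a, x_b = 6, 15             # each side's private X (less than P-1)
y_a = pow(G, x_a, P)         # public Y = G^X mod P -> 8
y_b = pow(G, x_b, P)         #                      -> 19

z_a = pow(y_b, x_a, P)       # each side computes Z = Y^X mod P
z_b = pow(y_a, x_b, P)
print(z_a, z_b, z_a == z_b)  # 2 2 True: the shared symmetric key
```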

 

7d. Illustrate how IT professionals use Virtual Private Networks (VPNs) to enhance security in the workplace

  • What are the various kinds of VPNs? What are each of their strengths and weaknesses?
  • What is the difference between the Tunnel and Transport modes of IPsec? When is one preferred over the other?

VPNs use the public internet to carry private communications safely and inexpensively. VPNs supply network connectivity over long physical distances, and are a form of a WAN over public networks instead of private leased lines. VPNs support remote access client connections, LAN-to-LAN internetworks, and controlled access within an intranet. VPNs are based on a "tunneling" strategy – packets in one of several VPN protocol formats are encapsulated within IP packets.

One of the original VPN protocols was PPTP, which was developed by Microsoft. It was easy to configure, had low overhead, and ran much faster than its counterparts. Windows supported it by default, which made it attractive to some users. However, the lack of strong security features made it less desirable than its main rival, Layer 2 Tunneling Protocol (L2TP). The L2TP protocol constructs a tunnel to encapsulate L2 data in an IP packet via the public internet. L2TP has lots of industry support and runs over any transport. It makes a remote user look as though they are connected to the corporate network, even if they access it through the internet. The one big weakness of L2TP is its lack of encryption and security on a per-packet basis. L2TP, by itself, does not provide security; it was designed to work alongside IPsec. L2TP defines the protocol for the establishment of the tunnel, leaving IPsec to negotiate the security parameters and send encapsulated packets.

IPsec is a framework of open standards for ensuring secure private communication over IP networks. It ensures confidentiality, integrity, and authenticity of data communications across a public IP network using the Diffie-Hellman key exchange between peers on a public network. It uses public key cryptography for signing the Diffie-Hellman exchanges, and digital certificates signed by a certificate authority to act as digital ID cards. It offers two modes of operation: transport and tunnel. In transport mode, only the IP payload is encrypted, and the original IP header is left intact. This adds only a few bytes to each packet and allows other devices on the public network to see the final source and destination of the packet. It is better suited for telecommuting and remote access. In tunnel mode, the entire original IP datagram is encrypted and becomes the payload in a new IP packet. The company's internal IP addresses are hidden from the public network; only the tunnel endpoint addresses are sent as clear text. Tunnel mode is better suited for site-to-site connectivity. IPsec adds a header to each packet. Two headers are possible, the Authentication Header (AH) and the Encapsulating Security Payload (ESP) header; they can be used independently or together.

 

7e. Evaluate remote access and directory services

  • What are the differences between the two popular remote access protocols RDP and VNC?

Remote Desktop Protocol (RDP) is Microsoft's proprietary remote access protocol, while Virtual Network Computing (VNC) is an open, platform-independent remote access protocol. They are both designed to allow remote access to or control of another computer. VNC normally connects directly to computers, while RDP typically connects to a shared server. By default, RDP servers listen on TCP port 3389, while VNC uses port 5900. The single most important issue to consider with RDP is that it is only compatible with Windows-based systems. For Windows-based platforms, it is the way to go. VNC, on the other hand, is platform-independent, so it should be the protocol of choice for multi-platform environments.

To review, see VNC and RDP.

 

7f. Apply fault tolerance techniques to improve network reliability

  • What are two popular methods for improving network reliability by implementing redundancy?

Although it was not originally designed to do so, the spanning tree protocol can be used to provide redundancy by creating two parallel (but different) paths to each segment of a network. This creates a redundant path, but comes with the cost of forming network loops. STP breaks all loops in the topology, which allows the network to continue operating and to quickly converge to a new loop-free topology if a switch or link fails.

Virtual Router Redundancy Protocol (VRRP) was designed from scratch to provide router redundancy and increase the reliability of a network. VRRP eliminates the single point of failure inherent in static default-routed environments. VRRP specifies an election process, in which one router declares itself the master router and the other becomes the backup. The backup router monitors the master's availability by receiving periodic VRRP master advertisements. If the backup fails to receive the master advertisements, it will assume ownership of the default router IP address. Cisco has also created a proprietary protocol called Hot Standby Router Protocol (HSRP). VRRP was based on HSRP, and both lack load balancing. Cisco overcame that limitation by developing GLBP, which added load balancing, but again only in Cisco environments. The counterpart in a multivendor environment is the Common Address Redundancy Protocol (CARP). The main use of CARP is still to provide failover redundancy, but with the addition of load-balancing functionality.

 

7g. Describe the basis of cloud computing over the internet

  • What is the difference between the DaaS, SaaS, and FaaS cloud computing platforms?

Cloud computing refers to using resources available in the "cloud" (that is, on other servers) to have a vast supply of resources without the need to have them physically available. Cloud computing can be of great help for enterprises that want to expand their offerings and services without investing in high-tech infrastructure. There are, of course, some limitations to cloud computing. For example, when using cloud computing, you rely on a third party, and you must use their systems as-is, which gives you limited or no customization options. Also, since cloud computing resources are owned and controlled by a third party, you do not have control over things like downtime or security. Despite these limitations, cloud computing is widely used. 

Cloud computing works using a service-oriented platform like Desktop as a Service (DaaS), Infrastructure as a Service (IaaS), Software as a Service (SaaS), Function as a Service (FaaS), and more. DaaS is a platform where a cloud provider hosts the back-end that is required for a typical desktop infrastructure operation. With DaaS, desktop operating systems run inside virtual machines on servers in the cloud provider's data center. In IaaS, the cloud provider maintains the infrastructure components that would normally be in the customer's data center. This includes things like servers, storage, and networking hardware. In SaaS, software is provided and hosted by the cloud provider, and is normally accessed via the internet from the customer's premises on a subscription basis. Software does not need to be installed on individual computers. With FaaS, customers can offer on-demand application functionality without hosting the application on their premises. Customers normally pay when an action occurs, and when it is done, everything stops, making it very cost-effective for the customer.

To read more about cloud services, see Cloud Computing.

 

Unit 7 Vocabulary

This vocabulary and acronym list includes terms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course.

Try to think of the reason why each term is included.

  • VoIP
  • VOD
  • IP Telephony
  • Codec
  • Ciphertext
  • Symmetric key encryption
  • Asymmetric key encryption
  • Diffie-Hellman
  • VPN
  • PPTP
  • L2TP
  • IPsec
  • Tunnel Mode
  • Transport Mode
  • RDP
  • VNC
  • VRRP
  • HSRP
  • GLBP
  • Public Cloud
  • Private Cloud
  • Hybrid Cloud
  • EaaS
  • SaaS
  • PaaS
  • DaaS
  • IaaS
  • AaaS
  • FaaS
  • MBaaS