CS402 Study Guide

Unit 4: The Transport Layer (TCP/UDP)

4a. Describe the use of the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) to transfer data segments

  • Which protocol, UDP or TCP, would you select for an application that requires extreme reliability and guaranteed delivery?
  • Which protocol, UDP or TCP, would you select for an application that requires efficient data transmission at the cost of lacking flow control?

UDP and TCP are the two main Transport Layer protocols used in TCP/IP networks. Both run on top of IP, which is an unreliable delivery system. TCP is a reliable protocol, while UDP is an unreliable one.

Port numbers are needed for data to be delivered to the appropriate final destination, and both UDP and TCP carry port number information in their headers. Both protocols also provide a checksum field to assure data integrity, although its use is optional in UDP over IPv4. Finally, UDP carries a segment length field and TCP a header length (data offset) field, which helps prevent incorrect or runt segments from being accepted.

The essential difference between the two protocols is that TCP is a connection-oriented, reliable protocol, while UDP is a connectionless, unreliable protocol. TCP was first developed in 1973, but in the 1980s it became clear that its stringent requirements were not needed in every case, such as applications like inward data collection, outward data dissemination, or real-time video streaming. In these cases, continuing when a segment is lost is better than stopping until the lost segment is retransmitted. UDP was therefore developed with a minimal header that contains the source and destination port numbers, a length field, and an optional checksum to provide some data integrity. That way, although there is no mechanism for retransmitting a lost segment, the destination system will not use a segment that is obviously corrupted.
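
To make the contrast concrete, here is a minimal Python sketch (using the standard socket module) of how an application hands the same message to each protocol. The host name and port are placeholders, and the TCP connect() assumes something is actually listening there; this is only an illustration of the two service models, not part of the course readings.

import socket

MESSAGE = b"hello, transport layer"
HOST, PORT = "example.com", 9999      # placeholder destination

# UDP: connectionless. A single sendto() hands the datagram to IP. There is
# no handshake, no acknowledgment, and no retransmission if it is lost.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(MESSAGE, (HOST, PORT))
udp_sock.close()

# TCP: connection-oriented. connect() triggers the 3-way handshake, and
# sendall() returns only after the local TCP has accepted every byte; TCP
# then handles sequencing, acknowledgments, and retransmission.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_sock.connect((HOST, PORT))
tcp_sock.sendall(MESSAGE)
tcp_sock.close()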

To review, see Principles of a Reliable Transport Protocol, The User Datagram Protocol, and The Transmission Control Protocol. Note that section 4.1.1 describes a reliable transport protocol on top of a "perfect" network layer.

 

4b. Explain the use of the TCP and UDP header fields

  • What is the difference between the RST and FIN flags of the TCP header?
  • What field in the TCP header is used for flow control?

The following figures show the TCP and UDP header fields.


The UDP header includes the port numbers used by the sender and receiver (16 bits each), a segment length, and a checksum (also 16 bits each); the checksum is optional when UDP runs over IPv4. The protocol must provide port information to assure that the segment goes to the correct process. The checksum field assures header and data integrity, and the length field prevents incorrect or runt segments from circulating in the network.
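
Because the UDP header is just four 16-bit fields, it is easy to build by hand. Here is a small sketch in Python using the struct module; the port numbers are arbitrary, and the checksum is left at zero, which over IPv4 means "no checksum computed".

import struct

def build_udp_header(src_port, dst_port, payload, checksum=0):
    # The length field covers the 8-byte header plus the payload.
    length = 8 + len(payload)
    # Four unsigned 16-bit fields in network (big-endian) byte order:
    # source port, destination port, length, checksum.
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(53000, 53, b"example payload")
print(header.hex())   # 16 hex digits = 8 bytes of header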

TCP, on the other hand, is a complex protocol with many header fields. These are the fields in the TCP header (a small parsing sketch follows the list):

  • Source and destination port (16 bits each): The TCP ports of the sender and receiver. They identify the process that sent the segment and the process to which it should be delivered.
  • Sequence number (32 bits): The sequence number of the first byte of data in the data portion of the segment.
  • Acknowledgment number (32 bits): The next byte expected. The receiver has received every byte up to, but not including, the acknowledgment number. Notice that the acknowledgment field is carried inside an ordinary data segment; this is how TCP "piggybacks" acknowledgments on data. Early protocols sent a separate acknowledgment segment, consuming unnecessary resources and bandwidth.
  • Header length or data offset (4 bits): The number of 32-bit words in the TCP header. Used to locate the start of the data section.
  • Reserved (6 bits): Reserved for future use. Should be set to 0.
  • Flags (6 bits): six 1-bit flags:
    • Urgent pointer (URG): If set, the urgent pointer field contains a valid pointer, used as shown below.
    • Acknowledgment valid (ACK bit): Set when the acknowledgment field is valid.
    • Reset (RST): Occasionally problems or invalid segments are received that call for the immediate termination of a connection. The RST flag terminates the connection immediately, instead of closing it with the FIN flag as described below.
    • Push (PSH): Used by the sender to tell the destination that data must be passed to the receiving process immediately. Normally, data is buffered by the receiving host until the allowed window size is reached. If the PSH flag is set, the receiver should deliver the data immediately without waiting for a full buffer. There is a subtle difference between the URG and the PSH flags. With PSH, all data in the buffer is passed to the process and everything stays in the correct order. With URG, only the urgent data is given to the process, which might result in data being delivered out of order.
    • Synchronization (SYN): Used in the first step of the 3-way handshake for connection establishment. The flag alerts the receiver that a connection needs to be established, and provides synchronization information to the other side – that is, what sequence number they should start with.
    • Finish (FIN): Used to close a connection. The FIN terminates a connection gracefully, in contrast with the RST flag, which terminates it abruptly. An abrupt termination using RST can result in data loss, which will not happen if the connection is terminated gracefully with the FIN process.
  • Window Size (16 bits): The size of the receive window relative to the acknowledgment field, also known as the "advertised" window. Used to tell the other side the maximum amount of data that it can send with the next segment. This is TCP's built-in feature for flow control.
  • Checksum (16 bits): Protects against bit errors in the TCP header, the payload, and a pseudo-header, which consists of the source IP address, the destination IP address, the protocol number for TCP (6), and the length of the TCP header and payload.
  • Urgent pointer (16 bits): When urgent data is present, this field indicates the byte position relative to the sequence number. This data is given to the process immediately, even if there is more data in the queue to be delivered.
  • Options (variable): Optional parameters can be carried here, such as the maximum segment size, the window scale factor F (which multiplies the value of the window size field by 2^F), and the timestamp.
  • Padding (variable): Contains as many bits as needed (set to zero) so that the total header length is a multiple of 32 bits.
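
To tie the fields together, here is a small Python sketch that unpacks the fixed 20-byte part of a TCP header with the struct module. The sample segment is made up (it reuses the port, window, and flag values from Trace 1 in the next objective); only the field layout follows the list above.

import struct

def parse_tcp_header(segment):
    # Fixed part of the TCP header: ports, sequence and acknowledgment
    # numbers, data offset + flags, window, checksum, urgent pointer.
    (src, dst, seq, ack, off_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "header_len": (off_flags >> 12) * 4,   # data offset is in 32-bit words
        "flags": off_flags & 0x3F,             # URG, ACK, PSH, RST, SYN, FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }

# A made-up SYN segment: ports 59139 -> 80, seq 0, 20-byte header, SYN flag set.
sample = struct.pack("!HHIIHHHH", 59139, 80, 0, 0, (5 << 12) | 0x02, 4128, 0, 0)
print(parse_tcp_header(sample))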

To review, see The User Datagram Protocol and The Transmission Control Protocol.

 

4c. Explain the transport layer port addressing scheme and port address assignments

  • What is the well-known port used for SSL encrypted frames?
  • What is the well-known port used for HTTP connections?

A TCP connection is uniquely identified by the combination of IP address and port number at each end, which together point to a unique process on a unique host. Knowing common port numbers is essential for troubleshooting and understanding network behavior. Consider this trace captured by a network sniffer tool like Wireshark:

Trace 1:

Internet Protocol Version 4, Src: 192.168.0.14, Dst: 23.286.196.8
Transmission Control Protocol, Src Port 59139, Dst Port 80, Seq: 0, Len: 0
Source Port: 59139
Destination Port: 80
[Stream index: 0]
Sequence number: 0
Header Length: 20 bytes
Flags: 0x002
Window Size: 4128
Checksum: 0x9b4 [valid]

In an example like this one, we immediately know that a local client HTTP process is using port 59139 as its source port on a TCP connection to an HTTP server. We know that the established TCP connection goes to an HTTP process in the server because its destination port is 80. Another common port number for HTTP is 8080. Why is the source not using a source port of 80 if the originating process is HTTP? This is because the source port is a random number selected by the system to uniquely identify that process in the system. That allows for multiple web browsers to connect to the same web server simultaneously. Each one will receive a randomly chosen port number, and the server will respond to that particular client's HTTP process using that port number. Because of this, you will not always see common port numbers like 80, 22, 25, 443, and so on; in most cases, one of the sides will be a randomly generated number.
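
You can watch an ephemeral source port being chosen by opening an ordinary TCP connection and asking the socket which local port it was given. A minimal Python sketch, assuming outbound access to a public web server such as example.com on port 80:

import socket

# Connect to a web server; only the destination port (80) is well known.
sock = socket.create_connection(("example.com", 80))

local_ip, local_port = sock.getsockname()     # ephemeral port chosen by the OS
remote_ip, remote_port = sock.getpeername()   # will show port 80

print(f"local  {local_ip}:{local_port}")
print(f"remote {remote_ip}:{remote_port}")
sock.close()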

Now consider this trace:

Trace 2:

Internet Protocol Version 4, Src: 192.168.0.14, Dst: 23.286.196.8
Transmission Control Protocol, Src Port 63566, Dst Port 443, Seq: 55, Ack: 54, Len: 0
Source port: 63566
Destination port: 443
[Stream index: 4]
[TCP Segment Len: 0]
Sequence number: 55
Acknowledgement number: 54
Header Length: 20 bytes
Flags: 0x014 (RST, ACK)
[Calculated window size: 0]
[Window size scaling factor: -1 (unknown)]
Checksum: 0x9b4 [valid]
Urgent pointer: 0

A quick look tells you that this packet is destined to port 443 at the destination. This means that it is intended for a website that uses SSL encryption, HTTPS. For a server to respond and for you to be able to establish a connection, the server must be listening on that port.
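
A quick way to confirm that a server really is listening on port 443 (or 80) is simply to attempt a TCP connection to it. The helper below is only a sketch; the host name is an example and the function name is made up for illustration.

import socket

def is_listening(host, port, timeout=3.0):
    # If the 3-way handshake completes, something is listening on that port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("HTTPS (443):", is_listening("example.com", 443))
print("HTTP  (80): ", is_listening("example.com", 80))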

To review, see The Transmission Control Protocol. For a complete list of TCP and UDP port numbers, see this suggested article on a List of TCP and UDP Port Numbers.

 

4d. Describe the Stream Control Transmission Protocol (SCTP) and Real-time Transport Protocol (RTP) and the applications based on these protocols

  • When would you need to use SCTP instead of TCP or UDP?

SCTP was developed to provide multi-streaming capabilities, and it provides reliable service to multiple streams. If one stream gets blocked, the other streams can still deliver data. SCTP is message-oriented like UDP, but it is also connection-oriented like TCP. Multihoming allows both ends of a connection to define multiple IP addresses for communication: one is used as the primary, and the remaining addresses serve as alternatives. Not all systems support SCTP natively, but in that case it can still be used by tunneling it over UDP.
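
Where the operating system provides SCTP (many Linux kernels do), a one-to-one style SCTP socket can be opened from Python much like a TCP socket by naming the protocol explicitly. This is only a sketch under that assumption: socket.IPPROTO_SCTP is defined only on platforms whose headers define it, and the call fails with an OSError where kernel SCTP support is missing.

import socket

# One-to-one style SCTP association (stream socket, explicit protocol number).
sctp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
print("SCTP socket created:", sctp_sock)
sctp_sock.close()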

To review, see Stream Control Transmission Protocol (SCTP).

 

4e. Explain the mechanics of a TCP connection establishment (3-way handshake) and release

  • What is the TCP 3-way handshake used for?

A 3-way handshake is the way TCP establishes a connection. It starts with the initiating side setting the SYN (synchronize) flag to alert the other end that a connection is going to be established. The receiving end accepts the connection and acknowledges that flag by responding with a SYN-ACK. To finalize the connection, the side that initiated it acknowledges the SYN-ACK with an ACK of its own. The 3-way handshake not only alerts the receiver that the sender wants to establish a connection, but also provides the information needed to synchronize the sequence numbers on both sides. Early conceptions of TCP used a 2-way handshake, but a 2-way handshake carries a real possibility of deadlock.
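
The sketch below is not networking code; it is a tiny Python simulation of the bookkeeping done during the handshake, showing how each side announces its initial sequence number (ISN) with a SYN and how the other side acknowledges ISN + 1 (the SYN itself consumes one sequence number). The ISN values are invented.

# Simulated 3-way handshake bookkeeping (made-up initial sequence numbers).
client_isn = 1000
server_isn = 5000

# 1. Client -> Server: SYN carrying the client's initial sequence number.
print(f"SYN      seq={client_isn}")

# 2. Server -> Client: SYN-ACK carrying the server's ISN and acknowledging
#    the client's SYN.
print(f"SYN-ACK  seq={server_isn} ack={client_isn + 1}")

# 3. Client -> Server: ACK acknowledging the server's SYN. The connection
#    is now established on both sides.
print(f"ACK      seq={client_isn + 1} ack={server_isn + 1}")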

See the real Wireshark trace below, which clearly shows a 3-way handshake. Line 22 is the first leg of the 3-way handshake with the SYN flag set. Host 192.168.1.3 is establishing a connection with host 151.101.116.153. Line 23 is the response from 151.101.116.153 accepting the connection with a SYN-ACK. Finally, in line 24, 192.168.1.3 finalizes the process by sending the final ACK. 


In contrast to connection establishment, TCP uses a 4-way handshake to terminate a connection. To understand the process, it helps to think of the connection as a pair of unidirectional connections, each released independently of the other. The FIN bit is used by either side to release its direction: when a FIN from one side is acknowledged, data flow in that direction is shut down, while data may continue to flow in the other direction until that side sends its own FIN. Since each side sends a segment with a FIN and receives a segment with an ACK, four different segments are needed. The first ACK can be piggybacked with the second FIN, so only three segments may actually cross the network, but the protocol still requires two FINs and two ACKs. This is also known as a symmetric connection release. As a comparison tool, this is a real Wireshark trace for a connection release:


Host 192.168.1.3 is closing the connection with host 104.94.115.9. Four segments are used, as explained above.

To review, see TCP Connection Establishment and TCP Connection Release. These explain how systems recover when one of the handshake segments is lost. Page 94 illustrates how hackers used the connection establishment SYN flag to generate denial of service attacks, and what was done to remedy the problem.

 

4f. Illustrate the TCP transmission policy and window management

  • A host with an established TCP connection receives a segment with Seq=1024, Ack=2048, W=4096. What is the sequence number of the first segment that the host can send in response, and how many bytes can it send?

Sequence (Seq), acknowledgment (Ack), and window size (W) are the header fields TCP uses for flow control and window management. The window size announces how much data the system is ready to accept. This discussion applies to a system in steady state with no segment loss or retransmissions; otherwise, the system might be in the middle of a congestion avoidance process, where the advertised window and the allowed (congestion) window can differ significantly. Ack announces the next byte number the host expects: if this number is x, the host is acknowledging every byte up to and including x-1 and allowing the remote system to send, starting with sequence number x, a total of w bytes. So, if Ack = x and W = w, bytes up to sequence number x-1 are being acknowledged and permission is granted to send w more bytes, starting with byte x and ending with byte x+w-1. Notice how flexible the scheme is. If the receiver wants to increase the credit from w to z (where z > w) when no new data has arrived, it issues Ack = x, W = z. To acknowledge a segment containing n bytes (where n < w) without granting additional credit, it issues Ack = x+n, W = w-n.
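
The review question at the top of this objective can be answered directly from these rules. Here is a small Python sketch of the arithmetic; the helper name is made up for illustration.

def allowed_send_range(ack, window):
    # Ack is the next byte the peer expects; Window is how many more bytes
    # beyond that it is willing to accept.
    first_byte = ack                   # sending (re)starts here
    last_byte = ack + window - 1       # last byte that fits in the advertised window
    return first_byte, last_byte

# Segment received: Seq=1024, Ack=2048, W=4096
first, last = allowed_send_range(ack=2048, window=4096)
print(f"May send bytes {first} through {last} ({last - first + 1} bytes)")
# -> May send bytes 2048 through 6143 (4096 bytes)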

To review, see TCP Reliable Data Transfer.

 

4g. Illustrate congestion control protocols used by TCP such as Slow Start, Fast Retransmit, Fast Recovery

  • According to the Slow Start protocol, what value will the congestion window be set to after a segment is lost and there is a timeout?
  • When is the Fast Retransmit rule invoked?
  • What is the purpose of the Fast Recovery algorithm?

The slow start algorithm was developed to keep systems from becoming fully congested and to try to avoid further packet loss after a timeout occurs in steady-state. Three different windows are defined for a running system: the allowed window, referred to as awnd, the congestion window, cwnd, and credit, which is the advertised window sent in the most recent acknowledgment.

When a system is brought up, Slow Start sets the congestion window to a size of 1 (one maximum segment size). Since the allowed window is the minimum of cwnd and the advertised credit, awnd = 1 to start as well. The congestion window then grows exponentially toward the advertised window, which is the maximum the sender will be allowed to transmit at any given moment. In other words, awnd starts at 1 and follows cwnd's exponential growth until it reaches the advertised credit. At that moment, awnd, cwnd, and credit are all the same.

Assume that at a certain point a packet is lost and a retransmission is needed. Packet loss means congestion, and sending more data is unacceptable, since it would only add to the congestion and aggravate the situation. That is where Slow Start comes back into play. It specifies that, right after a timeout occurs, a threshold is set at half the size of the congestion window at the time of the timeout. The congestion window is then reset to 1 (one maximum segment size). From that point it grows exponentially up to the value of the threshold; after the threshold is reached, the congestion window continues to increase linearly until the full size of the credit window is reached, as demonstrated in this image.
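
A rough Python simulation of that rule, measured in maximum-size segments per round trip and with a made-up timeout event, makes the shape of the curve easier to see. It is only a sketch of the behavior described above, not an implementation of TCP.

# Congestion window evolution in segments per round-trip time.
credit = 64          # advertised window (made-up value)
cwnd = 1             # Slow Start begins with one segment
ssthresh = credit    # no threshold has been set yet

for rtt in range(1, 16):
    print(f"RTT {rtt:2d}: cwnd = {cwnd}")
    if rtt == 6:                          # pretend a timeout happens here
        ssthresh = cwnd // 2              # threshold = half the current window
        cwnd = 1                          # back to one segment
        print(f"         timeout: ssthresh={ssthresh}, cwnd reset to 1")
        continue
    if cwnd < ssthresh:
        cwnd = min(cwnd * 2, ssthresh)    # exponential growth below the threshold
    else:
        cwnd = min(cwnd + 1, credit)      # linear growth above it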


What determines when a timeout occurs? A timer is associated with each segment as it is sent. If the timer expires before the segment is acknowledged, the sender must retransmit. We call the value of that timer the retransmission timeout, or RTO. This timer must be higher than the expected time it will take for the segment to arrive and the ACK to arrive back. We refer to that time as the round trip time, or RTT. In a TCP system, the value of that time will be variable, since the system is dynamic and packets take different lengths of time to make a round trip.

What is the appropriate size for RTO? One simple solution would be to average the delays observed for recent segments and set the RTO based on that average. In the real world this causes complications, because you want to give more weight to recent RTT observations, since they are a better representation of the current state of the network. Several solutions were suggested, such as exponential averaging, which produces a smoothed round trip time (SRTT). To review the different ways of calculating RTO, see TCP's Retransmission Timeout.
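
As a sketch of the exponential-averaging idea, the snippet below follows the commonly used formulation (in the style of RFC 6298): keep a smoothed RTT and a variability estimate, weight recent samples more heavily, and set RTO = SRTT + 4 × RTTVAR. The sample round-trip times and the function name are invented.

def estimate_rto(samples, alpha=0.125, beta=0.25):
    srtt = rttvar = rto = None
    for rtt in samples:
        if srtt is None:
            # The first measurement initializes both estimators.
            srtt, rttvar = rtt, rtt / 2
        else:
            # Exponential averaging: recent samples get more weight than old ones.
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt)
            srtt = (1 - alpha) * srtt + alpha * rtt
        rto = srtt + 4 * rttvar
        print(f"RTT sample {rtt:6.1f} ms -> SRTT {srtt:6.1f} ms, RTO {rto:6.1f} ms")
    return rto

estimate_rto([100, 120, 95, 300, 110])   # invented round-trip times, in milliseconds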

If a segment is lost, the sending host would normally have to wait until the RTO expires to retransmit. RTO is significantly higher than RTT, so stopping to wait for it to expire can waste a significant amount of time. Because of this, the sender saves a copy of the segment and continues sending new segments while the timer runs. That is where Fast Retransmit comes into play. In TCP, if a segment is received out of order, the receiver must immediately issue an ACK for the last in-order byte. Fast Retransmit exploits this behavior to avoid wasting time: if four ACKs are received for the same byte (the original ACK plus three duplicates), the segment has likely been lost, and the Fast Retransmit rule requires the host to retransmit it immediately after the fourth ACK is received, rather than waiting for the RTO to expire.

Since a lost segment means there was congestion in the system, congestion avoidance measures are appropriate. One possible response is to cut the congestion window to 1 and invoke the Slow Start/congestion avoidance procedure. This may be overly conservative, since the duplicate Acks indicate that segments are still getting through. The Fast Recovery algorithm was developed to overcome this limitation. It can be summarized as follows (a small sketch of the logic appears after the list):

  • When the third duplicate Ack arrives (the fourth ACK for the same segment):
    • Set the congestion threshold ssthresh to cwnd/2.
    • Retransmit the missing segment.
    • Set cwnd to ssthresh + 3 to account for the segments that have left the network and that the other side has cached.
    • Each time an additional duplicate Ack arrives, increase cwnd by 1 and transmit a segment if the window allows.
    • When the next Ack arrives that acknowledges new data (that is, a cumulative Ack), set cwnd to ssthresh.
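
Here is a small Python sketch of that logic, driven by a made-up stream of acknowledgment numbers. It only tracks the window variables; it does not send anything, and the function name is invented for illustration.

def on_acks(acks, cwnd=16):
    last_ack, dup_count, ssthresh = None, 0, None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:                 # fourth ACK for the same byte
                ssthresh = cwnd // 2           # halve the window
                cwnd = ssthresh + 3            # inflate for the three cached segments
                print(f"dup x3 : retransmit byte {ack}; ssthresh={ssthresh}, cwnd={cwnd}")
            elif dup_count > 3:
                cwnd += 1                      # each additional duplicate ACK
                print(f"dup    : cwnd={cwnd}")
        else:
            if ssthresh is not None:
                cwnd = ssthresh                # new data acknowledged: deflate the window
                ssthresh = None
                print(f"new ack: cwnd={cwnd}")
            last_ack, dup_count = ack, 0

# Made-up ACK stream: the segment starting at byte 2000 is lost, so ACK 2000 repeats.
on_acks([1000, 2000, 2000, 2000, 2000, 2000, 6000])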

To review, read The Transmission Control Protocol. For more on TCP congestion control and avoidance, see End-to-End Congestion Control.

 

Unit 4 Vocabulary

This vocabulary and acronym list includes terms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course.

Try to think of the reason why each term is included.

  • TCP
  • UDP
  • SCTP
  • 3-way handshake
  • SYN
  • RST
  • URG
  • PSH
  • FIN
  • ACK
  • RTT
  • RTO
  • SRTT
  • Slow Start
  • Fast Retransmit
  • Fast Recovery