CS402 Study Guide

Unit 6: The Link Layer

 

6a. Explain how physical addressing resolves IP addresses at the link layer

  • What ARP opcode is used to request the mapping of an IP address into a MAC address?
  • What is gratuitous ARP used for?

The Address Resolution Protocol (ARP) is widely used as the means for a host to resolve an IP address into the MAC address of the host assigned to that IP address. Consider this Wireshark ARP capture:


The Opcode is 1, meaning this is a request. The destination MAC is ff:ff:ff:ff:ff:ff, which is meant to go to every host in the network. The host with the IP address 192.168.1.3 is asking for the MAC address of the target host with the IP address 192.168.1.102. The MAC address field of the target host is set to all 0s, since that is precisely the address being asked for. Only the host configured with IP 192.168.1.102 will respond to this request.

Now, consider this capture:


The Opcode is 2, which means it is the response to the request shown above. The host with the IP address 192.168.1.102 is responding to the request, letting the requester know that its MAC address is 20:f1:9e:d8:18:e8. The frame is sent directly to the actual MAC address of the host that sent the request, 60:6c:66:0e:b0:19.

An interesting case of ARP is "gratuitous ARP". When a host is attached to the network and remains silent, all other hosts need to issue ARP requests if they need to contact it. However, some hosts do not want to remain silent, such as a router freshly attached to the network. In this case, they will issue a gratuitous ARP. Consider this trace:


This is an opcode 2 message (that is, a response), even though nobody has issued a request. The source and destination IP are both the same, 10.0.0.6, and no response is expected. This is a gratuitous ARP where 10.0.0.6 is simply advertising itself. Each host in the network will add 10.0.0.6 with MAC 00:00:0c:07:ac:01 to its ARP database. The next time any host needs to send traffic to that destination, it will go directly without the need for the original ARP request, since the ARP table will already contain the MAC address.

Also, notice that the ethernet frame in all these examples shows a type field of 0x0806. That is the type value for an ethernet frame that is carrying an ARP message. If this value had been 0x0800, it would have meant that the frame was carrying an IP packet. The place where the IP header normally sits is occupied by an ARP header instead.
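To make the frame layout concrete, here is a minimal Python sketch that packs an ARP message into a raw ethernet frame, using the addresses from the captures above. The field layout (hardware type, protocol type, address sizes, opcode, then the four addresses) follows RFC 826; the build_arp helper and its defaults are just for illustration, and nothing here actually puts bytes on the wire.

import socket
import struct

def mac_bytes(mac):
    # convert "aa:bb:cc:dd:ee:ff" into 6 raw bytes
    return bytes(int(part, 16) for part in mac.split(":"))

def build_arp(opcode, sender_mac, sender_ip, target_mac, target_ip):
    # Ethernet header: requests are broadcast; replies go straight back
    eth_dst = "ff:ff:ff:ff:ff:ff" if opcode == 1 else target_mac
    header = mac_bytes(eth_dst) + mac_bytes(sender_mac) + struct.pack("!H", 0x0806)
    # ARP body: hardware type 1 (ethernet), protocol type 0x0800 (IPv4),
    # 6-byte hardware addresses, 4-byte protocol addresses, then the opcode
    body = struct.pack("!HHBBH", 1, 0x0800, 6, 4, opcode)
    body += mac_bytes(sender_mac) + socket.inet_aton(sender_ip)
    body += mac_bytes(target_mac) + socket.inet_aton(target_ip)
    return header + body

# The request from the first capture: who has 192.168.1.102?
request = build_arp(1, "60:6c:66:0e:b0:19", "192.168.1.3",
                    "00:00:00:00:00:00", "192.168.1.102")

# A gratuitous ARP like the third capture: sender and target IP are the same
gratuitous = build_arp(2, "00:00:0c:07:ac:01", "10.0.0.6",
                       "ff:ff:ff:ff:ff:ff", "10.0.0.6")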

 

6b. Illustrate the methods for error control and flow control at the data link layer (DLL), including error detection codes and the sliding window protocol

  • Can you attain 100% utilization when using a go-back-N sliding window protocol?
  • What is the difference between a go-back-n and a selective repeat sliding window protocol?

There are three techniques at the link level for flow control: stop-and-wait, go-back-N, and selective-reject (also known as selective-repeat).

With stop-and-wait, the source transmits a frame. After it is received, the destination indicates a willingness to accept another frame in an acknowledgment. That means that the source must wait for the acknowledgment before sending another frame. If the propagation time is very large compared to the frame time (the time that it takes for one full frame to be transmitted), then the throughput will be reduced. This happens because the source will spend a short time sending the frame, then it will have to sit and wait for the frame to arrive at the other side and for the ACK to come back. The medium will be mostly idle. The time to process the frame and the time required for the ACK to be generated must also be considered. A buffer must be present to save the transmitted frame in case it needs to be retransmitted. Once the ACK is received, the buffered copy can be dropped, since the frame is already where it needs to be. The big problem is, of course, that there is only one frame in transit at a time. Stop-and-wait is rarely used because of this inefficiency.

What is the link utilization if we are using a stop-and-wait algorithm? We need to calculate the total time needed based on the above explanation where we use two propagations times to account for the frame getting from source to destination and for the ACK to come back from destination to source:

T = Tframe + Tprop + Tproc + Tack + Tprop + Tproc = Tframe + Tack + 2Tprop + 2Tproc

Where:

Tframe = time to transmit frame

Tprop = propagation time

Tproc = processing time at station

Tack = time to transmit ack

Suppose a 1 Mbps satellite channel with 1000 bit frames, a 270 ms propagation delay (Tprop), and a 100 bit ack frame.

Notice that:

Tframe = 1000 bits / 1 Mbps = 1 ms

Tack = 100 bits / 1 Mbps = 0.1 ms

What is the link utilization if we are using a stop-and-wait algorithm?

Based on the above, the total time will be:

T = Tframe + Tprop + Tproc + Tack + Tprop + Tproc = 1 ms + 270 ms + 270 ms + 0.1 ms = 541.1 ms (taking the processing time Tproc as negligible).

This means that the channel is being utilized for 1 ms out of a total of 541.1 ms. Utilization is therefore:

U = Tframe / T = 1/541.1 = 0.185%.

In this case, the rate is 1 Mbps, so the actual throughput will be

0.00185 × 1 Mbps ≈ 1.85 kbps

This is a big drop in the total possible throughput. This happens because the frame time is very small in comparison to the propagation time. As the frame time and propagation time become closer, utilization improves. Consider a short 1 km link with a one-way propagation delay of about 5 µs, a rate of 10 Mbps, frames of 1000 bits, and very small processing and ack times that can be disregarded:

Tprop = 5 µs

Tframe = 1000 bits / 10 Mbps = 100 µs

U = Tframe / (Tframe + 2Tprop) = 100 µs / 110 µs ≈ 91%

The larger the frame time is relative to the round-trip propagation delay, the closer stop-and-wait gets to 100% utilization. If instead you select a frame time equal to the round-trip delay (10 µs here), the utilization drops to 50%, and so on. These calculations assume that there are no errors and frames do not have to be retransmitted – that would introduce new complexity that would need to be considered.
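Both examples can be checked with a few lines of Python. This is a sketch of the utilization formula above, not a measurement of any real link:

def stop_and_wait_utilization(rate_bps, frame_bits, t_prop, t_proc=0.0, ack_bits=0):
    # U = Tframe / (Tframe + 2*Tprop + 2*Tproc + Tack)
    t_frame = frame_bits / rate_bps
    t_ack = ack_bits / rate_bps
    return t_frame / (t_frame + 2 * t_prop + 2 * t_proc + t_ack)

# Satellite example: 1 Mbps, 1000-bit frames, 270 ms propagation, 100-bit ACK
print(stop_and_wait_utilization(1e6, 1000, 0.270, ack_bits=100))   # ~0.00185 (0.185%)

# Short 1 km link: 10 Mbps, 1000-bit frames, ~5 microsecond propagation
print(stop_and_wait_utilization(10e6, 1000, 5e-6))                 # ~0.909 (91%)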

The sliding window technique offers much better utilization. Sliding window techniques allow multiple frames to be in transit at the same time: a source can send frames without waiting for acknowledgments. The destination can accept and buffer up to n frames, and it acknowledges a frame by sending an acknowledgment with the sequence number of the next frame expected (implicitly declaring itself ready for the next n frames). This is the same technique used by the transport layer.

The most common form of error control based on sliding windows is the go-back-N technique. This technique uses a sender with a buffer to save unacknowledged frames but a receiver with a window size of one frame. The number of unacknowledged frames that can be in transit is determined by the sender's window size. Upon receiving a frame in error, the destination discards that frame and all subsequent frames until the damaged frame is finally received correctly. All subsequent frames need to be discarded, since the size of the receiver buffer is only one frame. The sender resends the damaged frame and all subsequent frames either when it receives a Reject message or when its retransmission timer expires.

Alternatively, selective-reject (or selective-repeat) can be used, where both the sender and receiver windows are greater than one. Here, the sender sends multiple frames, and the receiver can save frames that arrive out of order. When a failure is detected, the sender resends only the missing frame, then resumes regular transmission where it left off. The utilization, in this case, can be calculated from the size of the window in terms of the number of unacknowledged frames that it can hold, like this:

S = \frac{N t_\text{frame}}{2 t_\text{prop} + t_\text{frame}},

where N is the number of frames that the window can hold. Any desired channel utilization, up to 100%, can be achieved by increasing the number of frames that the buffer window can hold. This must be handled with care, since buffer space is expensive. A tradeoff must be made by the design engineer between channel utilization and buffer space and their associated costs.
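A small sketch, using the satellite numbers from earlier in this section, shows how large the window must be before the formula saturates at 100%:

import math

def sliding_window_utilization(n, t_frame, t_prop):
    # S = N*Tframe / (2*Tprop + Tframe), capped at 1 once the window
    # is large enough to keep the pipe full
    return min(1.0, n * t_frame / (2 * t_prop + t_frame))

# Satellite example again: 1 ms frames, 270 ms one-way propagation
t_frame, t_prop = 0.001, 0.270
n_full = math.ceil((2 * t_prop + t_frame) / t_frame)
print(n_full)                                                # 541 frames in flight
print(sliding_window_utilization(n_full, t_frame, t_prop))   # 1.0 (100%)
print(sliding_window_utilization(7, t_frame, t_prop))        # ~0.013 (1.3%)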

Flow control techniques like these should not be confused with error control. Bit errors are sometimes introduced into frames. A mechanism is needed to detect those errors so that corrective action can be taken. Detecting errors requires redundancy. With error detection codes, enough redundant bits are included to detect errors. One very popular method for error detection is called the check-digit method, which adds additional digits to the number to be sent to make it evenly divisible by a chosen divisor. As a simple example, assume that the number 645 is going to be sent, and that we decide to use 7 as the divisor.

  1. Step 1: Left shift number: 6450
  2. Step 2: Divide by 7 (quotient 921) and find the remainder: 3
  3. Step 3: Subtract the result of step 2 from step 1: 6450 – 3 = 6447
  4. Step 4: Check that result is divisible by 7: 6447/7 = 921
  5. Step 5: Transmit 6447.
  6. Step 6: If the received number is evenly divisible by 7, it is assumed to be error-free. If it is not evenly divisible by 7, it is assumed to have arrived with errors.

You can see that this method detects single-digit errors. Single-digit errors for the number 6447 will look like 5447, 6347, 6457, 6446. All of these errors are detected as none of the numbers is divisible by 7. Even multiple digit errors like 6567 or 5356 will be detected. However, the method does not detect some errors, like 5047 or 6587.
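The six steps translate almost directly into code. A sketch (the function names are made up for illustration):

def add_check_digit(number, divisor=7):
    shifted = number * 10          # step 1: left shift
    remainder = shifted % divisor  # step 2: find the remainder
    return shifted - remainder     # step 3: subtract it off

def looks_error_free(received, divisor=7):
    # step 6: divisible means "assumed error-free"
    return received % divisor == 0

print(add_check_digit(645))                            # 6447
print(looks_error_free(5447), looks_error_free(6347))  # False False: errors caught
print(looks_error_free(5047))                          # True: this error slips through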

In networks, the most common technique for error detection is the cyclic redundancy check (CRC). CRC is a binary check-digit method, where r redundant bits are sent for m bits of data (we send a total of n = m + r bits). A large r causes large overhead, so we try to keep r much smaller than m. With ethernet, for example, we can check a frame of up to 12,000 bits (1500 bytes) with a 32-bit CRC code, or as it is commonly expressed, ethernet uses CRC-32. The technique involves defining a divisor polynomial G(x) and using a technique similar to the one described above, but of course, with binary numbers.

To very briefly explain the process, an n-bit message can be represented by a polynomial of degree n-1, where the value of each bit (0 or 1) is the coefficient for each term in the polynomial. Example: a message containing the bits 10011010 corresponds to the polynomial M(x) = x^7 + x^4 + x^3 + x. Let r be the degree of some divisor polynomial, G(x). What we need to do is transmit a polynomial that is exactly divisible by G(x). The transmitted polynomial will be P(x). If some error occurs during transmission, it will be as if an error term E(x) has been added to P(x); the received message will be:

R(x)= P(x) + E(x)

The received message R(x) is exactly divisible by G(x) only if 1) E(x) is 0 (there were no errors) or 2) E(x) is exactly divisible by G(x). By carefully selecting G(x) we can make sure that case 2) is extremely rare, so that we can safely conclude that a 0 result means that the message is error-free.
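For a feel of the mechanics, here is a toy CRC in Python that performs the binary long division directly on bit strings. The 3-bit generator G(x) = x^3 + x^2 + 1 is chosen only to keep the example small; real ethernet uses the 32-bit CRC-32 polynomial:

def crc_remainder(bits, generator):
    # binary long division: XOR the generator in wherever a 1 remains
    r = len(generator) - 1
    work = list(bits + "0" * r)            # append r zero bits (multiply by x^r)
    for i in range(len(bits)):
        if work[i] == "1":
            for j, g in enumerate(generator):
                work[i + j] = "1" if work[i + j] != g else "0"
    return "".join(work[-r:])              # the r-bit remainder

message = "10011010"            # M(x) = x^7 + x^4 + x^3 + x
generator = "1101"              # G(x) = x^3 + x^2 + 1, so r = 3
fcs = crc_remainder(message, generator)
transmitted = message + fcs     # P(x), exactly divisible by G(x)
print(transmitted)                                  # 10011010101
print(crc_remainder(transmitted, generator))        # 000: received error-free
print(crc_remainder("10011110101", generator))      # nonzero: error detected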

 

6c. Illustrate different framing techniques used at the DLL, such as length count, bit stuffing, and flag delineation

  • What actual frame will be sent on the wire for the frame 0110111111101111001110010 when using bit stuffing with a frame delineation flag of 01111110?
  • For the same frame, what actual frame will be sent when using flag delineation with character stuffing?

One of the functions of the data link layer is to divide data into frames. Frames must be clearly delineated for the receiving side to extract them from all the data being received. One common approach is flag delineation, which uses a flag to enclose the frame. The flag is a reserved bit pattern that indicates the start and end of a frame; a frame consists of everything between two delimiters. The obvious problem is that the data might contain a bit pattern exactly the same as the frame delimiter pattern. Bit stuffing removes that problem: extra bits are inserted into the data to break any pattern that resembles the frame delimiter. For example, assume that the 4-bit sequence 0111 is used as the frame delimiter. Insert a zero bit after each pair of consecutive 1s in the data (so "11" becomes "110"). That way, the data will never contain three 1s in sequence, and thus the flag pattern can never appear inside the actual frame. Of course, the extra bits inserted introduce overhead and wasted bandwidth. That is a problem that is present with any method.
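Here is a short sketch of bit stuffing using the HDLC-style rule from the first review question (flag 01111110, stuff a 0 after every five consecutive 1s), rather than the 0111 toy flag above:

FLAG = "01111110"   # the frame delineation flag from the review question

def bit_stuff(data):
    # insert a 0 after every run of five consecutive 1s
    out, run = [], 0
    for bit in data:
        out.append(bit)
        run = run + 1 if bit == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(data):
    # drop the 0 that follows every run of five consecutive 1s
    out, run, skip = [], 0, False
    for bit in data:
        if skip:                 # this bit is a stuffed 0: discard it
            skip, run = False, 0
            continue
        out.append(bit)
        run = run + 1 if bit == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

payload = "0110111111101111001110010"       # the frame from the review question
wire = FLAG + bit_stuff(payload) + FLAG     # what actually goes on the wire
print(wire)
assert bit_unstuff(bit_stuff(payload)) == payload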

Sometimes, we might want to work with characters rather than individual bits. Character stuffing is a very similar process, where a full character is used as the delineation tool. Whenever the flag character appears inside the frame to be transmitted, it is preceded by an appropriate ESC (or DLE) character. For example, if Flag is the character used to delineate the frame, and the frame to be sent contains something like 12FlagEsc34, then the system will stuff an Esc character to neutralize the Flag that is part of the data. What will be sent is Flag12EscFlagEscEsc34Flag. Notice that an Esc character is stuffed before any Flag or Esc character present in the frame.
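A sketch of the same idea with characters, using single letters F and E to stand in for the Flag and Esc characters:

FLAG, ESC = "F", "E"    # single characters standing in for Flag and Esc

def char_stuff(payload):
    # precede every Flag or Esc inside the payload with an Esc
    body = "".join(ESC + c if c in (FLAG, ESC) else c for c in payload)
    return FLAG + body + FLAG

print(char_stuff("12" + FLAG + ESC + "34"))   # F12EFEE34F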

One final frame delineation technique is character count, though it is not used often today. In this technique, each frame is preceded by a character that specifies the number of characters that will follow for that one frame, including the count character itself. For example, if there are two frames, one with three characters, c1-c2-c3, and the other with four characters, c1-c2-c3-c4, then this will be sent on the line:

4-c1-c2-c3-5-c1-c2-c3-c4
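The framing itself is trivial to sketch; the well-known weakness is that a corrupted count desynchronizes every frame that follows:

def frame_with_counts(frames):
    # each count includes itself, so it is the frame length plus one
    line = []
    for frame in frames:
        line.append(len(frame) + 1)
        line.extend(frame)
    return line

print(frame_with_counts([["c1", "c2", "c3"], ["c1", "c2", "c3", "c4"]]))
# [4, 'c1', 'c2', 'c3', 5, 'c1', 'c2', 'c3', 'c4']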

 

6d. Describe the difference between data link technologies, such as the Point-to-Point (PPP), Ethernet V2, and 802.3 protocols

  • What is the difference between ethernet V2 and 802.3? What do they have in common?
  • Are ethernet V2 and 802.3 backward compatible?

Several data link technologies have been proposed throughout the years. Ethernet came first, and then IEEE created the 802.3 standard. They are similar, and both use the CSMA/CD access mechanism. The main difference is the way the ethernet frame is encapsulated (that is, the way the DLL header is constructed). Let's take a look at both an Ethernet V2 header and an 802.3 header.


They each begin with a 7-byte preamble consisting of alternating 1s and 0s, followed by the start frame delimiter (SFD), which is the binary sequence 10101011. The 6-byte destination address and 6-byte source address follow. Then comes a 2-byte field that determines what type the packet is. If this field is less than or equal to 05DC, it is a length field and the frame is an 802.3 frame. The next 3 bytes of an 802.3 frame will be the destination service access point (DSAP), source service access point (SSAP), and control fields. These are called the logical link control header, which is described in the IEEE 802.2 specification. If the DSAP/SSAP/CNTRL fields happen to be AA/AA/03, then the frame is a SNAP frame, in which case org-id and type fields follow. SNAP is used for backward compatibility between ethernet V2 and 802.3. Finally, both ethernet V2 and 802.3 frames end with a 4-byte frame check sequence that uses CRC to detect damaged or corrupted frames.

05DC is hex for 1500, the maximum payload length of an Ethernet (or 802.3) frame. The ethernet V2 frame does not have a length field; rather, the field is used to indicate the type of data that follows. A value of 0800 means that this header is encapsulating an IPv4 packet. 86DD means that this header is encapsulating an IPv6 payload. The type value 8100 means that the frame is a VLAN-tagged frame. 0806 is the code for an ARP frame. Other type codes are available in open literature.

As an example, consider the following captured, raw frame:

02 60 8C 67 69 74 02 60 8C 74 11 78 00 81 F0 F0
DA 3A 0E 00 FF EF 16 00 00 00 00 00 6b 16 19 01
FF 53 4D 2D 00 00 00 00 08 00 00 00 00 00 00 00
00 00 18 08 05 00 00 00 94 07 0F 2E 00 58 00 01
00 40 00 16 00 20 20 00 00 00 00 00 00 00 00 00

The first six bytes (02 60 8C 67 69 74) are the destination address for this frame. The next six (02 60 8C 74 11 78) are the source address. The two bytes that follow (00 81) are either length or type. Since this value is less than 05 DC, we immediately know that this is a length and as such an 802.3 frame. F0 F0 DA are the DSAP, SSAP, and Control values, respectively. Please notice that the preamble and SFD are not normally shown.
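Extracting these fields from the raw bytes takes only a few lines. This sketch (the classify helper is made up for illustration) applies the less-than-or-equal-to-05DC test from above:

import struct

def classify(frame):
    # split off destination, source, and the 2-byte length/type field
    dst, src = frame[0:6], frame[6:12]
    (value,) = struct.unpack("!H", frame[12:14])
    kind = "802.3 length" if value <= 0x05DC else "Ethernet V2 type"
    return dst.hex(":"), src.hex(":"), hex(value), kind

raw = bytes.fromhex(
    "02608C676974"   # destination address
    "02608C741178"   # source address
    "0081"           # 0x0081 <= 0x05DC, so this is a length
    "F0F0DA"         # DSAP, SSAP, Control (802.2 LLC header)
)
print(classify(raw))   # (..., ..., '0x81', '802.3 length')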

One former DLL protocol was high-level data link control (HDLC). It was based on IBM's SDLC protocol, but it never caught on like the open standards ethernet and 802.3. Its framing was drastically different from ethernet. In HDLC, frames always start and end with the flag sequence 01111110 (hex 7E). It uses bit stuffing (a 0 is inserted after every five consecutive 1s) in case a sequence like the flag appears in the data portion of the frame. The complete structure of the frame is flag-address-control-data-fcs-flag. Another old data link protocol, PPP, was used as the main communication protocol between two routers. This contrasts with ethernet V2 and 802.3, which are meant as communication protocols between two hosts in the same LAN or between a host and a router. PPP was loosely based on HDLC, and the frame structure is similar.

 

6e. Describe how packet collisions in a network are controlled using carrier-sense multiple access with collision detection (CSMA/CD)

  • What improvements did Collision Detection bring to the CSMA protocol?
  • Under what conditions is CSMA a really effective protocol?

The way access to the medium is controlled in a data link layer network is called medium access control (MAC). In the real world, many stations contend for the same medium in an ethernet system, since there is only one medium shared among many hosts. Deciding who can access the medium first is the job of MAC protocols. Ethernet, of course, uses the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol.

When using CSMA/CD, the host senses the medium to determine if it is available for transmission (the steps are sketched in code after the list):

  1. If the medium is idle, transmit
  2. If the medium is busy, continue to listen until the channel is idle, then transmit immediately
  3. If a collision is detected during transmission, immediately cease transmitting
  4. After a collision, wait a random amount of time, then attempt to transmit again (repeat from step 1)
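Here is a sketch of that loop in Python. The medium object with busy(), transmit(), and jam() methods is a hypothetical interface, but the backoff rule (wait a random number of slot times between 0 and 2^k - 1, with k capped at 10 and 16 total attempts) is the classic ethernet binary exponential backoff:

import random
import time

SLOT_TIME = 51.2e-6       # classic 10 Mbps ethernet slot time (512 bit times)
MAX_ATTEMPTS = 16

def backoff(attempt):
    # binary exponential backoff: wait 0..2^k - 1 slot times, k capped at 10
    k = min(attempt, 10)
    return random.randint(0, 2 ** k - 1) * SLOT_TIME

def csma_cd_send(medium, frame):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while medium.busy():               # steps 1-2: listen until idle
            pass
        if medium.transmit(frame):         # step 3: transmit() returns False
            return True                    # and ceases sending on a collision
        medium.jam()                       # make sure everyone saw the collision
        time.sleep(backoff(attempt))       # step 4: random wait, then retry
    return False                           # excessive collisions: give up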

The CSMA protocol has its roots in ALOHA, which was not efficient but accomplished its original goal. In an ALOHA system, stations simply transmitted without checking if anyone else was using the channel. Collisions could occur at any time, which meant that a machine could be nearly finished with a transmission when a second machine started to transmit. This would render the transmission in progress useless, wasting all of that time. Channel utilization for pure ALOHA was only about 18%.

Slotted ALOHA came next, and was designed to improve channel utilization. Time was divided into slots corresponding to the time it takes for a standard-size frame to be transmitted. Stations were required to be a little more patient and wait until the beginning of the next slot before transmitting. This assured that a station that successfully grabbed the medium would see its full frame transmitted before a new frame transmission was attempted by any other machine. That improved channel utilization to about 36%. This was still relatively low, since stations had to wait for the next slot boundary even when no one was using the medium, leaving it idle.
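The 18% and 36% figures come from the standard textbook throughput formulas S = Ge^(-2G) for pure ALOHA and S = Ge^(-G) for slotted ALOHA, where G is the offered load in frames per frame time; the formulas are not derived in this guide, but they are easy to evaluate:

import math

def pure_aloha(G):      # throughput S given offered load G (frames per frame time)
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

print(pure_aloha(0.5))     # ~0.184: pure ALOHA peaks at about 18%
print(slotted_aloha(1.0))  # ~0.368: slotted ALOHA peaks at about 36%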

That led to CSMA, where stations would constantly sense the channel and transmit only if they sensed that it was not busy. This is now referred to as "persistent" CSMA, since the stations sense the medium constantly. This was a great improvement over ALOHA, and it is effective under low load conditions where collisions are limited. The next step came with non-persistent CSMA, which improved upon persistent CSMA. Using this method, stations only sense the channel when they need to transmit. If they sense the channel is busy, they wait a random amount of time before trying again. In persistent CSMA, many stations sensing the channel could all start transmission at the same time, resulting in many collisions. With non-persistent CSMA, each station senses and tries again at a random, different time. This reduces the rate of collisions and improves channel utilization.

 

6f. Build and troubleshoot a variety of L2 networks using bridges, L2 switches, and repeaters

  • What are the main differences between a bridge, an L2 switch, and a hub?

Repeaters and bridges are the main building blocks of a simple L2 network, but they are very different devices. A repeater implements the OSI physical layer and extends the collision domain of the network. Repeaters, commonly known as hubs, simply take incoming traffic and retransmit it on all outgoing lines. That means that only one transmission is possible at a time, since multiple simultaneous transmissions will result in collisions. The repeater is designed to extend the physical length of a network segment by digital signal regeneration. Basically, it repeats the data from its input to all of its outputs.

The basic idea of a bridge is to transparently interconnect LANs. A bridge is a smart device that learns the presence of hosts. Incoming frames are switched to one outgoing line accordingly. This, as opposed to a repeater, allows for many transmissions to happen at the same time.

The basic operation of a bridge can be described as follows:

  • Forwarding by relaying data frames
  • Discarding damaged frames through CRC
  • Listening and learning the actual location of stations attached to it
  • Filtering to prevent unnecessary (or unwanted) traffic from flowing through a particular LAN segment if there is no host to receive it
  • Maintenance of the forwarding (and filtering) database (FDB), which is used to determine if a frame will be forwarded or filtered, and where dynamically learned entries will age out

Consider this figure:


The bridge will learn that host C is located in Ethernet 2 and forward the traffic. It will also learn that B is in Ethernet 1 and filter the traffic, keeping it local to Ethernet 1. Finally, there are two very important concepts to remember. For the bridge to know that B is in Ethernet 1, B must have produced some traffic; upon listening to that traffic, the bridge saves that information in the forwarding database. If A sends traffic to a host D, which does not exist, the bridge treats it as an unknown address. The problem is that it has no way of knowing whether host D simply has not produced any traffic yet, or whether host D is nonexistent. The bridge will err on the side of caution by flooding (broadcasting) the traffic to all of its ports, just in case host D is present. Every time a bridge receives traffic destined to an unknown address, it treats it as broadcast traffic and floods it out of all of its ports. Think about this in relation to gratuitous ARP. Of course, if the traffic is destined to the broadcast address (ff:ff:ff:ff:ff:ff), the switch will just comply and treat it as broadcast to all ports belonging to the same VLAN.
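The learn/forward/filter/flood behavior fits in a few lines. This is a minimal sketch (port names and the class itself are made up for illustration), reproducing the A/B/C scenario just described:

class LearningBridge:
    # minimal sketch of transparent bridging with a forwarding database (FDB)
    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, ports):
        self.ports = ports
        self.fdb = {}                      # MAC address -> port it was seen on

    def receive(self, in_port, src_mac, dst_mac):
        self.fdb[src_mac] = in_port        # learn the sender's location
        if dst_mac == self.BROADCAST or dst_mac not in self.fdb:
            # unknown or broadcast destination: flood out every other port
            return [p for p in self.ports if p != in_port]
        out = self.fdb[dst_mac]
        return [] if out == in_port else [out]   # filter if local, else forward

bridge = LearningBridge(ports=["eth1", "eth2"])
print(bridge.receive("eth2", "C", "A"))   # A unknown: flood to eth1
print(bridge.receive("eth1", "A", "C"))   # C learned on eth2: forward there
print(bridge.receive("eth1", "B", "A"))   # A learned on eth1: filter (stay local)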

When networks started to get more dynamic, companies started to follow an "ethernet to the desk" strategy. Each user's computer would now connect to its own port on a bridge. The older "vampire taps" were no longer needed. At that time, bridges started to give way to layer 2 switches. The basic principle of the two devices is the same, but multiple users could now be attached directly to the same switch. Frame handling began to be done in hardware most of the time, and multiple data paths could handle multiple frames at a time. Layer 2 switches could also do "cut through": as soon as the switch could read the destination address in the frame header, it would queue the frame to be sent immediately, without checking the integrity of the frame. CRC checks could make store-and-forward bridges comparatively slow, and cut through dramatically improved the speed of switches. It had a cost, though, since corrupted frames could circulate around the network unnecessarily.

These concepts can be summarized in two figures:



 

6g. Use Virtual LANs (VLANs) to create multiple LANs in the same physical environment

  • What is the difference between a trunk (802.1q) and an access link in a VLAN environment?
  • What is the difference between dividing the network into VLANs and using subnetting to group traffic?

In its simplest form, a LAN is a broadcast (or flood) domain – a section of the network where any data link layer broadcast traffic is delivered to all end stations. Beyond those boundaries, broadcast traffic does not flow. The LAN boundaries are determined by cabling. Bridges receive and forward broadcast traffic, and devices on different LANs cannot see each other unless a device with ports in each LAN, like a router, routes the traffic between the two LANs. Because broadcast traffic is distributed throughout the entire LAN, LANs are usually kept from growing too large.

With multiple users in a single LAN, traffic can grow to unmanageable proportions. That is where VLANs become useful. A VLAN is an administratively configured broadcast domain. The network administrator determines which end station belongs to which VLAN. Broadcast traffic for one VLAN is only seen by members of that VLAN.

Normally, VLAN assignment is based on the physical port of the switch, but other methods can be used, like MAC-based or application-based VLANs. In the past, LANs were small and spanned only a single bridge. However, as networks grew and switches and routers were added, simple grouping became obsolete, especially if there were members of the same VLAN in different bridges. To deal with that problem, IEEE developed the 802.1q standard. 802.1q established a method for tagging ethernet frames with VLAN membership information. It works in conjunction with 802.1p, which is a layer 2 standard for prioritizing traffic at the data link layer. 802.3ac combines both and defines a frame format that implements both priority and VLAN information. An 802.3 frame with a value of 8100 in the type field is a tagged frame. The next 3 bits carry the priority, the next bit the canonical indicator, and the following 12 bits the VLAN tag. This diagram shows a regular, untagged 802.3 frame followed by a tagged one.


The 802.1q tag is "shimmed" into the original frame starting where the original Type/Length field was. The original place of the Type/Length field is now filled with type 8100, which means "this is a tagged frame". That is followed by a three-bit priority, a one-bit "canonical indicator" always set to 0 for ethernet, and finally, 12 bits of VLAN ID. 12 bits for the VLAN ID allows for a total of 2^12 = 4096 different VLAN IDs. After that, the original frame continues normally with the original T/L field and the rest of the frame.
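Tag insertion really is just a 4-byte shim after the two address fields. A sketch (tag_frame and read_tag are illustrative names):

import struct

def tag_frame(frame, vlan_id, priority=0):
    # shim the 4-byte tag in right after the 12 address bytes
    tci = (priority << 13) | vlan_id       # 3-bit priority, 1-bit CFI (0), 12-bit VID
    return frame[:12] + struct.pack("!HH", 0x8100, tci) + frame[12:]

def read_tag(frame):
    tpid, tci = struct.unpack("!HH", frame[12:16])
    if tpid != 0x8100:
        return None                        # not a tagged frame
    return tci >> 13, tci & 0x0FFF         # (priority, vlan_id)

untagged = bytes(12) + b"\x08\x00" + b"payload"   # dummy addresses + IPv4 type
tagged = tag_frame(untagged, vlan_id=20, priority=5)
print(read_tag(untagged), read_tag(tagged))       # None (5, 20)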

To use the protocol, you have to implement a VLAN registration protocol. The VLAN registration is propagated across the network. Incoming frames are tagged with a VLAN ID. Outgoing frames are untagged if needed. Tagged frames are sent between VLAN switches. The following terms are very important for VLANs:

  • Tagged frames are frames with a VLAN tag
  • Trunk links allow more than one VLAN frame through them, and can attach two VLAN-aware switches to carry frames with different tags
  • Access links reside at the edge of the network where legacy devices attach. They are untagged for VLAN unaware devices, and VLAN-aware switches add a tag to received frames and remove them before transmitting
  • Hybrid links carry tagged and untagged traffic, and allow VLAN-unaware hosts to reside in the same VLAN

A clear port is similar to an access port. It will always accept clear frames, and also accept tagged frames, but only if the tagged frame belongs to the native VLAN or a VLAN statically configured to that port. All other frames are dropped. It also removes any configured tags before transmitting frames.

An 802.1q port is the same as a trunk port. It transmits traffic with any configured tags. It accepts clear (untagged) frames, as well as tagged frames that belong to the native VLAN or to VLANs statically bound to the port.

That brings us to the concept of port binding. The native VLAN is the VLAN whose tag is inserted into untagged traffic received on the port. MAC addresses are learned as belonging to the native VLAN of the port only. A port can also be statically bound to any other VLAN. As such, it is configured to accept traffic with a VLAN tag different from the native VLAN. Multiple VLANs can be statically configured on a port. The port will forward traffic belonging to the statically configured VLANs and drop any traffic with a different tag.

This figure clarifies these concepts.


  • Incoming traffic: Clear
    • Outgoing traffic: Tagged with VLAN 20
  • Incoming traffic: Tagged with VLAN 40
    • Outgoing traffic: Tagged with VLAN 40
  • Incoming traffic: Tagged with VLAN 50
    • Outgoing traffic: Dropped

 

6h. Illustrate the Spanning Tree Protocol (STP), why it is needed, and how it breaks loops in an L2 network

  • Why is the spanning tree protocol essential in multi-switch networks?
  • How is the root bridge in a spanning tree selected?

As networks grow, the possibility of involuntarily (or voluntarily) creating loops in networks increases. Loops can wreak havoc on a network that is based only on transparent bridging, especially in the presence of broadcast traffic. Because of the loop, traffic that was already forwarded by the bridge will come back to the input and will again be sent out. This duplication of packets will cause network storms that degrade network performance and in most cases render the network basically useless.

Spanning Tree Protocol (STP) was developed to solve the active loop problem. STP configures an active topology of bridged LANs into a single spanning tree, so there is at most one logical path between any two LAN segments. This eliminates network loops and provides a fault-tolerant path by automatically reconfiguring the spanning-tree topology as a result of a bridge failure or breakdown in the path.

STP operates transparently to the end nodes. It is implemented by the switches in the network, and end hosts are unaware of what is going on with it. IEEE standard 802.1d describes a spanning tree algorithm that has been implemented by most bridge manufacturers. This standard defines each bridge to have a bridge ID. The ID is a 64-bit value composed of a 2-byte priority followed by the bridge's 6-byte MAC address.

The creation of a spanning tree starts with the selection of a root bridge. The root bridge provides a point of reference within the bridged LAN that makes the process of creating the spanning tree faster. When the network is first brought up, all bridges participating in the spanning tree process talk to each other. The root bridge is selected based on its bridge ID: the bridge with the lowest-valued bridge ID becomes the root bridge. Since the bridge ID consists of the priority followed by the MAC address, if a network administrator wants a particular bridge to become the root, all they need to do is set its priority to a low value. If the priority is the same for every bridge, then the bridge with the lowest MAC address becomes the root. Reselection of a root bridge happens again in the event of a network reconfiguration, such as when a new bridge is connected to the network or an active bridge or link fails.
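The election rule is simply "lowest (priority, MAC) pair wins". A sketch, assuming the common default priority of 32768:

def elect_root(bridges):
    # lowest bridge ID wins: priority is compared first, MAC breaks ties
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))

bridges = [
    {"priority": 32768, "mac": "00:1a:2b:3c:4d:5e"},
    {"priority": 32768, "mac": "00:1a:2b:3c:4d:01"},  # lowest MAC at same priority
    {"priority": 4096,  "mac": "00:ff:ee:dd:cc:bb"},  # administrator lowered this one
]
print(elect_root(bridges))   # the priority-4096 bridge becomes the root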

After a root bridge has been selected, a designated bridge is selected for each LAN segment in the network. This happens when the network is brought up, or when there is a topology change (when a new bridge is added or when an active link or bridge fails). The designated bridge for each LAN is the bridge with the port with the lowest root path cost. In the event of equal path costs, the bridge with the lowest bridge ID is selected as the designated bridge. To exchange all the information required for the selection of the root and designated bridges, bridges use a special packet called a Bridge Protocol Data Unit, or BPDU. BPDUs carry all the information needed by the switches to determine that a loop-free network exists. Bridges use a special multicast address in order to communicate among themselves using BPDUs. 802.1d defines the address 01-80-C2-00-00-00 as the multicast address for BPDUs, and all 802.1d-compliant bridges must use it. The BPDU looks like this:


At any given time, a bridge's port will be in one of the following states:

  • The Blocking State: Port does not relay frames between LAN segments. Received BPDUs are still processed in order to maintain the spanning tree. The learning process does not add station information from a blocked port to the filtering database. This state is entered upon bridge initialization, or from the Disabled state if enabled via bridge management. This state can also be entered from the Listening, Learning, or Forwarding states if a BPDU indicates that another bridge is a better designated bridge for the attached LAN.
  • The Listening State: Port is preparing to participate in relaying of frames. No frames are relayed (to prevent loops during spanning-tree topology changes). BPDUs are both received and transmitted for participation in STP. This state is typically entered from the Blocking state after STP protocol has determined that the port should participate in frame relay. It is typically left upon expiration of the protocol timer and entering into the Learning state.
  • The Learning State: Port is preparing to participate in relaying of frames. No frames are relayed while in this state. The learning process is enabled so that the filtering database is populated, reducing unnecessary flooding once the Forwarding state is entered. BPDUs are both received and transmitted for participation in STP. This state is entered from the Listening state upon expiration of the protocol timer.
  • The Forwarding State: Port actively relays frames between LAN segments and the learning process is active. This state is always entered from the Learning state and it may be left to go to the Blocking or Disabled states by either spanning tree or management action.
  • The Disabled State: This state is entered from any other state via management directive. No frames are relayed and no BPDU is examined or transmitted. It is left by management directive.

 

6i. Describe different allocation methods used by the data link layer

  • What access allocation mechanism provides a more deterministic behavior in an L2 network?

In a token ring topology, all hosts are attached to a ring. In order to access the channel, they must wait for a circulating token to arrive. When the token arrives, they grab it, and data flows for a certain amount of time. The time that the token can be kept, and that data can be sent, is selected when the ring is initializing.

The token ring has fallen in popularity and is barely used, if at all, in networks today. What are the typical disadvantages of a token ring configuration? To start, every station in a token ring is a single point of failure. Since all stations are connected in a physical (or logical) ring, any host that fails breaks the ring, and the network is down. Contrast that with ethernet using CSMA, where every station works independently and a failure of one will not affect the operation of the others in the network. Later improvements in token ring technology addressed this issue by adding bypass switches that bypass a host if it fails. However, the single point of failure has simply moved from the host to the bypass switch. In addition, every time a host fails and the bypass switch takes over, a full reconfiguration and convergence of the network must take place. Stations must stop all traffic forwarding and enter a reconfiguration state to determine things like the maximum time to hold the token and the new time to wait for the token to come back before announcing a lost-token condition and a possible ring breakage.

The token ring technology did have some good properties after all. For example, Token Ring can be relatively deterministic. Once the token is released, it is relatively easy to calculate when the next frame with data will arrive based on the number of stations in the ring and the pre-selected token hold time for each station. 10 stations down the ring with a hold time of 5 msec means that you should expect to see the token back in 50 msec. If it has not, the station initiates a "lost token" recovery process.

Token Ring allows the use of large or small frames, as opposed to CSMA/CD, whose minimum frame size depends on the round-trip time between the two farthest stations in the network. Therefore, the minimum frame size on CSMA/CD must be large if the network is large. With a Token Ring, all you need to do is have possession of the token to transmit. Your frames could be small without any effect on the operation of the network.
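The CSMA/CD minimum frame size can be estimated directly: the frame must take at least as long to transmit as the worst-case round trip, so the sender is still sending when news of a collision comes back. A sketch, assuming a propagation speed of about 2×10^8 m/s:

def min_frame_bits(rate_bps, distance_m, prop_speed=2e8):
    # the frame must outlast the worst-case round trip so the sender
    # is still transmitting when news of a collision comes back
    round_trip = 2 * distance_m / prop_speed
    return rate_bps * round_trip

# classic 10 Mbps ethernet over a ~2500 m maximum span:
print(min_frame_bits(10e6, 2500))   # 250 bits of pure propagation budget;
                                    # the standard's 512-bit (64-byte) minimum
                                    # adds margin for repeaters and jam signals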

Another good feature of the token ring is its relatively steady performance under heavy loads as compared to CSMA. Heavy load means that the medium is busy most of the time, and that means that a CSMA host will not be able to transmit, or if it does, multiple collisions could occur. Token Ring, on the other hand, performs about the same under low and high traffic load, since your use of the network depends on the round-trip time of the token. Under low load, on the contrary, CSMA performs much better. Low load equals fewer collisions, and that translates into better throughput. A Token Ring host must wait for the token to arrive before transmitting new data, thus subjecting it to substantial delays at times, even if the medium is relatively open.

The final relative advantage of Token Ring is its support for priority assignment. A token can be labeled as high priority, and hosts with lower priorities cannot send traffic until the priority of the circulating token is lowered to a value at or below their own. A doctor's office requesting vital information during surgery is a classic example of where this could be useful.

Token Ring was originally greatly favored by IBM and standardized as IEEE 802.5. Token Bus, standardized as IEEE 802.4, was a variation of the traditional token ring where the physical ring was replaced by a "logical" ring, even though the machines were attached to a bus, not an actual ring. Then there is CSMA with collision avoidance (CSMA/CA), developed for wireless networks, to be considered next.

 

6j. Use the 802.11 protocol to build and use wireless networks

  • What is the difference between DSSS and FHSS transmission methods in a wireless network?
  • Under what conditions would DSSS work better than FHSS?
  • What conditions in your network would move you to replace 802.11ac with 802.11ax?

802.11 uses a CSMA/CA access mechanism, where collisions are avoided before they can happen. This is accomplished by having the devices reserve the channel before transmitting. The intended sender transmits a short message called a request to send (RTS). The other side responds with a clear to send (CTS). The channel is then available for use during that reservation period.

WiFi networks are classified based on their complexity, as follows:

  • Independent basic service set (IBSS), also known as ad hoc, is a system of peer-to-peer hosts that communicate with each other and is not intended to be attached to the internet.
  • Basic service set (BSS) is composed of users that wirelessly attach to an access point (AP). The AP is attached to a wired LAN with access to the internet. There is only one AP, and access to the internet is limited to users within a short radius of the AP.
  • Extended service set (ESS), otherwise known as an infrastructure network, contains more than one AP that multiple users can connect to. The AP is then attached to a wired LAN with access to the internet. Users can freely move from one AP to the next to continuously have non-interrupted access.

To provide privacy during communications, two methods are normally used: direct sequence spread spectrum (DSSS) and frequency-hopping spread spectrum (FHSS).

DSSS systems generate a redundant bit pattern (chip) known by both sides for each bit to be transmitted. To a third-party user, the DSSS appears as low-power wideband noise. No message can be recovered without the knowledge of the chip pattern.

FHSS uses a narrow carrier that changes frequency in a pattern known to both the sender and the receiver only. If both are properly synchronized, it appears as a single channel. To the unintended user, it appears to be a short pulse of noise.

DSSS operates with a lower signal-to-noise ratio and can operate over longer distances than FHSS. However, due to the way it occupies the frequency spectrum, DSSS is more prone to interference than FHSS. For that reason, FHSS should be considered in places with high interference and electromagnetic noise.
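The spreading idea behind DSSS can be illustrated with a toy chipping sequence. Note that the 11-chip pattern below is made up for illustration; it is not the actual Barker sequence that 802.11 DSSS uses:

CHIP = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]   # illustrative 11-chip pattern
                                            # (not the real 802.11 Barker code)

def spread(bits):
    # each data bit becomes 11 chips: the pattern as-is for a 1, inverted for a 0
    out = []
    for b in bits:
        out.extend(CHIP if b else [c ^ 1 for c in CHIP])
    return out

def despread(chips):
    # correlate each 11-chip group against the pattern; a majority vote
    # tolerates a few corrupted chips
    bits = []
    for i in range(0, len(chips), 11):
        matches = sum(c == p for c, p in zip(chips[i:i + 11], CHIP))
        bits.append(1 if matches > 5 else 0)
    return bits

data = [1, 0, 1, 1]
chips = spread(data)
chips[3] ^= 1             # interference flips a couple of chips
chips[15] ^= 1
print(despread(chips))    # [1, 0, 1, 1]: data recovered anyway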

The original 802.11 release operated at a frequency of 2.4 GHz and achieved a maximum bandwidth of 2 Mbps. Multiple releases have followed, starting with 802.11a up to the current 802.11ax. 802.11ax offers many features that align it with the growth of the internet of things (IoT), whose goal is for every device to be attached to the internet. A related family, the 802.16 wireless standards, also known as WiMax, uses a connection-oriented architecture, with each connection getting one of four different classes of service: constant bit rate, real-time variable bit rate, non-real-time variable bit rate, and best effort. Constant bit rate was developed for the transmission of uncompressed voice. Real-time variable bit rate is intended for compressed multimedia in a real-time environment. Non-real-time variable bit rate is intended for the transfer of large files that are not real-time.

 

Unit 6 Vocabulary

This vocabulary list includes terms and acronyms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course.

Try to think of the reason why each term is included.

  • ARP
  • ARP opcode
  • Gratuitous ARP
  • Sliding Window
  • Go-Back-N
  • Selective Repeat
  • Selective Reject
  • CRC
  • Frame delineation
  • Bit stuffing
  • Character stuffing
  • Character Count
  • Ethernet V2
  • ALOHA
  • 802.3
  • 802.4
  • 802.5
  • CSMA
  • CSMA/CD
  • CSMA/CA
  • Token Ring
  • Token Bus
  • HDLC
  • PPP
  • Repeater/hub
  • Bridge/Switch
  • Bridge FDB
  • Store and Forward
  • Cut Through
  • VLAN
  • 802.1Q
  • Tagged frame
  • Access, Hybrid, and Trunk port
  • Port Native VLAN
  • Port VLAN binding
  • STP
  • BPDU
  • Blocking, Forwarding, Learning, Disabled port state
  • WiFi
  • WiMax
  • 802.11
  • 802.16
  • DSSS
  • FHSS
  • BSS
  • IBSS
  • ESS
  • Ad Hoc WiFi