Data Link Layer

Chapter 5

Objectives of Chapter 5


Become familiar with…



The data link layer sits between the physical layer and the network layer.

The data link layer accepts messages from the network layer and controls the hardware that actually transmits them.

Both the sender and receiver have to agree on the rules or protocols that govern how their data link layers will communicate with each other.


A data link protocol provides three functions: media access control, error control, and message delineation.


Media Access Control

Media access control (MAC) refers to the need to control when devices transmit.

MAC becomes important when several computers share the same communications circuit, because the protocol must ensure that no two devices attempt to transmit data at the same time.

Controlled Access

Most computer networks managed by a host mainframe computer use controlled access.

Controlling Access

Polling is the process of sending a signal to a client that gives it permission to transmit or to ask it to receive.

There are several types of polling.


Contention is the opposite of controlled access. Computers wait until the circuit is free and then transmit whenever they have data to send.

Contention requires a technique to handle a collision, which occurs when two devices try to transmit at the same time.

Relative Performance

In general, contention approaches work better than controlled approaches for small networks that have low usage.

In high volume networks, many devices want to transmit at the same time, and a well-controlled circuit prevents collisions.


Error Control in Networks

There are two categories of network errors.

What are Network Errors?

Network errors are a fact of life in data communications networks.

Normally errors occur in bursts.

Dial-up lines are more prone to errors because they have less stable parameters.

What Causes Errors?

Line noise and distortion cause errors.

Error Prevention

There are many ways to prevent errors:

Error Detection

It is possible to develop data transmission methodologies that give very high error detection and correction performance.

The only way to do error detection and correction is to send extra data with each message.

In general, the larger the amount of error detection data sent, the greater the ability to detect an error.

Error Detection

There are three common error detection methods.

Parity Checking

One of the oldest and simplest methods, parity checking adds one additional bit to each byte in the message. The value of this parity bit depends on the number of 1s in each byte transmitted. Even parity makes the total number of 1s (including the parity bit) even; odd parity makes the total odd.

Unfortunately, if two bits in the same byte are in error, parity checking will not catch it. As a result, parity checking detects only about 50% of all errors.

Parity Checking

Assume we are using even parity with 7-bit ASCII.

The letter V in 7-bit ASCII is encoded as 0110101.

Because there are four 1s (an even number), parity is set to zero.

This would be transmitted as: 01101010.

Assume we are using odd parity with 7-bit ASCII.

The letter W in 7-bit ASCII is encoded as 0001101.

Because there are three 1s (already an odd number), the parity bit is set to zero.

This would be transmitted as: 00011010.
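The parity rule above can be sketched in a few lines of Python (an illustrative sketch, not from the chapter; the function name `add_parity` is my own):

```python
def add_parity(bits7: str, even: bool = True) -> str:
    """Append a parity bit to a 7-bit character.

    Even parity: the total number of 1s, including the parity bit, is even.
    Odd parity: the total number of 1s, including the parity bit, is odd.
    """
    ones = bits7.count("1")
    if even:
        parity = "0" if ones % 2 == 0 else "1"
    else:
        parity = "1" if ones % 2 == 0 else "0"
    return bits7 + parity

# Even parity: 0110101 already has four 1s (even), so the parity bit is 0.
print(add_parity("0110101", even=True))   # 01101010
# Odd parity: 0001101 already has three 1s (odd), so the parity bit is 0.
print(add_parity("0001101", even=False))  # 00011010
```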

Longitudinal Redundancy Checking (LRC)

LRC was developed to overcome the problem with parity’s low probability of detection.

LRC adds one additional character, called the block check character (BCC), to the end of the entire message or packet of data.

The value of the BCC is calculated much like a parity bit, but over the entire message: each bit of the BCC is the parity of one bit position across all of the message's characters. LRC detects about 98% of errors.
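One common way to realize an LRC in code is to XOR all the bytes of the message, which is equivalent to taking even parity down each bit column. A Python sketch (the function name and sample message are my own):

```python
def block_check_character(message: bytes) -> int:
    """LRC block check character: XOR of all bytes, i.e. even parity
    computed down each bit position (column) of the message."""
    bcc = 0
    for byte in message:
        bcc ^= byte
    return bcc

msg = b"DATA"
print(f"BCC: {block_check_character(msg):08b}")  # BCC: 00010000
# Receiver-side check: XOR over message plus BCC is zero if no error occurred.
print(block_check_character(msg + bytes([block_check_character(msg)])))  # 0
```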


Polynomial Checking

Like LRC, polynomial checking adds 1 or more characters to the end of the message based on a mathematical algorithm.

With a checksum, 1 byte is added to the end of the message. It is computed by summing the message's byte values and dividing by 255; the remainder is the checksum. (About 95% effective.)

With CRC, 8, 16, 24, or 32 check bits are added, computed as the remainder of a division problem. (99.969% effective with an 8-bit CRC; 99.99% with a 16-bit CRC.)
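The checksum described above is easy to sketch; for CRC, rather than hand-rolling the polynomial division, Python's standard library exposes a ready-made 32-bit CRC. (The function name `checksum_255` is my own; this is an illustration, not the chapter's code.)

```python
import zlib

def checksum_255(message: bytes) -> int:
    """One-byte checksum: remainder when the sum of the byte values
    is divided by 255, as described above."""
    return sum(message) % 255

print(checksum_255(b"HELLO"))  # 117
# A CRC is the remainder of a polynomial division; the standard
# library provides a 32-bit CRC:
print(f"{zlib.crc32(b'HELLO'):#010x}")
```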

Error Correction via Retransmission

The simplest, most effective, least expensive, and most commonly used method for error correction is retransmission.

A receiver that detects an error simply asks the sender to retransmit the message until it is received without error. This process is called Automatic Repeat reQuest (ARQ).

Error Correction via Retransmission

With Stop-and-Wait ARQ, the sender stops and waits for a response from the receiver after each message or data packet.

The receiver responds with either an acknowledgment (ACK) or a negative acknowledgment (NAK).

With Continuous ARQ, the sender does not wait for an acknowledgment before sending the next message. If it receives a NAK, it retransmits the needed messages.
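A toy simulation of Stop-and-Wait ARQ (purely illustrative; the function name, error rate, and seed are my own assumptions) shows the retransmit-until-ACK loop:

```python
import random

def stop_and_wait_send(frames, error_rate=0.1, seed=42):
    """Send frames one at a time; after each, wait for the receiver's
    verdict and retransmit the same frame on a NAK.

    Returns the total number of transmissions, including retransmissions.
    """
    rng = random.Random(seed)
    transmissions = 0
    for frame in frames:
        while True:
            transmissions += 1
            corrupted = rng.random() < error_rate  # receiver's error check
            if not corrupted:
                break  # ACK: move on to the next frame
            # NAK: loop and retransmit the same frame
    return transmissions

print(stop_and_wait_send(["f1", "f2", "f3"]))
```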


Forward Error Correction

Forward error correction uses codes containing enough redundancy that the receiving end can detect and correct errors without retransmission of the original message.


Data Link Protocols

Asynchronous Transmission

Asynchronous Transmission is often referred to as start-stop transmission because the transmitting device can transmit a character whenever it is convenient, and the receiving device will accept that character.

Each character is transmitted independently of all other characters.

To accomplish this, a start bit (0) and a stop bit (1) are added to each character. Recognizing the start and end of each character in this way is called synchronization.
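Framing a character for asynchronous transmission is just a matter of wrapping it in the start and stop bits. A minimal sketch (`frame_async` is my own name; a parity bit, if used, would be appended before the stop bit):

```python
def frame_async(char7: str) -> str:
    """Wrap a 7-bit character with a start bit (0) and a stop bit (1)."""
    return "0" + char7 + "1"

print(frame_async("0110101"))  # 001101011
```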


Asynchronous File Transfer Protocols

In general, microcomputer file transfer protocols are used on asynchronous point-to-point circuits, typically across telephone lines via a modem.


Synchronous Transmission

With synchronous transmission, all of the data in one group is transmitted at one time as a block called a frame or packet.

The start and end of each packet are often marked by adding synchronization (SYN) characters at the beginning of the packet.

Synchronous Transmission

There are many protocols for synchronous transmission that fall into three broad categories:


Synchronous Transmission

Token Ring (IEEE 802.5) was developed by IBM in the early 1980s, and later became a formal standard of the IEEE. It uses a controlled access media access protocol.

Ethernet (IEEE 802.3) is a byte-count protocol, because instead of using special characters or bit patterns to mark the end of a packet it includes a field that specifies the length of the message portion of the packet.


Synchronous Transmission

Compressed SLIP (CSLIP) uses compression to reduce the amount of data transmitted.



Transmission Efficiency

One objective of a data communications network is to move the highest possible volume of accurate information through the network.

Each communication protocol uses some bits or bytes to delineate the start and end of each message and to provide error control. Every transmission therefore contains both information bits (which convey the user's meaning) and overhead bits (used for error checking and for marking the start and end of characters and packets).

Transmission Efficiency

Transmission efficiency is defined as the total number of information bits divided by the total number of bits in transmission.
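For example, asynchronous transmission of a 7-bit ASCII character with one start bit, one stop bit, and one parity bit carries 7 information bits out of 10 total. A quick check in Python (illustrative only; the function name is my own):

```python
def efficiency(info_bits: int, overhead_bits: int) -> float:
    """Transmission efficiency: information bits / total bits transmitted."""
    return info_bits / (info_bits + overhead_bits)

# 7-bit ASCII with start, stop, and parity bits: 7 info bits, 3 overhead bits.
print(f"{efficiency(7, 3):.0%}")  # 70%
```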

Transmission Efficiency

ZMODEM is more efficient than YMODEM which is more efficient than XMODEM. The general rule is that the larger the message field, the more efficient the protocol.

In designing a protocol, there is a trade-off between large and small packets. Small packets are less efficient, but are less likely to contain errors and are less costly in terms of circuit capacity to retransmit if there is an error.


Transmission Efficiency

Throughput is the total number of information bits received per second, after taking into account the overhead bits and the need to retransmit packets containing errors.

Throughput (TRIB)

Calculating the actual throughput of data communication is complex.

The use of a shared multipoint circuit, rather than a dedicated point-to-point circuit will affect throughput, because the total capacity in the circuit must now be shared among several computers.

Throughput (TRIB)

The term transmission rate of information bits (TRIB) describes the effective rate of data transfer.

TRIB = (number of information bits accepted) / (total time required to get the bits accepted)


Throughput (TRIB)

The following TRIB example shows the calculation of throughput assuming a 4800 bits per second half-duplex circuit.


TRIB = K(M - C)(1 - P) / ((M/R) + T)
     = 7(400 - 10)(1 - 0.01) / ((400/600) + 0.025)
     ≈ 3908 bps

where: K = 7 bits per character (information)

M = 400 characters per block

R = 600 characters per second (derived from 4800 bps divided by 8 bits/character)

C = 10 control characters per block

P = 0.01 (10^-2), that is, one retransmission out of every 100 blocks transmitted (1%)

T = 25 milliseconds (0.025) turnaround time

Throughput (TRIB)



TRIB = 7(400 - 10)(1 - 0.01) / ((400/600) + 0.025) ≈ 3908 bps

If all factors in the calculation remain constant except for the circuit, which is changed into full duplex (no turnaround time delays, T=0) then the TRIB increases to 4054 bps.

Look at the equation's denominator, where the turnaround value (T) is 0.025. If there is a further propagation delay of 475 milliseconds (0.475), this figure changes to 0.500. This demonstrates how a satellite channel affects TRIB: the total delay time is now 500 milliseconds. Using the same figures as above (except for the new 0.500 delay time), the TRIB for our half-duplex satellite link drops to 2317 bps, almost one-half of the full-duplex (no turnaround time) figure of 4054 bps.
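The three scenarios above can be checked with a small Python helper (an illustrative sketch applying the TRIB formula to the parameter values given in the example):

```python
def trib(K, M, R, C, P, T):
    """TRIB = K(M - C)(1 - P) / (M/R + T), in bits per second.

    K: information bits per character    M: characters per block
    R: characters per second             C: control characters per block
    P: probability of retransmission     T: turnaround/delay time (seconds)
    """
    return K * (M - C) * (1 - P) / (M / R + T)

print(round(trib(K=7, M=400, R=600, C=10, P=0.01, T=0.025)))  # 3908 (half duplex)
print(round(trib(K=7, M=400, R=600, C=10, P=0.01, T=0)))      # 4054 (full duplex)
print(round(trib(K=7, M=400, R=600, C=10, P=0.01, T=0.500)))  # 2317 (satellite)
```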

Next Day Air Service

End of Chapter 5