
Chapter 9


CCNA Exploration - Network Fundamentals

9 Ethernet
9.0 Chapter Introduction

9.0.1 Chapter Introduction

Page 1:

Up to this point in the course, each chapter has focused on the functions of the individual layers of the
OSI and TCP/IP protocol models and on how protocols are used to support network
communication. Several key protocols - TCP, UDP, and IP - are referenced continually in these
discussions because they provide the foundation on which networks of every size, from the smallest
to the largest, the Internet, operate today. These protocols make up the TCP/IP protocol stack, and
because the Internet was built using these protocols, Ethernet has become the predominant LAN
technology in the world.

The Internet Engineering Task Force (IETF) maintains the functional protocols and services of the
TCP/IP protocol suite in the upper layers. However, the functional protocols and services at the
OSI Data Link and Physical layers are described by various engineering organizations
(IEEE, ANSI, ITU) or by private companies (proprietary protocols). Because Ethernet is defined
by standards at these lower layers, it is best understood in reference to the OSI
model. The OSI model separates the Data Link layer functions of addressing, framing, and
accessing the media from the Physical layer standards of the media. Ethernet standards define
both the Layer 2 protocols and the Layer 1 technologies. Although Ethernet specifications
support different media, bandwidths, and other Layer 1 and Layer 2 variations, the basic frame format
and addressing scheme are the same for all varieties of Ethernet.

This chapter examines the characteristics and operation of Ethernet as it has evolved from a
shared media, contention-based data communications technology to today's high bandwidth, full-
duplex technology.

Learning Objectives

Upon completion of this chapter, you will be able to:


 Describe the evolution of Ethernet
 Explain the fields of the Ethernet Frame
 Describe the function and characteristics of the media access control method used by
Ethernet protocol
 Describe the Physical and Data Link layer features of Ethernet
 Compare and contrast Ethernet hubs and switches
 Explain the Address Resolution Protocol (ARP)

9.0.1 - Chapter Introduction


The diagram depicts a collage of Ethernet technology photos and terms including shared media,
switching, and addressing. Ethernet is the predominant LAN technology in use today.

9.1 Overview of Ethernet

9.1.1 Ethernet - Standards and Implementation

Page 1:

IEEE Standards

The first LAN in the world was the original version of Ethernet. Robert Metcalfe and his
coworkers at Xerox designed it more than thirty years ago. The first Ethernet standard was
published in 1980 by a consortium of Digital Equipment Corporation, Intel, and Xerox (DIX).
Metcalfe wanted Ethernet to be a shared standard from which everyone could benefit, and
therefore it was released as an open standard. The first products that were developed from the
Ethernet standard were sold in the early 1980s.

In 1985, the Institute of Electrical and Electronics Engineers (IEEE) standards committee for
Local and Metropolitan Networks published standards for LANs. These standards start with the
number 802. The standard for Ethernet is 802.3. The IEEE wanted to make sure that its standards
were compatible with those of the International Organization for Standardization (ISO) and with the OSI model.
To ensure compatibility, the IEEE 802.3 standards had to address the needs of Layer 1 and the
lower portion of Layer 2 of the OSI model. As a result, some small modifications to the original
Ethernet standard were made in 802.3.

Ethernet operates in the lower two layers of the OSI model: the Data Link layer and the Physical
layer.
9.1.1 - Ethernet - Standards and Implementation
The diagram depicts the O S I model with the Data Link and Physical Layers highlighted.
Ethernet operates in these lower two layers of the O S I model. The Data Link Layer is
subdivided into the LLC (8 0 2 dot 2) and MAC (8 0 2 dot 3) sublayers. Ethernet is defined by
Data Link Layer and Physical Layer protocols.

9.1.2 Ethernet - Layer 1 and Layer 2

Page 1:

Ethernet operates across two layers of the OSI model. The model provides a reference to which
Ethernet can be related but it is actually implemented in the lower half of the Data Link layer,
which is known as the Media Access Control (MAC) sublayer, and the Physical layer only.

Ethernet at Layer 1 involves signals, bit streams that travel on the media, physical components
that put signals on media, and various topologies. Ethernet Layer 1 performs a key role in the
communication that takes place between devices, but each of its functions has limitations.

As the figure shows, Ethernet at Layer 2 addresses these limitations. The Data Link sublayers
contribute significantly to technological compatibility and computer communications. The MAC
sublayer is concerned with the physical components that will be used to communicate the
information and prepares the data for transmission over the media.

The Logical Link Control (LLC) sublayer remains relatively independent of the physical
equipment that will be used for the communication process.

9.1.2 - Ethernet - Layer 1 and Layer 2


The diagram depicts how Layer 2 functions address Layer 1 limitations.

Layer 1 Limitations:
- Cannot communicate with upper layers.
- Cannot identify devices.
- Only recognizes streams of bits.
- Cannot determine the source of a transmission when multiple devices are transmitting.

Layer 2 Functions:
- Connects to upper layers via Logical Link Control (LLC).
- Uses addressing schemes to identify devices.
- Uses frames to organize bits into groups.
- Uses Media Access Control (MAC) to identify transmission sources.

9.1.3 Logical Link Control - Connecting to the Upper Layers

Page 1:

Ethernet separates the functions of the Data Link layer into two distinct sublayers: the Logical
Link Control (LLC) sublayer and the Media Access Control (MAC) sublayer. The functions
described in the OSI model for the Data Link layer are assigned to the LLC and MAC sublayers.
The use of these sublayers contributes significantly to compatibility between diverse end
devices.

For Ethernet, the IEEE 802.2 standard describes the LLC sublayer functions, and the 802.3
standard describes the MAC sublayer and the Physical layer functions. Logical Link Control
handles the communication between the upper layers and the networking software, and the lower
layers, typically the hardware. The LLC sublayer takes the network protocol data, which is
typically an IPv4 packet, and adds control information to help deliver the packet to the
destination node. Layer 2 communicates with the upper layers through LLC.

LLC is implemented in software, and its implementation is independent of the physical
equipment. In a computer, the LLC can be considered the driver software for the Network
Interface Card (NIC). The NIC driver is a program that interacts directly with the hardware on
the NIC to pass the data between the media and the Media Access Control sublayer.

http://standards.ieee.org/getieee802/download/802.2-1998.pdf

http://standards.ieee.org/regauth/llc/llctutorial.html

http://www.wildpackets.com/support/compendium/reference/sap_numbers
9.1.3 - Logical Link Control - Connecting to the Upper Layers
The diagram depicts the functions of the LLC sublayer as an interface to the 8 0 2 dot 3 MAC
sublayer.

LLC functions include:


- Making the connection with the upper layers.
- Framing the Network Layer packet.
- Identifying the Network Layer protocol.
- Remaining relatively independent of the physical equipment.

A hierarchical model of sublayers shows the following layers from top to bottom: the LLC
sublayer, the 8 0 2 dot 3 MAC sublayer, the physical signaling sublayer, and the physical
medium itself. Various Ethernet physical technologies are listed within the physical signaling
sublayer and the physical medium, which include:

- 10BASE5 (500m) 50 Ohm Coax N-Style


- 10BASE2 (185m) 50 Ohm Coax BNC
- 10BASE-T (100m) 100 Ohm UTP RJ-45
- 100BASE-TX (100m) 100 Ohm UTP RJ-45
- 1000BASE-CX (25m) 150 Ohm STP mini-DB-9
- 1000BASE-T (100m) 100 Ohm UTP RJ-45
- 1000BASE-SX (220-550m) multimode fiber SC
- 1000BASE-LX (550-5000m) multimode or single-mode fiber SC

9.1.4 MAC - Getting Data to the Media

Page 1:

Media Access Control (MAC) is the lower Ethernet sublayer of the Data Link layer. Media
Access Control is implemented by hardware, typically in the computer Network Interface Card
(NIC).

The Ethernet MAC sublayer has two primary responsibilities:

 Data Encapsulation
 Media Access Control

Data Encapsulation

Data encapsulation provides three primary functions:


 Frame delimiting
 Addressing
 Error detection

The data encapsulation process includes frame assembly before transmission and frame parsing
upon reception of a frame. In forming the frame, the MAC layer adds a header and trailer to the
Layer 3 PDU. The use of frames aids in the transmission of bits as they are placed on the media
and in the grouping of bits at the receiving node.

The framing process provides important delimiters that are used to identify a group of bits that
make up a frame. This process provides synchronization between the transmitting and receiving
nodes.

The encapsulation process also provides for Data Link layer addressing. Each Ethernet header
added in the frame contains the physical address (MAC address) that enables a frame to be
delivered to a destination node.

An additional function of data encapsulation is error detection. Each Ethernet frame contains a
trailer with a cyclic redundancy check (CRC) of the frame contents. After reception of a frame,
the receiving node creates a CRC to compare to the one in the frame. If these two CRC
calculations match, the frame can be trusted to have been received without error.

Media Access Control

The MAC sublayer controls the placement of frames on the media and the removal of frames
from the media. As its name implies, it manages the media access control. This includes the
initiation of frame transmission and recovery from transmission failure due to collisions.

Logical Topology
The underlying logical topology of Ethernet is a multi-access bus. This means that all the nodes
(devices) in that network segment share the medium. This further means that all the nodes in that
segment receive all the frames transmitted by any node on that segment.

Because all the nodes receive all the frames, each node needs to determine if a frame is to be
accepted and processed by that node. This requires examining the addressing in the frame
provided by the MAC address.

Ethernet provides a method for determining how the nodes share access to the media. The media
access control method for classic Ethernet is Carrier Sense Multiple Access with Collision
Detection (CSMA/CD). This method is described later in the chapter.

http://standards.ieee.org/regauth/groupmac/tutorial.html

9.1.4 - MAC - Getting Data to the Media


The diagram depicts the primary responsibilities of the Ethernet MAC sublayer of getting the
data to the media.
Data Encapsulation:
- Frame delimiting
- Addressing
- Error detection
Media Access Control:
- Control of frame placement on and off the media
- Media recovery

9.1.5 Physical Implementations of Ethernet

Page 1:

Most of the traffic on the Internet originates and ends with Ethernet connections. Since its
inception in the 1970s, Ethernet has evolved to meet the increased demand for high-speed LANs.
When optical fiber media was introduced, Ethernet adapted to this new technology to take
advantage of the superior bandwidth and low error rate that fiber offers. Today, the same
protocol that transported data at 3 Mbps can carry data at 10 Gbps.
The success of Ethernet is due to the following factors:

 Simplicity and ease of maintenance


 Ability to incorporate new technologies
 Reliability
 Low cost of installation and upgrade

The introduction of Gigabit Ethernet has extended the original LAN technology to distances that
make Ethernet a Metropolitan Area Network (MAN) and WAN standard.

As a technology associated with the Physical layer, Ethernet specifies and implements encoding
and decoding schemes that enable frame bits to be carried as signals across the media. Ethernet
devices make use of a broad range of cable and connector specifications.

In today's networks, Ethernet uses UTP copper cables and optical fiber to interconnect network
devices via intermediary devices such as hubs and switches. With all of the various media types
that Ethernet supports, the Ethernet frame structure remains consistent across all of its physical
implementations. It is for this reason that it can evolve to meet today's networking requirements.

9.1.5 - Physical Implementations of Ethernet


The diagram depicts photographs of various physical devices used in implementing Ethernet.
These include:
- UTP patch panels in a rack
- Ethernet switches
- Ethernet fiber connectors

9.2 Ethernet - Communication through the LAN

9.2.1 Historic Ethernet

Page 1:

The foundation for Ethernet technology was first established in 1970 with a program called
Alohanet. Alohanet was a digital radio network designed to transmit information over a shared
radio frequency between the Hawaiian Islands.
Alohanet required all stations to follow a protocol in which an unacknowledged transmission
required re-transmitting after a short period of waiting. The techniques for using a shared
medium in this way were later applied to wired technology in the form of Ethernet.

Ethernet was designed to accommodate multiple computers that were interconnected on a shared
bus topology.

The first version of Ethernet incorporated a media access method known as Carrier Sense
Multiple Access with Collision Detection (CSMA/CD). CSMA/CD managed the problems that
result when multiple devices attempt to communicate over a shared physical medium.

9.2.1 - Historic Ethernet


The diagram depicts Ethernet's shared media and collision detection techniques, which were
adapted from the Alohanet radio network. Radio towers are shown with wireless links
connecting the Hawaiian Islands. Also shown is an early Ethernet shared-bus topology, which
evolved from the wireless Alohanet.

Page 2:

Early Ethernet Media

The first versions of Ethernet used coaxial cable to connect computers in a bus topology. Each
computer was directly connected to the backbone. These early versions of Ethernet were known
as Thicknet, (10BASE5) and Thinnet (10BASE2).

10BASE5, or Thicknet, used a thick coaxial cable that allowed for cabling distances of up to 500
meters before the signal required a repeater. 10BASE2, or Thinnet, used a thin coaxial cable that
was smaller in diameter and more flexible than Thicknet and allowed for cabling distances of up to
185 meters.

The ability to migrate the original implementation of Ethernet to current and future Ethernet
implementations is based on the practically unchanged structure of the Layer 2 frame. Physical
media, media access, and media control have all evolved and continue to do so. But the Ethernet
frame header and trailer have essentially remained constant.

The early implementations of Ethernet were deployed in a low-bandwidth LAN environment
where access to the shared media was managed by CSMA, and later CSMA/CD. In addition to
being a logical bus topology at the Data Link layer, Ethernet also used a physical bus topology.
This topology became more problematic as LANs grew larger and LAN services made
increasing demands on the infrastructure.

The original thick coaxial and thin coaxial physical media were replaced by early categories of
UTP cables. Compared to the coaxial cables, the UTP cables were easier to work with,
lightweight, and less expensive.

The physical topology was also changed to a star topology using hubs. Hubs concentrate
connections. In other words, they take a group of nodes and allow the network to see them as a
single unit. When a frame arrives at one port, it is copied to the other ports so that all the
segments on the LAN receive the frame. Using the hub in this bus topology increased network
reliability by allowing any single cable to fail without disrupting the entire network. However,
repeating the frame to all other ports did not solve the issue of collisions. Later in this chapter,
you will see how issues with collisions in Ethernet networks are managed with the introduction
of switches into the network.

Note: A logical multi-access topology is also referred to as a logical bus topology.

9.2.1 - Historic Ethernet


The diagram depicts early Ethernet media and topology migration. Shown is the physical and
logical bus topology in which multiple PC's are connected to a common wire or bus. This design
migrated to a physical star and logical bus topology, where multiple PC's are connected to a
central hub.

9.2.2 Ethernet Collision Management

Page 1:

Legacy Ethernet
In 10BASE-T networks, the central point of the network segment was typically a hub, which
created a shared medium. Because the medium is shared, only one station can successfully transmit
at a time. This type of connection is described as half-duplex communication.

As more devices were added to an Ethernet network, the number of frame collisions increased
significantly. During periods of low communication activity, the few collisions that occur are
managed by CSMA/CD, with little or no impact on performance. As the number of devices and
the resulting data traffic increase, however, the rise in collisions can have a significant impact on
the user's experience.

A good analogy is when we leave for work or school early in the morning, the roads are
relatively clear and not congested. Later when more cars are on the roads, there can be collisions
and traffic slows down.

Current Ethernet

A significant development that enhanced LAN performance was the introduction of switches to
replace hubs in Ethernet-based networks. This development closely corresponded with the
development of 100BASE-TX Ethernet. Switches can control the flow of data by isolating each
port and sending a frame only to its proper destination (if the destination is known), rather than
sending every frame to every device.

The switch reduces the number of devices receiving each frame, which in turn reduces or
minimizes the possibility of collisions. This, and the later introduction of full-duplex
communications (having a connection that can carry both transmitted and received signals at the
same time), has enabled the development of 1Gbps Ethernet and beyond.

9.2.2 - Ethernet Collision Management


The diagram depicts the migration to Ethernet switches.
Hub-based:
A central hub is shown with several PC's, a server, and a printer attached.
Switch-based:
The same topology is shown, except the central hub is replaced with a switch.

9.2.3 Moving to 1Gbps and Beyond

Page 1:

The applications that cross network links on a daily basis tax even the most robust networks. For
example, the increasing use of Voice over IP (VoIP) and multimedia services requires
connections that are faster than 100 Mbps Ethernet.

Gigabit Ethernet is used to describe Ethernet implementations that provide bandwidth of 1000
Mbps (1 Gbps) or greater. This capacity has been built on the full-duplex capability and the UTP
and fiber-optic media technologies of earlier Ethernet.

The increase in network performance is significant when potential throughput increases from 100
Mbps to 1 Gbps and above.

Upgrading to 1 Gbps Ethernet does not always mean that the existing network infrastructure of
cables and switches has to be completely replaced. Some of the equipment and cabling in
modern, well-designed and installed networks may be capable of working at the higher speeds
with only minimal upgrading. This capability has the benefit of reducing the total cost of
ownership of the network.

9.2.3 - Moving to 1 Gbps and Beyond


The diagram depicts moving to 1 Gigabit per second Ethernet and beyond because of new
networking services that require high-bandwidth LAN's. Various icons are shown, which include
servers, wireless phones, streaming video, IP phones, and video conferencing equipment.

Page 2:

Ethernet Beyond the LAN


The increased cabling distances enabled by the use of fiber-optic cable in Ethernet-based
networks have resulted in a blurring of the distinction between LANs and WANs. Ethernet was
initially limited to LAN cable systems within single buildings, and then extended to between
buildings. It can now be applied across a city in what is known as a Metropolitan Area Network
(MAN).

9.2.3 - Moving to 1 Gbps and Beyond


The diagram depicts how Gigabit Ethernet technology is applied beyond the enterprise LAN to
MAN and WAN-based networks. Several buildings with LAN's are shown interconnected by a
Metropolitan-area Network (MAN). One of the buildings in the MAN is connected to a building
outside of the MAN via a WAN.

9.3 The Ethernet Frame

9.3.1 The Frame - Encapsulating the Packet

Page 1:

The Ethernet frame structure adds headers and trailers around the Layer 3 PDU to encapsulate
the message being sent.

Both the Ethernet header and trailer have several sections of information that are used by the
Ethernet protocol. Each section of the frame is called a field. There are two styles of Ethernet
framing: the DIX Ethernet standard, which is now Ethernet II, and the IEEE 802.3 standard, which
has been updated several times to include new technologies.

The differences between the framing styles are minimal. The most significant differences between the
two standards are the addition of a Start Frame Delimiter (SFD) and the change of the Type field
to a Length field in 802.3, as shown in the figure.

Ethernet Frame Size

Both the Ethernet II and IEEE 802.3 standards define the minimum frame size as 64 bytes and
the maximum as 1518 bytes. This includes all bytes from the Destination MAC Address field
through the Frame Check Sequence (FCS) field. The Preamble and Start Frame Delimiter fields
are not included when describing the size of a frame. The IEEE 802.3ac standard, released in
1998, extended the maximum allowable frame size to 1522 bytes. The frame size was increased
to accommodate a technology called Virtual Local Area Network (VLAN). VLANs are created
within a switched network and will be presented in a later course.

If the size of a transmitted frame is less than the minimum or greater than the maximum, the
receiving device drops the frame. Dropped frames are likely to be the result of collisions or other
unwanted signals and are therefore considered invalid.
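
To make these limits concrete, the short Python sketch below (illustrative only, not part of the curriculum) tests a received frame's length against the ranges described above; the function name and the VLAN flag are assumptions made for the example.

```python
# Hypothetical helper illustrating the Ethernet frame-size limits described above.
# Sizes count the Destination MAC Address through the FCS; Preamble and SFD are excluded.

MIN_FRAME = 64          # frames shorter than this are collision fragments ("runts")
MAX_FRAME = 1518        # untagged maximum
MAX_FRAME_VLAN = 1522   # maximum when an 802.1Q VLAN tag is present (802.3ac)

def frame_size_is_valid(frame_len_bytes, vlan_tagged=False):
    """Return True if the frame length falls within the allowed range."""
    upper = MAX_FRAME_VLAN if vlan_tagged else MAX_FRAME
    return MIN_FRAME <= frame_len_bytes <= upper

print(frame_size_is_valid(60))                       # False - dropped as a runt
print(frame_size_is_valid(1518))                     # True
print(frame_size_is_valid(1522))                     # False unless VLAN-tagged
print(frame_size_is_valid(1522, vlan_tagged=True))   # True
```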

9.3.1 - The Frame - Encapsulating the Packet


The diagram depicts a comparison of IEEE 8 0 2 dot 3 and Ethernet Frame Structure and Field
Size.

IEEE 8 0 2 dot 3 Frame:


- Preamble, 7 bytes
- Start of Frame delimiter, 1 byte
- Destination Address, 6 bytes
- Source Address, 6 bytes
- Length, 2 bytes
- 8 0 2 dot 2 Header and Data, 46 to 1500 bytes
- Frame Check Sequence, 4 bytes

Ethernet Frame:
- Preamble, 8 bytes
- Destination Address, 6 bytes
- Source Address, 6 bytes
- Type, 2 bytes
- Data, 46 to 1500 bytes
- Frame Check Sequence, 4 bytes

Page 2:

Roll over each field name to see its description.

Preamble and Start Frame Delimiter Fields


The Preamble (7 bytes) and Start Frame Delimiter (SFD) (1 byte) fields are used for
synchronization between the sending and receiving devices. These first eight bytes of the frame
are used to get the attention of the receiving nodes. Essentially, the first few bytes tell the
receivers to get ready to receive a new frame.

Destination MAC Address Field

The Destination MAC Address field (6 bytes) is the identifier for the intended recipient. As you
will recall, this address is used by Layer 2 to assist devices in determining if a frame is addressed
to them. The address in the frame is compared to the MAC address in the device. If there is a
match, the device accepts the frame.

Source MAC Address Field

The Source MAC Address field (6 bytes) identifies the frame's originating NIC or interface.
Switches also use this address to add to their lookup tables. The role of switches will be
discussed later in the chapter.

Length/Type Field

For any IEEE 802.3 standard earlier than 1997, the Length field defines the exact length of the
frame's data field. This is used later as part of the FCS to ensure that the message was received
properly. If the purpose of the field is to designate a type, as in Ethernet II, the Type field
describes which protocol is implemented.

These two uses of the field were officially combined in 1997 in the IEEE 802.3x standard
because both uses were common. The Ethernet II Type field is incorporated into the current
802.3 frame definition. When a node receives a frame, it must examine the Length/Type field to
determine which higher-layer protocol is present. If the two-octet value is equal to or greater than
hexadecimal 0x0600 (1536 decimal), the contents of the Data field are decoded according
to the EtherType protocol indicated. If the value is equal to or less than hexadecimal 0x05DC
(1500 decimal), the field is being used as a Length field, indicating the IEEE
802.3 frame format. This is how Ethernet II and 802.3 frames are differentiated.
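
The comparison just described can be sketched as a short Python check; the function name is hypothetical, and the thresholds are the 0x0600 / 0x05DC boundaries mentioned above.

```python
def interpret_length_type(value):
    """Classify the two-octet Length/Type field value as described above."""
    if value >= 0x0600:       # 1536 decimal or greater: EtherType (Ethernet II framing)
        return "EtherType 0x%04X (Ethernet II framing)" % value
    elif value <= 0x05DC:     # 1500 decimal or less: length of the data field (802.3 framing)
        return "Length %d bytes (IEEE 802.3 framing)" % value
    else:
        return "Undefined (values between 1501 and 1535 are not valid)"

print(interpret_length_type(0x0800))   # EtherType for IPv4
print(interpret_length_type(46))       # 802.3 Length field
```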

Data and Pad Fields

The Data and Pad fields (46 to 1500 bytes) contain the encapsulated data from a higher layer,
which is a generic Layer 3 PDU or, more commonly, an IPv4 packet. All frames must be at least
64 bytes long. If a small packet is encapsulated, the Pad field is used to increase the size of the frame
to this minimum size.

Links

The IEEE maintains a list of public EtherType assignments.

http://standards.ieee.org/regauth/ethertype/eth.txt

9.3.1 - The Frame - Encapsulating the Packet


The diagram depicts Ethernet Frame Fields and a description of each field.

IEEE 8 0 2 dot 3 Ethernet Frame Fields:


Preamble - 7 byte Frame Preamble
Start of Frame Delimiter - 1 byte Start of Frame Delimiter
Destination Address - 6 byte Destination MAC Address
Source Address - 6 byte Source MAC Address
Length - 2 byte Frame Length or Encapsulated Protocol Type
8 0 2 dot 2 Header and Data - 46 to 1500 byte Data (Encapsulated Packet) plus padding if
required.
Frame Check Sequence - 4 byte Frame Check Sequence (CRC Checksum)

Page 3:

Frame Check Sequence Field


The Frame Check Sequence (FCS) field (4 bytes) is used to detect errors in a frame. It uses a
cyclic redundancy check (CRC). The sending device includes the results of a CRC in the FCS
field of the frame.

The receiving device receives the frame and generates a CRC to look for errors. If the
calculations match, no error occurred. Calculations that do not match are an indication that the
data has changed; therefore, the frame is dropped. A change in the data could be the result of a
disruption of the electrical signals that represent the bits.
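
As a rough model of this check, the sketch below recomputes a CRC-32 (the polynomial family Ethernet uses) over the frame contents and compares it with the stored FCS. It assumes, for illustration only, that the 4-byte FCS is stored least-significant byte first; it is not a byte-exact reproduction of a NIC's hardware logic, and the frame bytes are invented for the example.

```python
import struct
import zlib

def fcs_ok(frame_bytes):
    """Model the receiver's FCS check: recompute a CRC-32 over everything except
    the trailing 4-byte FCS and compare it with the stored value."""
    payload, stored_fcs = frame_bytes[:-4], frame_bytes[-4:]
    computed = zlib.crc32(payload) & 0xFFFFFFFF
    return computed == struct.unpack("<I", stored_fcs)[0]

# Build a toy "frame" (destination, source, type, data) and append a matching FCS.
frame_body = (bytes.fromhex("000102030405") + bytes.fromhex("060708090a0b")
              + b"\x08\x00" + b"X" * 46)
good_frame = frame_body + struct.pack("<I", zlib.crc32(frame_body) & 0xFFFFFFFF)
print(fcs_ok(good_frame))            # True - CRCs match, frame is accepted

corrupted = bytearray(good_frame)
corrupted[20] ^= 0xFF                # flip the bits of one data byte
print(fcs_ok(bytes(corrupted)))      # False - CRCs differ, frame is dropped
```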

9.3.1 - The Frame - Encapsulating the Packet


The diagram depicts the fields included in the Ethernet FCS calculation. These include the
destination address, source address, length, 8 0 2 dot 2 header, and data.

If the FCS calculated by the receiver (based on the contents of the received frame) does not equal
the FCS calculated by the source (which is included in the frame), the frame is considered
invalid and is dropped.

9.3.2 The Ethernet MAC Address

Page 1:

Initially, Ethernet was implemented as part of a bus topology. Every network device was
connected to the same, shared media. In low traffic or small networks, this was an acceptable
deployment. The main problem to solve was how to identify each device. The signal could be
sent to every device, but how would each device identify if it were the intended receiver of the
message?

A unique identifier called a Media Access Control (MAC) address was created to assist in
determining the source and destination address within an Ethernet network. Regardless of which
variety of Ethernet was used, the naming convention provided a method for device identification
at a lower level of the OSI model.

As you will recall, MAC addressing is added as part of a Layer 2 PDU. An Ethernet MAC
address is a 48-bit binary value expressed as 12 hexadecimal digits.
9.3.2 - The Ethernet MAC Address
The diagram depicts the MAC address used for addressing in Ethernet. PC's A, B, C, D, E, F,
and G are attached to shared media (multiple access). All Ethernet nodes share the media. To
receive the data sent to it, each node needs a unique address. A frame for PC D is on the wire. PC
D says: "Yes, that frame is for me." All other PC's reject the frame.

Page 2:

MAC Address Structure

The MAC address value is a direct result of IEEE-enforced rules for vendors to ensure globally
unique addresses for each Ethernet device. The rules established by IEEE require any vendor that
sells Ethernet devices to register with IEEE. The IEEE assigns the vendor a 3-byte code, called
the Organizationally Unique Identifier (OUI).

IEEE requires a vendor to follow two simple rules:

 All MAC addresses assigned to a NIC or other Ethernet device must use that vendor's
assigned OUI as the first 3 bytes.
 All MAC addresses with the same OUI must be assigned a unique value (vendor code or
serial number) in the last 3 bytes.

The MAC address is often referred to as a burned-in address (BIA) because it is burned into
ROM (Read-Only Memory) on the NIC. This means that the address is encoded into the ROM
chip permanently - it cannot be changed by software.

However, when the computer starts up, the NIC copies the address into RAM. When examining
frames, it is the address in RAM that is used as the source address to compare with the
destination address. The MAC address is used by the NIC to determine if a message should be
passed to the upper layers for processing.

Network Devices
When a source device forwards a message onto an Ethernet network, it attaches header
information that includes the destination MAC address. The source device sends the data
through the network. Each NIC in the network views the information to see whether the MAC address
matches its physical address. If there is no match, the device discards the frame. When the frame
reaches the destination where the MAC address of the NIC matches the destination MAC address of the frame,
the NIC passes the frame up the OSI layers, where the decapsulation process takes place.

All devices connected to an Ethernet LAN have MAC-addressed interfaces. Different hardware
and software manufacturers might represent the MAC address in different hexadecimal formats.
The address formats might be similar to 00-05-9A-3C-78-00, 00:05:9A:3C:78:00, or
0005.9A3C.7800. MAC addresses are assigned to workstations, servers, printers, switches, and
routers - any device that must originate and/or receive data on the network.
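
A small sketch, using hypothetical helper names, that accepts any of the three notations above and splits the address into the OUI and the vendor-assigned portion described earlier:

```python
def parse_mac(mac_text):
    """Accept 00-05-9A-3C-78-00, 00:05:9A:3C:78:00, or 0005.9A3C.7800 and
    return (oui, vendor_assigned) as hexadecimal strings."""
    digits = "".join(ch for ch in mac_text if ch not in ":-.").upper()
    if len(digits) != 12:
        raise ValueError("a MAC address is 48 bits / 12 hexadecimal digits")
    return digits[:6], digits[6:]   # first 3 bytes = OUI, last 3 bytes = vendor assigned

for text in ("00-05-9A-3C-78-00", "00:05:9A:3C:78:00", "0005.9A3C.7800"):
    oui, device = parse_mac(text)
    print(text, "-> OUI", oui, "device", device)
```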

9.3.2 - The Ethernet MAC Address


The diagram depicts the Ethernet MAC address structure. The address is divided into two parts,
the Organizational Unique Identifier (O U I) and vendor assigned portion (for NIC's and
interfaces).

Organizational Unique Identifier (O U I):


- 24 bits
- 6 hex digits
- 00 60 2F
- Cisco

Vendor Assigned (NIC's, Interfaces)


- 24 bits
- 6 hex digits
- 3A 07 BC
- Particular device

Different ways of representing the MAC address include:


- 00-60-2F-3A-07-BC
- 00:60:2F:3A:07:BC
- 0060.2F3A.07BC

9.3.3 Hexadecimal Numbering and Addressing

Page 1:

Hexadecimal Numbering
Hexadecimal ("Hex") is a convenient way to represent binary values. Just as decimal is a base
ten numbering system and binary is base two, hexadecimal is a base sixteen system.

The base 16 numbering system uses the numbers 0 to 9 and the letters A to F. The figure shows
the equivalent decimal, binary, and hexadecimal values for binary 0000 to 1111. It is easier for
us to express a value as a single hexadecimal digit than as four bits.

Understanding Bytes

Given that 8 bits (a byte) is a common binary grouping, binary 00000000 to 11111111 can be
represented in hexadecimal as the range 00 to FF. Leading zeroes are always displayed to
complete the 8-bit representation. For example, the binary value 0000 1010 is shown in
hexadecimal as 0A.

Representing Hexadecimal Values

Note: It is important to distinguish hexadecimal values from decimal values regarding the
characters 0 to 9, as shown in the figure.

Hexadecimal is usually represented in text by the value preceded by 0x (for example 0x73) or a
subscript 16. Less commonly, it may be followed by an H, for example 73H. However, because
subscript text is not recognized in command line or programming environments, the technical
representation of hexadecimal is preceded with "0x" (zero X). Therefore, the examples above
would be shown as 0x0A and 0x73 respectively.

Hexadecimal is used to represent Ethernet MAC addresses and IP Version 6 addresses. You have
seen hexadecimal used in the Packets Byte pane of Wireshark where it is used to represent the
binary values within frames and packets.

Hexadecimal Conversions
Number conversions between decimal and hexadecimal values are straightforward, but quickly
dividing or multiplying by 16 is not always convenient. If such conversions are required, it is
usually easier to convert the decimal or hexadecimal value to binary, and then to convert the
binary value to either decimal or hexadecimal as appropriate.

With practice, it is possible to recognize the binary bit patterns that match the decimal and
hexadecimal values. The figure shows these patterns for selected 8-bit values.
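
As a worked example of these conversions, the short Python sketch below shows the decimal value 202 expressed in binary and hexadecimal, and converted back again through 4-bit groups; the variable names are illustrative.

```python
value = 202                            # decimal

binary = format(value, "08b")          # '11001010' - 8 bits, leading zeros kept
hexadecimal = format(value, "02X")     # 'CA'       - two hex digits

print("Decimal %d = Binary %s = Hexadecimal 0x%s" % (value, binary, hexadecimal))

# And back again: each group of 4 bits maps to one hexadecimal digit.
print(int("1100", 2), int("1010", 2))  # 12 10 -> hex digits C and A
print(int("CA", 16))                   # 202
```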

9.3.3 - Hexadecimal Numbering and Addressing


The diagram depicts a comparison of decimal, binary, and hexadecimal numbering.

Decimal and Binary equivalents of 0 to F Hexadecimal:


Decimal 0 = Binary 0000 = Hexadecimal 0
Decimal 1 = Binary 0001 = Hexadecimal 1
Decimal 2 = Binary 0010 = Hexadecimal 2
Decimal 3 = Binary 0011 = Hexadecimal 3
Decimal 4 = Binary 0100 = Hexadecimal 4
Decimal 5 = Binary 0101 = Hexadecimal 5
Decimal 6 = Binary 0110 = Hexadecimal 6
Decimal 7 = Binary 0111 = Hexadecimal 7
Decimal 8 = Binary 1000 = Hexadecimal 8
Decimal 9 = Binary 1001 = Hexadecimal 9
Decimal 10 = Binary 1010 = Hexadecimal A
Decimal 11 = Binary 1011 = Hexadecimal B
Decimal 12 = Binary 1100 = Hexadecimal C
Decimal 13 = Binary 1101 = Hexadecimal D
Decimal 14 = Binary 1110 = Hexadecimal E
Decimal 15 = Binary 1111 = Hexadecimal F

Selected Decimal, Binary, and Hexadecimal equivalents:


Decimal 0 = Binary 0000 0000 = Hexadecimal 00
Decimal 1 = Binary 0000 0001 = Hexadecimal 01
Decimal 2 = Binary 0000 0010 = Hexadecimal 02
Decimal 3 = Binary 0000 0011 = Hexadecimal 03
Decimal 4 = Binary 0000 0100 = Hexadecimal 04
Decimal 5 = Binary 0000 0101 = Hexadecimal 05
Decimal 6 = Binary 0000 0110 = Hexadecimal 06
Decimal 7 = Binary 0000 0111 = Hexadecimal 07
Decimal 8 = Binary 0000 1000 = Hexadecimal 08
Decimal 10 = Binary 0000 1010 = Hexadecimal 0A
Decimal 15 = Binary 0000 1111 = Hexadecimal 0F
Decimal 16 = Binary 0001 0000 = Hexadecimal 10
Decimal 32 = Binary 0010 0000 = Hexadecimal 20
Decimal 64 = Binary 0100 0000 = Hexadecimal 40
Decimal 128 = Binary 1000 0000 = Hexadecimal 80
Decimal 192 = Binary 1100 0000 = Hexadecimal C0
Decimal 202 = Binary 1100 1010 = Hexadecimal CA
Decimal 240 = Binary 1111 0000 = Hexadecimal F0
Decimal 255 = Binary 1111 1111 = Hexadecimal FF

Page 2:

Viewing the MAC

A tool for examining the MAC address of a computer is the ipconfig /all command (or ifconfig on
some operating systems). In the graphic, notice the MAC address of this computer. If you have access,
you may wish to try this on your own computer.

You may want to research the OUI of the MAC address to determine the manufacturer of your
NIC.

9.3.3 - Hexadecimal Numbering and Addressing


The diagram depicts viewing the MAC address of a PC using the i p config/all command at the
command prompt. Highlighted in the output is the following line:
Physical Address 00-18-DE-C7-F3-FB

9.3.4 Another Layer of Addressing

Page 1:

Data Link Layer

OSI Data Link layer (Layer 2) physical addressing, implemented as an Ethernet MAC address, is
used to transport the frame across the local media. Although providing unique host addresses,
physical addresses are non-hierarchical. They are associated with a particular device regardless
of its location or to which network it is connected.
These Layer 2 addresses have no meaning outside the local network media. A packet may have
to traverse a number of different Data Link technologies in local and wide area networks before
it reaches its destination. A source device therefore has no knowledge of the technology used in
intermediate and destination networks or of their Layer 2 addressing and frame structures.

Network Layer

Network layer (Layer 3) addresses, such as IPv4 addresses, provide the ubiquitous, logical
addressing that is understood at both source and destination. To arrive at its eventual destination,
a packet carries the destination Layer 3 address from its source. However, as it is framed by the
different Data Link layer protocols along the way, the Layer 2 address it receives each time
applies only to that local portion of the journey and its media.

In short:

 The Network layer address enables the packet to be forwarded toward its destination.
 The Data Link layer address enables the packet to be carried by the local media across
each segment.

9.3.4 - Another Layer of Addressing


The diagram compares the different layers of addressing, for the Data Link Layer and Network
Layer. Three LAN's are shown. Each LAN is connected to a router. Each router is connected to a
network cloud via WAN links. One of the LAN's has the following text associated with it: MAC
addresses are used within networks across the local media. An arrow between two of the LAN's
has the following text associated with it: IP addresses are used to communicate between
networks.

9.3.5 Ethernet Unicast, Multicast & Broadcast

Page 1:

In Ethernet, different MAC addresses are used for Layer 2 unicast, multicast, and broadcast
communications.
Unicast

A unicast MAC address is the unique address used when a frame is sent from a single
transmitting device to a single destination device.

In the example shown in the figure, a host with IP address 192.168.1.5 (source) requests a web
page from the server at IP address 192.168.1.200. For a unicast packet to be sent and received, a
destination IP address must be in the IP packet header. A corresponding destination MAC
address must also be present in the Ethernet frame header. The IP address and MAC address
combine to deliver data to one specific destination host.

9.3.5 - Ethernet Unicast, Multicast & Broadcast


The animation depicts a host PC, H1, sending a unicast message to a server. To forward a unicast
packet to the server, H1 (source) must use the server (destination) IP address and its associated
MAC address. The Ethernet frame structure is shown with the unicast destination MAC address
and the unicast destination IP address of the server.

Source Host: H1
IP: 192.168.1.5
MAC: 00-07-E9-63-CE-53

Destination host: Server


IP: 192.168.1.200
MAC: 00-07-E9-42-AC-28

Page 2:

Broadcast

With a broadcast, the packet contains a destination IP address that has all ones (1s) in the host
portion. This numbering in the address means that all hosts on that local network (broadcast
domain) will receive and process the packet. Many network protocols, such as Dynamic Host
Configuration Protocol (DHCP) and Address Resolution Protocol (ARP), use broadcasts. How
ARP uses broadcasts to map Layer 2 to Layer 3 addresses is discussed later in this chapter.
As shown in the figure, a broadcast IP address for a network needs a corresponding broadcast
MAC address in the Ethernet frame. On Ethernet networks, the broadcast MAC address is 48
ones displayed as Hexadecimal FF-FF-FF-FF-FF-FF.

9.3.5 - Ethernet Unicast, Multicast & Broadcast


The animation depicts a host PC sending a broadcast message to all the hosts on the network. To
forward a broadcast packet to all the hosts, the PC (source) must use the broadcast IP and
broadcast MAC destination addresses. The Ethernet frame structure is shown with the broadcast
destination MAC address and the broadcast destination IP address for all the hosts.

Source Host: H1
IP: 192.168.1.5
MAC: 00-07-E9-63-CE-53

Destination hosts: All hosts on network


IP: 192.168.1.255
MAC: FF- FF- FF- FF- FF- FF

Page 3:

Multicast

Recall that multicast addresses allow a source device to send a packet to a group of devices.
Devices that belong to a multicast group are assigned a multicast group IP address. The range of
multicast addresses is from 224.0.0.0 to 239.255.255.255. Because multicast addresses represent
a group of addresses (sometimes called a host group), they can only be used as the destination of
a packet. The source will always have a unicast address.

Examples of where multicast addresses would be used are in remote gaming, where many
players are connected remotely but playing the same game, and distance learning through video
conferencing, where many students are connected to the same class.

As with the unicast and broadcast addresses, the multicast IP address requires a corresponding
multicast MAC address to actually deliver frames on a local network. The multicast MAC
address is a special value that begins with 01-00-5E in hexadecimal. The value ends by
converting the lower 23 bits of the IP multicast group address into the remaining 6 hexadecimal
characters of the Ethernet address. The remaining bit in the MAC address is always a "0".
An example, as shown in the graphic, is hexadecimal 01-00-5E-00-00-01. Each hexadecimal
character is 4 binary bits.
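
The mapping described above can be sketched in a few lines of Python; the function name is hypothetical, and the input is assumed to be a dotted-decimal IPv4 multicast group address.

```python
import ipaddress

def multicast_mac(group_ip):
    """Map an IPv4 multicast group address to its Ethernet multicast MAC address:
    the prefix 01-00-5E followed by the low-order 23 bits of the IP address."""
    ip_int = int(ipaddress.IPv4Address(group_ip))
    low23 = ip_int & 0x7FFFFF                 # keep only the lower 23 bits
    mac_int = 0x01005E000000 | low23          # 01-00-5E prefix; the remaining bit stays 0
    return "-".join(format((mac_int >> shift) & 0xFF, "02X")
                    for shift in range(40, -1, -8))

print(multicast_mac("224.0.0.1"))         # 01-00-5E-00-00-01, matching the example above
print(multicast_mac("239.255.255.255"))   # 01-00-5E-7F-FF-FF
```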

http://www.iana.org/assignments/ethernet-numbers

http://www.cisco.com/en/US/docs/app_ntwk_services/waas/acns/v51/configuration/central/guide
/51ipmul.html

http://www.cisco.com/en/US/docs/internetworking/technology/handbook/IP-Multi.html

9.3.5 - Ethernet Unicast, Multicast & Broadcast


The animation depicts a host PC sending a multicast message to some of the hosts on the
network. To forward a multicast packet to some of the hosts, the PC (source) must use the
multicast IP and MAC destination addresses. The Ethernet frame structure is shown with the
multicast destination MAC address and the multicast destination IP address.

Source Host: H1
IP: 192.168.1.5
MAC: 00-07-E9-63-CE-53

Destination hosts: multicast host group


IP: 224.0.0.1
MAC: 01-00-5E-00-00-01

9.4 Ethernet Media Access Control

9.4.1 Media Access Control in Ethernet

Page 1:

In a shared media environment, all devices have guaranteed access to the medium, but they have
no prioritized claim on it. If more than one device transmits simultaneously, the physical signals
collide and the network must recover in order for communication to continue.
Collisions are the cost that Ethernet pays to get the low overhead associated with each
transmission.

Ethernet uses Carrier Sense Multiple Access with Collision Detection (CSMA/CD) to detect and
handle collisions and manage the resumption of communications.

Because all computers using Ethernet send their messages on the same media, a distributed
coordination scheme (CSMA) is used to detect the electrical activity on the cable. A device can
then determine when it can transmit. When a device detects that no other computer is sending a
frame, or carrier signal, the device will transmit, if it has something to send.

9.4.1 - Media Access Control in Ethernet


The diagram depicts media access control in Ethernet, which uses Carrier Sense Multiple Access
with Collision Detection (CSMA/CD). CSMA/CD controls access to the shared media. If there is
a collision, it is detected and frames are retransmitted. Four PC's, A, B, C, and D are shown
connected to shared media and using CSMA/CD.

9.4.2 CSMA/CD - The Process

Page 1:

Carrier Sense

In the CSMA/CD access method, all network devices that have messages to send must listen
before transmitting.

If a device detects a signal from another device, it will wait for a specified amount of time before
attempting to transmit.

When there is no traffic detected, a device will transmit its message. While this transmission is
occurring, the device continues to listen for traffic or collisions on the LAN. After the message is
sent, the device returns to its default listening mode.
Multi-access

If the distance between devices is such that the latency of one device's signals means that signals
are not detected by a second device, the second device may start to transmit, too. The media now
has two devices transmitting their signals at the same time. Their messages will propagate across
the media until they encounter each other. At that point, the signals mix and the message is
destroyed. Although the messages are corrupted, the jumble of remaining signals continues to
propagate across the media.

Collision Detection

When a device is in listening mode, it can detect when a collision occurs on the shared media.
The detection of a collision is made possible because all devices can detect an increase in the
amplitude of the signal above the normal level.

Once a collision occurs, the other devices in listening mode - as well as all the transmitting
devices - will detect the increase in the signal amplitude. Once detected, every device
transmitting will continue to transmit to ensure that all devices on the network detect the
collision.

Jam Signal and Random Backoff

Once the collision is detected by the transmitting devices, they send out a jamming signal. This
jamming signal is used to notify the other devices of a collision, so that they will invoke a
backoff algorithm. This backoff algorithm causes all devices to stop transmitting for a random
amount of time, which allows the collision signals to subside.

After the delay has expired on a device, the device goes back into the "listening before transmit"
mode. A random backoff period ensures that the devices that were involved in the collision do
not try to send their traffic again at the same time, which would cause the whole process to
repeat. But, this also means that a third device may transmit before either of the two involved in
the original collision have a chance to re-transmit.
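
The random wait is commonly implemented as a truncated binary exponential backoff: after the n-th consecutive collision, a station waits a random number of slot times between 0 and 2^n - 1, with the exponent capped. The sketch below is a simplified model under those assumptions (slot time, covered in a later topic, is 512 bit times for 10- and 100-Mbps Ethernet); it is not the logic of any particular NIC.

```python
import random

SLOT_TIME_BITS = 512   # slot time for 10- and 100-Mbps Ethernet, in bit times

def backoff_delay(collision_count, bit_time_ns):
    """Return a random backoff delay in nanoseconds after the given number of
    consecutive collisions (truncated binary exponential backoff)."""
    k = min(collision_count, 10)               # exponent is capped at 10
    slots = random.randint(0, (2 ** k) - 1)    # choose 0 .. 2^k - 1 slot times
    return slots * SLOT_TIME_BITS * bit_time_ns
    # (classic stations abandon the frame entirely after 16 failed attempts)

random.seed(1)
for attempt in (1, 2, 3, 10):
    print(attempt, "collision(s):", backoff_delay(attempt, bit_time_ns=100), "ns on 10 Mbps")
```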

9.4.2 - CSMA/CD - The Process


The animation depicts the operation of CSMA/CD in Ethernet. Four PC's, A, B, C, and D are
shown connected to shared media using CSMA/CD.

In the animation, PC C wants to transmit and begins the process through the following steps:
1. Carrier Sense - Listen before transmitting, and monitor media for traffic.
2. Carrier signal detected.
3. Wait for a specified time after signal passes. Try again later.
4. Listen before transmitting - Monitoring media for traffic.
5. No carrier signal detected - PC C computer transmits.

PC A and D want to transmit:


1. Carrier Sense - both listen before transmitting, and monitor media for traffic.
2. No carrier signal detected - PC's A and D transmit at the same time.
3. Collision occurs.
4. PC's A and D issue a jam signal.
5. All PC's set a back-off timer and try again later.

Page 2:

Hubs and Collision Domains

Given that collisions will occur occasionally in any shared media topology - even when
employing CSMA/CD - we need to look at the conditions that can result in an increase in
collisions. Because of the rapid growth of the Internet:

 More devices are being connected to the network.


 Devices access the network media more frequently.
 Distances between devices are increasing.

Recall that hubs were created as intermediary network devices that enable more nodes to connect
to the shared media. Also known as multi-port repeaters, hubs retransmit received data signals to
all connected devices, except the one from which the signals were received. Hubs do not perform
network functions such as directing data based on addresses.
Hubs and repeaters are intermediary devices that extend the distance that Ethernet cables can
reach. Because hubs operate at the Physical layer, dealing only with the signals on the media,
collisions can occur between the devices they connect and within the hubs themselves.

Further, using hubs to provide network access to more users reduces the performance for each
user because the fixed capacity of the media has to be shared between more and more devices.

The connected devices that access a common media via a hub or series of directly connected
hubs make up what is known as a collision domain. A collision domain is also referred to as a
network segment. Hubs and repeaters therefore have the effect of increasing the size of the
collision domain.

As shown in the figure, the interconnection of hubs forms a physical topology called an extended
star. The extended star can create a greatly expanded collision domain.

An increased number of collisions reduces the network's efficiency and effectiveness until the
collisions become a nuisance to the user.

Although CSMA/CD is a frame collision management system, it was designed to manage
collisions for only a limited number of devices and on networks with light usage.
Therefore, other mechanisms are required when large numbers of users require access and when
more active network access is needed.

We will see that using switches in place of hubs can begin to alleviate this problem.

http://standards.ieee.org/getieee802/802.3.html

9.4.2 - CSMA/CD - The Process


The diagram depicts problems associated with using hubs in extended star topologies. Using
hubs can create large collision domains. A network is shown with 35 hosts and 5 hubs
interconnected in a single collision-broadcast domain. When one host or server transmits a
message, all other devices in the domain receive the message. More importantly, only one device
in the entire network can send data at any one time.

Page 3:

In this Packet Tracer Activity, you will build large collision domains to view the effects of
collisions on data transmission and network operation.

Click the Packet Tracer icon to launch the Packet Tracer activity.

9.4.2 - CSMA/CD - The Process


Link to Packet Tracer Exploration: Observing the Effects of Collisions in a Shared Media
Environment.

In this Packet Tracer Activity, you will build large collision domains to view the effects of
collisions on data transmission and network operation.

9.4.3 Ethernet Timing

Page 1:

Faster Physical layer implementations of Ethernet introduce complexities to the management of
collisions.

Latency

As discussed, each device that wants to transmit must first "listen" to the media to check for
traffic. If no traffic exists, the station will begin to transmit immediately. The electrical signal
that is transmitted takes a certain amount of time (latency) to propagate (travel) down the cable.
Each hub or repeater in the signal's path adds latency as it forwards the bits from one port to the
next.
This accumulated delay increases the likelihood that collisions will occur because a listening
node may transition into transmitting signals while the hub or repeater is processing the message.
Because the signal had not reached this node while it was listening, it thought that the media was
available. This condition often results in collisions.

9.4.3 - Ethernet Timing



The diagram depicts issues associated with Ethernet delay (latency). A sending device is
connected to the receiving device with four hubs between them. An Ethernet frame takes a
measurable time to travel from the sending device to the receiving device. Each intermediary
device contributes to the overall latency.

Page 2:

Timing and Synchronization

In half-duplex mode, if a collision has not occurred, the sending device will transmit 64 bits of
timing synchronization information, which is known as the Preamble.

The sending device will then transmit the complete frame.

Ethernet implementations with throughput of 10 Mbps and slower are asynchronous. Asynchronous
communication in this context means that each receiving device will use the 8 bytes of timing
information to synchronize the receive circuit to the incoming data and then discard the 8 bytes.

Ethernet implementations with throughput of 100 Mbps and higher are synchronous.
Synchronous communication in this context means that the timing information is not required.
However, for compatibility reasons, the Preamble and Start Frame Delimiter (SFD) fields are
still present.

9.4.3 - Ethernet Timing


The diagram depicts frame synchronization for asynchronous communications. An Ethernet
frame is shown with the following text pointing to the Start Frame Field: 10 Megabits per second
and slower Ethernet use the first 64 bits of the frame preamble to synchronize the receiver.

Page 3:

Bit Time

For each media speed, a period of time is required for a bit to be placed and sensed on
the media. This period of time is referred to as the bit time. On 10-Mbps Ethernet, one bit at the
MAC layer requires 100 nanoseconds (ns) to transmit. At 100 Mbps, that same bit requires 10
ns to transmit, and at 1000 Mbps, it takes only 1 ns to transmit a bit. As a rough estimate, a
propagation speed of 20.3 centimeters (8 inches) per nanosecond is often used for calculating the
propagation delay on a UTP cable. The result is that for 100 meters of UTP cable, it takes just under
5 bit times for a 10BASE-T signal to travel the length of the cable.
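
These figures can be reproduced with a short calculation, using the 20.3 cm per nanosecond estimate given above; the function names are illustrative.

```python
def bit_time_ns(speed_mbps):
    """Time to transmit one bit, in nanoseconds (1000 / Mbps)."""
    return 1000.0 / speed_mbps

def propagation_bit_times(cable_m, speed_mbps, cm_per_ns=20.3):
    """Approximate one-way propagation delay over UTP, expressed in bit times."""
    delay_ns = (cable_m * 100.0) / cm_per_ns    # meters -> centimeters, divided by cm/ns
    return delay_ns / bit_time_ns(speed_mbps)

print(bit_time_ns(10), bit_time_ns(100), bit_time_ns(1000))   # 100.0, 10.0, 1.0 ns
print(round(propagation_bit_times(100, 10), 1))               # about 4.9 bit times for 10BASE-T
```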

For CSMA/CD Ethernet to operate, the sending device must become aware of a collision before
it has completed transmission of a minimum-sized frame. At 100 Mbps, the device timing is
barely able to accommodate 100 meter cables. At 1000 Mbps, special adjustments are required
because nearly an entire minimum-sized frame would be transmitted before the first bit reached
the end of the first 100 meters of UTP cable. For this reason, half-duplex mode is not permitted
in 10-Gigabit Ethernet.

These timing considerations have to be applied to the interframe spacing and backoff times (both
of which are discussed in the next section) to ensure that when a device transmits its next frame,
the risk of a collision is minimized.

Slot Time

In half-duplex Ethernet, where data can only travel in one direction at once, slot time becomes an
important parameter in determining how many devices can share a network. For all speeds of
Ethernet transmission at or below 1000 Mbps, the standard describes how an individual
transmission may be no smaller than the slot time.
Determining slot time is a trade-off between the need to reduce the impact of collision recovery
(backoff and retransmission times) and the need for network distances to be large enough to
accommodate reasonable network sizes. The compromise was to choose a maximum network
diameter (about 2500 meters) and then to set the minimum frame length long enough to ensure
detection of all worst-case collisions.

Slot time for 10- and 100-Mbps Ethernet is 512 bit times, or 64 octets. Slot time for 1000-Mbps
Ethernet is 4096 bit times, or 512 octets.

The slot time ensures that if a collision is going to occur, it will be detected within the first 512
bits (4096 for Gigabit Ethernet) of the frame transmission. This simplifies the handling of frame
retransmissions following a collision.

Slot time is an important parameter for the following reasons:

 The 512-bit slot time establishes the minimum size of an Ethernet frame as 64 bytes. Any
frame less than 64 bytes in length is considered a "collision fragment" or "runt frame"
and is automatically discarded by receiving stations.
 The slot time establishes a limit on the maximum size of a network's segments. If the
network grows too big, late collisions can occur. Late collisions are considered a failure
in the network because the collision is detected too late by a device during the frame
transmission to be automatically handled by CSMA/CD.

Slot time is calculated assuming maximum cable lengths on the largest legal network
architecture. All hardware propagation delay times are at the legal maximum and the 32-bit jam
signal is used when collisions are detected.

The actual calculated slot time is just longer than the theoretical amount of time required
to travel between the furthest points of the collision domain, collide with another
transmission at the last possible instant, and then have the collision fragments return to the
sending station and be detected. See the figure.

For the system to work properly, the first device must learn about the collision before it finishes
sending the smallest legal frame size.
To allow 1000 Mbps Ethernet to operate in half-duplex mode, the extension field was added to
the frame when sending small frames purely to keep the transmitter busy long enough for a
collision fragment to return. This field is present only on 1000-Mbps, half-duplex links and
allows minimum-sized frames to be long enough to meet slot time requirements. Extension bits
are discarded by the receiving device.

9.4.3 - Ethernet Timing


The diagram depicts Ethernet slot and bit times for various forms of Ethernet. A sending device,
Device 1, is connected to the receiving device, Device 2, with four hubs between them. The path
from Device 1 to Device 2 and back again to Device 1 is the slot time.

Ethernet technology speed, slot time, and interval.


Speed: 10 Megabits per second
Slot Time: 512 bit time
Time Interval: 51.2 microseconds

Speed: 100 Megabits per second


Slot Time: 512 bit time
Time Interval: 5.12 microseconds

Speed: 1 Gigabit per second


Slot Time: 4096 bit time
Time Interval: 4.096 microseconds

Speed: 10 Gigabits per second


Slot Time: not applicable
Time Interval: not applicable

Ethernet technology speed and bit time.


Ethernet Speed: 10 Megabits per second
Bit time: 100 Nanoseconds

Ethernet Speed: 100 Megabits per second


Bit time: 10 Nanoseconds

Ethernet Speed: 1000 Megabits per second = 1 Gigabit per second


Bit time: 1 Nanosecond

Ethernet Speed: 10,000 Megabits per second = 10 Gigabits per second


Bit time: 0.1 Nanosecond

9.4.4 Interframe Spacing and Backoff

Page 1:
Interframe Spacing

The Ethernet standards require a minimum spacing between two non-colliding frames. This
gives the media time to stabilize after the transmission of the previous frame and time for the
devices to process the frame. Referred to as the interframe spacing, this time is measured from
the last bit of the FCS field of one frame to the first bit of the Preamble of the next frame.

After a frame has been sent, all devices on a 10 Mbps Ethernet network are required to wait a
minimum of 96 bit times (9.6 microseconds) before any device can transmit its next frame. On
faster versions of Ethernet, the spacing remains the same - 96 bit times - but the interframe
spacing time period grows correspondingly shorter.
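Because the gap is fixed at 96 bit times, its duration in microseconds simply scales with the bit time, as the short sketch below illustrates (values taken from the figure; names are illustrative).

```python
# The interframe gap is always 96 bit times; its duration scales with the bit time.
IFG_BITS = 96
bit_time_ns = {"10 Mbps": 100, "100 Mbps": 10, "1000 Mbps": 1, "10 Gbps": 0.1}

for speed, ns in bit_time_ns.items():
    print(f"{speed}: interframe spacing = {IFG_BITS * ns / 1000:g} microseconds")
# 9.6, 0.96, 0.096, and 0.0096 microseconds, as listed in the figure.
```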

Synchronization delays between devices may result in the loss of some of the frame preamble
bits. When hubs and repeaters regenerate the full 64 bits of timing information (the Preamble and
SFD) at the start of every frame they forward, this regeneration can slightly reduce the
interframe spacing. On higher-speed Ethernet, some time-sensitive devices could potentially fail
to recognize individual frames, resulting in communication failure.

9.4.4 - Interframe Spacing and Back-off


The diagram depicts Ethernet interframe spacing. Interframe time reduces as Ethernet speed
increases.

Ethernet technology speed interframe spacing and the time required.


Speed: 10 Megabits per second
Interframe Spacing: 96 bit time
Time Required: 9.6 microseconds

Speed: 100 Megabits per second


Interframe Spacing: 96 bit time
Time Required: 0.96 microseconds

Speed: 1 Gigabit per second


Interframe Spacing: 96 bit time
Time Required: 0.096 microseconds

Speed: 10 Gigabits per second


Interframe Spacing: 96 bit time
Time Required: 0.0096 microseconds
A series of bytes is shown, with the first and last bytes sent for two frames. The time interval
between the transmission of the two frames is referred to as the interframe spacing.

Page 2:

Jam Signal

As you will recall, Ethernet allows all devices to compete for transmitting time. In the event that
two devices transmit simultaneously, CSMA/CD attempts to resolve the contention. But
remember, when a larger number of devices are added to the network, it is possible for the
collisions to become increasingly difficult to resolve.

As soon as a collision is detected, the sending devices transmit a 32-bit "jam" signal that enforces
the collision. This ensures that all devices on the LAN detect the collision.

It is important that the jam signal not be detected as a valid frame; otherwise the collision would
not be identified. The most commonly observed data pattern for a jam signal is simply a
repeating 1, 0, 1, 0 pattern, the same as the Preamble.

The corrupted, partially transmitted messages are often referred to as collision fragments or
runts. Normal collision fragments are less than 64 octets in length and therefore fail both the
minimum length and the FCS tests, making them easy to identify.
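As a simple illustration of the minimum-length test, the hypothetical helper below flags any frame shorter than 64 octets as a runt; it is a toy check, not a real frame parser.

```python
MIN_FRAME_OCTETS = 64   # minimum legal Ethernet frame size (preamble not counted)

def is_runt(frame_bytes: bytes) -> bool:
    """Toy check: a frame shorter than 64 octets is treated as a collision fragment (runt)."""
    return len(frame_bytes) < MIN_FRAME_OCTETS

print(is_runt(b"\x00" * 30))   # True - discarded by a receiving station
print(is_runt(b"\x00" * 64))   # False - meets the minimum length
```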

9.4.4 - Interframe Spacing and Back-off


The diagram depicts stations detecting a collision and sending a jam signal.
Step 1. Carrier Sense - Stations listen for traffic prior to transmitting.
Step 2. Multiple Access - Two or more stations transmit at the same time.
Step 3. Collision - Signal collides.
Step 4. Collision Detection - Jam signal is issued by both stations attempting to send.

Page 3:

Backoff Timing
After a collision occurs and all devices allow the cable to become idle (each waits the full
interframe spacing), the devices whose transmissions collided must wait an additional - and
potentially progressively longer - period of time before attempting to retransmit the collided
frame. The waiting period is intentionally designed to be random so that two stations do not
delay for the same amount of time before retransmitting, which would result in more collisions.
This is accomplished in part by expanding the interval from which the random retransmission
time is selected on each retransmission attempt. The waiting period is measured in increments of
the parameter slot time.

If media congestion prevents the MAC layer from sending the frame after 16 attempts, it gives
up and reports an error to the Network layer. Such an occurrence is rare in a properly
operating network and would happen only under extremely heavy network loads or when a
physical problem exists on the network.
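The backoff behavior described above is the truncated binary exponential backoff of CSMA/CD. The sketch below models it using the usual IEEE 802.3 parameters (the random range stops doubling after 10 collisions, and transmission is abandoned after 16 attempts); the function and variable names are illustrative.

```python
import random

BACKOFF_LIMIT = 10    # the random range stops doubling after 10 collisions
ATTEMPT_LIMIT = 16    # the MAC gives up and reports an error after 16 attempts

def backoff_slots(collision_count: int) -> int:
    """Pick a random wait, in slot times, after the nth collision on the same frame."""
    if collision_count > ATTEMPT_LIMIT:
        raise RuntimeError("excessive collisions: frame discarded, error passed to upper layer")
    k = min(collision_count, BACKOFF_LIMIT)
    return random.randint(0, 2 ** k - 1)    # wait 0 .. (2^k - 1) slot times

# Example: possible waits after the 1st, 3rd, and 10th collision.
for n in (1, 3, 10):
    print(f"collision {n}: wait {backoff_slots(n)} slot times")
```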

The methods described in this section allowed Ethernet to provide greater service in a shared
media topology based on the use of hubs. In the coming switching section, we will see how, with
the use of switches, the need for CSMA/CD starts to diminish or, in some cases, is removed
altogether.

9.4.4 - Interframe Spacing and Back-off


The diagram depicts the use of back off timing to help avoid subsequent collisions. After a jam
signal is received, all stations cease transmission and each waits a random time period, set by the
back-off timer, before trying to send another frame.

9.5 Ethernet Physical Layer

9.5.1 Overview of Ethernet Physical Layer

Page 1:

The differences between standard Ethernet, Fast Ethernet, Gigabit Ethernet, and 10 Gigabit
Ethernet occur at the Physical layer, often referred to as the Ethernet PHY.

Ethernet is covered by the IEEE 802.3 standards. Four data rates are currently defined for
operation over optical fiber and twisted-pair cables:

 10 Mbps - 10Base-T Ethernet


 100 Mbps - Fast Ethernet
 1000 Mbps - Gigabit Ethernet
 10 Gbps - 10 Gigabit Ethernet

While there are many different implementations of Ethernet at these various data rates, only the
more common ones will be presented here. The figure shows some of the Ethernet PHY
characteristics.

The portion of Ethernet that operates on the Physical layer will be discussed in this section,
beginning with 10Base-T and continuing to 10 Gbps varieties.

9.5.1 - Overview of Ethernet Physical Layer


The diagram depicts Ethernet IEEE 8 0 2 dot 3 standards. Four data rates are currently defined
for operation over optical fiber and twisted-pair cables:
10 Megabits per second - 10Base-T Ethernet
100 Megabits per second - Fast Ethernet
1000 Megabits per second - Gigabit Ethernet
10 Gigabits per second - 10 Gigabit Ethernet

A tabular listing of various Ethernet technologies and physical characteristics is provided.

Type of Ethernet: 10Base-5


Bandwidth: 10 Megabits per second
Cable Type: Thicknet Coaxial
Duplex: Half
Maximum Distance: 500 meters

Type of Ethernet: 10Base-2


Bandwidth: 10 Megabits per second
Cable Type: Thin-net Coaxial
Duplex: Half
Maximum Distance: 185 meters

Type of Ethernet: 10Base-T


Bandwidth: 10 Megabits per second
Cable Type: Cat 3/Cat 5 UTP
Duplex: Half
Maximum Distance: 100 meters

Type of Ethernet: 100Base-T


Bandwidth: 100 Megabits per second
Cable Type: Cat 5 UTP
Duplex: Half
Maximum Distance: 100 meters

Type of Ethernet: 100Base-TX


Bandwidth: 200 Megabits per second
Cable Type: Cat 5 UTP
Duplex: Full
Maximum Distance: 100 meters

Type of Ethernet: 100Base-FX


Bandwidth: 100 Megabits per second
Cable Type: Multimode fiber
Duplex: Half
Maximum Distance: 400 meters

Type of Ethernet: 100Base-FX


Bandwidth: 200 Megabits per second
Cable Type: Multimode fiber
Duplex: Full
Maximum Distance: 2 kilometers

Type of Ethernet: 1000Base-T


Bandwidth: 1 Gigabit per second
Cable Type: Cat 5e UTP
Duplex: Full
Maximum Distance: 100 meters

Type of Ethernet: 1000Base-TX


Bandwidth: 1 Gigabit per second
Cable Type: Cat 6 UTP
Duplex: Full
Maximum Distance: 100 meters

Type of Ethernet: 1000Base-SX


Bandwidth: 1 Gigabit per second
Cable Type: Multimode fiber
Duplex: Full
Maximum Distance: 550 meters

Type of Ethernet: 1000Base-LX


Bandwidth: 1 Gigabit per second
Cable Type: Single-Mode fiber
Duplex: Full
Maximum Distance: 5 kilometers

Type of Ethernet: 10GBase-CX4


Bandwidth: 10 Gigabits per second
Cable Type: Twin axial
Duplex: Full
Maximum Distance: 15 meters

Type of Ethernet: 10GBase-T


Bandwidth: 10 Gigabits per second
Cable Type: Cat 6a/Cat 7 UTP
Duplex: Full
Maximum Distance: 100 meters

Type of Ethernet: 10GBase-LX4


Bandwidth: 10 Gigabits per second
Cable Type: Multimode fiber
Duplex: Full
Maximum Distance: 300 meters

Type of Ethernet: 10GBase-LX4


Bandwidth: 10 Gigabits per second
Cable Type: Single-mode fiber
Duplex: Full
Maximum Distance: 10 kilometers

9.5.2 10 and 100 Mbps Ethernet

Page 1:

The principal 10 Mbps implementations of Ethernet include:

 10BASE5 using Thicknet coaxial cable


 10BASE2 using Thinnet coaxial cable
 10BASE-T using Cat3/Cat5 unshielded twisted-pair cable

The early implementations of Ethernet, 10BASE5, and 10BASE2 used coaxial cable in a
physical bus. These implementations are no longer used and are not supported by the newer
802.3 standards.

10 Mbps Ethernet - 10BASE-T


10BASE-T uses Manchester-encoding over two unshielded twisted-pair cables. The early
implementations of 10BASE-T used Cat3 cabling. However, Cat5 or later cabling is typically
used today.

10 Mbps Ethernet is considered to be classic Ethernet and uses a physical star topology. Ethernet
10BASE-T links could be up to 100 meters in length before requiring a hub or repeater.

10BASE-T uses two pairs of a four-pair cable and is terminated at each end with an 8-pin RJ-45
connector. The pair connected to pins 1 and 2 is used for transmitting, and the pair connected to
pins 3 and 6 is used for receiving. The figure shows the RJ-45 pinout used with 10BASE-T
Ethernet.

10BASE-T is generally not chosen for new LAN installations. However, there are still many
10BASE-T Ethernet networks in existence today. The replacement of hubs with switches in
10BASE-T networks has greatly increased the throughput available to these networks and has
given Legacy Ethernet greater longevity. The 10BASE-T links connected to a switch can support
either half-duplex or full-duplex operation.

9.5.2 - 10 and 100 Megabits per second Ethernet


The diagram depicts 10Base-T Ethernet RJ-45 Pin outs. RJ-45 connectors, pair and pin colors,
and locations are shown.

A tabular listing provides the pin number and the signal for which it is used.

Pin 1 Signal: TD+ (Transmit Data, positive-going differential signal)


Pin 2 Signal: TD- (Transmit Data, negative-going differential signal)
Pin 3 Signal: RD+ (Receive Data, positive-going differential signal)
Pin 4 Signal: Unused
Pin 5 Signal: Unused
Pin 6 Signal: RD- (Receive Data, negative-going differential signal)
Pin 7 Signal: Unused
Pin 8 Signal: Unused

Page 2:

100 Mbps - Fast Ethernet


In the mid to late 1990s, several new 802.3 standards were established to describe methods for
transmitting data over Ethernet media at 100 Mbps. These standards used different encoding
requirements for achieving these higher data rates.

100 Mbps Ethernet, also known as Fast Ethernet, can be implemented using twisted-pair copper
wire or fiber media. The most popular implementations of 100 Mbps Ethernet are:

 100BASE-TX using Cat5 or later UTP


 100BASE-FX using fiber-optic cable

Because the higher frequency signals used in Fast Ethernet are more susceptible to noise, two
separate encoding steps are used by 100-Mbps Ethernet to enhance signal integrity.

100BASE-TX

100BASE-TX was designed to support transmission over either two pairs of Category 5 UTP
copper wire or two strands of optical fiber. The 100BASE-TX implementation uses the same two
pairs and pinouts of UTP as 10BASE-T. However, 100BASE-TX requires Category 5 or later
UTP. The 4B/5B encoding is used for 100BASE-TX Ethernet.

As with 10BASE-T, 100BASE-TX is connected as a physical star. The figure shows an example
of a physical star topology. However, unlike 10BASE-T, 100BASE-TX networks typically use a
switch at the center of the star instead of a hub. At about the same time that 100BASE-TX
technologies became mainstream, LAN switches were also being widely deployed. These
concurrent developments led to their natural combination in the design of 100BASE-TX
networks.

100BASE-FX

The 100BASE-FX standard uses the same signaling procedure as 100BASE-TX, but over optical
fiber media rather than UTP copper. Although the encoding, decoding, and clock recovery
procedures are the same for both media, the signal transmission is different - electrical pulses in
copper and light pulses in optical fiber. 100BASE-FX uses Low Cost Fiber Interface Connectors
(commonly called the duplex SC connector).

Fiber implementations are point-to-point connections, that is, they are used to interconnect two
devices. These connections may be between two computers, between a computer and a switch, or
between two switches.

9.5.2 - 10 and 100 Megabits per second Ethernet


The diagram depicts the basic star topology used with 10BASE-T and 100BASE-TX Ethernet.
Three PC's, a server, and a printer are connected to a central switch in a star pattern.

9.5.3 1000 Mbps Ethernet

Page 1:

1000 Mbps - Gigabit Ethernet

The development of Gigabit Ethernet standards resulted in specifications for UTP copper, single-
mode fiber, and multimode fiber. On Gigabit Ethernet networks, bits occur in a fraction of the
time that they take on 100 Mbps networks and 10 Mbps networks. With signals occurring in less
time, the bits become more susceptible to noise, and therefore timing is critical. The question of
performance is based on how fast the network adapter or interface can change voltage levels and
how well that voltage change can be detected reliably 100 meters away, at the receiving NIC or
interface.

At these higher speeds, encoding and decoding data is more complex. Gigabit Ethernet uses two
separate encoding steps. Data transmission is more efficient when codes are used to represent the
binary bit stream. Encoding the data enables synchronization, efficient usage of bandwidth, and
improved signal-to-noise ratio characteristics.

1000BASE-T Ethernet
1000BASE-T Ethernet provides full-duplex transmission using all four pairs in Category 5 or
later UTP cable. Gigabit Ethernet over copper wire enables an increase from 100 Mbps per wire
pair to 125 Mbps per wire pair, or 500 Mbps for the four pairs. Each wire pair signals in full
duplex, doubling the 500 Mbps to 1000 Mbps.

1000BASE-T uses 4D-PAM5 line encoding to obtain 1 Gbps data throughput. This encoding
scheme enables the transmission of signals over four wire pairs simultaneously. It translates an 8-bit
byte of data into a simultaneous transmission of four code symbols (4D), which are sent over the
media, one on each pair, as 5-level Pulse Amplitude Modulated (PAM5) signals. This means that
every symbol corresponds to two bits of data. Because the information travels simultaneously
across the four paths, the circuitry has to divide frames at the transmitter and reassemble them at
the receiver. The figure shows a representation of the circuitry used by 1000BASE-T Ethernet.
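The 1 Gbps figure follows directly from the symbol rate and the two data bits carried per symbol on each of the four pairs; a minimal check of that arithmetic (names are illustrative only):

```python
# 1000BASE-T: four pairs, 125 million PAM5 symbols per second on each pair,
# and two bits of data carried by every symbol.
PAIRS = 4
SYMBOL_RATE_MBAUD = 125
DATA_BITS_PER_SYMBOL = 2

throughput_mbps = PAIRS * SYMBOL_RATE_MBAUD * DATA_BITS_PER_SYMBOL
print(f"{throughput_mbps} Mbps")   # 1000 Mbps
```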

1000BASE-T allows the transmission and reception of data in both directions - on the same wire
and at the same time. This traffic flow creates permanent collisions on the wire pairs. These
collisions result in complex voltage patterns. The hybrid circuits detecting the signals use
sophisticated techniques such as echo cancellation, Layer 1 Forward Error Correction (FEC), and
prudent selection of voltage levels. Using these techniques, the system achieves the 1-Gigabit
throughput.

To help with synchronization, the Physical layer encapsulates each frame with start-of-stream
and end-of-stream delimiters. Loop timing is maintained by continuous streams of IDLE
symbols sent on each wire pair during the interframe spacing.

Unlike most digital signals where there are usually a couple of discrete voltage levels,
1000BASE-T uses many voltage levels. In idle periods, nine voltage levels are found on the
cable. During data transmission periods, up to 17 voltage levels are found on the cable. With this
large number of states, combined with the effects of noise, the signal on the wire looks more
analog than digital. Like analog, the system is more susceptible to noise due to cable and
termination problems.

9.5.3 - 1000 Megabits per second Ethernet


The diagram depicts 1000BASE-T circuitry. Each of the four twisted pairs has a transmit and
receive function and is connected to a hybrid transceiver at each end on the transmitting or
receiving device. Each of the four pairs transmits at 250 megabits per second in full duplex
mode.
Page 2:

1000BASE-SX and 1000BASE-LX Ethernet Using Fiber-Optics

The fiber versions of Gigabit Ethernet - 1000BASE-SX and 1000BASE-LX - offer the following
advantages over UTP: noise immunity, small physical size, and increased unrepeated distances
and bandwidth.

All 1000BASE-SX and 1000BASE-LX versions support full-duplex binary transmission at 1250
Mbps over two strands of optical fiber. The transmission coding is based on the 8B/10B
encoding scheme. Because of the overhead of this encoding, the data transfer rate is still 1000
Mbps.
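The relationship between the 1250 Mbps line rate and the 1000 Mbps data rate is simply the 8B/10B ratio of eight data bits for every ten line bits, as this small check shows (illustrative only):

```python
# 8B/10B encoding sends 10 line bits for every 8 bits of data.
LINE_RATE_MBPS = 1250
data_rate_mbps = LINE_RATE_MBPS * 8 / 10
print(f"{data_rate_mbps:.0f} Mbps of data")   # 1000 Mbps
```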

Each data frame is encapsulated at the Physical layer before transmission, and link
synchronization is maintained by sending a continuous stream of IDLE code groups during the
interframe spacing.

The principal differences among the 1000BASE-SX and 1000BASE-LX fiber versions are the
link media, connectors, and wavelength of the optical signal. These differences are shown in the
figure.

9.5.3 - 1000 Megabits per second Ethernet


The diagram depicts 1000Base-X fiber link support. A tabular listing of various link
configurations is shown.

Link configuration support:


1000Base-SX (850 Nanometer Wavelength) supports the following:
- 62.5/125 micrometer multimode optical fiber
- 50/125 micrometer multimode optical fiber

1000Base-LX (1300 Nanometer Wavelength) supports the following:


- 62.5/125 micrometer multimode optical fiber
- 50/125 micrometer multimode optical fiber
- 10/125 micrometer single-mode optical fiber
9.5.4 Ethernet - Future Options

Page 1:

The IEEE 802.3ae standard was adapted to include 10 Gbps, full-duplex transmission over fiber-
optic cable. The 802.3ae standard and the 802.3 standards for the original Ethernet are very
similar. 10-Gigabit Ethernet (10GbE) is evolving for use not only in LANs, but also for use in
WANs and MANs.

Because the frame format and other Ethernet Layer 2 specifications are compatible with previous
standards, 10GbE can provide increased bandwidth to individual networks that is interoperable
with the existing network infrastructure.

10Gbps can be compared to other varieties of Ethernet in these ways:

 Frame format is the same, allowing interoperability between all varieties of legacy, fast,
gigabit, and 10 gigabit Ethernet, with no reframing or protocol conversions necessary.
 Bit time is now 0.1 ns. All other time variables scale accordingly.
 Because only full-duplex fiber connections are used, there is no media contention and
CSMA/CD is not necessary.
 The IEEE 802.3 sublayers within OSI Layers 1 and 2 are mostly preserved, with a few
additions to accommodate 40 km fiber links and interoperability with other fiber
technologies.

With 10Gbps Ethernet, flexible, efficient, reliable, relatively low cost end-to-end Ethernet
networks become possible.

Future Ethernet Speeds

Although 1-Gigabit Ethernet is now widely available and 10-Gigabit products are becoming
more available, the IEEE and the 10-Gigabit Ethernet Alliance are working on 40-, 100-, or even
160-Gbps standards. The technologies that are adopted will depend on a number of factors,
including the rate of maturation of the technologies and standards, the rate of adoption in the
market, and the cost of emerging products.
9.5.4 - Ethernet - Future Options
The diagram depicts how the common Ethernet frame can be applied to different network types.
Headquarters, branch offices, and customer premises are shown. These are interconnected by
WAN, MAN, and access network clouds where Ethernet can be applied. An Ethernet frame with
the preamble, destination address, source address, type, data, and FCS is shown.

9.6 Hubs and Switches

9.6.1 Legacy Ethernet - Using Hubs

Page 1:

In previous sections, we have seen how classic Ethernet uses shared media and contention-based
media access control. Classic Ethernet uses hubs to interconnect nodes on the LAN segment.
Hubs do not perform any type of traffic filtering. Instead, the hub forwards all the bits to every
device connected to the hub. This forces all the devices in the LAN to share the bandwidth of the
media.

Additionally, this classic Ethernet implementation often results in high levels of collisions on the
LAN. Because of these performance issues, this type of Ethernet LAN has limited use in today's
networks. Ethernet implementations using hubs are now typically used only in small LANs or in
LANs with low bandwidth requirements.

Sharing media among devices creates significant issues as the network grows. The figure
illustrates some of the issues presented here.

Scalability

In a hub network, there is a limit to the amount of bandwidth that devices can share. With each
device added to the shared media, the average bandwidth available to each device decreases.
With each increase in the number of devices on the media, performance is degraded.

Latency
Network latency is the amount of time it takes a signal to reach all destinations on the media.
Each node in a hub-based network has to wait for an opportunity to transmit in order to avoid
collisions. Latency can increase significantly as the distance between nodes is extended. Latency
is also affected by a delay of the signal across the media as well as the delay added by the
processing of the signals through hubs and repeaters. Increasing the length of media or the
number of hubs and repeaters connected to a segment results in increased latency. With
greater latency, it is more likely that nodes will not receive initial signals, thereby increasing the
collisions present in the network.

Network Failure

Because classic Ethernet shares the media, any device in the network could potentially cause
problems for other devices. If any device connected to the hub generates detrimental traffic, the
communication for all devices on the media could be impeded. This harmful traffic could be due
to incorrect speed or full-duplex settings on a NIC.

Collisions

According to CSMA/CD, a node should not send a packet unless the network is clear of traffic.
If two nodes send packets at the same time, a collision occurs and the packets are lost. Then both
nodes send a jam signal, wait for a random amount of time, and retransmit their packets. Any
part of the network where packets from two or more nodes can interfere with each other is
considered a collision domain. A network with a larger number of nodes on the same segment
has a larger collision domain and typically has more traffic. As the amount of traffic in the
network increases, the likelihood of collisions increases.

Switches provide an alternative to the contention-based environment of classic Ethernet.

9.6.1 - Legacy Ethernet - Using Hubs


The diagram depicts issues that contribute to the poor performance of hub-based LAN's. Initially
six hosts are connected to a hub. As a result, network bandwidth is shared by six hosts. Hub-
based LAN performance issues include:
- Lack of scalability: Another hub is connected to the first one, with six more hosts attached. The
same network bandwidth is now shared by 12 hosts.
- Increased latency: With additional hubs, network latency increases.
- More collisions: With additional hosts, network collisions increase. There is still one collision
domain.

9.6.2 Ethernet - Using Switches

Page 1:

In the last few years, switches have quickly become a fundamental part of most networks.
Switches allow the segmentation of the LAN into separate collision domains. Each port of
the switch represents a separate collision domain and provides the full media bandwidth to the
node or nodes connected on that port. With fewer nodes in each collision domain, there is an
increase in the average bandwidth available to each node, and collisions are reduced.

A LAN may have a centralized switch connecting to hubs that still provide the connectivity to
nodes. Or, a LAN may have all nodes connected directly to a switch. These topologies are
shown in the figure.

In a LAN where a hub is connected to a switch port, there is still shared bandwidth, which may
result in collisions within the shared environment of the hub. However, the switch will isolate the
segment and limit collisions to traffic between the hub's ports.

9.6.2 - Ethernet - Using Switches


The diagram depicts options for switch uses.

Option 1: A switch is acting as a bridge between two shared-media hubs. This creates two
collision domains, one for each shared media LAN.

Option 2: A switch is at the center of a LAN with no hubs present. Each computer has its own
collision domain.

Page 2:

Nodes are Connected Directly


In a LAN where all nodes are connected directly to the switch, the throughput of the network
increases dramatically. The three primary reasons for this increase are:

 Dedicated bandwidth to each port


 Collision-free environment
 Full-duplex operation

These physical star topologies are essentially point-to-point links.

Click the performance factors in the figure.

Dedicated Bandwidth

Each node has the full media bandwidth available in the connection between the node and the
switch. Because a hub replicates the signals it receives and sends them to all other ports, classic
Ethernet hubs form a logical bus. This means that all the nodes have to share the same bandwidth
of this bus. With switches, each device effectively has a dedicated point-to-point connection
between the device and the switch, without media contention.

As an example, compare two 100 Mbps LANs, each with 10 nodes. In network segment A, the
10 nodes are connected to a hub. Each node shares the available 100 Mbps bandwidth. This
provides an average of 10 Mbps to each node. In network segment B, the 10 nodes are connected
to a switch. In this segment, all 10 nodes have the full 100 Mbps bandwidth available to them.

Even in this small network example, the increase in bandwidth is significant. As the number of
nodes increases, the discrepancy between the available bandwidth in the two implementations
increases significantly.
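The comparison can be summarized numerically. The sketch below assumes the same 100 Mbps media and 10 nodes used in the example; the names and the simple averaging are illustrative and ignore collision overhead on the hub segment.

```python
MEDIA_BANDWIDTH_MBPS = 100
NODES = 10

# Segment A: one hub, one shared collision domain - bandwidth is divided among the nodes.
hub_average_per_node = MEDIA_BANDWIDTH_MBPS / NODES

# Segment B: each switch port is its own collision domain with dedicated bandwidth.
switch_per_node = MEDIA_BANDWIDTH_MBPS

print(f"Hub segment:    about {hub_average_per_node:.0f} Mbps average per node")
print(f"Switch segment: {switch_per_node} Mbps dedicated to each node")
```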

Collision-Free Environment
A dedicated point-to-point connection to a switch also removes any media contention between
devices, allowing a node to operate with few or no collisions. In a moderately-sized classic
Ethernet network using hubs, approximately 40% to 50% of the bandwidth is consumed by
collision recovery. In a switched Ethernet network - where there are virtually no collisions - the
overhead devoted to collision recovery is virtually eliminated. This provides the switched
network with significantly better throughput rates.

Full-Duplex Operation

Switching also allows a network to operate as a full-duplex Ethernet environment. Before


switching existed, Ethernet was half-duplex only. This meant that at any given time, a node
could either transmit or receive. With full-duplex enabled in a switched Ethernet network, the
devices connected directly to the switch ports can transmit and receive simultaneously, at the full
media bandwidth.

The connection between the device and the switch is collision-free. This arrangement effectively
doubles the transmission rate when compared to half-duplex. For example, if the speed of the
network is 100 Mbps, each node can transmit a frame at 100 Mbps and, at the same time, receive
a frame at 100 Mbps.

Using Switches Instead of Hubs

Most modern Ethernet networks connect end devices to switches and operate in full duplex. Because switches
provide so much greater throughput than hubs and increase performance so dramatically, it is fair
to ask: why not use switches in every Ethernet LAN? There are three reasons why hubs are still
being used:

 Availability - LAN switches were not developed until the early 1990s and were not
readily available until the mid 1990s. Early Ethernet networks used UTP hubs and many
of them remain in operation to this day.
 Economics - Initially, switches were rather expensive. As the price of switches has
dropped, the use of hubs has decreased and cost is becoming less of a factor in
deployment decisions.
 Requirements - The early LAN networks were simple networks designed to exchange
files and share printers. For many locations, the early networks have evolved into the
converged networks of today, resulting in a substantial need for increased bandwidth
available to individual users. In some circumstances, however, a shared media hub will
still suffice and these products remain on the market.

The next section explores the basic operation of switches and how a switch achieves the
enhanced performance upon which our networks now depend. A later course will present more
details and additional technologies related to switching.

9.6.2 - Ethernet - Using Switches


The diagram depicts features of switch-based LAN's. Initially, six hosts and a router are
connected to a central switch. Then another switch is connected to the first one with six more
hosts attached. Features include the following:

-Dedicated Bandwidth: Each host accesses full network bandwidth.


-Collision-Free: Network free of collisions, multiple collision domains.
-Full-Duplex: Full-duplex allows communication in both directions at the same time.

Page 3:

In this activity, we provide a model for comparing the collisions found in hub-based networks
with the collision-free behavior of switches.

Click the Packet Tracer icon for more details.

9.6.2 - Ethernet - Using Switches


Link to Packet Tracer Exploration: From Hubs to Switches

In this activity, we provide a model for comparing the collisions found in hub-based networks
with the collision-free behavior of switches.

9.6.3 Switches - Selective Forwarding

Page 1:

Ethernet switches selectively forward individual frames from a receiving port to the port where
the destination node is connected. This selective forwarding process can be thought of as
establishing a momentary point-to-point connection between the transmitting and receiving
nodes. The connection is made only long enough to forward a single frame. During this instant,
the two nodes have a full bandwidth connection between them and represent a logical point-to-
point connection.
To be technically accurate, this temporary connection is not made between the two nodes
simultaneously; rather, each node effectively has its own point-to-point connection to the switch.
In fact, any node operating in full-duplex mode can transmit anytime it has a frame, without
regard to the availability of the receiving node. This is because a LAN switch will buffer an
incoming frame and then forward it to the proper port when that port is idle. This process is
referred to as store and forward.

With store and forward switching, the switch receives the entire frame, checks the FCS for
errors, and forwards the frame to the appropriate port for the destination node. Because the nodes
do not have to wait for the media to be idle, the nodes can send and receive at full media speed
without losses due to collisions or the overhead associated with managing collisions.

Forwarding is Based on the Destination MAC

The switch maintains a table, called a MAC table, that matches a destination MAC address with
the port used to connect to a node. For each incoming frame, the destination MAC address in the
frame header is compared to the list of addresses in the MAC table. If a match is found, the port
number in the table that is paired with the MAC address is used as the exit port for the frame.

The MAC table can be referred to by many different names. It is often called the switch table.
Because switching was derived from an older technology called transparent bridging, the table is
sometimes called the bridge table. For this reason, many processes performed by LAN switches
can contain bridge or bridging in their names.

A bridge is a device used more commonly in the early days of LANs to connect - or bridge - two
physical network segments. Switches can be used to perform this operation as well as allowing
end device connectivity to the LAN. Many other technologies have been developed around LAN
switching. Many of these technologies will be presented in a later course. One place where
bridges are still prevalent is in wireless networks, where wireless bridges are used to interconnect two
wireless network segments. Therefore, you may find both terms - switching and bridging - in use
by the networking industry.
9.6.3 - Switches - Selective Forwarding
The animation depicts selective forwarding of individual frames from a receiving port to the port
where the destination node is connected. A 12-port switch is shown with the following
connections in the switching table:

Host with MAC address 0A is connected to port 1.


Host with MAC address 0B is connected to port 3.
Host with MAC address 0C is connected to port 6.
Host with MAC address 0D is connected to port 9.

Two frames are shown:


Frame 1: Destination address is 0C, and the source address is 0A.
Frame 2: Destination address is 0C, and the source address is 0D.

As the animation progresses, source host 0A and 0D transmit to destination host 0C. The switch
looks up the destination MAC address in the frame header and compares it to the list of
addresses in its MAC address table. The switch sees that it has two frames destined for the same
host. It buffers the frames in its memory buffers and sends them out the designated port one at a
time.

Next the animation displays a new block diagram showing some of the key internal components
of the switch. These include the MAC address table, switching logic, memory buffers, CPU, and
Flash. Host 0A transmits a frame to destination host 0C. The switch uses its switching logic to
look up the destination address in its MAC address table and buffers the frame in its memory
buffers. It then sends the frame to host 0C on port 6.

The animation continues showing source hosts 0A and 0B transmitting simultaneously to


destination host 0C. The switch looks up the destination address in its MAC address table and
buffers the two frames in its memory buffers. It then sends the frames one at a time to host 0C on
port 6.

Page 2:

Switch Operation

To accomplish their purpose, Ethernet LAN switches use five basic operations:

 Learning
 Aging
 Flooding
 Selective Forwarding
 Filtering
Learning

The MAC table must be populated with MAC addresses and their corresponding ports. The
Learning process allows these mappings to be dynamically acquired during normal operation.

As each frame enters the switch, the switch examines the source MAC address. Using a lookup
procedure, the switch determines if the table already contains an entry for that MAC address. If
no entry exists, the switch creates a new entry in the MAC table using the source MAC address
and pairs the address with the port on which the entry arrived. The switch now can use this
mapping to forward frames to this node.

Aging

The entries in the MAC table acquired by the Learning process are time stamped. This timestamp
is used as a means for removing old entries in the MAC table. After an entry is made in the MAC
table, a countdown begins, using the timestamp as the starting value. When the value reaches 0,
the entry is removed from the table. If the switch receives another frame from that node on the
same port before the countdown expires, the entry is refreshed and the timer restarts.

Flooding

If the switch does not know to which port to send a frame because the destination MAC address
is not in the MAC table, the switch sends the frame to all ports except the port on which the
frame arrived. The process of sending a frame to all segments is known as flooding. The switch
does not forward the frame to the port on which it arrived because any destination on that
segment will have already received the frame. Flooding is also used for frames sent to the
broadcast MAC address.

Selective Forwarding
Selective forwarding is the process of examining a frame's destination MAC address and
forwarding it out the appropriate port. This is the central function of the switch. When a frame
from a node arrives at the switch for which the switch has already learned the MAC address, this
address is matched to an entry in the MAC table and the frame is forwarded to the corresponding
port. Instead of flooding the frame to all ports, the switch sends the frame to the destination node
via its nominated port. This action is called forwarding.

Filtering

In some cases, a frame is not forwarded. This process is called frame filtering. One use of
filtering has already been described: a switch does not forward a frame to the same port on which
it arrived. A switch will also drop a corrupt frame. If a frame fails a CRC check, the frame is
dropped. An additional reason for filtering a frame is security. A switch has security settings for
blocking frames to and/or from selective MAC addresses or specific ports.
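To tie the Learning, Flooding, Selective Forwarding, and Filtering operations together, here is a deliberately simplified, hypothetical model of a switch's forwarding decision; it ignores aging, buffering, and security filtering, and the class, port, and address names are illustrative only.

```python
BROADCAST = "FF-FF-FF-FF-FF-FF"

class SimpleSwitch:
    def __init__(self, ports):
        self.ports = ports          # e.g. ["FA1", "FA2", ...]
        self.mac_table = {}         # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        # Learning: record the source MAC against the incoming port.
        self.mac_table[src_mac] = in_port

        # Flooding: broadcasts and unknown destinations go to every other port.
        if dst_mac == BROADCAST or dst_mac not in self.mac_table:
            return [p for p in self.ports if p != in_port]

        out_port = self.mac_table[dst_mac]
        # Filtering: never send a frame back out the port it arrived on.
        if out_port == in_port:
            return []

        # Selective forwarding: send only to the port mapped to the destination MAC.
        return [out_port]

# Example: 0A on FA1 sends to 0C before and after the switch has learned 0C's port.
sw = SimpleSwitch([f"FA{i}" for i in range(1, 13)])
print(sw.receive("FA1", "0A", "0C"))   # flooded to FA2..FA12 (0C is unknown)
print(sw.receive("FA6", "0C", "0A"))   # ['FA1'] - 0A was learned from the first frame
print(sw.receive("FA1", "0A", "0C"))   # ['FA6'] - selective forwarding
```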

9.6.3 - Switches - Selective Forwarding


The diagram depicts switch operation, which includes the switch learning process:
- Learning - Records the MAC address and port number.
- Flooding - Sends the frame to all ports, except the incoming port.
- Selective Forwarding - Sends the frame only to the destination port.

A 12-port switch is shown with the following connections:

Host1 with MAC address 0A is connected to port FA1.


Host2 with MAC address 0C is connected to port FA6.
Host3 with MAC address 0B is connected to port FA3.
Host4 with MAC address 0D is connected to port FA8.

The following steps explain the basic switch operation.

Step 1. Upon initialization of the switch, the MAC address table is empty.

Step 2. Host1 sends data to Host2. The frame sent contains both a source MAC address and a
destination MAC address.

Step 3. Learning takes place. The switch reads the source MAC address, 0A, from the frame
received on port FA1 and stores it in the MAC address table for use in the forwarding of frames
to Host1.

Step 4. Flooding takes place. The destination MAC address, 0C, is not in the MAC table. The
switch floods the frame out all ports except port FA1, the port of the sender. Host3 and Host4
receive the frame, but the address in the frame does not match their MAC address. They drop the
frame. The destination MAC address in the frame matches Host2, and it accepts the frame.

Step 5: Host2 sends a frame to Host1 containing a reply. The source address in the frame is the
MAC address of Host2. The destination address in the frame matches the MAC address for
Host1.

Step 6: Learning takes place. The switch reads the source MAC address, 0C, from the frame
received on port FA6 and stores it in the MAC address table for use in the forwarding of frames
to Host2.

Step 7: Selective forwarding takes place. The destination MAC address, 0A, is in the MAC
address table. The switch selectively forwards the frame out port FA1 only. The destination
MAC address in the frame matches the MAC address for Host1. Host1 accepts the frame.

9.6.4 Ethernet - Comparing Hubs and Switches

Page 1:

9.6.4 - Ethernet - Comparing Hubs and Switches


The diagram depicts an activity in which you must determine how the switch forwards a frame
based on the source MAC and destination MAC addresses and information in the switch MAC
table.

A 12-port switch is shown with the following connections:


- Host 0A is connected to port FA1.
- Host 0B is connected to port FA3.
- Host 0C is connected to port FA5.
- Host 0D is connected to port FA7.
- Hub1 is connected to port FA9.
- Hosts 0E and 0F are connected to Hub1.

Additional help:
FF is a broadcast MAC address and is forwarded to all ports with the exception of the origin
port.
A frame is flooded to all ports (except the origin) only if the switch does not have the destination
MAC within the MAC table.
The switch adds a new MAC address to the MAC table based on the source MAC address. If the
source MAC address is already in the table, nothing is added or learned. If the source MAC
address is not in the table, the address is added.
A switch drops a frame if the destination and source devices are both connected to the same port
and the switch has the destination MAC address in the MAC table. In this activity, this occurs on
the single port connected to the hub with two host devices.

Answer the questions using the information provided.


Note: You may wish to contact your instructor for help with this activity.

Example 1 scenario and questions.


Frame information:
- Destination MAC address: 0D
- Source MAC address: 0B

MAC table information:


Port FA1 MAC address learned is 0A.
Port FA5 MAC address learned is 0C.
Port FA9 MAC address learned is 0E.
No other ports on the switch have learned a MAC address.

Question 1A. Where will the switch forward the frame? Indicate Yes or No for the ports.

FA1: Yes or No
FA2: Yes or No
FA3: Yes or No
FA4: Yes or No
FA5: Yes or No
FA6: Yes or No
FA7: Yes or No
FA8: Yes or No
FA9: Yes or No
FA10: Yes or No
FA11: Yes or No
FA12: Yes or No

Question 1B. When the switch forwards the frame, which statement or statements are true?
A. Switch adds the source MAC address to the MAC table.
B. Frame is a broadcast frame and will be forwarded to all ports.
C. Frame is a unicast frame and will be sent to specific ports only.
D. Frame is a unicast frame and will be flooded to all ports.
E. Frame is a unicast frame, but it will be dropped at the switch.

Example 2 scenario and questions.


Frame information:
- Destination MAC address: 0F
- Source MAC address: 0B

MAC table information:


Port FA9 MAC address learned is 0E.
No other ports on the switch have learned a MAC address.

Question 2A. Where will the switch forward the frame? Indicate Yes or No for the ports.
FA1: Yes or No
FA2: Yes or No
FA3: Yes or No
FA4: Yes or No
FA5: Yes or No
FA6: Yes or No
FA7: Yes or No
FA8: Yes or No
FA9: Yes or No
FA10: Yes or No
FA11: Yes or No
FA12: Yes or No

Question 2B. When the switch forwards the frame, which statement or statements are true?
A. Switch adds the source MAC address to the MAC table.
B. Frame is a broadcast frame and will be forwarded to all ports.
C. Frame is a unicast frame and will be sent to specific ports only.
D. Frame is a unicast frame and will be flooded to all ports.
E. Frame is a unicast frame, but it will be dropped at the switch.

Example 3 scenario and questions.


Frame information:
- Destination MAC address: 0C
- Source MAC address: 0A

MAC table information:


Port FA1 MAC address learned is 0A.
Port FA5 MAC address learned is 0C.
Port FA7 MAC address learned is 0D.
Port FA9 MAC address learned is 0E.
No other ports on the switch have learned a MAC address.

Question 3A. Where will the switch forward the frame? Indicate Yes or No for the ports.

FA1: Yes or No
FA2: Yes or No
FA3: Yes or No
FA4: Yes or No
FA5: Yes or No
FA6: Yes or No
FA7: Yes or No
FA8: Yes or No
FA9: Yes or No
FA10: Yes or No
FA11: Yes or No
FA12: Yes or No

Question 3B. When the switch forwards the frame, which statement or statements are true?
A. Switch adds the source MAC address to the MAC table.
B. Frame is a broadcast frame and will be forwarded to all ports.
C. Frame is a unicast frame and will be sent to specific ports only.
D. Frame is a unicast frame and will be flooded to all ports.
E. Frame is a unicast frame, but it will be dropped at the switch.

Page 2:

In this activity, you will have the opportunity to visualize and experiment with the behavior of
switches in a network.

Click the Packet Tracer icon for more details.

9.6.4 - Ethernet - Comparing Hubs and Switches


Link to Packet Tracer Exploration: Switch Operation

In this activity, you have the opportunity to visualize and experiment with the behavior of
switches in a network.

9.7 Address Resolution Protocol (ARP)

9.7.1 The ARP Process - Mapping IP to MAC Addresses

Page 1:

The ARP protocol provides two basic functions:

 Resolving IPv4 addresses to MAC addresses


 Maintaining a cache of mappings

Resolving IPv4 Addresses to MAC Addresses

For a frame to be placed on the LAN media, it must have a destination MAC address. When a
packet is sent to the Data Link layer to be encapsulated into a frame, the node refers to a table in
its memory to find the Data Link layer address that is mapped to the destination IPv4 address.
This table is called the ARP table or the ARP cache. The ARP table is stored in the RAM of the
device.

Each entry, or row, of the ARP table has a pair of values: an IP Address and a MAC address. We
call the relationship between the two values a map - it simply means that you can locate an IP
address in the table and discover the corresponding MAC address. The ARP table caches the
mapping for the devices on the local LAN.

To begin the process, a transmitting node attempts to locate in the ARP table the MAC address
mapped to an IPv4 destination. If this map is cached in the table, the node uses the MAC address
as the destination MAC in the frame that encapsulates the IPv4 packet. The frame is then
encoded onto the networking media.

Maintaining the ARP Table

The ARP table is maintained dynamically. There are two ways that a device can gather MAC
addresses. One way is to monitor the traffic that occurs on the local network segment. As a node
receives frames from the media, it can record the source IP and MAC address as a mapping in
the ARP table. As frames are transmitted on the network, the device populates the ARP table
with address pairs.

Another way a device can get an address pair is to broadcast an ARP request. ARP sends a Layer
2 broadcast to all devices on the Ethernet LAN. The frame contains an ARP request packet with
the IP address of the destination host. The node receiving the frame that identifies the IP address
as its own responds by sending an ARP reply packet back to the sender as a unicast frame. This
response is then used to make a new entry in the ARP table.

These dynamic entries in the ARP table are timestamped in much the same way that MAC table
entries are timestamped in switches. If a device does not receive a frame from a particular device
by the time the timestamp expires, the entry for this device is removed from the ARP table.
Additionally, static map entries can be entered in an ARP table, but this is rarely done. Static
ARP table entries do not expire over time and must be manually removed.

Creating the Frame

What does a node do when it needs to create a frame and the ARP cache does not contain a map
of an IP address to a destination MAC address? When ARP receives a request to map an IPv4
address to a MAC address, it looks for the cached map in its ARP table. If an entry is not found,
the encapsulation of the IPv4 packet fails and the Layer 2 processes notify ARP that it needs a
map.

The ARP processes then send out an ARP request packet to discover the MAC address of the
destination device on the local network. If a device receiving the request has the destination IP
address, it responds with an ARP reply. A map is created in the ARP table. Packets for that IPv4
address can now be encapsulated in frames.

If no device responds to the ARP request, the packet is dropped because a frame cannot be
created. This encapsulation failure is reported to the upper layers of the device. If the device is an
intermediary device, like a router, the upper layers may choose to respond to the source host with
an error in an ICMPv4 packet.
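The decision flow just described - check the cache, otherwise broadcast a request and either cache the reply or drop the packet - can be summarized in a short, hypothetical sketch. The class, function names, and timeout value are illustrative, not a real ARP implementation.

```python
import time

class ArpCache:
    def __init__(self, timeout_seconds=120):
        self.entries = {}                 # IPv4 address -> (MAC address, timestamp)
        self.timeout = timeout_seconds

    def lookup(self, ip):
        entry = self.entries.get(ip)
        if entry and time.time() - entry[1] < self.timeout:
            return entry[0]               # cached, unexpired mapping
        return None

    def add(self, ip, mac):
        self.entries[ip] = (mac, time.time())

def resolve(cache, dest_ip, broadcast_arp_request):
    """Return the destination MAC, or None if encapsulation must fail."""
    mac = cache.lookup(dest_ip)
    if mac is None:
        mac = broadcast_arp_request(dest_ip)   # ARP request sent as a Layer 2 broadcast
        if mac is None:
            return None                        # no reply: the packet is dropped
        cache.add(dest_ip, mac)                # the reply creates a new cache entry
    return mac

# Example with a stubbed-out request that "hears" a reply from 10.10.0.3.
replies = {"10.10.0.3": "00-0d-56-09-fb-d1"}
cache = ArpCache()
print(resolve(cache, "10.10.0.3", lambda ip: replies.get(ip)))  # learned via ARP
print(resolve(cache, "10.10.0.3", lambda ip: None))             # served from the cache
```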

Click the step numbers in the figure to see the process used to get the MAC address of node on
the local physical network.

In the lab, you will use Wireshark to observe ARP requests and responses across a network.

9.7.1 - The ARP Process - Mapping IP to MAC Addresses


The diagram depicts the Address Resolution Protocol (ARP) process used for mapping IP and
MAC addresses for hosts on the local network. Follow the steps for generating a new pair of
addresses in the ARP table when the destination is on the local network.

Step 1: Four PC's are shown. PC's A, B, C, and D and a router are attached to common shared
media. PC A wants to send a frame to PC C. These PC's have the following IP and MAC
addresses:

PC A IP address: 10.10.0.1, MAC address: 00-0d-88-c7-9a-24


PC C IP address: 10.10.0.3, MAC address: 00-0d-56-09-fb-d1

Step 2: No ARP entry. PC A says: I need to send a frame to 10.10.0.3, but I don't know the MAC
address.

Step 3: Broadcast ARP request to devices. PC A says: If your IP address is 10.10.0.3, please tell
10.10.0.1 (00-0d-88-c7-9a-24).

Step 4: Unicast ARP reply with MAC address. PC C says: I am 10.10.0.3. My MAC address is
00-0d-56-09-fb-d1.

Step 5: IP and MAC addresses for PC C are stored in ARP cache. PC A says: I will store
10.10.0.3 and 00-0d-56-09-fb-d1 in my ARP cache.

Step 6: ARP entry enables frame to be sent. PC A says: I can now send the frame to 10.10.0.3
with the MAC address 00-0d-56-09-fb-d1.

9.7.2 The ARP Process - Destinations outside the Local Network

Page 1:

All frames must be delivered to a node on the local network segment. If the destination IPv4 host
is on the local network, the frame will use the MAC address of this device as the destination
MAC address.

If the destination IPv4 host is not on the local network, the source node needs to deliver the
frame to the router interface that is the gateway or next hop used to reach that destination. The
source node will use the MAC address of the gateway as the destination address for frames
containing an IPv4 packet addressed to hosts on other networks.

The gateway address of the router interface is stored in the IPv4 configuration of the hosts. When
a host creates a packet for a destination, it compares the destination IP address and its own IP
address to determine if the two IP addresses are located on the same Layer 3 network. If the
receiving host is not on the same network, the source uses the ARP process to determine a MAC
address for the router interface serving as the gateway.
In the event that the gateway entry is not in the table, the normal ARP process will send an ARP
request to retrieve the MAC address associated with the IP address of the router interface.
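The same-network test described above compares the destination address against the host's own network, derived from its IP address and subnet mask. The sketch below uses Python's standard ipaddress module and the addresses from the figure, assuming a /24 mask for the local network; the function name is illustrative.

```python
import ipaddress

def next_hop_ip(own_interface: str, gateway: str, dest_ip: str) -> str:
    """Return the IP whose MAC must be resolved: the destination if local, else the gateway."""
    network = ipaddress.ip_interface(own_interface).network
    if ipaddress.ip_address(dest_ip) in network:
        return dest_ip       # same Layer 3 network: ARP for the destination itself
    return gateway           # remote network: ARP for the default gateway

print(next_hop_ip("10.10.0.1/24", "10.10.0.254", "10.10.0.3"))    # 10.10.0.3
print(next_hop_ip("10.10.0.1/24", "10.10.0.254", "172.16.0.10"))  # 10.10.0.254
```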

Click the step numbers in the figure to see the process used to get the MAC address of the
gateway.

9.7.2 - The ARP Process - Destinations Outside the Local Network


The diagram depicts the ARP process used for mapping IP and MAC addresses to communicate
outside the local network. Follow the steps for generating a new pair of addresses in the ARP
table when the destination is outside the local network.

Step 1: Four PC's are shown. PC's A, B, C, and D and a router are attached to common shared
media. PC A wants to send a frame to a PC that is outside the local network. It needs to send the
frame to the router default gateway. The PC and router have the following IP and MAC
addresses:

PC A IP address: 10.10.0.1, MAC address: 00-0d-88-c7-9a-24


Router IP address: 10.10.0.254, MAC address: 00-10-7b-e7-fa-ef

Step 2: No ARP entry for the gateway. PC A says: I need to send a frame to 172.16.0.10, but it is
outside my network, and I don't know the MAC address of my gateway (10.10.0.254).

Step 3: Broadcast ARP request to devices. PC A says: If your IP address is 10.10.0.254, please
tell 10.10.0.1 (00-0d-88-c7-9a-24).

Step 4: Reply with MAC address of gateway. The router says: I am 10.10.0.254, so I respond
with my MAC address 00-10-7b-e7-fa-ef.

Step 5: IP and MAC addresses are stored in ARP cache. PC A says: I will store 10.10.0.254 and
00-10-7b-e7-fa-ef in my ARP cache.

Step 6: ARP entry enables frame to be sent. PC A says: I can now send the frame with a packet
to 172.16.0.10 with the MAC address 00-10-7b-e7-fa-ef. The router says: I will forward the
packet in this frame based on a route in my routing table.

Page 2:

Proxy ARP
There are circumstances under which a host might send an ARP request seeking to map an IPv4
address outside of the range of the local network. In these cases, the device sends ARP requests
for IPv4 addresses not on the local network instead of requesting the MAC address associated
with the IPv4 address of the gateway. To provide a MAC address for these hosts, a router
interface may use a proxy ARP to respond on behalf of these remote hosts. This means that the
ARP cache of the requesting device will contain the MAC address of the gateway mapped to any
IP addresses not on the local network. Using proxy ARP, a router interface acts as if it is the host
with the IPv4 address requested by the ARP request. By "faking" its identity, the router accepts
responsibility for routing packets to the "real" destination.

One such use of this process is when an older implementation of IPv4 cannot determine whether
the destination host is on the same logical network as the source. In these implementations, ARP
always sends ARP requests for the destination IPv4 address. If proxy ARP is disabled on the
router interface, these hosts cannot communicate out of the local network.

Another case where a proxy ARP is used is when a host believes that it is directly connected to
the same logical network as the destination host. This generally occurs when a host is configured
with an improper mask.

As shown in the figure, Host A has been improperly configured with a /16 subnet mask. This
host believes that it is directly connected to all of the 172.16.0.0 /16 network instead of to the
172.16.10.0 /24 subnet.

When attempts are made to communicate with any IPv4 host in the range of 172.16.0.1 to
172.16.255.254, Host A will send an ARP request for that IPv4 address. The router can use a
proxy ARP to respond to requests for the IPv4 address of Host C (172.16.20.100) and Host D
(172.16.20.200). Host A will subsequently have entries for these addresses mapped to the MAC
address of the e0 interface of the router (00-00-0c-94-36-ab).

Yet another use for a proxy ARP is when a host is not configured with a default gateway. Proxy
ARP can help devices on a network reach remote subnets without the need to configure routing
or a default gateway.

By default, Cisco routers have proxy ARP enabled on LAN interfaces.


http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a0080094adb.shtml

9.7.2 - The ARP Process - Destinations Outside the Local Network


The diagram depicts how proxy ARP allows a router to respond for a remote host. Two hosts, A
and B, are located on Subnet A. Router R1 interface E0 is on Subnet A (172.16.10.0/24). Two
other hosts, C and D, are located on Subnet B. Router R1 interface E1 is on Subnet B
(172.16.20.0/24).

Host A has been improperly configured with a /16 subnet mask. This host believes that it is
directly connected to all of the 172.16.0.0/16 network, instead of to the 172.16.10.0/24 subnet.
The router can act as a proxy ARP and respond to requests for the IPv4 address of hosts on the
LAN B network.

9.7.3 The ARP Process - Removing Address Mappings

Page 1:

For each device, an ARP cache timer removes ARP entries that have not been used for a
specified period of time. The times differ depending on the device and its operating system. For
example, some Windows operating systems store ARP cache entries for 2 minutes. If the entry is
used again during that time, the ARP timer for that entry is extended to 10 minutes.

Commands may also be used to manually remove all or some of the entries in the ARP table.
After an entry has been removed, the process for sending an ARP request and receiving an ARP
reply must occur again to enter the map in the ARP table.

In the lab for this section, you will use the arp command to view and to clear the contents of a
computer's ARP cache (on Windows, for example, arp -a displays the cache and arp -d deletes
entries). Note that this command, despite its name, does not invoke the execution of the Address
Resolution Protocol in any way. It is used only to display, add, or remove entries in the ARP
table. The ARP service is integrated within the IPv4 protocol and implemented by the device. Its
operation is transparent to both upper-layer applications and users.

9.7.3 - The ARP Process - Removing Address Mappings


The diagram depicts the use of the ARP process to remove address mappings. The diagram
shows the same PCs and router as described in the previous diagram, but now PC C is removed
from the network. If PC C's IP and MAC addresses are not removed from PC A's ARP cache, PC
A may still try to communicate with C.

9.7.4 ARP Broadcasts - Issues

Page 1:

Overhead on the Media

As a broadcast frame, an ARP request is received and processed by every device on the local
network. On a typical business network, these broadcasts would probably have minimal impact
on network performance. However, if a large number of devices were to be powered up and all
start accessing network services at the same time, there could be some reduction in performance
for a short period of time. For example, if all students in a lab logged into classroom computers
and attempted to access the Internet at the same time, there could be delays.

However, after the devices send out the initial ARP broadcasts and have learned the necessary
MAC addresses, any impact on the network will be minimized.

Security

In some cases, the use of ARP can lead to a potential security risk. ARP spoofing, or ARP
poisoning, is a technique used by an attacker to inject a false MAC address association into a
network by issuing forged ARP messages. The attacker forges the MAC address of a device, and
frames are then sent to the wrong destination.

Manually configuring static ARP associations is one way to prevent ARP spoofing. Authorized
MAC addresses can be configured on some network devices to restrict network access to only
those devices listed.
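
One way to picture the protection a static entry provides is a check of incoming ARP information against a table of known bindings, as in the sketch below. It is illustrative only; the IP-to-MAC pairs are example values, and real networks rely on features of the operating system and network devices rather than a script like this.

KNOWN_BINDINGS = {                        # statically configured IP-to-MAC associations (example values)
    "172.16.10.1": "00-00-0c-94-36-ab",   # default gateway
}

def check_arp_reply(sender_ip, sender_mac):
    """Flag an ARP reply whose MAC disagrees with a static binding (illustrative only)."""
    expected = KNOWN_BINDINGS.get(sender_ip)
    if expected is not None and expected.lower() != sender_mac.lower():
        return f"possible ARP spoofing: {sender_ip} claimed by {sender_mac}, expected {expected}"
    return "ok"

print(check_arp_reply("172.16.10.1", "00-00-0c-94-36-ab"))   # ok
print(check_arp_reply("172.16.10.1", "aa-bb-cc-dd-ee-ff"))   # possible ARP spoofing reported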

9.7.4 - ARP Broadcasts - Issues


The diagram depicts ARP issues. These include broadcasts and security. ARP broadcasts can
create overhead on the media and flood the local media. Regarding security, a false ARP
message can provide an incorrect MAC address that will then hijack frames using that address
(called a spoof).

9.8 Chapter Labs

9.8.1 Lab - Address Resolution Protocol (ARP)

Page 1:

This lab introduces the Windows arp utility command to examine and change ARP cache entries
on a host computer. Then Wireshark is used to capture and analyze ARP exchanges between
network devices.

Click the lab icon for more details.

9.8.1 - Lab - Address Resolution Protocol (ARP)


Link to Hands-on Lab: Address Resolution Protocol (ARP)

This lab introduces the Windows arp utility command to examine and change ARP cache entries
on a host computer. Then Wireshark is used to capture and analyze ARP exchanges between
network devices.

Page 2:

In this activity, you will use Packet Tracer to examine and change ARP cache entries on a host
computer.

Click the Packet Tracer icon to launch the Packet Tracer activity.

9.8.1 - Lab - Address Resolution Protocol (ARP)


Link to Packet Tracer Exploration: Address Resolution Protocol (ARP)

In this activity, you use Packet Tracer to examine and change ARP cache entries on a host
computer.
9.8.2 Lab - Cisco Switch MAC Table Examination

Page 1:

In this lab, you will connect to a switch via a Telnet session, log in, and use the required
operating system commands to examine the stored MAC addresses and their association to
switch ports.

Click the lab icon for more details.

9.8.2 - Lab - Cisco Switch MAC Table Examination


Link to Hands-on Lab: Cisco Switch MAC Table Examination

In this lab, you connect to a switch via a Telnet session, log in, and use the required operating
system commands to examine the stored MAC addresses and their association to switch ports.

Page 2:

In this activity, you will use Packet Tracer to examine the stored MAC addresses and their
association to switch ports.

Click the Packet Tracer icon to launch the Packet Tracer activity.

9.8.2 - Lab - Cisco Switch MAC Table Examination


Link to Packet Tracer Exploration: Cisco Switch MAC Table Examination

In this activity, you use Packet Tracer to examine the stored MAC addresses and their
association to switch ports.

9.8.3 Lab - Intermediary Device as an End Device

Page 1:

This lab uses Wireshark to capture and analyze frames to determine which network nodes
originated the frames. A Telnet session between a host computer and switch is then captured and
analyzed for frame content.

Click the lab icon for more details.

9.8.3 - Lab - Intermediary Device as an End Device


Link to Hands-on Lab: Intermediary Device as an End Device

This lab uses Wireshark to capture and analyze frames to determine which network nodes
originated the frames. A Telnet session between a host computer and switch is then captured and
analyzed for frame content.

Page 2:

In this activity, you will use Packet Tracer to analyze frames originating from a switch.

Click the Packet Tracer icon to launch the Packet Tracer activity.

9.8.3 - Lab - Intermediary Device as an End Device


Link to Packet Tracer Exploration: Intermediary Device as an End Device

In this activity, you use Packet Tracer to analyze frames originating from a switch.

9.9 Chapter Summary

9.9.1 Summary and Review

Page 1:

Ethernet is an effective and widely used TCP/IP Network Access protocol. Its common frame
structure has been implemented across a range of media technologies, both copper and fiber,
making it the most common LAN protocol in use today.

As an implementation of the IEEE 802.2/3 standards, the Ethernet frame provides MAC
addressing and error checking. Being a shared media technology, early Ethernet had to apply a
CSMA/CD mechanism to manage the use of the media by multiple devices. Replacing hubs with
switches in the local network has reduced the probability of frame collisions in half-duplex links.
Current and future versions, however, inherently operate as full-duplex communications links
and do not need to manage media contention to the same degree.

The Layer 2 addressing provided by Ethernet supports unicast, multicast, and broadcast
communications. Ethernet uses the Address Resolution Protocol to determine the MAC
addresses of destinations and map them to known Network layer addresses.

9.9.1 - Summary and Review


In this chapter, you learned to:
- Identify the basic characteristics of network media used in Ethernet.
- Describe the Physical and Data Link Layer features of Ethernet.
- Describe the function and characteristics of the media access control method used by Ethernet
protocol.
- Explain the importance of Layer 2 addressing used for data transmission and determine how the
different types of addressing impact network operation and performance.
- Compare and contrast the application and benefits of using Ethernet switches in a LAN as
opposed to using hubs.
- Explain the ARP process.

Page 2:

9.9.1 - Summary and Review


This is a review and is not a quiz. Questions and answers are provided.
Question 1. Name the two Data Link sublayers and list their purposes.
Answer:
Logical Link Control (LLC)
Handles the communication between the upper layers and the lower layers, typically hardware.
It is implemented in software and is independent of the physical equipment; it can be considered
the NIC driver software in a PC. It provides sub-addressing to identify the network protocol that
uses the link layer service. Destination Service Access Point (D SAP) and the Source Service
Access Point (S SAP) identify a protocol, or set of protocols, in the next higher O S I layer,
which is the Network Layer.

Media Access Control (MAC)


The Ethernet MAC sublayer has the following responsibilities:
- Data Encapsulation: Frame assembly before transmission, and frame parsing and error
detection during and after reception.
- Media Access Control: Controls frames on and off the media, including initiation of frame
transmission and recovery from transmission failure.
- Addressing: Provides the physical address (MAC address) that enables frames to be delivered
to destination hosts.
Media Access Control is implemented by hardware, typically in the host NIC or equivalent.

Question 2. Describe some of the limiting features of legacy Ethernet technologies.


Answer:
- Low bandwidth
- Half-duplex
- Coaxial cable, especially Thick net, which is difficult to install and requires large radius corners
- Physical bus - termination issues
- Bayonet/vampire type connectors - difficult to install and a source of problems

Question 3. List the fields of an Ethernet frame and their purposes.
Answer:

Preamble and Start Frame Delimiter:


The preamble and SFD are used for synchronization.

Destination MAC Address:


The destination MAC address (6 bytes in length) identifies the intended recipient.

Source MAC Address:


The source MAC address (6 bytes in length) identifies the frame originating NIC or interface.

Length/Type:
The Length/Type field (2 bytes in length) serves either purpose: a value below 0x0600 gives the
exact length of the frame's data field, while a value of 0x0600 or greater identifies the upper-layer
protocol encapsulated in the frame.

Data and Pad:


The Data and Pad field (46 to 1500 bytes in length) contains the data from a higher layer, which
is a generic L3 PDU or more usually an IP packet if using TCP/IP. The Pad is required to pad out
the frame to the minimum size if a very small packet is encapsulated.

Frame Check Sequence (FCS):


The FCS field (4 bytes in length) is used to detect errors in a frame. The sending station computes
a cyclic redundancy check (CRC) over the frame and places the result in the FCS field. The
receiving station runs the same CRC calculation on the received frame and compares the result
with the FCS. If the calculations match, there is no error; otherwise, the frame is dropped.
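
These field positions can be seen by unpacking the first 14 bytes of a frame; the preamble and SFD are handled in hardware and are normally not visible to software. The sketch below parses a hand-built example frame and applies the 0x0600 threshold (from the chapter quiz) to decide whether the third field is a Length or a Type value. The example addresses and EtherType are illustrative.

import struct

def parse_header(frame: bytes):
    """Unpack destination MAC, source MAC, and Length/Type from an Ethernet frame (illustrative)."""
    dst, src, length_type = struct.unpack("!6s6sH", frame[:14])

    def fmt(mac: bytes) -> str:
        return "-".join(f"{b:02x}" for b in mac)

    kind = "Type (EtherType)" if length_type >= 0x0600 else "Length"
    return fmt(dst), fmt(src), kind, hex(length_type)

# Hand-built example: broadcast destination, an arbitrary source, EtherType 0x0806 (ARP),
# followed by 46 bytes of padding to reach the minimum data field size.
example = bytes.fromhex("ffffffffffff" "001122334455" "0806") + b"\x00" * 46
print(parse_header(example))
# ('ff-ff-ff-ff-ff-ff', '00-11-22-33-44-55', 'Type (EtherType)', '0x806')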

Question 4. Describe the structure of an Ethernet MAC address.


Answer: The Ethernet MAC address is a 48-bit binary value expressed as 12 hexadecimal digits.
The first 24 bits (3 bytes) are the Organizationally Unique Identifier (O U I). The second 24 bits
(3 bytes) identify the device and must be unique for a particular O U I value.
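
The split can be shown directly by dividing an example address into its two halves, as below. The address used is the router interface MAC from the proxy ARP figure earlier in this chapter; the sketch is purely illustrative.

mac = "00-00-0c-94-36-ab"            # example address from the figure in section 9.7.2
octets = mac.split("-")

oui = "-".join(octets[:3])           # first 24 bits: Organizationally Unique Identifier
device_id = "-".join(octets[3:])     # last 24 bits: vendor assigned, unique within that OUI

print("OUI:", oui)                   # 00-00-0c (a Cisco-registered OUI)
print("Device:", device_id)          # 94-36-ab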

Question 5. Why are Layer 2 MAC addresses necessary?


Answer: An Ethernet MAC address is used to transport the frame across the local media. A
particular MAC address has no meaning or use outside the local segment. It is unique; it is non-
hierarchical and associated with a particular device, regardless of its location or to which network
it is connected. In contrast, Layer 3 addresses are used end-to-end across networks.

Question 6. Describe how Ethernet implements unicast, multicast, and broadcast communications.


Answer:
Unicast:
A unicast MAC address is the unique address used when a message is sent from one transmitting
device to one destination device. All hosts examine the frame, but if it is not addressed to them,
the frame is dropped. Only the host whose MAC address matches the frame destination address
accepts the frame and processes the message through the upper layers.

Multicast:
A multicast MAC address is a common address shared by a group of devices, enabling delivery
of frames carrying multicast packets, such as streaming audio or video, to every member of that
group. For IPv4 multicasting, the Ethernet multicast MAC addresses begin with 0100.5E. Frames
with a destination address in this range are delivered to those devices on the LAN whose upper
layers have established a multicast session.

Broadcast:
The Ethernet broadcast MAC address is FFFF.FFFF.FFFF. Frames with this destination address
are delivered to and processed by all devices on that LAN subnet.
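
A destination address can be classified with a simple check, as in the sketch below. It tests for the all-ones broadcast address first, then uses the group bit (the low-order bit of the first octet, which is set in addresses such as those beginning with 0100.5E) to identify multicast, and treats everything else as unicast. The third example address is an arbitrary illustrative value.

def classify(dest_mac: str) -> str:
    """Classify an Ethernet destination MAC address (illustrative sketch)."""
    octets = bytes.fromhex(dest_mac.replace(".", "").replace("-", "").replace(":", ""))
    if octets == b"\xff" * 6:
        return "broadcast - processed by every device on the LAN"
    if octets[0] & 0x01:                 # group (multicast) bit set in the first octet
        return "multicast - processed by hosts that have joined the group"
    return "unicast - processed only by the host with the matching address"

print(classify("FFFF.FFFF.FFFF"))        # broadcast
print(classify("0100.5E00.0001"))        # multicast (maps to IPv4 group 224.0.0.1)
print(classify("00-11-22-33-44-55"))     # unicast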

Question 7. Use examples to describe the CSMA/CD process.


Answer:
Carrier Sense
All network devices that have messages to send must listen before transmitting. If a signal from
another device is detected, the device sits back and waits a random amount of time before trying
again. When no traffic is detected, the device transmits its message.

Multiple Access
If the latency of one device's signals means that they are not detected by a second device, the
second device may then start to transmit, too. The two messages will propagate across the media
until they encounter each other. The jumble of remaining signals continues to propagate across
the media.

Collision Detection
All devices detect the increase in the amplitude of the signal above the normal level that a
collision produces. When the collision is detected, every device that is transmitting will continue to transmit to
ensure that all devices on the network detect the collision.

Jam Signal
Further, once the collision is detected, all devices send out a jamming signal.
Random Back-off
This jamming signal invokes the back-off algorithm, which causes all devices to stop
transmitting for a random amount of time. This allows for the collision signals to subside from
the medium. After the delay has expired, all devices go back into listening before transmitting
mode. The random back-off time means that a third device may transmit before either of the two
devices involved in the original collision.
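
The random back-off can be sketched as follows. Standard half-duplex Ethernet uses a truncated binary exponential back-off, in which a station waits a random number of slot times between 0 and 2^n - 1 after its nth collision, with the exponent capped at 10; the simulation below is only an illustration of that idea, and the seed is fixed just to make the output repeatable.

import random

SLOT_TIME_BITS = 512                  # slot time for 10 and 100 Mb/s Ethernet, in bit times

def backoff_slots(collision_count: int) -> int:
    """Choose a back-off delay, in slot times, after the nth collision (illustrative)."""
    exponent = min(collision_count, 10)              # exponent is capped at 10
    return random.randint(0, 2 ** exponent - 1)

random.seed(1)                        # fixed seed so the example output is repeatable
for n in range(1, 6):
    slots = backoff_slots(n)
    print(f"collision {n}: wait {slots} slot times ({slots * SLOT_TIME_BITS} bit times)")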

Question 8. Describe an Ethernet collision domain.


Answer: The group of connected devices that can cause collisions to occur with each other is
known as a collision domain. Collision domains occur at Layer 1 of the networking reference
model.

Hubs and repeaters are intermediary devices at Layer 1 that extend the distance that Ethernet
cables can reach. Hubs (also known as multiport repeaters) enable more devices to connect to the
shared media. Both types of devices have the effect of increasing the size of the collision
domain. Providing network access for more users with hubs reduces the performance for each,
because the fixed capacity of the media has to be shared among more and more devices.

Question 9. Compare the specifications of early Ethernet technologies to current versions.


Answer:
Bandwidth: 10 Megabits per second to 100 Megabits per second compared to 1000 Megabits per
second to 10,000 Megabits per second.

Distance:
Copper media—500 meters to 200 meters to 100 meters (Lower cost and higher bandwidth
outweighed shorter distance).
Fiber media—400 meters to 10 kilometers.

Media:
Coaxial cable to unshielded twisted-pair to optic fiber.
Multiple-hosts per segment (shared media) to single hosts per segment.
Half-duplex to full-duplex.

Cost: Cost per megabits per second per meter has fallen.

Question 10. State the benefits of moving from a hub-based to a switched local network.
Answer:
Scalability:
Hubs share limited bandwidth among users.
Switches provide the full available bandwidth to each host.

Latency:
Latency is the amount of time that a packet takes to get to the destination.
More nodes on a segment increase latency as each waits to transmit.
Hubs regenerate frames, which also adds delay.
Switches also buffer frames, but with only one host on each segment, there is no delay when
each host wants to transmit.

Network Failure:
Incompatible speeds, for example, a 100 megabits-per-second device connected to a 10 megabits-
per-second hub.
Switches can be configured to manage different segment speeds.

Collisions:
Hubs increase the size of the collision domain. Using hubs (Layer 1 devices) to increase the
number of nodes on the same segment can increase the number of collisions.
Switches divide collision domains at Layer 2, reducing, if not eliminating, collisions to each
segment.

Question 11. List and describe the stages of operation of an Ethernet switch.
Answer:

Learning
When a frame of data is received from a node, the switch reads the source MAC address and
saves the address to the lookup table against the incoming interface. The switch now knows
which interface to use when forwarding frames destined for this address.

Flooding
When the switch does not have a destination MAC address in its lookup table, it sends (floods)
the frame out all interfaces, except the one on which the frame arrived.

Forwarding
When the switch has the destination MAC address in its lookup table and the interface mapped to
the MAC address is not the interface that it received the frame on, it forwards the frame out that
interface.

Filtering
When the switch has the destination MAC address in its lookup table and the interface mapped to
the MAC address is the interface that it received the frame on, it drops the frame. Other
interfaces or segments are spared unnecessary and potentially collision-causing traffic.

Aging
Each MAC address entry in the lookup table has a time stamp that is reset each time the entry
is referred to. If the timer expires, the entry is purged from the table. This reduces the number of
entries to look up and frees up memory.
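
These five stages map naturally onto a small lookup-table model, shown below. The sketch is illustrative only: port numbers and MAC address strings are arbitrary, the 300-second aging time is simply an example value, and aging is reduced to a timestamp check.

import time

class LearningSwitch:
    """Minimal model of the learning, flooding, forwarding, filtering, and aging stages."""
    AGING_TIME = 300                                    # seconds; example aging timer

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                                 # MAC address -> (port, timestamp)

    def receive(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = (in_port, time.time())    # Learning: record or refresh the source
        self._age_out()                                 # Aging: purge stale entries
        entry = self.table.get(dst_mac)
        if entry is None:
            return self.ports - {in_port}               # Flooding: unknown destination
        out_port, _ = entry
        if out_port == in_port:
            return set()                                # Filtering: destination is on the arrival segment
        return {out_port}                               # Forwarding: send out the mapped port only

    def _age_out(self):
        now = time.time()
        self.table = {m: e for m, e in self.table.items() if now - e[1] < self.AGING_TIME}

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "AA", "BB"))   # {2, 3, 4} - BB is unknown, so the frame is flooded
print(sw.receive(2, "BB", "AA"))   # {1}       - AA was learned on port 1, so the frame is forwarded
print(sw.receive(1, "CC", "AA"))   # set()     - AA maps to the arrival port, so the frame is filtered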

Question 12. Describe the forwarding of a frame through a switch.


Answer: Ethernet switches selectively forward individual frames from a receiving port to the
port where the destination node is connected. A switch buffers an incoming frame and then
forwards it to the proper port when that port is idle.

This process is referred to as store and forward. With store-and-forward switching, the switch
receives the entire frame, checks the FCS for errors, and forwards the frame to the appropriate
port for the destination node. Because the nodes do not have to wait for the media to be idle, the
nodes can send and receive at full media speed without losses due to collisions or the overhead
associated with managing collisions.
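
The check-then-forward behavior can be condensed into a few lines, as in the sketch below. A CRC-32 is appended to stand in for the FCS and recomputed on receipt; real Ethernet hardware applies the CRC with specific bit-ordering rules, so the byte layout here is a simplification for illustration only.

import zlib

def build_frame(header_and_data: bytes) -> bytes:
    """Append a CRC-32 over the frame contents as a stand-in for the FCS (simplified)."""
    fcs = zlib.crc32(header_and_data)
    return header_and_data + fcs.to_bytes(4, "little")

def store_and_forward(frame: bytes) -> bool:
    """Receive the whole frame, recompute the CRC, and forward only if it matches the FCS."""
    contents, fcs = frame[:-4], int.from_bytes(frame[-4:], "little")
    return zlib.crc32(contents) == fcs               # True: forward the frame; False: drop it

good = build_frame(b"\xff" * 6 + b"\x00\x11\x22\x33\x44\x55" + b"\x08\x06" + b"\x00" * 46)
bad = good[:20] + b"\xee" + good[21:]                # corrupt one byte "in transit"
print(store_and_forward(good))   # True  - the frame is forwarded
print(store_and_forward(bad))    # False - the frame is dropped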

Question 13. When and why does a network host need to broadcast an ARP request?
Answer: When a host has a packet to send to a known IP address but does not know the
destination MAC address to use in the frame, it sends an ARP broadcast to all hosts on the network
requesting that the host with the known IP address reply with its MAC address. This enables the
originating host to store and use the IP and MAC address pair.

Question 14. What is the purpose of the Proxy ARP process?


Answer: To enable the requesting host to map the IP address of a destination in a non-local
network to the MAC address of the gateway (local network router interface). This enables the
frame to be sent to the router, which will then forward the packet.

Question 15. Explain why entries in a network host's ARP cache are cleared if not used for a
period of time.
Answer: Unlimited ARP cache hold times could cause errors when devices leave the network or
change the Layer 3 address. Over time, this could fill the available cache memory.

Page 3:

In this activity, you will continue to build a more complex model of the Exploration lab network.

Packet Tracer Skills Integration Instructions (PDF)

Click the Packet Tracer icon to launch the Packet Tracer activity.

9.9.1 - Summary and Review


Link to Packet Tracer Exploration: Skills Integration Challenge: Switched Ethernet

In this activity, you will continue to build a more complex model of the Exploration lab network.

Page 4:

To Learn More
Reflection Questions

Discuss the move of Ethernet from a LAN technology to also becoming a Metropolitan and
Wide Area technology. What has made this possible?

Initially used only for data communications networks, Ethernet is now also being applied in real-
time industrial control networking. Discuss the physical and operational challenges that Ethernet
has to overcome to be fully applied in this area.

9.9.1 - Summary and Review


The diagram depicts a collage of people using computers and networks.

9.10 Chapter Quiz

9.10.1 Chapter Quiz

Page 1:

9.10.1 - Chapter Quiz


1.Match the Ethernet frame field to the proper function.
Fields:
A. Preamble (7 Bytes) and Start of Frame Delimiter (1 Byte)
B. Frame Check Sequence (4 Bytes)
C. Destination Address (6 Bytes)
D. Source Address (6 Bytes)
E. Length/Type (2 Bytes)
F. 8 0 2 dot 2 Header and Data (46 to 1500 Bytes)

Functions:
1. Contains the encapsulated data from a higher layer.
2. Identifies the intended recipient.
3. Value equal to or greater than 0x0600 indicates the encapsulated protocol.
4. Used for synchronization between the sending and receiving devices.
5. Used to detect errors in a frame.
6. Identifies the frame's originating NIC or interface.

2.What is a primary function of CSMA/CD in an Ethernet network?


A.Assigns MAC addresses to hosts.
B.Defines the area in which collisions can occur.
C.Identifies the start and end of an Ethernet frame.
D.Provides a method to determine when and how hosts access the Ethernet medium.
E.Creates the header in the Ethernet frame as data is encapsulated.
F.Maps an IP address to a MAC address.

3.What technology is the IEEE 8 0 2 dot 3a c standard specifically designed to accommodate?


A.Simple Network Management Protocol
B.Address Resolution Protocol
C.Virtual Local Area Network
D.Voice over IP
E.Internet Control Message Protocol

4.What is the purpose of media access control?


A.It identifies which workstation has sent a frame.
B.It determines which Layer 3 protocol should handle a frame.
C.It identifies which Ethernet frame format to use on the network.
D.It determines which workstation on a shared medium LAN is allowed to transmit data.

5.How are collisions detected on an Ethernet network?


A.Stations identify the altered FCS field on the colliding packets.
B.The signal amplitude on the networking media is higher than normal.
C.Traffic on the network cannot be detected due to a blockage.
D.The signal amplitude on the networking media is lower than normal.

6.Which Layer 2 sublayer provides service to the network layer of the O S I Model?
A.FCS
B.IEEE 8 0 2 dot 3
C.LLC
D.MAC

7.Refer to the following diagram description to answer the question.


Diagram description:
A PC, a printer, and a router are connected to a hub. The router is also connected to the Internet.

Which devices in the diagram must have a MAC address?


A.Only the PC
B.Only the router
C.PC and router
D.PC, hub, and router
E.PC, printer, and router

8.Refer to the following diagram description to answer the question.


Diagram description:
Four PC's, Host A, Host B, Host C, and Host D, are connected to a hub.

Host A has reached 50% completion in sending a 1 KB Ethernet frame to Host D, when Host B
wishes to transmit its own frame to Host C. What must Host B do?
A.Host B can transmit immediately since it is connected on its own cable segment.
B.Host B must wait to receive a CSMA transmission from the hub, to signal its turn.
C.Host B must send a request signal to Host A by transmitting an interframe gap.
D.Host B must wait until it is certain that Host A has completed sending its frame.

9.Which of the following are fields in an 8 0 2 dot 3 Ethernet frame? (Choose three.)
A.Source physical address
B.Source logical address
C.Media type identifier
D.Frame check sequence
E.Destination physical address
F.Destination logical address

10.What address type does a switch use to make selective forwarding decisions?
A.Source IP
B.Destination IP
C.Source MAC
D.Destination MAC
