
Network Infrastructures A.A. 2010-2011






LTE and 4G: a comparison with
WiMAX


Name of authors: Gaetano Guida, Valerio Massa, Fabio Parente












1. Introduction ...................................................................................................................................... 4
2. WiMAX vs LTE: a system overview ..................................................................................................... 5
2.1. WiMAX Network Architecture ........................................................................................................... 5
2.2. LTE Network Architecture ................................................................................................................. 6
3. WiMAX Protocol Architecture ............................................................................................................ 7
3.1. LLC Layer ............................................................................................................................................ 7
3.2. MAC Layer.......................................................................................................................................... 8
3.2.1. Convergence Sub-layer .............................................................................................................. 8
3.2.2. Common Part Sub-layer ............................................................................................................. 8
3.2.3. Security Sub-layer ...................................................................................................................... 8
4. LTE Protocol Architecture .................................................................................................................. 9
4.1. Protocol Stack .................................................................................................................................... 9
4.1.1. NAS Layer ................................................................................................................................... 9
4.1.2. RRC Layer ................................................................................................................................... 9
4.1.3. PDCP Layer ............................................................................................................................... 10
4.1.4. RLC Layer ................................................................................................................................. 10
4.1.5. MAC Layer ................................................................................................................................ 11
4.2. Retransmission Handling ................................................................................................................. 12
4.3. Scheduling ....................................................................................................................................... 13
5. Comparison between WiMAX and LTE protocol architectures ........................................................... 14
6. Overview of LTE physical layer ......................................................................................................... 15
6.1. Multiple access technology in the downlink: OFDM and OFDMA .................................................. 15
6.2. OFDMA and SC-FDMA compared .................................................................................................... 16
6.3. SC-FDMA signal generation ............................................................................................................. 17
6.4. Spectrum flexibility: FDD and TDD .................................................................................................. 18
6.5. Physical channels and modulation (36.211) .................................................................................... 19
6.6. Frame Structure ............................................................................................................................... 19
6.7. Resource element and resource block ............................................................................................ 21
6.8. Example: FDD downlink mapping to resource elements ................................................................ 21
6.9. Example: TDD mapping to resource elements ................................................................................ 22
6.10. Scheduling and link adaptation ....................................................................................................... 23
7. Comparison of WiMAX and LTE physical layer .................................................................................. 24
7.1. Radio access modes and spectrum considerations ......................................................................... 24
7.2. Data Rates ........................................................................................................................................ 24
7.3. Multiple Access Technology ............................................................................................................ 24
7.3.1. OFDMA .................................................................................................................................... 24
7.3.2. SC-FDMA .................................................................................................................................. 25
7.4. Physical Channelization and Mapping ............................................................................................. 25
7.4.1. WiMAX ..................................................................................................................................... 25
7.5. Frame Structure ............................................................................................................................... 26
8. QoS Introduction ............................................................................................................................. 27
9. WiMAX QoS .................................................................................................................................... 27
9.1. WiMAX Air Interface Scheduler ....................................................................................................... 28
9.2. WiMAX QoS popular applications ................................................................................................... 29
9.3. WiMAX QoS requirements .............................................................................................................. 29
9.4. WiMAX QoS classes mapping .......................................................................................................... 29
9.5. WiMAX MAC layer and QoS ............................................................................................................. 30
9.6. WiMAX Admission Control .............................................................................................................. 30
10. LTE QoS ........................................................................................................................................... 31
10.1. Introduction to QoS over LTE .......................................................................................................... 31
10.2. The Service Data Flows and the Class-Based Method ..................................................................... 32
10.3. LTE Bearers ...................................................................................................................................... 32
10.4. LTE definitions ................................................................................................................................. 33
10.5. LTE air interface scheduler .............................................................................................................. 33
10.6. A New Approach for the Uplink Admission Control ........................................................................ 33
10.6.1. Reference AC Algorithm .......................................................................................................... 34
10.6.2. Proposed AC Algorithm ........................................................................................................... 34
11. Comparison of WiMAX and LTE QoS ................................................................................................. 34
12. 4G QoS ............................................................................................................................................ 35
12.1. QoS Optimization Architecture ....................................................................................................... 35
12.2. Components of the Architecture ..................................................................................................... 35
12.2.1. SAU Service Archives Unit ..................................................................................... 36
12.2.2. CSAU Cumulative Service Archives Unit ............................................................... 36
12.3. Discussion ........................................................................................................................................ 36
13. References ...................................................................................................................................... 38

1. Introduction
The communication industry has been formulating new standards to efficiently deliver high speed broadband
mobile access in a single air interface and network architecture at low cost to operators and end users. Two standards,
IEEE 802.16 (WiMAX) and 3GPP LTE are leading the pack towards forming the next-generation of mobile network
standards.
WiMAX (the IEEE 802.16 standard) comes from the IEEE family of protocols and extends wireless access from
the Local Area Network (typically based on the IEEE 802.11 standard) to Metropolitan Area Networks (MAN) and Wide
Area Networks (WAN). It uses a new physical layer radio access technology called OFDMA (Orthogonal Frequency
Division Multiple Access) for uplink and downlink. While the initial version, 802.16-2004, focused on fixed and nomadic
access, the later version, 802.16-2005, an amendment to 802.16-2004, includes many new features and functionalities
needed to support enhanced QoS and high-mobility broadband services at speeds greater than 120 km/h. 802.16-
2004 is also called 802.16d and is referred to as fixed WiMAX, while 802.16-2005 is referred to as 802.16e or
Mobile WiMAX. Mobile WiMAX uses an all-IP backbone with uplink and downlink peak data rate capabilities of up
to 75 Mbps, depending on the antenna configuration and modulation, with a practical rate of about 10 Mbps within a 6-mile (10 km)
radius. The earliest iterations of WiMAX were approved with TDMA in TDD and FDD modes with line-of-sight (LOS)
propagation across the 10 to 66 GHz frequency range, later expanded to include operation in the 2 to 11
GHz range with non-line-of-sight (NLOS) capability using the robust OFDMA PHY layer, whose sub-channelization allows
dynamic allocation of time and frequency resources to multiple users. The 802.16m (Mobile WiMAX Release 2) Task
Force is currently working on next-generation systems, aiming at optimizations for improved interworking
and coexistence with other access technologies such as 3G cellular systems, WiFi and Bluetooth, and at raising peak
rates to the 4G standards set by the ITU under the IMT-Advanced umbrella, which calls for data rates of 100 Mbps for high
mobility and 1 Gbps for fixed/nomadic wireless access.
LTE, on the other hand, evolves from the third-generation technology based on WCDMA and
defines the long-term evolution of the 3GPP UMTS/HSPA cellular technology. The specifications of these efforts are
formally known as the Evolved UMTS Terrestrial Radio Access (E-UTRA) and Evolved UMTS Terrestrial Radio Access
Network (E-UTRAN), commonly referred to by the 3GPP project name LTE. The first version of LTE is documented in Release
8 of the 3GPP specifications. It defines a new physical layer radio access technology based on Orthogonal Frequency
Division Multiple Access (OFDMA) for the downlink, similar in concept to the PHY layer of Mobile WiMAX, and uses SC-
FDMA (Single Carrier Frequency Division Multiple Access) for the uplink. LTE supports high-performance mobile access
up to 350 km/h, with 500 km/h under consideration. Peak data rates range from 100 to 326.4 Mbps on the
downlink and 50 to 86.4 Mbps on the uplink, depending on the antenna configuration and modulation depth. LTE
also targets the data rates set by the 4G IMT-Advanced standard. The development of the LTE air interface is
closely linked with the 3GPP System Architecture Evolution (SAE), which defines the overall system architecture and
the Evolved Packet Core (EPC). LTE aims to provide an all-IP backbone with a reduction in cost per bit, better service
provisioning, flexibility in the use of new and existing frequency bands, a simple network architecture with open interfaces,
and lower power consumption.

2. WiMAX vs LTE: a system overview
2.1. WiMAX Network Architecture
The WiMAX network architecture is designed to provide an IP friendly framework with scalable data capacity,
open access to innovative applications and services, enhanced QoS and mobility. The IEEE 802.16 standards define the
structure of the Physical and Link Layer operations that occur between mobile stations (MSs) and base stations (BSs).
The WiMAX Network Reference Model (NRM) logically representing a WiMAX network architecture is shown in the
figure below.

FIGURE 1 - WIMAX NETWORK REFERENCE MODEL
The Model shown in the figure identifies the key functional entities and reference points for network
interoperability. The network access provider (NAP) is a business entity that provides WiMAX radio access
infrastructure while the Network Service Provider (NSP) is the business entity that provides IP connectivity and
WiMAX services to the subscribers according to negotiated SLAs (Service Level Agreements) with one or more NAPs.
Moreover, the WiMAX NRM has several logical entities, such as:
Connectivity Service Network (CSN) It provides IP connectivity services to WiMAX subscribers and may
comprise network elements such as routers, servers and home agents to support multicast, broadcast and
location based services. Some of the key functions of CSN include: IP address management, QoS policy and
admission control, ASN-CSN tunneling support, subscriber billing, inter-CSN tunneling for roaming, location
based services, peer-to-peer services, over-the-air activation and provisioning.
Mobile Station (MS)/Subscriber Station (SS) It generically refers to both fixed and mobile device terminals
providing wireless connectivity between single or multiple hosts and a WiMAX network.
Access Service Network (ASN) It performs various network functions required to provide radio access to the
MSs, such as: layer 2 connectivity with MSs, transfer of AAA (Authentication, Authorization and Accounting)
messages to the H-NSP (Home NSP), NSP discovery and selection, radio resource management, paging and
location management, ASN-CSN tunneling.
The ASN may be implemented as an integrated ASN where all functions are collated in a single logical entity,
or it may have a decomposed configuration in which those functions are mapped into two separate nodes:
Base Station (BS) It is a logical entity that primarily performs the radio-related functions of the ASN, interfacing
with the MS. Each BS is associated with one sector with one frequency assignment and may incorporate a
scheduler.
ASN Gateway (ASN-GW) It is a logical entity that represents an aggregation of centralized functions related
to QoS, security and mobility management for all the data connections served by its association with the BSs.
It also performs IP layer interactions with the CSN and with other ASNs for mobility.
2.2. LTE Network Architecture
The aim of Long Term Evolution Network Architecture is to:
Provide open interfaces to support multi-vendor deployments
Provide robustness: no single point of failure
Support multi-RAT (Radio Access Technology) with resources controlled from the network
Support seamless mobility to legacy systems as well as to other emerging systems
Maintain appropriate level of security
Based on the goals set out above, the LTE model gets rid of the RNC (Radio Network Controller), which sits
in the UTRAN (Universal Terrestrial Radio Access Network) of the 3G UMTS networks. The LTE E-UTRAN (Evolved
UTRAN), in fact, is greatly simplified and has a new network element called eNB (evolved Node-B) that provides the E-
UTRA user plane and control plane terminations toward the User Equipment (UE).

FIGURE 2 - LTE NETWORK ARCHITECTURE
In the figure above we can see the main components of the LTE Network Architecture:
Evolved Node-B (eNB) It is the node that has to be contacted by a user who wants to access the network. It
performs many functions, such as: radio resource management, IP header compression and encryption,
selection of MME at UE attachment, routing of user plane data towards S-GW, scheduling and transmission
of paging and broadcast information, mobility measurement and configuration reporting.
Mobility Management Entity (MME) It is the key control node for the LTE access network, used to process
signaling information between the Core Network and the UE. It is responsible for bearer management and for
choosing an S-GW for a UE at the initial attach. Moreover, it is involved in the authentication process of the
user and it is the termination point in the network for ciphering/integrity protection operations. Finally, it
also provides the control plane function for mobility between LTE and 2G/3G access networks, with the S3
interface terminating at the SGSN (Serving GPRS Support Node).
Home Subscriber Server (HSS) It contains the subscription data of users, such as roaming restrictions and
QoS profiles, as well as dynamic information such as the identity of the MME to which a user is currently connected.
Serving Gateway (S-GW) It is responsible for transferring user IP packets: it routes and forwards them, while
also acting as the mobility anchor for the user plane during inter-eNB handovers and as the anchor for
mobility between LTE and other 3GPP technologies. Furthermore, it manages and stores UE contexts (e.g.
parameters of the IP bearer service) and performs replication of the user traffic in case of lawful interception.
Packet Data Network Gateway (PDN-GW) It provides connectivity from the UE to external packet data
networks by being the point of exit and entry of traffic for the UE. It performs policy enforcement, packet
filtering for each user, charging support, lawful interception and packet screening. Another key role of the
PDN-GW is to act as the anchor for mobility between 3GPP and non-3GPP technologies, such as WiMAX.
Policy Control and Charging Rules Function (PCRF) It handles policy decision making and provides
QoS authorization (i.e. bit rate and QoS class identifier), deciding how certain data
flows are treated and ensuring that each data flow conforms to the user's subscription profile.
Evolved Packet Data Gateway (ePDG) It is responsible for interworking between the LTE Core Network
and untrusted non-3GPP networks that require secure access, such as WiFi, LTE metro and femtocell access
networks. The ePDG enforces security using tunnel authentication and authorization, transport-
level packet marking, lawful interception and other functions.
As shown in the figure above, the eNBs are connected together through an interface called X2, enabling direct
communication between the elements and eliminating the need to channel data back and forth through the RNC. The
E-UTRAN is connected to the EPC (Evolved Packet Core) through the S1 interface, which connects the eNBs to the
MME and the S-GW elements, through a many-to-many relationship.
3. WiMAX Protocol Architecture
The WiMAX protocol stack covers the first two levels of the ISO/OSI stack. The Physical layer is concerned
with tasks such as coding and modulation, while at level 2 there is the MAC layer which, as we can see from the figure
below, is divided into three sub-layers. Between the MAC layer and the network layer, there is the LLC layer, which is
used by all the 802.x standards. In the next sections, all these layers will be briefly discussed.

FIGURE 3 - WIMAX PROTOCOL STACK
3.1. LLC Layer
The Logical Link Control layer sits at the top of layer 2 in the ISO/OSI stack, so it acts as an interface
between the MAC layer and the network layer. The main tasks of LLC are:
Multiplexing protocols transmitted over the MAC (in uplink) and decoding them (in downlink)
Providing flow and error control
When LLC receives a packet from the upper layers, it encapsulates it in an LLC PDU (Protocol Data Unit), adding
an LLC header, and sends it to the MAC layer for transmission. The LLC header tells the Data Link layer what to do
with a packet once a frame is received: a receiving host looks in the LLC header to find
out where the packet is destined, for example the IP protocol at the Network layer, or IPX.
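To make the demultiplexing step concrete, here is a minimal sketch in Python. The header format is heavily simplified: a real IEEE 802.2 LLC header carries DSAP, SSAP and Control fields, and IP traffic in practice rides on a SNAP extension.

```python
# Illustrative sketch of LLC demultiplexing (heavily simplified: a real
# IEEE 802.2 header carries DSAP, SSAP and Control fields, and IP normally
# uses a SNAP extension header).
LLC_PROTOCOLS = {
    0x06: "IP",   # DSAP value assigned to IP
    0xE0: "IPX",  # DSAP value historically used by Novell IPX
}

def encapsulate(payload: bytes, dsap: int) -> bytes:
    """Sender: prefix the upper-layer packet with a one-byte 'LLC header'."""
    return bytes([dsap]) + payload

def demultiplex(frame: bytes):
    """Receiver: read the header to find where the packet is destined."""
    protocol = LLC_PROTOCOLS.get(frame[0], "unknown")
    return protocol, frame[1:]
```

Round-tripping a packet through `encapsulate` and `demultiplex` recovers both the destination protocol and the original payload.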
LLC has three possible implementations, offering the upper layers three different service modes:
Logical Data Link It's an unreliable, connectionless service in which each packet is
transmitted independently from the others, without any preliminary communication. Unicast, multicast and
broadcast transmissions are allowed in this mode. It is unreliable in the sense that neither the delivery nor the
order of the messages is ensured. Moreover, neither error correction nor flow control is provided: they
must be handled by the upper layers.
Data Link Connection It's a reliable, connection-oriented service, which requires the opening of a
communication channel between source and destination to enable data transmission. Mechanisms of error
correction and sequencing are provided, in order to ensure the ordered delivery of the messages.
Alternative Logical Data Link It's an alternative version of the first mode. Although it is connectionless,
this mode provides an acknowledgement for the messages and ensures their ordered delivery.
Acknowledgements may be sent independently or piggybacked on response messages.
3.2. MAC Layer
The WiMAX MAC layer is designed for point-to-multipoint applications. It takes MAC Service Data Units (MSDUs) from
the upper layers and organizes them into MAC Protocol Data Units (MPDUs) for transmission over the air. The MAC
layer supports variable length frames for transmission and it is divided into three sub-layers, which will be illustrated
in the following sections.
3.2.1. Convergence Sub-layer
The WiMAX MAC layer is designed to interface with a variety of higher-layer protocols, such as Asynchronous
Transfer Mode (ATM), Ethernet (IEEE 802.3), IP and TDM voice. Thus, the main task of the Convergence Sub-layer is
to classify the higher-layer MSDUs and address them to the appropriate connection: this is performed by associating
the received MSDU to the proper Connection Identifier (CID). This mapping takes various forms, depending on the
type of service, which is expressed by the Service Flow Identifier (SFID), to which the MSDU has to be associated too.
Another (optional) task that can be performed at this layer (in uplink) is Payload Header
Suppression (PHS), which avoids the transmission of redundant information in the headers of the MSDUs. To do
that, every received MSDU is mapped to a PHS rule by the classifier and is prefixed with a Payload Header Suppression
Index (PHSI), which references the Payload Header Suppression Field (PHSF) to suppress. In downlink, the receiver
uses CID and PHSI to restore the PHSF.
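A minimal sketch of this mechanism follows. The rule contents and indexing are simplified and the values are hypothetical; the real classifier also uses a PHS mask (PHSM) and optional verification (PHSV).

```python
# Sketch of Payload Header Suppression (PHS); rule values are hypothetical.
PHS_RULES = {
    # (CID, PHSI) -> PHSF: the static header bytes to suppress and restore
    (42, 1): b"\x45\x00\x00\x54",
}

def suppress(cid: int, phsi: int, msdu: bytes) -> bytes:
    """Uplink: strip the redundant PHSF and prefix the PHSI instead."""
    phsf = PHS_RULES[(cid, phsi)]
    assert msdu.startswith(phsf), "MSDU does not match the PHS rule"
    return bytes([phsi]) + msdu[len(phsf):]

def restore(cid: int, pdu: bytes) -> bytes:
    """Downlink: use CID and PHSI to re-insert the suppressed PHSF."""
    phsi, payload = pdu[0], pdu[1:]
    return PHS_RULES[(cid, phsi)] + payload
```

Round-tripping an MSDU through `suppress` and `restore` yields the original bytes, while the suppressed header never crosses the air interface.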
3.2.2. Common Part Sub-layer
The Common Part Sub-layer is responsible for the core tasks of the MAC layer, i.e. system access, fragmentation
of data units, QoS control, scheduling, retransmissions, bandwidth requests, bandwidth management and connection
establishment/maintenance. More precisely, this layer provides four kinds of connection, reflecting different QoS
requirements at the management level:
Basic Connection Used for short and critical messages.
Primary Management Connection Used for longer and more delay-tolerant messages, such as
authentication and connection setup.
Secondary Management Connection It transfers standards-based messages, such as DHCP or SNMP.
Transport Connection Unidirectional to facilitate different QoS and traffic parameters.
Moreover, the Common Part Sub-layer also performs fragmentation and concatenation of the received MSDUs.
Multiple MAC PDUs may be concatenated into a single transmission, in either the uplink or the downlink direction, in
order to save power and bandwidth. Conversely, when the transport block is smaller than the
PDU to be sent, the PDU must be divided into multiple fragments.
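The packing logic can be sketched as follows. This is a simplified illustration that ignores the subheader overhead and the fragmentation subheaders real WiMAX adds.

```python
def pack(msdus, block_size):
    """Sketch: concatenate MSDUs into fixed-size transport blocks,
    fragmenting an MSDU whenever it does not fit in the current block
    (subheader/framing overhead is ignored for clarity)."""
    blocks, current = [], b""
    for msdu in msdus:
        while msdu:
            room = block_size - len(current)
            current += msdu[:room]      # concatenate, or take a fragment
            msdu = msdu[room:]
            if len(current) == block_size:
                blocks.append(current)  # block full: ship it
                current = b""
    if current:
        blocks.append(current)          # last, partially filled block
    return blocks
```

For example, two 4-byte MSDUs packed into 6-byte blocks yield one full block containing the first MSDU plus a fragment of the second, and a second block with the remaining fragment.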
3.2.3. Security Sub-layer
The Security Sub-layer, also known as the Privacy Sub-layer, aims to provide privacy to the
subscribers across the wireless network and protection from unauthorized access: in practice, it encrypts the
link between the SS and the BS. It provides encryption, authentication and secure key exchange functions on MPDUs by
using the DES, AES and RSA algorithms. The ciphered MPDUs are finally sent to the PHY layer for further processing.

4. LTE Protocol Architecture
4.1. Protocol Stack
The Long Term Evolution protocol architecture embraces the first three levels of the ISO/OSI stack. As we
can see from the figure, two sub-layers of the LTE protocol stack operate at level 3: NAS and RRC. However, they are
concerned only with the control plane, so no user traffic passes through them. The sub-layers concerned with the user
plane are at level 2: going down, we have PDCP, RLC and MAC.
In the next sections, we will analyze all these sub-layers, focusing on their tasks and on what happens to a packet
passing through them.

FIGURE 4 - LTE PROTOCOL STACK
4.1.1. NAS Layer
The Non Access Stratum Protocol runs between the Mobility Management Entity (MME) and the User
Equipment (UE). It is used for control-purposes such as network attach, authentication, setting up of bearers, and
mobility management. All NAS messages are ciphered and integrity protected by the MME and UE.
4.1.2. RRC Layer
The Radio Resource Control protocol acts between the eNB and the UE. Like the NAS layer, RRC is
concerned with the control plane. Functions handled by the RRC include the following:
Processing of broadcast system information, which provides information that allows a device to decide if it
wants to connect to the network or not
Paging, which indicates to a device in idle mode that it might have an incoming call
Integrity protection and ciphering of RRC messages (RRC uses different keys than the user plane)
Radio Bearer setting up and maintenance (logical channels at the top of the PDCP layer)
Mobility functions (handover decisions during active calls, based on neighbor cell measurements sent by the
UE, and cell reselection when idle)
UE measurement reporting and control of signal quality, both for the current base station and other base
stations that the UE can hear
As mentioned above, there exist two RRC states: idle (the radio is not active, but
an ID is assigned and tracked by the network) and connected (active radio operations). When a node is in the
connected mode, it performs measurements and transmits and receives data. The idle state, on the other hand, handles cell reselection,
network selection, reception and paging. A device typically spends much longer in idle mode; time in
connected mode is much shorter and depends on the activity level.
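The two states and their associated activities can be sketched as a small state machine. The transition triggers are simplified here: real RRC connection establishment and release involve several signaling messages.

```python
# Minimal sketch of the two RRC states and the activities tied to each.
# Transition triggers are simplified: real RRC connection establishment
# and release involve several signaling procedures.
class RRCStateMachine:
    def __init__(self):
        self.state = "RRC_IDLE"       # radio inactive, but an ID is tracked

    def connect(self):
        self.state = "RRC_CONNECTED"  # active radio operations

    def release(self):
        self.state = "RRC_IDLE"

    def allowed_actions(self):
        if self.state == "RRC_IDLE":
            return {"cell reselection", "network selection", "paging reception"}
        return {"measurement", "transmit", "receive"}
```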
4.1.3. PDCP Layer
The Packet Data Convergence Protocol layer acts in both the control and the user plane and is responsible for several
tasks.
In uplink, when PDCP receives an IP packet from the upper layer, it first assigns it a sequence number. The second
task is the compression of the headers of user plane IP packets, using Robust Header Compression (ROHC), in order to
enable efficient use of air interface bandwidth. ROHC can be performed in three different modes:
Unidirectional Mode (U-Mode) Packets are only sent in one direction (from compressor to decompressor),
making ROHC usable over links where a return path is unavailable.
Bidirectional Optimistic Mode (O-Mode) Similar to the U-Mode, except that a feedback channel is used to
send error recovery requests and updates from decompressor to compressor. This mode, however,
makes only sparse use of the feedback channel and aims to maximize compression
efficiency.
Bidirectional Reliable Mode (R-Mode) Involves a more intensive usage of the feedback channel and a
stricter logic at both the compressor and the decompressor, in order to prevent loss of context
synchronization.
PDCP then performs integrity protection of control plane data and ciphering of both user plane and
control plane data. Finally, it attaches its header (which also contains the deciphering information) and sends the
resulting PDU to the RLC layer.
In downlink, the operations are symmetrical: when PDCP receives a packet from the lower layers, it reads the
deciphering information from the PDCP header and then removes it. After that, it can decipher the user plane and
control plane data and verify the integrity of the control plane data. Finally, the headers of the user
plane packets are decompressed, and the packets are delivered in order to the upper layers thanks to the sequence
numbers.
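The uplink/downlink processing order can be sketched as follows. The compression and ciphering steps are placeholders (byte reversal and XOR with a fixed key), standing in for ROHC and the real LTE ciphering algorithms.

```python
# Sketch of the PDCP uplink/downlink processing order. Compression and
# ciphering are placeholders (byte reversal, XOR with a fixed key), not
# ROHC or the real LTE ciphering algorithms.
KEY = 0x5A

def pdcp_send(sn: int, ip_packet: bytes) -> bytes:
    compressed = ip_packet[::-1]                   # placeholder for ROHC
    ciphered = bytes(b ^ KEY for b in compressed)  # placeholder ciphering
    return bytes([sn]) + ciphered                  # attach header carrying SN

def pdcp_receive(pdu: bytes):
    sn, body = pdu[0], pdu[1:]                     # read and remove header
    deciphered = bytes(b ^ KEY for b in body)
    return sn, deciphered[::-1]                    # decompress, then deliver
```

Feeding the output of `pdcp_send` into `pdcp_receive` recovers the sequence number and the original packet, mirroring the symmetrical uplink/downlink description above.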
4.1.4. RLC Layer
The Radio Link Control layer is used to format and transport traffic between the UE and the eNB. RLC performs
segmentation and reassembly and provides three different reliability modes, which are used by different radio bearers
for different purposes:
Transparent Mode (TM) It is used only for control plane signaling, for a few RLC messages during the initial
connection, and for this reason it does not perform segmentation of the RLC SDUs. Moreover, it does
not guarantee delivery and does not add an RLC header: it simply passes the messages through. This mode is
used especially when the PDU sizes are known a priori, such as for broadcast system information.
Unacknowledged Mode (UM) Unlike TM, this mode does provide segmentation (in uplink) and
reassembly (in downlink) of RLC SDUs. The operations performed in uplink in this mode can be summarized as follows:
1. Receive the upper layer SDU from PDCP or RRC
2. Add the SDU to the transmission buffer
3. Segment the SDU into RLC PDUs
4. Add the RLC header to the PDUs
5. Pass the PDUs to MAC for transmission over the air
In downlink, the operations are symmetrical.
The UM mode is suitable for transport of Real Time services (like streaming) because they are delay sensitive
and cannot wait for retransmissions.
Acknowledged Mode (AM): Unlike UM, this mode guarantees a reliable, in-sequence delivery
service. The operations performed in this mode are exactly the same as in UM, but here, after the
segmentation of the SDU, a copy of the transmission buffer is made. When a packet is successfully delivered,
the sender node receives a positive ACK from the remote end: the RLC layer at the sender node then accesses
the retransmission queue and removes the acknowledged buffer; after this, it updates the received sequence
numbers to advance the sliding window. Conversely, when the delivery of a packet fails, the sender node
receives a negative ACK from the remote end: in this case, the RLC layer at the sender node accesses the
transmission queue, extracts the undelivered buffer and retransmits it. As for in-sequence
delivery, packet order is corrected using the sequence numbers in the RLC header. The AM mode is
appropriate for non-RT services, such as carrying TCP traffic.
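The ACK/NACK handling described above can be sketched as a small sender model (a hypothetical illustration with invented names; real RLC AM signals status with STATUS PDUs rather than per-packet flags):

```python
class RlcAmSender:
    """Toy RLC AM sender: keeps copies of sent PDUs until positively acknowledged."""

    def __init__(self):
        self.retx_queue = {}  # sequence number -> buffered copy of the PDU
        self.next_sn = 0

    def send(self, pdu: bytes) -> int:
        sn = self.next_sn
        self.retx_queue[sn] = pdu  # copy kept for possible retransmission
        self.next_sn += 1
        return sn

    def on_ack(self, sn: int):
        # Positive ACK: remove the acknowledged buffer from the queue.
        self.retx_queue.pop(sn, None)

    def on_nack(self, sn: int) -> bytes:
        # Negative ACK: extract the undelivered buffer and retransmit it.
        return self.retx_queue[sn]


sender = RlcAmSender()
sn0 = sender.send(b"pdu-0")
sn1 = sender.send(b"pdu-1")
sender.on_ack(sn0)           # pdu-0 delivered: buffered copy dropped
retx = sender.on_nack(sn1)   # pdu-1 failed: fetched again for retransmission
```

The receiver side would reorder arriving PDUs by their sequence numbers before delivering SDUs upward, as the text describes.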
The motivation for the segmentation performed at this layer is quite simple. As seen above, the segmentation
process unpacks an RLC SDU into RLC PDUs. The RLC PDU size is based on the transport block size and is not
fixed, because it depends on the channel conditions of each UE to which the eNB assigns resources on the downlink. Transport
block size can vary based on bandwidth requirements, distance, power levels or modulation scheme. The process also
depends on the size of the packets, e.g. large packets for video or small packets for voice over IP. If an RLC SDU is
large, or the available radio data rate is low (resulting in small transport blocks), the RLC SDU may be split among
several RLC PDUs. If the RLC SDU is small, or the available radio data rate is high, several RLC SDUs may be packed into
a single PDU.
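Both cases (splitting one large SDU, or packing several small SDUs into one PDU) can be illustrated with a toy packing routine. This is a hypothetical sketch that ignores header overhead; the function name and sizes are ours:

```python
def pack_rlc(sdu_queue, transport_block_size):
    """Fill one transport block: concatenate whole SDUs, segment the one that overflows."""
    payload, leftover = [], None
    room = transport_block_size
    while sdu_queue and room > 0:
        sdu = sdu_queue.pop(0)
        if len(sdu) <= room:          # small SDU: pack it whole into this PDU
            payload.append(sdu)
            room -= len(sdu)
        else:                         # large SDU: segment, keep the tail for the next PDU
            payload.append(sdu[:room])
            leftover = sdu[room:]
            room = 0
    if leftover is not None:
        sdu_queue.insert(0, leftover)
    return b"".join(payload)


big = [b"x" * 300]                    # one large SDU, small transport blocks
tb1 = pack_rlc(big, 128)              # first 128-byte segment; 172 bytes remain queued
small = [b"a" * 20, b"b" * 20]        # several small SDUs, a large transport block
tb2 = pack_rlc(small, 128)            # both SDUs packed into a single PDU
```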
4.1.5. MAC Layer
The Medium Access Control layer is responsible for managing the Hybrid ARQ function (HARQ), a
transport-block-level automatic retry. It also performs the transport-to-logical channel mapping, a function that
extracts the different logical channels from the transport block for the higher layers.
The MAC layer receives data as MAC SDUs from the RLC layer. The MAC SDUs are combined with a MAC header
and MAC control elements to form MAC PDUs. The MAC header is further divided into sub-headers,
where every sub-header contains a Logical Channel ID (LCID) and a length field. The LCID indicates which
type of control element is used in the MAC payload field, or the type of logical channel. The length field indicates
the length of the MAC SDUs or MAC control elements.
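A toy encoding of such a sub-header might look like this (the field widths are illustrative and do not match the exact bit layout of the 3GPP specification):

```python
def mac_subheader(lcid: int, length: int) -> bytes:
    """Pack a toy MAC sub-header: LCID in one byte, 16-bit big-endian length field."""
    assert 0 <= lcid < 32 and 0 <= length < 65536
    return bytes([lcid]) + length.to_bytes(2, "big")


def parse_subheader(raw: bytes):
    """Recover (lcid, length) from a toy sub-header."""
    return raw[0], int.from_bytes(raw[1:3], "big")


hdr = mac_subheader(lcid=3, length=120)  # e.g. a 120-byte SDU on logical channel 3
```

Parsing the sub-header back yields the same pair, which is how the receiver locates each SDU or control element inside the MAC payload.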
As said above, a very important task performed by MAC is the mapping between channels. In fact, MAC
interfaces with RLC through different logical channels: they represent data transfer services offered by the MAC
and are defined by what type of information they carry; types of logical channels include control channels (for control-plane
data) and traffic channels (for user-plane data). On the other hand, the link between MAC and PHY is
represented by different transport channels: they represent data transfer services offered by the PHY and are defined
by how the information is carried, i.e. the physical layer modulation and the way it is encoded. So, for
example, in downlink it is critical to send a packet arriving from a certain transport channel to the appropriate logical
channel.


FIGURE 5 - MAC DOWNLINK CHANNEL MAPPING FIGURE 6 - MAC UPLINK CHANNEL MAPPING

As we can see from the figures above, the logical channels standing between MAC and RLC are:
Paging Control Channel (PCCH): A downlink channel that transfers paging information. It is used when the
network does not know the cell location of the UE.
Broadcast Control Channel (BCCH): A downlink channel for broadcasting system control information.
Common Control Channel (CCCH): An uplink channel for transmitting control information between UEs and
the network. It is used by UEs that have no RRC connection with the network.
Dedicated Control Channel (DCCH): A point-to-point bi-directional channel that transmits dedicated control
information between a UE and the network. It is used by UEs that have an RRC connection.
Dedicated Traffic Channel (DTCH): A point-to-point channel, dedicated to one UE, for the transfer of user
information. It can exist in both uplink and downlink.
Multicast Control Channel (MCCH): A point-to-multipoint downlink channel used for transmitting MBMS
(Multimedia Broadcast and Multicast Service) control information from the network to the UE. It is used only
by UEs that receive MBMS.
Multicast Traffic Channel (MTCH): A point-to-multipoint downlink channel for transmitting traffic data from
the network to the UE. It is used only by UEs that receive MBMS.
On the other hand, the transport channels standing between MAC and PHY are:
Paging Channel (PCH): A downlink channel that supports discontinuous reception to enable UE power
saving. It is broadcast over the entire coverage area of the cell.
Broadcast Channel (BCH): A downlink channel with a fixed, pre-defined transport format. It is broadcast over the
entire coverage area of the cell.
Multicast Channel (MCH): A downlink channel that supports MBMS transmission on multiple cells and
semi-static resource allocation (e.g. with a time frame of a long cyclic prefix). It is broadcast over the entire coverage
area of the cell.
Downlink Shared Channel (DL-SCH): A downlink channel that supports Hybrid ARQ and dynamic link
adaptation by varying the modulation, coding and transmit power. It also supports both dynamic and
semi-static resource allocation, UE discontinuous reception and MBMS transmission.
Random Access Channel (RACH): An uplink channel that carries minimal information. Transmissions on this
channel may be lost due to collisions.
Uplink Shared Channel (UL-SCH): An uplink channel that supports dynamic link adaptation by varying the
transmit power, modulation and coding. It also supports Hybrid ARQ and both dynamic and semi-static resource
allocation.
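The channel mappings of figures 5 and 6 can be captured in a small lookup table. The dictionary form below is our sketch of the standard mapping (note that BCCH system information can also be carried on DL-SCH, which is omitted here for simplicity):

```python
# Logical channel -> transport channel, following the MAC mapping figures.
DOWNLINK_MAP = {
    "PCCH": "PCH",
    "BCCH": "BCH",      # system information may also be mapped to DL-SCH
    "CCCH": "DL-SCH",
    "DCCH": "DL-SCH",
    "DTCH": "DL-SCH",
    "MCCH": "MCH",
    "MTCH": "MCH",
}
UPLINK_MAP = {
    "CCCH": "UL-SCH",
    "DCCH": "UL-SCH",
    "DTCH": "UL-SCH",   # RACH carries no logical channel, only access preambles
}


def transport_channel(logical: str, downlink: bool) -> str:
    """Return the transport channel a MAC entity would use for a logical channel."""
    return (DOWNLINK_MAP if downlink else UPLINK_MAP)[logical]
```

In downlink, for instance, a packet arriving on MCH must be routed to MCCH or MTCH; the table above is the inverse view a transmitter would consult.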
The second critical task performed by the MAC layer in LTE is the management of the Hybrid ARQ (HARQ) function, and thus of
retransmission handling, together with the PHY layer. This will be explained in the next section.
4.2. Retransmission Handling
In any communication system, there are occasional data transmission errors, for example, due to noise, interference,
and/or fading. Link-layer, network-layer (IP), and transport-layer protocols are not prepared to cope with bit errors in
headers, and the majority of the protocols are not capable of handling errors in the payload either. Therefore, a
fundamental design choice for LTE has been not to propagate any bit errors to higher layers but rather to drop or
retransmit the entire data unit containing bit errors. This goal is achieved by a two layer ARQ design: ARQ at RLC layer
and HARQ at MAC/PHY layers.
The ARQ at RLC layer is performed only in AM mode and it has already been discussed in the RLC section.
The Hybrid Automatic Repeat-reQuest (HARQ) process, done in combination between the MAC and the PHY layers,
retransmits transport blocks (TBs) for error recovery. The PHY performs the retention and re-combination
(incremental redundancy) and the MAC performs the management and signaling.
The functionality and performance are comparable to those of a window-based selective repeat protocol. In particular, it
allows continuous transmission, which cannot be achieved with a single stop-and-wait scheme. Instead of a status
message containing a sequence number, a single-bit HARQ feedback acknowledgment/negative acknowledgment
(ACK/NACK), with a fixed-timing relation to the corresponding transmission attempt, provides information about the
successful reception of the HARQ process.
More precisely, the MAC indicates a NACK when there is a transport block CRC failure; the PHY usually detects that
failure. The sender (the eNB, on the downlink) retransmits using a different puncturing of the channel code, while the
failed transport block is retained in a buffer at the receiver. Eventually, after one or two attempts, there is enough data to
reconstruct the signal. In HARQ operation, the retransmission does not have to be fully correct on its own. It has to be correct
enough that it can be combined mathematically with the previously received transport block in order to produce a good
transport block. This is a highly efficient way of providing the ARQ function.
So, we can summarize the basic steps of the HARQ function in this way:
MAC sends a NACK when TB fails CRC
TBs with errors are retained
PHY retransmits with different puncturing code
Retransmission combined with saved TBs
When a correct TB is decoded, MAC sends an ACK
Multiple HARQ processes can run in parallel to retry several outstanding TBs
This approach gains in terms of delay, simplicity, and control overhead compared to a window-based selective repeat
protocol.
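The steps above can be sketched as a single stop-and-wait HARQ process. This is a simplified model: real LTE combines soft bits at the PHY, which we mimic here by accumulating an abstract "redundancy" counter until decoding would succeed:

```python
class HarqProcess:
    """Toy HARQ process: retain failed blocks and combine retransmissions."""

    def __init__(self, needed_redundancy: int = 2):
        self.soft_buffer = 0          # stands in for accumulated soft bits
        self.needed = needed_redundancy

    def receive(self, redundancy: int) -> str:
        self.soft_buffer += redundancy        # combine with the retained data
        if self.soft_buffer >= self.needed:   # enough to decode: CRC passes
            self.soft_buffer = 0
            return "ACK"
        return "NACK"                         # CRC failure: keep buffer, request retx


proc = HarqProcess(needed_redundancy=2)
first = proc.receive(1)    # initial transmission alone cannot be decoded
second = proc.receive(1)   # combined with the retained copy, decoding succeeds
```

In LTE several such processes run in parallel, so that while one process waits for its single-bit ACK/NACK, the others keep the channel busy.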
The two-layer ARQ design achieves low latency and low overhead without sacrificing reliability. Most errors are
captured and corrected by the lightweight HARQ protocol. Only residual HARQ errors are detected and resolved by
the more expensive (in terms of latency and overhead) ARQ retransmissions.
4.3. Scheduling
In order to allow the UE to request uplink-transmission resources from the eNB, LTE provides a Scheduling
Request (SR) mechanism. The SR conveys a single bit of information, indicating that the UE has new data to transmit.
Because the SR procedure conveys little detail about the UE resource requirement, a Buffer Status Report (BSR) with
more detailed information about the amount of data waiting in the UE is attached to the first uplink transmission
following the SR procedure. In fact, the requirement to transmit a BSR triggers the SR.
From a protocol point of view, as we can see from the next figure, in a LTE eNodeB the scheduler embraces both
the physical layer and the link layer. More precisely, at level 2 the scheduler is concerned with MAC and RLC sub-
layers.
FIGURE 7 - SCHEDULER IN LTE STACK

As with all schedulers, a particular challenge for the LTE eNB scheduler is to provide the desired quality of service
(QoS) on a shared channel. It is up to the eNB implementation, and consequently the responsibility of the scheduler,
to assign radio resources in such a way that the terminals and radio bearers obtain the QoS characteristics assigned
by the EPC. Depending on the implementation, the scheduler can base its scheduling decision on the QoS class and
the queuing delay of the available data, on the instantaneous channel conditions, or on fairness indicators. The
channel conditions in a wideband system vary not only over time but can also differ in the frequency domain. If the
UE provides sufficiently detailed channel-quality information to the eNB, the scheduler can perform channel-dependent
scheduling in the time and frequency domain and thereby improve the cell and system capacity. Also, the
physical downlink control channel (PDCCH) that carries the scheduling decisions to the affected UE and the PUCCH
that carries HARQ feedback and channel quality information to the eNB have a finite capacity and thus, may constrain
the scheduler in its freedom of how many users to address in a sub-frame. Furthermore, the scheduler must ensure
that HARQ retransmissions are performed on a timely basis. In the uplink direction, the HARQ retransmission must
occur exactly one round-trip time after the previous transmission attempt, whereas the scheduler can postpone
downlink retransmissions in favor of higher priority transmissions.
For the downlink, the scheduler selects not only the appropriate user but also decides which radio bearer to
serve. In contrast, uplink scheduling grants are dedicated to particular UE but do not comprise instructions about
which radio bearers to serve. This additional information would increase the size of the uplink grants and thereby limit
the capacity of the PDCCH and consequently, the number of UE units that could be addressed in a sub-frame. Rather,
the UE makes this decision autonomously in the logical channel prioritization function, which is preconfigured by the
eNB. Moreover, the UE sends BSRs for active radio bearers. Based on these reports, the eNB can ensure that users
with high priority data are prioritized and obtain the assigned QoS characteristics. Not only user data but also control
information, namely, MAC control elements such as BSR, and discontinuous reception (DRX) and timing advance
messages can be chosen for transmission. When generating the transport block, the MAC layer typically incorporates
those MAC control elements first. Secondly, it triggers the scheduled RLC entities, which send either new RLC PDUs, or
in the case of RLC AM, retransmissions. The size of a new RLC PDU is variable so that an RLC entity generates, at most,
one new RLC PDU per transport block. This minimizes segmentation and multiplexing of data packets and
consequently, protocol overhead. It also reduces the required RLC sequence-number space and makes it independent
of the data rate, which makes it future-proof. The RLC re-segmentation function allows changing the size of RLC retransmissions if
the scheduler did not provide sufficient resources to transmit the requested RLC PDU at once. Finally, if the size of the
computed transport block does not exactly match the chosen transport format, the remaining bytes are filled with
padding before handing the block to the physical layer for transmission.
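The filling order described here (MAC control elements first, then RLC PDUs, then padding up to the transport format) can be sketched as follows; names and sizes are illustrative:

```python
def build_transport_block(ctrl_elements, rlc_pdus, tb_size: int) -> bytes:
    """Toy transport block assembly: control elements, then RLC data, then padding."""
    tb = b"".join(ctrl_elements)          # MAC control elements (e.g. a BSR) go first
    for pdu in rlc_pdus:
        if len(tb) + len(pdu) <= tb_size:
            tb += pdu                     # then the scheduled RLC PDUs
    return tb + b"\x00" * (tb_size - len(tb))  # pad up to the chosen transport format


tb = build_transport_block([b"BSR!"], [b"data" * 5], tb_size=32)
# 4 bytes of control element + 20 bytes of RLC data + 8 bytes of padding
```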
5. Comparison between WiMAX and LTE protocol
architectures
In this section, we summarize the differences between the WiMAX and LTE protocol architectures seen in the
previous sections.
The first peculiarity of the LTE protocol stack is that it spans the first three layers of the
ISO/OSI stack, while the WiMAX architecture is concerned only with the physical and link layers. However, the two
sub-layers of the LTE stack that stand at level 3 (NAS and RRC) are used only for control-plane tasks; user-plane
traffic, instead, goes through the level-2 sub-layers, exactly as in WiMAX.
A strong difference between the two standards lies in the protocol through which they provide the interface
between the link and network layers. In the previous sections we saw that WiMAX uses the LLC protocol, which
is common to all 802.x standards and provides multiplexing of the upper-layer protocols, making WiMAX able to
interface with a wide variety of level-3 protocols. On the contrary, the network architecture of LTE is IP-based: so,
when a packet comes down from the network layer, it passes through the PDCP protocol, which performs tasks such as
compression, reordering and ciphering of the packets.
The analysis of the MAC layers of WiMAX and LTE brings out another difference. First of all, we have seen that in
WiMAX the MAC tasks are split into three sub-layers: the CS sub-layer provides the classification and
mapping of the packets arriving from the variety of upper-layer protocols; the CPS performs all the typical tasks of the
MAC layer; the SS provides encryption and privacy. In LTE, on the contrary, some of the tasks listed above are delegated
to the two layers standing above MAC (for example, fragmentation is an RLC duty). Nevertheless, the LTE MAC layer
has to deal with a problem not present in WiMAX: the mapping of the transport and logical channels standing
between the RLC, MAC and PHY layers.
Finally, another major difference between WiMAX and LTE lies in retransmission handling. In the WiMAX
MAC layer, ARQ is optional and is used, when needed by the receiver, to provide acknowledgements of successfully
received data or to notify missing blocks of data. When implemented, ARQ may be enabled on a per-connection
basis. The per-connection ARQ is specified and negotiated during connection creation. A connection cannot have a
mixture of ARQ and non-ARQ traffic. On the other hand, LTE contemplates a double-level retransmission handling: the
Hybrid ARQ and the outer ARQ. The HARQ mechanism is performed locally in a host, between MAC and PHY layers:
the former performs the management and signaling, while the latter performs the retention and re-combination
(incremental redundancy). The outer ARQ is implemented by the RLC layer (when used in AM mode) and is required to
handle residual errors that are not corrected by HARQ.
6. Overview of LTE physical layer
6.1. Multiple access technology in the downlink: OFDM and
OFDMA
OFDMA is derived from OFDM (Orthogonal Frequency Division Multiplexing), a digital multi-carrier modulation
scheme based on the principle that information can be transmitted on a radio channel through variations of a carrier
signal's frequency, phase or magnitude. Instead of transmitting all the information on a single RF carrier signal, the
high data rate input stream is multiplexed into a parallel combination of low data rate streams. The parallel streams are
modulated onto separate subcarriers in the frequency domain through the use of inverse fast Fourier transform (IFFT)
and transmitted through the channel. At the receiver, the signal is demodulated using an FFT process to convert a
time varying complex waveform back to its spectral components, recovering the initial subcarriers with their
modulation and thus the original digital bit stream. The figure below shows frequency and time domain
representation of an OFDM signal.


In OFDM, the subcarriers are spaced closely together without any guard bands in the frequency domain; the FFT
converts the digital signals from the time domain into a spectrum of frequency-domain signals that are
mathematically orthogonal to each other. The frequency-domain null of one subcarrier corresponds to the maximum
value of the adjacent subcarrier, which allows subcarriers to overlap without interference and thus conserves
bandwidth. By combining TDMA with basic OFDM, OFDMA is achieved, thus allowing dynamic allocation of
subcarriers among different users on the channel. OFDMA provides a robust system with increased capacity and
resistance to multipath fading. The figure below shows the OFDM and OFDMA subcarrier allocation.
In LTE and WiMAX, each subcarrier is modulated with a conventional modulation scheme depending on the
channel condition. LTE uses QPSK, 16QAM, or 64QAM while WiMAX uses BPSK, QPSK, 16QAM, or 64QAM for
modulation at a low symbol rate. FFT sizes of 128, 256, 512, 1024 and 2048, corresponding to WiMAX and LTE
channel bandwidths of 1.25, 2.5, 5, 10 and 20 MHz, are used. In the time domain, guard intervals known as cyclic
prefixes (CP) are inserted between the symbols to prevent inter-symbol interference at the receiver caused by
multipath delay spread in the radio channel. The normal CP for LTE is 4.69 μs, while for WiMAX it is 1/8 of the
OFDMA symbol time, typically 11.43 μs for an OFDMA symbol duration of 102.86 μs. The CP is a copy of the end of
the symbol inserted at the beginning.
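The IFFT-modulate / FFT-demodulate round trip, including the cyclic prefix, can be demonstrated with NumPy (the FFT size and CP length below are illustrative, not a full LTE or WiMAX numerology):

```python
import numpy as np


def ofdm_symbol(subcarrier_data, cp_len):
    """IFFT the frequency-domain data, then prepend a copy of the tail as the CP."""
    time_signal = np.fft.ifft(subcarrier_data)
    return np.concatenate([time_signal[-cp_len:], time_signal])  # CP = end of symbol


def ofdm_demod(rx_symbol, cp_len):
    """Drop the CP and FFT back to the per-subcarrier modulation symbols."""
    return np.fft.fft(rx_symbol[cp_len:])


qpsk = (np.array([1, 1, -1, -1]) + 1j * np.array([1, -1, 1, -1])) / np.sqrt(2)
data = np.tile(qpsk, 16)              # 64 subcarriers carrying QPSK data
tx = ofdm_symbol(data, cp_len=8)      # FFT size 64, CP of 8 samples (1/8)
recovered = ofdm_demod(tx, cp_len=8)  # ideal channel: subcarriers recovered exactly
```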
FIGURE 8 - OFDM SIGNAL REPRESENTED IN FREQUENCY AND TIME
FIGURE 9 - OFDM AND OFDMA SUBCARRIER ALLOCATION
6.2. OFDMA and SC-FDMA compared
A graphical comparison of OFDMA and SC-FDMA as shown in figure 10 is helpful in understanding the
differences between these two modulation schemes. For clarity this example uses only four (M) subcarriers over two
symbol periods with the payload data represented by quadrature phase shift keying (QPSK) modulation. Real LTE
signals are allocated in units of 12 adjacent subcarriers.

FIGURE 10 - COMPARISON OF OFDMA AND SC-FDMA TRANSMITTING A SERIES OF QPSK DATA SYMBOLS
On the left side of figure 10, M adjacent 15 kHz subcarriers, already positioned at the desired place in the
channel bandwidth, are each modulated for the OFDMA symbol period of 66.7 μs by one QPSK data symbol. In this
four subcarrier example, four symbols are taken in parallel. These are QPSK data symbols so only the phase of each
subcarrier is modulated and the subcarrier power remains constant between symbols. After one OFDMA symbol
period has elapsed, the Cyclic Prefix (CP) is inserted and the next four symbols are transmitted in parallel. For visual
clarity, the CP is shown as a gap; however, it is actually filled with a copy of the end of the next symbol, which means
that the transmission power is continuous but has a phase discontinuity at the symbol boundary. To create the
transmitted signal, an IFFT is performed on each subcarrier to create M time domain signals. These in turn are vector-
summed to create the final time-domain waveform used for transmission.
SC-FDMA signal generation begins with a special pre-coding process but then continues in a manner similar to
OFDMA. However, before getting into the details of the generation process it is helpful to describe the end result as
shown on the right side of figure 10. The most obvious difference between the two schemes is that OFDMA transmits
the four QPSK data symbols in parallel, one per subcarrier, while SC-FDMA transmits the four QPSK data symbols in
series at four times the rate, with each data symbol occupying M x 15 kHz bandwidth. Visually, the OFDMA signal is
clearly multi-carrier with one data symbol per subcarrier, but the SC-FDMA signal appears to be more like a single-
carrier (hence the SC in the SC-FDMA name) with each data symbol being represented by one wide signal. Note that
OFDMA and SC-FDMA symbol lengths are the same at 66.7 μs; however, the SC-FDMA symbol contains M
sub-symbols that represent the modulating data.
It is the parallel transmission of multiple symbols that creates the undesirable high PAPR¹ of OFDMA. By
transmitting the M data symbols in series at M times the rate, the SC-FDMA occupied bandwidth is the same as that of
multi-carrier OFDMA but, crucially, the PAPR is the same as that of the original data symbols. Adding together many
narrowband QPSK waveforms in OFDMA will always create higher peaks than would be seen in the wider-bandwidth,
single-carrier QPSK waveform of SC-FDMA. As the number of subcarriers M increases, the PAPR of OFDMA with
random modulating data approaches Gaussian noise statistics but, regardless of the value of M, the SC-FDMA PAPR
remains the same as that of the original data symbols.

¹ Peak-to-Average Power Ratio: PAPR = max |x(t)|² / P_avg
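The footnoted PAPR metric can be computed numerically. The sketch below (illustrative, with no pulse shaping, so the single-carrier QPSK stream has a constant envelope) shows the multi-carrier sum peaking well above the single-carrier signal:

```python
import numpy as np


def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())


rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 1024) + 1j * rng.choice([-1, 1], 1024)) / np.sqrt(2)

single_carrier = qpsk  # constant envelope: PAPR is essentially 0 dB
# 16 OFDMA symbols of 64 random QPSK subcarriers each, via per-symbol IFFT
ofdma = np.fft.ifft(qpsk.reshape(16, 64), axis=1).ravel()

sc_papr = papr_db(single_carrier)   # ~0 dB
mc_papr = papr_db(ofdma)            # several dB higher, approaching noise-like peaks
```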
6.3. SC-FDMA signal generation
As noted, SC-FDMA signal generation begins with a special pre-coding process. Figure 11 shows the first steps,
which create a time-domain waveform of the QPSK data sub-symbols. Using the four color-coded QPSK data symbols
from figure 10, the process creates one SC-FDMA symbol in the time domain by computing the trajectory traced by
moving from one QPSK data symbol to the next. This is done at M times the rate of the SC-FDMA symbol such that one
SC-FDMA symbol contains M consecutive QPSK data symbols.

FIGURE 11 - CREATING THE TIME-DOMAIN WAVEFORM OF AN SC-FDMA SYMBOL
Once an IQ representation of one SC-FDMA symbol has been created in the time domain, the next step is to
represent that symbol in the frequency domain using a DFT. This is shown in figure 12. The DFT sampling frequency is
chosen such that the time-domain waveform of one SC-FDMA symbol is fully represented by M DFT bins spaced 15
kHz apart, with each bin representing one subcarrier in which amplitude and phase are held constant for 66.7 μs.

FIGURE 12 - BASEBAND FREQUENCY AND SHIFTED DFT REPRESENTATIONS OF AN SC-FDMA SYMBOL
A one-to-one correlation always exists between the number of data symbols to be transmitted during one SC-
FDMA symbol period and the number of DFT bins created. This in turn becomes the number of occupied subcarriers.
When an increasing number of data symbols are transmitted during one SC-FDMA period, the time-domain waveform
changes faster, generating a higher bandwidth and hence requiring more DFT bins to fully represent the signal in the
frequency domain. Note in figure 12 that there is no longer a direct relationship between the amplitude and phase of
the individual DFT bins and the original QPSK data symbols. This differs from the OFDMA example in which data
symbols directly modulate the subcarriers.
The next step of the signal generation process is to shift the baseband DFT representation of the time-domain
SC-FDMA symbol to the desired part of the overall channel bandwidth. Because the signal is now represented as a
DFT, frequency-shifting is a simple process achieved by copying the M bins into a larger DFT space of N bins (typically
N=256). This larger space equals the size of the system channel bandwidth, of which there are six to choose from in
LTE spanning 1.4 to 20 MHz. The signal can be positioned anywhere in the channel bandwidth, thus executing the
frequency-division multiple access (FDMA) essential for efficiently sharing the uplink between multiple users. To
complete SC-FDMA signal generation, the process follows the same steps as for OFDMA. Performing an IDFT converts
the frequency-shifted signal to the time domain and inserting the CP provides the fundamental robustness of OFDMA
against multipath. The relationship between SC-FDMA and OFDMA is illustrated in Figure 13.

FIGURE 13 - A MODEL OF SC-FDMA SIGNAL GENERATION
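The generation chain just described (M-point DFT, mapping of the M bins into a larger N-bin space, N-point IDFT, CP insertion) can be sketched with NumPy; M, N and the CP length are illustrative:

```python
import numpy as np


def sc_fdma_symbol(data_symbols, n_fft=256, start_bin=0, cp_len=16):
    """DFT-precode M data symbols, map them to N subcarriers, IDFT, prepend the CP."""
    m = len(data_symbols)
    bins = np.fft.fft(data_symbols)          # M-point DFT: frequency representation
    grid = np.zeros(n_fft, dtype=complex)
    grid[start_bin:start_bin + m] = bins     # frequency shift: place M bins among N
    time_signal = np.fft.ifft(grid)          # back to the time domain
    return np.concatenate([time_signal[-cp_len:], time_signal])  # insert the CP


qpsk = (np.array([1, -1, 1, -1]) + 1j * np.array([1, 1, -1, -1])) / np.sqrt(2)
tx = sc_fdma_symbol(np.tile(qpsk, 3), n_fft=256, start_bin=10)  # M=12, offset in band
```

Changing `start_bin` moves the user's occupied block within the channel, which is exactly the FDMA aspect of uplink sharing described above.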
At this point, it is reasonable to ask how SC-FDMA can be resistant to multipath when the data symbols are still
short. In OFDMA, the modulating data symbols are constant over the 66.7 μs OFDMA symbol period, but an SC-FDMA
symbol is not constant over time since it contains M sub-symbols of much shorter duration. The multipath resistance
of the OFDMA demodulation process seems to rely on the long data symbols that map directly onto the subcarriers.
Fortunately, it is the constant nature of each subcarrier, not of the data symbols, that provides the resistance to delay
spread. As shown in figure 10 and figure 12, the DFT of the time-varying SC-FDMA symbol generated a set of DFT bins
constant in time during the SC-FDMA symbol period, even though the modulating data symbols varied over the same
period. It is inherent to the DFT process that the time-varying SC-FDMA symbol, made of M serial data symbols, is
represented in the frequency domain by M time-invariant subcarriers. Thus, even SC-FDMA with its short data
symbols benefits from multipath protection. It may seem counter intuitive that M time-invariant DFT bins can fully
represent a time-varying signal. However, the DFT principle is simply illustrated by considering the sum of two fixed
sine waves at different frequencies. The result is a non-sinusoidal time-varying signal, fully represented by two fixed
sine waves.
Table 1 summarizes the differences between the OFDMA and SC-FDMA modulation schemes. When OFDMA is
analyzed one subcarrier at a time, it resembles the original data symbols. At full bandwidth, however, the signal looks
like Gaussian noise in terms of its PAPR statistics and the constellation. The opposite is true for SC-FDMA. In this case,
the relationship to the original data symbols is evident when the entire signal bandwidth is analyzed. The constellation
(and hence low PAPR) of the original data symbols can be observed rotating at M times the SC-FDMA symbol rate,
ignoring the seven percent rate reduction that is due to adding the CP. When analyzed at the 15 kHz subcarrier
spacing, the SC-FDMA PAPR and constellation are meaningless because they are M times narrower than the
information bandwidth of the data symbols.

TABLE 1: ANALYSIS OF OFDMA AND SC-FDMA AT DIFFERENT BANDWIDTHS
6.4. Spectrum flexibility: FDD and TDD
Depending on regulatory aspects in different geographical areas, radio spectrum for mobile communication is
available in different frequency bands in different bandwidths, and comes as both paired and unpaired spectrum.
Spectrum flexibility, which enables operation under all these conditions, is one of the key features of LTE radio access.
Besides being able to operate in different frequency bands, LTE can be deployed with different bandwidths ranging
from approximately 1.25MHz up to approximately 20MHz. Furthermore, LTE can operate in both paired and unpaired
spectrum by providing a single radio-access technology that supports frequency-division duplex (FDD) as well as time-
division duplex (TDD) operation.
Where terminals are concerned, FDD can be operated in full- and half-duplex modes. Half-duplex FDD, in which the
terminal separates transmission and reception in frequency and time (figure 14), is useful because it allows terminals
to operate with relaxed duplex-filter requirements. This, in turn, reduces the cost of terminals and makes it possible
to exploit FDD frequency bands that could not otherwise be used (too narrow duplex distance). Together, these
solutions make LTE fit nearly arbitrary spectrum allocations.
One challenge when designing a spectrum-flexible radio-access technology is to preserve commonality between the
spectrum and duplexing modes. The frame structure that LTE uses is the same for different bandwidths and similar
for FDD and TDD.
FIGURE 14 - LTE SPECTRUM (BANDWIDTH AND DUPLEX) FLEXIBILITY. HALF-DUPLEX FDD IS SEEN FROM A TERMINAL PERSPECTIVE
6.5. Physical channels and modulation (36.211)
The LTE air interface consists of physical signals and physical channels, which are defined in 36.211 [10]. Physical
signals are generated in Layer 1 and used for system synchronization, cell identification, and radio channel
estimation. Physical channels carry data from higher layers including control, scheduling, and user payload. Physical
signals are summarized in Table 2. In the downlink, primary and secondary synchronization signals encode the cell
identification, allowing the UE to identify and synchronize with the network. In both the downlink and the uplink
there are reference signals
(RS), known as pilot signals in other standards, which are used by the receiver to estimate the amplitude and phase
flatness of the received signal. The flatness is a combination of errors in the transmitted signal and additional
imperfections that are due to the radio channel. Without the use of the RS, phase and amplitude shifts in the received
signal would make demodulation unreliable, particularly at high modulation depths such as 16QAM or 64QAM. In
these high modulation cases, even a small error in the received signal amplitude or phase can cause demodulation
errors.
Alongside the physical signals are physical channels, which carry the user and system information. These are
summarized in Table 3. Notice the absence of dedicated channels, which is a characteristic of packet-only systems.

TABLE 3 - LTE PHYSICAL CHANNELS
6.6. Frame Structure
The physical layer supports the two multiple access schemes previously described: OFDMA on the downlink and
SC-FDMA on the uplink. In addition, both paired and unpaired spectrum are supported using frequency division
duplexing (FDD) and time division duplexing (TDD), respectively.
Although the LTE downlink and uplink use different multiple access schemes, they share a common frame
structure. The frame structure defines the frame, slot, and symbol in the time domain. Two radio frame structures are
defined for LTE and shown in figure 15 and figure 16.
Frame structure type 1 is defined for FDD mode. Each radio frame is 10 ms long and consists of 10 subframes.
Each subframe contains two slots. In FDD, both the uplink and the downlink have the same frame structure though
they use different spectra.
TABLE 2 - LTE PHYSICAL SIGNALS
FIGURE 15 - FRAME STRUCTURE TYPE 1
Frame structure type 2 is defined for TDD mode. An example is shown in figure 16. This example is for 5 ms
switch-point periodicity and consists of two 5 ms half-frames for a total duration of 10 ms. Subframes consist of either
an uplink or downlink transmission or a special subframe containing the downlink and uplink pilot timeslots (DwPTS
and UpPTS) separated by a transmission gap guard period (GP). The allocation of the subframes for the uplink,
downlink, and special subframes is determined by one of seven different configurations.
Subframes 0 and 5 are always downlink transmissions, subframe 1 is always a special subframe, and subframe 2
is always an uplink transmission. The composition of the other subframes varies depending on the frame
configuration. For a 5 ms switch-point configuration, subframe 6 is always a special subframe as shown below. With
10 ms switch-point periodicity, there is only one special subframe per 10 ms frame.

FIGURE 16 - FRAME STRUCTURE TYPE 2
One of the key advantages in OFDM systems (including SC-FDMA in this context) is the ability to protect against
multipath delay spread. The long OFDM symbols allow the introduction of a guard period between each symbol to
eliminate inter-symbol interference due to multipath delay spread. If the guard period is longer than the delay spread
in the radio channel, and if each OFDM symbol is cyclically extended into the guard period (by copying the end of the
symbol to the start to create the cyclic prefix), then the inter-symbol interference can be completely eliminated.
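The cyclic-prefix mechanism described above can be sketched numerically. The sizes below are illustrative, not actual LTE dimensioning:

```python
import numpy as np

# Illustrative sizes only (not actual LTE values)
N = 64        # subcarriers
CP = 16       # cyclic prefix samples, longer than the channel delay spread
h = np.array([1.0, 0.5, 0.25])   # toy multipath channel impulse response

rng = np.random.default_rng(0)
# QPSK data, one symbol per subcarrier
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

x = np.fft.ifft(X)                    # OFDM time-domain symbol
x_cp = np.concatenate([x[-CP:], x])   # copy the tail to the front: cyclic prefix

y = np.convolve(x_cp, h)              # multipath channel (linear convolution)
rx = y[CP:CP + N]                     # receiver discards the CP

# Thanks to the CP, the channel acts as a circular convolution, so a
# one-tap per-subcarrier equalizer recovers the data exactly.
X_hat = np.fft.fft(rx) / np.fft.fft(h, N)
print(np.allclose(X_hat, X))          # True: no inter-symbol interference
```

Because the CP (16 samples) exceeds the channel length (3 taps), the equalized subcarriers match the transmitted data exactly.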
Figure 17 shows the seven symbols in a slot for the normal cyclic prefix case.

FIGURE 17 - OFDM SYMBOL STRUCTURE
Cyclic prefix lengths for the downlink and the uplink are shown in Table 4. In the downlink case, Δf represents
the 15 kHz or 7.5 kHz subcarrier spacing. The normal cyclic prefix of 144 x Ts protects against multi-path delay spread
of up to 1.4 km. The longest cyclic prefix provides protection for delay spreads of up to 10 km.
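These distances follow directly from the CP durations; a quick check, assuming the LTE basic time unit Ts = 1/(15 000 x 2048) s and propagation at 3x10^8 m/s:

```python
# Verify the delay-spread protection distances quoted above.
c = 3.0e8                      # propagation speed in m/s
Ts = 1.0 / (15_000 * 2048)     # LTE basic time unit, ~32.55 ns

normal_cp = 144 * Ts           # normal CP, ~4.69 us
longest_cp = 1024 * Ts         # extended CP with 7.5 kHz spacing, ~33.3 us

print(round(normal_cp * c))    # 1406 m, i.e. ~1.4 km
print(round(longest_cp * c))   # 10000 m, i.e. ~10 km
```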

TABLE 4 - OFDM (DL) AND SC-FDMA (UL) CYCLIC PREFIX LENGTH

6.7. Resource element and resource block
A resource element is the smallest unit in the physical layer and occupies one OFDM or SC-FDMA symbol in the
time domain and one subcarrier in the frequency domain. This is shown in Figure 18.

FIGURE 18 - RESOURCE GRID FOR UPLINK (A) AND DOWNLINK (B)
A resource block (RB) is the smallest unit that can be scheduled for transmission. An RB physically occupies 0.5
ms (1 slot) in the time domain and 180 kHz in the frequency domain. The number of subcarriers per RB and the
number of symbols per RB vary as a function of the cyclic prefix length and subcarrier spacing, as shown in Figure 19.
One difference between the downlink and uplink is that the downlink additionally supports a 7.5 kHz subcarrier
spacing, which is used for multicast/broadcast over a single frequency network (MBSFN).
The 7.5 kHz subcarrier spacing means that the symbols are twice as long, which allows the use of a longer CP to
combat the higher delay spread seen when receiving from multiple MBSFN cells.
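The resource-block dimensioning above can be checked with simple arithmetic (numbers taken from the text):

```python
# Resource-block arithmetic using the parameters given in the text.
rb_bandwidth_hz = 180_000

subcarriers_15k = rb_bandwidth_hz // 15_000    # 12 subcarriers per RB
subcarriers_7k5 = rb_bandwidth_hz // 7_500     # 24 subcarriers per RB (MBSFN)

slot_ms = 10 / 10 / 2                          # 10 ms frame -> 20 slots of 0.5 ms
symbols_normal_cp = 7                          # symbols per slot with normal CP
res_elements = subcarriers_15k * symbols_normal_cp   # 84 REs per RB

print(subcarriers_15k, subcarriers_7k5, slot_ms, res_elements)  # 12 24 0.5 84
```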

FIGURE 19 - PHYSICAL RESOURCE BLOCK PARAMETERS
6.8. Example: FDD downlink mapping to resource elements
The primary and secondary synchronization signals, reference signals, PDSCH, PBCH, and PDCCH are almost
always present in a downlink radio frame. There is a priority rule for allocation (physical mapping) as follows. Signals
(reference signal, primary/secondary synchronization signal) take precedence over the PBCH. The PDCCH takes
precedence over the PDSCH. The PBCH and PDCCH are never allocated to the same resource elements, thus they are
not in conflict.
Figures 20 and 21 show an LTE FDD mapping example. The primary synchronization signal is mapped to the last
symbol of slot #0 and slot #10 in the central 62 subcarriers. The secondary synchronization signal is allocated in the
symbol just before the primary synchronization signal. The reference signals are located at symbol #0 and symbol #4
of every slot. The reference signal takes precedence over any other allocation.

The PBCH is mapped to the first four symbols in slot #1 in the central 6 RB. The PDCCH can be allocated to the
first three symbols (four symbols when the number of RBs is equal to or less than 10) of every subframe as shown in
Figure 20. The remaining unallocated areas can be used for the PDSCH.
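The precedence rule can be illustrated with a toy allocator. The resource-element coordinates below are invented for illustration; only the priority logic matters:

```python
# Toy model of the downlink mapping priority: signals > PBCH > PDCCH > PDSCH.
grid = {}  # (symbol, subcarrier) -> what occupies that resource element

def allocate(name, elements):
    """Claim resource elements without overwriting higher-priority ones."""
    for re in elements:
        grid.setdefault(re, name)

# Allocation order encodes the precedence described in the text.
allocate("RS",    [(0, 0), (0, 6)])                 # reference signals first
allocate("PBCH",  [(0, sc) for sc in range(12)])    # yields where RS sits
allocate("PDCCH", [(0, sc) for sc in range(12)])    # yields to RS and PBCH
allocate("PDSCH", [(s, sc) for s in range(7) for sc in range(12)])

print(grid[(0, 0)], grid[(0, 1)], grid[(3, 0)])     # RS PBCH PDSCH
```

Allocating in priority order with `setdefault` guarantees that the PBCH and PDCCH never end up on the same resource elements, mirroring the rule in the text.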

FIGURE 20 - EXAMPLE OF DOWNLINK MAPPING

FIGURE 21 - EXAMPLE OF DOWNLINK MAPPING SHOWING FREQUENCY
6.9. Example: TDD mapping to resource elements
Figures 22 and 23 show examples of 5 ms and 10 ms TDD switch point periodicity. The primary synchronization
signal is mapped to the third symbol of slot #2 and slot #12 in the central 62 subcarriers. The secondary
synchronization signal is allocated in the last symbol of slot #1 and slot #11. The reference signals are located at
symbol #0 and symbol #4 of every slot. The reference signal takes precedence over any other allocation. The PBCH is
mapped to the first four symbols in slot #1 in the central 6 RB. The PDCCH can be allocated to the first three symbols
of every subframe as shown here. The remaining unallocated areas can be used for the PDSCH. Note that the
acronyms P-SS for primary synchronization signal and S-SS for secondary synchronization signal are not formally
defined in the 3GPP specifications but are used here for convenience.

FIGURE 22 - EXAMPLE OF LTE TDD 5 MS SWITCH PERIODICITY MAPPING


FIGURE 23 - EXAMPLE OF LTE TDD 10 MS SWITCH PERIODICITY MAPPING
6.10. Scheduling and link adaptation
An intrinsic characteristic of radio communication is fading, which results in the instantaneous radio-channel
quality varying in time, space, and frequency. Due to the use of OFDM-based transmission, LTE can use channel-
dependent scheduling in both the time and frequency domain to exploit rather than suppress such rapid channel-
quality variations, thereby achieving more efficient utilization of the available radio resources. This is illustrated in
Figure 24.
In general, scheduling refers to the process of dividing and
allocating resources between users who have data to transfer. In
LTE, dynamic scheduling (1ms) is applied both to the uplink and
downlink. Scheduling should result in a balance between perceived
end-user quality and overall system performance. Channel-
dependent scheduling is used to achieve high cell throughput.
Transmissions can be carried out with higher data rates (the result
of using higher-order modulation, less channel coding, additional
streams, and fewer retransmissions) by transmitting on time or
frequency resources with relatively good channel conditions. This
way, fewer radio resources (less time) are consumed for any given
amount of information transferred, resulting in improved overall
system efficiency.
The scheduler determines, for each 1 ms subframe, which
user(s) are allowed to transmit, on what frequency resources the
transmission is to take place, and what data rate to use. The short
subframe duration of 1 ms allows relatively fast channel variations
to be tracked and utilized by the scheduler. In the frequency
domain, the scheduling granularity is 180 kHz. The scheduler is thus
a key element and to a large extent determines the overall downlink system performance, especially in a highly loaded
network. To aid the downlink scheduler in its decision, the instantaneous channel-quality at the terminals is estimated
and fed back to the base station, possibly as often as once per subframe. In the uplink, the terminals can be
configured to transmit a sounding reference signal, the reception quality of which may be used for uplink channel-
dependent scheduling.
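The behavior described above can be sketched with a minimal proportional-fair scheduler: per 1 ms subframe, each 180 kHz resource block goes to the user with the best ratio of instantaneous rate to average served rate. All channel numbers here are invented stand-ins for CQI feedback:

```python
import random

# Minimal proportional-fair scheduler sketch (illustrative numbers only).
random.seed(1)
N_USERS, N_RBS, N_SUBFRAMES = 3, 6, 50
avg_rate = [1e-6] * N_USERS            # small start value avoids division by zero

for _ in range(N_SUBFRAMES):
    # Per-(user, RB) achievable rate: stands in for the channel-quality
    # feedback reported by the terminals once per subframe.
    inst = [[random.uniform(0.1, 1.0) for _ in range(N_RBS)]
            for _ in range(N_USERS)]
    served = [0.0] * N_USERS
    for rb in range(N_RBS):
        u = max(range(N_USERS), key=lambda i: inst[i][rb] / avg_rate[i])
        served[u] += inst[u][rb]
    # Exponentially averaged throughput drives long-term fairness
    avg_rate = [0.9 * a + 0.1 * s for a, s in zip(avg_rate, served)]

print([round(a, 2) for a in avg_rate])  # every user ends up served
```

Each user is scheduled on the resource blocks where its channel is relatively good, while the averaging term prevents any user from being starved.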
FIGURE 24 - CHANNEL QUALITY VARIATIONS IN
FREQUENCY AND TIME

7. Comparison of WiMAX and LTE physical layer
7.1. Radio access modes and spectrum considerations
In FDD, the base station and the mobile terminal transmit and receive simultaneously on separate frequency
bands, whereas in TDD the downlink and uplink share the same frequency and transmit at different times. The
radio mode currently specified by WiMAX is TDD, whereas LTE is primarily specified for FDD. The spectral holdings of operators
will be a key decision factor for selecting the technology (based on FDD or TDD).
Equipment vendors have focused their efforts on developing equipment in the frequency bands of the major
mobile network operators who are one of the main forces behind LTE. A common profile of the standard is necessary
to drive high volumes and low prices, in addition to supporting key services such as roaming. As mobile services
became ubiquitous around the world, different spectral bands were opened globally for these services. The result is a
relatively high number of bands in which mobile wireless networks are operating (or are planned for operation), including:
700 MHz (USA), 800 MHz (North America, and the digital dividend band in some European countries), 900 and 1800 MHz
(Europe, rest of world), 1700 MHz (North America AWS band), 1900 MHz (North America PCS band), and 2.1 GHz
(Europe UMTS band). These bands, all configured for paired allocation (FDD), have been the main candidates for LTE
deployments in addition to the 2.5-2.7 GHz band.
In contrast to LTE, WiMAX has been focused on deployments in higher frequencies, namely 2.3 GHz (Korea,
India), 2.5-2.7 GHz (USA), and 3.4-3.6 GHz (Europe, rest of the world). Depending on geography, these bands feature
unpaired (TDD) or paired allocations. The WiMAX Forum, the industry coalition behind WiMAX, certified equipment
for compliance with the IEEE standard and for interoperability in these bands.
Meanwhile, the frequency bands for TD-LTE, which is being promoted as a substitute for WiMAX, have focused on
2.3 GHz and 2.5 GHz, driven by the interest of China Mobile and Indian broadband deployments and by the US operator
Clearwire, respectively. This leaves WiMAX relatively little challenged in the 3.x GHz bands for the time being.
The fragmentation of spectrum presents a challenge for equipment vendors as wireless devices (and base
stations) need to support a continually higher number of frequency bands. It is particularly in the radio-frequency (RF)
chain, which includes RFICs, filters, mixers, and power and low-noise amplifiers, that this challenge becomes manifest.
Even as component vendors strive to develop multi-band RFICs, supporting wideband or dual-band power amplifiers is
very challenging.
7.2. Data Rates
The peak data rates of LTE and WiMAX depend upon the multiple antenna configuration and the modulation
scheme used; the values for DL and UL are illustrated below.

7.3. Multiple Access Technology
The multiple access technologies used by WiMAX and LTE are quite similar, differing mainly in the uplink. The
multiple access technology adopted in the downlink of LTE and in the uplink/downlink of WiMAX is OFDMA, whereas the
uplink of LTE is based on SC-FDMA. The benefit of SC-FDMA in the uplink is the reduction of the PAPR (Peak-to-Average Power Ratio).
7.3.1. OFDMA
OFDMA is an extension of OFDM and is used in the downlink of LTE and in the uplink/downlink of WiMAX. In OFDMA, subcarriers
are allocated dynamically to users in different time slots. OFDMA has various advantages compared to OFDM,
where a single user transmits/receives over the entire time frame; because of this, OFDM suffers from a high PAPR. OFDMA
mitigates this by distributing the entire bandwidth among multiple mobile stations, each transmitting with low power
on a subset of subcarriers. In addition, OFDMA accommodates multiple users with widely varying applications, QoS requirements and data rates.

7.3.2. SC-FDMA
SC-FDMA is an extension of OFDMA and is used in the uplink of LTE. SC-FDMA significantly reduces the PAPR as
compared to OFDMA by adding a DFT block before the subcarrier mapping at the transmitter and a corresponding IDFT
block at the receiver. Thanks to these similarities with OFDMA, the parameterization of the LTE uplink and downlink can be harmonized.
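The PAPR benefit of DFT precoding can be demonstrated numerically. The subcarrier counts and trial count below are illustrative, not LTE values; the comparison averages over random QPSK payloads:

```python
import numpy as np

# Average PAPR of plain OFDMA vs DFT-precoded SC-FDMA for one user
# occupying M of N subcarriers (localized mapping). Illustrative sizes.
rng = np.random.default_rng(0)
M, N, TRIALS = 16, 64, 200

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def avg_papr(precode):
    total = 0.0
    for _ in range(TRIALS):
        d = (rng.choice([-1, 1], M) + 1j * rng.choice([-1, 1], M)) / np.sqrt(2)
        X = np.zeros(N, complex)
        X[:M] = np.fft.fft(d) if precode else d   # DFT precoding or direct mapping
        total += papr_db(np.fft.ifft(X))
    return total / TRIALS

ofdma, scfdma = avg_papr(False), avg_papr(True)
print(round(ofdma, 1), round(scfdma, 1))   # SC-FDMA averages noticeably lower
```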
7.4. Physical Channelization and Mapping
The physical signals and physical channels are appropriately mapped on to the frame structure for uplink and
downlink transmission. The physical signals are generated in the Layer 1 and used for system synchronization, cell
identification and radio channel estimation. Physical channels provide a means of carrying data from higher layers
including control, scheduling and user payload.
WiMAX in PUSC mode is based on sub-channels occupying two PUSC clusters of 28 subcarriers x 2 OFDM
symbols. In a 20 MHz bandwidth, 60 sub-channels are available. The sub-channels are mapped to subcarriers
via a permutation function for interference-averaging purposes. Within a frame, sub-channels can be flexibly assigned
to users in the two-dimensional time-frequency domain, so that rectangular user allocations are obtained.
LTE partitions the 1 ms subframe into 12 x 14 stripes called Physical Resource Blocks (PRBs). The
mapping of LTE PRBs onto physical subcarriers is either localized or distributed, with localized mapping as the preferred mode to
allow for efficient frequency-selective packet scheduling.
Even if there are some different approaches between the LTE mapping and the WiMAX mapping, these do not bring any
significant performance differences. Since the LTE mapping was shown above, only the mapping of WiMAX will be
described below.
7.4.1. WiMAX
In WiMAX, the data to be transmitted is mapped to physical subcarriers in two steps:
- For the first step, data is mapped to one or more logical sub-channels called slots. This is controlled by the
scheduler. A slot is a basic unit of allocation in the frequency time grid. It is 1 sub-channel in frequency by 1,
2, or 3 symbols in time. The slots may be further grouped and assigned to segments based on the application
and can be used by the BS for different sectors in a cellular network.
- For the second step, the logical sub-channels are mapped to physical subcarriers. The physical data and pilot
subcarriers are uniquely assigned based on the type of subchannelization used. Sub-channelization is an
advanced form of FDMA in which multiple subcarriers are grouped into sub-channels to improve system
performance.
These sub-channels are formed by two types of subcarrier allocations:
- Distributed allocation: This allocation pseudo-randomly distributes the subcarriers over the available
bandwidth thus providing frequency diversity in frequency selective fading channels and inter-cell
interference averaging.
- Adjacent allocation: This allocation groups subcarriers adjacent to each other in the frequency domain. It is
useful for frequency nonselective and slowly fading channels, implementing Adaptive Modulation and Coding
(AMC), and assignment of the sub-channel with the best frequency response to the user.
Contiguous symbols that use a specific type of sub-channel assignment are called permutation zones, or zones.
There are seven zone types: FUSC, OFUSC, PUSC, OPUSC, AMC, TUSC1, and TUSC2. A single frame may
contain one or more zones. The DL sub-frame requires at least one zone and it always starts with PUSC. The
exact number of zones used in the frame depends on the network conditions.
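The two allocation styles can be sketched as follows (sizes are illustrative and the shuffle stands in for the 802.16 permutation formulas):

```python
import random

# Illustrative sub-channelization: group 24 subcarriers into sub-channels
# of 6, either adjacently or through a pseudo-random permutation.
N_SC, PER_CH = 24, 6
carriers = list(range(N_SC))

# Adjacent allocation: contiguous subcarriers per sub-channel
adjacent = [carriers[i:i + PER_CH] for i in range(0, N_SC, PER_CH)]

# Distributed allocation: permute first, then group -> frequency diversity
perm = carriers[:]
random.Random(42).shuffle(perm)
distributed = [perm[i:i + PER_CH] for i in range(0, N_SC, PER_CH)]

print(adjacent[0])      # contiguous: [0, 1, 2, 3, 4, 5]
print(distributed[0])   # subcarriers spread across the band
```

The distributed grouping averages inter-cell interference and gains frequency diversity; the adjacent grouping keeps a user in a narrow band where AMC can track the channel.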


FIGURE 25 - WIMAX MAPPING
7.5. Frame Structure
LTE uses two types of frames: the Generic Frame Structure (GFS) and the Alternative Frame Structure (AFS). The GFS,
used by FDD, is 10 ms in duration and has 10 subframes. Every subframe is 1 ms in length and divided into two equal
slots of 0.5 ms. Every slot comprises 7 OFDM/SC-FDMA symbols in the case of the normal CP and 6 OFDM/SC-FDMA symbols in
the case of the extended CP. In contrast to the GFS, an alternative frame structure is used by TDD. In this frame structure there
are certain restrictions on the allocation of subframes: subframe #1 and subframe #6 are used for downlink
synchronization.
WiMAX supports both TDD and FDD frame structures but adopted the TDD frame structure due to its flexible nature.
The TDD frame is comprised of a downlink subframe along with an uplink subframe. The DL subframe is further divided
into DL-PHY PDUs, comprised of preamble information, the FCH and DL bursts. The preamble contains information for
downlink synchronization. Frame configurations such as modulation schemes, length of MAP messages and usable
subcarriers are provided by the Frame Control Header (FCH). The DL burst consists of MAC PDUs, the DL-MAP and the
UL-MAP; the MAP messages inform the users of the allocations made by the BS. The uplink subframe contains the
contention region, contention bandwidth requests and UL PHY PDUs. The contention region provides contention-based
access, which includes periodic closed-loop frequency and power adjustments. The contention bandwidth requests
carry uplink bandwidth requests. The WiMAX frame structure is flexible in nature as it supports variable-length frames
ranging from 2 ms to 20 ms; however, most WiMAX products use 5 ms frames. The WiMAX TDD frame is shown in Figure 26.

FIGURE 26 - WIMAX FRAME STRUCTURE

8. QoS Introduction
Cellular network operators across the world have seen an explosive growth of mobile broadband usage. Traffic
volume per subscriber is increasing daily; in particular, with the introduction of flat-rate tariffs and more advanced
mobile devices. Operators are moving from a single-service offering in the packet-switched domain to a multi-service
offering by adding Value added services (VAS) that are also provided across the mobile broadband access. Examples of
such services are multimedia telephony and mobile-TV. These services have different performance requirements, for
example, in terms of required bit rates and packet delays. Solving these performance issues through over-provisioning
is typically uneconomical due to the relatively high cost of transmission capacity in cellular access networks, which
includes radio spectrum and backhaul from the base stations.
4G broadband wireless technologies such as IEEE 802.16e/m and Third Generation Partnership Project (3GPP)
Long Term Evolution (LTE) have been designed with different QoS (Quality of Service) frameworks and means to
enable delivery of the evolving Internet applications. QoS specifically for evolving Internet applications is a
fundamental requirement to provide satisfactory service delivery to users and also to manage network resources.
Quality of Experience (QoE) for wireless networks has always depended on the more variable radio links and on
factors like RSSI, RxQUAL, BER, interference, mobility and the number of simultaneous sessions. A wireless link generally
offers less bandwidth than a wired connection. Throughput, latency/delay and transmission errors vary much more
widely over a wireless connection because of constantly changing radio signal conditions and extensive digital radio
processing. Standard Internet protocols, designed for use over a more stable wire-based connection, are not well
suited to handle these variations. The same wideband radio channel must be shared among many user sessions, and
each user session typically involves many different types of data streams and protocols. With the advent of
OFDMA (Orthogonal Frequency Division Multiple Access) and its scalable channel bandwidths, from
1.4 MHz to 20 MHz, the air interface pipe has become bigger, faster and better. Traffic shaping and QoS will play a
big role in how services are delivered in the evolved 4G systems, with IP as the backbone of their delivery, and will
rely on Deep Packet Inspection (DPI) mechanisms at the control nodes of the network.
QoS refers to the ability (or probability) of the network to provide a desired level of service for selected traffic on
the network:
- Service levels are specified in terms of throughput, latency (delay), jitter (delay variation) and packet errors or
loss.
- Different service levels are specified for different types or streams of traffic.
- To provide QoS, the network identifies or classifies different types or streams of traffic and processes these
traffic classes differently to achieve (or attempt to achieve) the desired service level for each traffic class.
- The effectiveness of any QoS scheme can be measured by its ability to achieve the desired service levels
for a typical combination of traffic classes (traffic profile).
Let us now discuss the QoS/queuing algorithms for WiMAX (802.16e/m) and LTE.
9. WiMAX QoS


FIGURE 27 - WIMAX FLOW BASED QOS

WiMAX employs flow-based QoS: traffic can be classified into different service flows with different QoS
parameters. The ASN (Access Service Network) supports admission control and resource scheduling to manage
(non-guaranteed) QoS per service flow. The WiMAX ASN also marks traffic to enable other networks/elements (e.g.
the backhaul network) to provide QoS consistent with the air interface. WiMAX provides QoS by classifying traffic to
service flows with different QoS. A service flow (SF) is a unidirectional MAC-layer transport connection with particular
QoS parameters.
The WiMAX network creates at least two (1 DL + 1 UL) service flows (default service flows) for a device when it
enters the network.
- The default service flows are Best Effort and support most traffic.
- The default service flows are also used for DHCP, DNS, etc.
Devices may also be pre-provisioned with additional dedicated service flows to provide QoS for selected
applications.
- Traffic must be classified to dedicated service flows.
- A single device can currently support up to 8 active service flows (4 DL + 4 UL), including the default service
flows.
WiMAX provides mechanisms for dynamically creating, modifying and deleting dedicated service flows during a
subscriber's active session.
- Requests can be initiated by either the network or device.

TABLE 5 - THE QOS TYPOLOGIES IN WIMAX
WiMAX enables 5 types of QoS:
1. Unsolicited Grant Service (UGS) for real-time services characterized by fixed-length packets with a periodic
cadence.
2. Real-Time Polling Service (rtPS) for real-time services characterized by variable-length packets with a variable
cadence (e.g. video services).
3. Non-Real-Time Polling Service (nrtPS), useful for services tolerant to delays but with a guaranteed minimum
bandwidth (e.g. FTP applications).
4. Extended Real-Time Polling Service (ErtPS), similar to rtPS, for real-time flows with variable size (e.g. VoIP with
silence suppression).
5. Best Effort (BE) for data flows with no minimum service-level request (no requirement of
guaranteed minimum bandwidth or limited delay).
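As a rough illustration, the class choice can be expressed as a decision function. The predicates below are simplifications of the definitions above, not the full 802.16 classification rules:

```python
# Simplified mapping from traffic properties to the five WiMAX scheduling
# services. Real classification uses the full SF QoS parameter set.
def wimax_class(real_time, fixed_size_periodic, silence_suppression,
                needs_min_rate):
    if real_time and fixed_size_periodic:
        # ErtPS handles fixed-size real-time flows whose rate can drop
        # (e.g. VoIP with silence suppression); UGS handles constant ones.
        return "ErtPS" if silence_suppression else "UGS"
    if real_time:
        return "rtPS"          # variable-size real-time, e.g. video
    if needs_min_rate:
        return "nrtPS"         # delay-tolerant but needs a minimum rate
    return "BE"

print(wimax_class(True, True, False, True))    # UGS   (e.g. plain VoIP)
print(wimax_class(True, False, False, True))   # rtPS  (e.g. video)
print(wimax_class(False, False, False, True))  # nrtPS (e.g. FTP)
print(wimax_class(False, False, False, False)) # BE
```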
9.1. WiMAX Air Interface Scheduler
The SF framework provides QoS granularity and inter-SF isolation over the air interface. The air interface
scheduler is responsible for enforcing QoS by assigning DL and UL physical (PHY) layer resource blocks among SFs. This
mechanism is called bandwidth allocation. A scheduling decision is determined based on the appropriate SF's QoS state
variables (like buffer lengths and elapsed packet delay), the SF's QoS requirements (such as MRTR and maximum latency), and
the radio frequency (RF) conditions of the different MSs. In general:
- SFs with shorter maximum latency or SFs with higher MRTR receive higher priorities in the scheduling
decision.
- SFs with late packets or long buffer lengths also receive higher priorities in the scheduling decision.
- MSs with better RF conditions receive higher priorities by the scheduler in order to improve overall sector
throughput. However, an operator can adjust fairness to ensure MSs in poor RF conditions receive
reasonable QoS.
The air interface scheduler may differentiate between traffic flows within an SF by packet priority levels such as
DSCP values (intra-SF). It may also further utilize the traffic priority attribute of SFs to differentiate between traffic
associated with SFs of the same type (inter-SF).

9.2. WiMAX QoS popular applications
Category          Application types                     BW requirement
Voice over IP     Skype, SIP Phone                      Skype: 3~16 kBps; 8 kbps (G.729) ~ 64 kbps (G.711)
Video Conference  320*240 (H.26L)                       160 kbps ~
Streaming         Live Audio, IPTV                      8 kBps (16-bit, 44.1 kHz, mono), 2 Mbps ~
File Transfer     FTP, BT, Emule                        Unspecified
Web Browsing      HTML, Blog, Dynamic Web Pages, Flash  Unspecified
Instant Message   MSN, Yahoo Messenger                  Unspecified
E-mail            Outlook, Web mail                     Unspecified
E-Commerce        Stock transaction, Online order/sell  Unspecified
On-line Game      Play in turn or time critical         Unspecified
TABLE 6 - WIMAX QOS POPULAR APPLICATIONS
9.3. WiMAX QoS requirements
Category          Application types                     Loss-tolerant  Real-Time  BW requirement
Voice over IP     Skype, SIP Phone                      Yes            Yes        Low ~ Medium
Video Conference                                        Yes            Yes        Medium ~ High
Streaming         Live Audio, IPTV                      Yes            Yes        Live Audio: Low; IPTV: High
File Transfer     FTP, BT, Emule                        No             No         Low ~ High
Web Browsing      HTML, Blog, Dynamic Web Pages, Flash  No             No         Low ~ Medium
Instant Message   MSN, Yahoo Messenger                  No             Yes/No     Low ~ Medium
E-mail            Outlook, Web mail                     No             No         Low
E-Commerce        Stock transaction, Online order/sell  No             Yes/No     Low ~ Medium
On-line Game      Play in turn or time critical         No             Yes/No     Medium
TABLE 7 - WIMAX QOS REQUIREMENTS
9.4. WiMAX QoS classes mapping
Category WiMAX Class
Voice of IP UGS
Video Conference UGS
Streaming UGS
File Transfer BE
Web Browsing nrt-PS
Instant Message rt-PS/nrt-PS
E-mail BE
E-Commerce rt-PS/nrt-PS
On-line Game rt-PS/nrt-PS
TABLE 8 - WIMAX QOS CLASSES MAPPING

9.5. WiMAX MAC layer and QoS
At the MAC layer, WiMAX provides a connection-oriented service in which logical connections between mobile
stations and base stations are distinguished by 16-bit connection identifiers (CIDs). A base station assigns CIDs to
unidirectional connections; this means that the identifiers for uplink and downlink are different. The MAC layer is also
in charge of mapping data to the correct destination based on the CID. A mobile station will typically be assigned
multiple CIDs: a primary one for management purposes and one or more secondary ones used to carry data
connections. To initiate a data transfer, either a mobile station or a base station creates a service flow. Independently
of who requests the creation, the base station is in charge of assigning the flow a 32-bit service flow identifier (SFID).
Each admitted service flow is transported over the air using a particular CID. Additionally, any service flow is
associated with a set of QoS parameters such as delay, jitter, or throughput. Service flows with the same QoS
parameters are grouped into a service flow class. Classes are not defined in the standard; their definition is left to the
service providers. Finally, traffic classification at the MAC layer is done based on the provider-defined classes. Assuring
that QoS requirements are met for all service flows that have been admitted into the system is also done at the MAC
layer. Each flow in the system can negotiate, according to a set of QoS parameters, a particular scheduling service. As
mentioned before, five different types of scheduling services are defined for WiMAX, each of them providing a different type
of QoS guarantee (UGS, rtPS, nrtPS, ErtPS, BE).
9.6. WiMAX Admission Control
WiMAX networks support QoS reservation of resources by allowing a new flow to apply for admittance into the
system through a Dynamic Service Addition REQuest message (DSA-REQ). Such requests contain a QoS parameter set
which includes different mandatory information depending on the data delivery service requested in the downlink or
the uplink direction. The next table summarizes the required QoS parameter set per data delivery service according to the
IEEE 802.16 standard (IEEE Working Group, 2009).

TABLE 9 DL REQUIRED QOS PARAMETER SET

Additionally, other QoS parameters can be specified to further define the QoS guarantees required by a flow and
allow for a higher efficiency of the resource utilization in the network, e.g., Maximum Sustained Traffic Rate, Traffic
Priority, and so forth. A similar set of parameters is required in the uplink direction, as shown in the next table.

TABLE 10 - UL REQUIRED QOS PARAMETER SET
Admission control is always one of the most significant issues in wireless communications. The basic admission
mechanisms of the guard channel and queuing were introduced in the mid-80s to give priority to handover calls over
new calls (L. Wang, 2007), (D. Hong). An admission control mechanism decides which flows may be allowed into a
network without the network becoming saturated. It uses knowledge of incoming flows and the current network
situation to ensure the QoS for flows in the network, hence the importance of its efficiency, both in terms of accuracy
and computational complexity.
The current literature on admission control for WiMAX proposes a wide range of options achieving very different
levels of accuracy as well as computational load. The authors in (H. Wang, 2005) propose a simple approach that is
mainly based on the mean data rate requirement that an application specifies. With such knowledge, connections
from different services can be progressively admitted into the WiMAX system by following a predetermined priority
order. Such an approach requires few computational resources; however, it takes into consideration neither the
time-varying nature of typical applications, such as video or voice with activity detection, nor the time period over which
these resources are required. Thus, actually available resources might go unused.
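A sketch of that mean-rate approach follows; the capacity and flow numbers are invented for illustration:

```python
# Mean-rate admission control: admit flows in priority order while the
# sum of mean rates fits the cell capacity. All numbers are illustrative.
def admit(flows, capacity_kbps):
    """flows: list of (priority, mean_rate_kbps); lower priority value = more important."""
    admitted, used = [], 0
    for prio, rate in sorted(flows):
        if used + rate <= capacity_kbps:
            admitted.append((prio, rate))
            used += rate
    return admitted

flows = [(1, 64), (1, 64), (2, 500), (3, 2000), (2, 300)]
print(admit(flows, 1000))   # voice admitted first; the 2 Mbps flow is rejected
```

This shows exactly the limitation noted above: a VBR video flow declaring a 2000 kbps mean rate is rejected outright even if its actual demand varies over time.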
A similar approach is proposed in (S. Chandra, 2007) for uplink connections. The work is extended to include a
bandwidth estimation method used to monitor the queue lengths of all polling service connections at regular
intervals. Such monitoring can be used to estimate dynamic bandwidth requirements. However, the granularity of the
monitoring interval hinders the method from following fast changing requirements such as those found in modern
video applications.
A different approach is proposed in (A. Teh, Efficient Admission Control Based on Predicted Traffic
Characteristics, 2007), where the variance of a flow's bandwidth requirements is proposed as a statistic to describe
the application requirements. Such knowledge can then be used to assess whether the QoS requirements for a
particular flow can be fulfilled.

The authors further extend this method in (A. Teh, Efficient Admission Control Based on Predicted Traffic
Characteristics, 2007) by taking into account the predicted fraction of packets delayed above a threshold.
However, there is no proof that variance is a good descriptor for all traffic types.
In (S. Ghazal, 2008) a fuzzy-logic based controller is employed to predict the blocking probability of a particular
flow. The authors claim that the varying nature of real time applications can be taken into consideration by a rule-
based controller. However, a validation of such controller against diverse types of traffic is not provided.
Finally, in (O. Yang, 2006) an accurate admission control algorithm for video flows is proposed which takes into
account both throughput and delay requirements. This approach, referred to as Diophantine, cannot be used in
practice due to its computational load.
In (Mezzavilla, 2010) the E-Diophantine solution is proposed. It consists in first finding, exactly as in the Diophantine
case, the set of intersections for all flows; these results are summarized in a matrix of intersections of flows.
Then, the rest of the set of intersections between the solutions found is derived from the information obtained
about the flows involved in each intersection set.
10. LTE QoS
10.1. Introduction to QoS over LTE
While the WiMAX MAC layer has a connection-oriented architecture designed to support a variety of
applications, LTE supports end-to-end QoS, meaning that bearer characteristics are defined and controlled
throughout the duration of a session between the mobile device (UE or MS) and the packet data network gateway
(PDN-GW). QoS is characterized by an index, the QCI (QoS Class Identifier), and by the ARP (Allocation and
Retention Priority) parameter. Bearer types belong to two main classes, with guaranteed and non-guaranteed rates, and labels
specify in more detail what values of packet delay and loss can be tolerated for any given bearer. LTE can also support
QoE (Quality of Experience, or Quality of User Experience) (Bhandare, 2008).
Because the existing QoS concept for GSM and WCDMA systems is somewhat complex, the LTE-SAE (System
Architecture Evolution) targets a QoS concept that combines simplicity and flexible access with backward
compatibility. LTE-SAE has adopted a class-based QoS concept that gives operators a simple, yet effective solution to
differentiating between packet services (Per Beming, 2007).

10.2. The Service Data Flows and the Class-Based Method

FIGURE 28 - GENERAL FLOW OF QOS IN LTE
The QoS level of granularity in the LTE evolved packet system (EPS) is the bearer, which is a packet flow established
between the packet data network gateway (PDN-GW) and the user terminal (UE or MS). The traffic running between a
particular client application and a service can be differentiated into separate service data flows (SDFs).
SDFs mapped to the same bearer receive a common QoS treatment (e.g., scheduling policy, queue management
policy, rate shaping policy, radio link control (RLC) configuration). A bearer is assigned a scalar value referred to as a
QoS class identifier (QCI), which specifies the class to which the bearer belongs. QCI refers to a set of packet
forwarding treatments (e.g., scheduling weights, admission thresholds, queue management thresholds, and link layer
protocol configuration) preconfigured by the operator for each network element. The class-based method improves
the scalability of the LTE QoS framework. Bearer management and control in LTE follow the network-initiated
QoS control paradigm: the network initiates the establishment, modification, and deletion of bearers.
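As a concrete illustration of the class-based method, the sketch below models a network element preconfigured with a QCI-to-treatment table, so each bearer only needs to carry the scalar QCI. The Python layout is an assumption for illustration; the three sample rows follow the standardized Release 8 QCI table, but this is a sketch, not an implementation of any real stack.

```python
# Class-based QoS sketch: the node holds a preconfigured QCI table, so a
# bearer carries only the scalar QCI, not the full parameter set.
# Sample rows are illustrative, following the standardized Release 8 table.
QCI_TABLE = {
    # qci: (resource_type, priority, delay_budget_ms, packet_error_loss_rate)
    1: ("GBR",     2, 100, 1e-2),   # conversational voice
    5: ("non-GBR", 1, 100, 1e-6),   # IMS signalling
    9: ("non-GBR", 9, 300, 1e-6),   # default bearer (e.g., TCP bulk data)
}

class Bearer:
    def __init__(self, bearer_id, qci):
        if qci not in QCI_TABLE:
            raise ValueError("QCI not preconfigured on this node")
        self.bearer_id = bearer_id
        self.qci = qci

    def forwarding_treatment(self):
        """Look up the preconfigured treatment for this bearer's class."""
        resource_type, priority, delay_ms, pelr = QCI_TABLE[self.qci]
        return {"resource_type": resource_type, "priority": priority,
                "delay_budget_ms": delay_ms, "loss_rate": pelr}

voice = Bearer("eps-bearer-5", qci=1)
print(voice.forwarding_treatment()["delay_budget_ms"])  # 100
```

Because the table is preconfigured per node rather than signaled, adding a new class is an operator configuration change, which is what gives the scheme its scalability.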
10.3. LTE Bearers
Guaranteed bit rate (GBR): Dedicated network resources, related to a GBR value associated with the bearer, are
permanently allocated when the bearer is established or modified.
Non-guaranteed bit rate (non-GBR): The non-GBR bearer is the default bearer. Non-GBR bearers do not
guarantee any particular bit rate, like best effort (BE) in WiMAX.
In LTE the mapping of SDFs to a dedicated bearer is performed by IP five-tuple based packet filters, either
provisioned in the PCRF (the Policy and Charging Rules Function, the policy entity that forms the linkage between the service
and transport layers) or defined by application layer signaling. Any SDF that does not match any of the
existing dedicated bearer packet filters is mapped onto the default bearer (Vadada, 2010).
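The filter-based mapping just described can be sketched as follows. The packet and filter field names are assumptions for illustration, with `None` standing for a wildcard in a filter; a packet matching no dedicated-bearer filter falls through to the default bearer.

```python
# Sketch of packet-to-bearer mapping: each dedicated bearer carries one or
# more IP five-tuple packet filters; unmatched traffic goes to the default
# bearer. Field names and wildcard handling are illustrative.
def matches(flt, pkt):
    """A filter field of None acts as a wildcard."""
    return all(flt[k] is None or flt[k] == pkt[k]
               for k in ("src_ip", "dst_ip", "src_port", "dst_port", "proto"))

def map_to_bearer(pkt, dedicated_bearers, default_bearer="default"):
    for bearer_id, filters in dedicated_bearers:
        if any(matches(f, pkt) for f in filters):
            return bearer_id
    return default_bearer

voip_filter = {"src_ip": None, "dst_ip": "10.0.0.5",
               "src_port": None, "dst_port": 5060, "proto": "UDP"}
bearers = [("voip-bearer", [voip_filter])]

sip = {"src_ip": "192.0.2.1", "dst_ip": "10.0.0.5",
       "src_port": 40000, "dst_port": 5060, "proto": "UDP"}
web = {"src_ip": "192.0.2.1", "dst_ip": "198.51.100.7",
       "src_port": 40001, "dst_port": 80, "proto": "TCP"}

print(map_to_bearer(sip, bearers))  # voip-bearer
print(map_to_bearer(web, bearers))  # default
```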
LTE specifies a number of standardized QCI values with standardized characteristics, which are preconfigured in
the network elements. This enables multivendor deployments and roaming. The mapping of standardized QCI values
to standardized characteristics is captured in the table below. Besides the QCI, the following QoS attributes are associated
with the LTE bearer:

TABLE 11 - STANDARDIZED QCI VALUES ASSOCIATED WITH THE LTE BEARER

10.4. LTE definitions
QCI: A scalar that determines the packet forwarding characteristics (e.g., scheduling weights, admission
thresholds, queue management thresholds, and link layer protocol configuration).
Allocation and retention priority (ARP): Contains the allocation and retention priority for an SDF (Service Data Flow). It
is used by call admission control and overload control for control plane treatment of a bearer. The call admission
control uses the ARP to decide whether a bearer establishment or modification request is to be accepted or rejected.
Also, the overload control uses the ARP to decide which bearer to release during overload situations.
Maximum bit rate (MBR): The maximum sustained traffic rate the bearer may not exceed; only valid for GBR
bearers.
Aggregate MBR (AMBR): Used by non-GBR flows; the total bit rate allowed for a group of non-GBR bearers. In
3GPP Release 8 the MBR must be equal to the GBR, but for future 3GPP releases an MBR can be greater than a GBR.
The AMBR can help an operator to differentiate between its subscribers by assigning higher values of AMBR to its
higher-priority customers compared to lower-priority ones.
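To make the ARP's role in overload control concrete, the sketch below releases bearers in order of ARP priority until enough GBR capacity is freed. The data layout and the convention that a higher numeric ARP value means lower priority are assumptions for illustration; only the selection-by-ARP idea comes from the text.

```python
# Illustrative overload control using ARP: shed the lowest-priority bearers
# first (higher numeric ARP = lower priority in this sketch) until the
# requested GBR capacity has been freed.
def bearers_to_release(bearers, capacity_to_free):
    # bearers: list of (bearer_id, arp_priority, gbr_kbps)
    released, freed = [], 0
    for bearer_id, arp, gbr in sorted(bearers, key=lambda b: -b[1]):
        if freed >= capacity_to_free:
            break
        released.append(bearer_id)
        freed += gbr
    return released

active = [("video", 8, 2000), ("voice", 2, 64), ("stream", 12, 4000)]
print(bearers_to_release(active, 4500))  # ['stream', 'video']
```

The same priority value could drive admission in the other direction: a high-priority bearer request may be accepted even when accepting it forces the release of lower-priority bearers.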
10.5. LTE air interface scheduler
E-UTRAN is the air interface of LTE. Its main features are:
Peak download rates up to 292 Mbit/s and upload rates up to 71 Mbit/s depending on the user equipment
category.
Low data transfer latencies (sub-5 ms latency for small IP packets in optimal conditions), lower latencies for
handover and connection setup time than with previous radio access technologies.
Support for terminals moving at up to 350 km/h or 500 km/h depending on the frequency band.
Support for both FDD and TDD duplexing as well as half-duplex FDD with the same radio access technology.
Support for all frequency bands currently used by IMT systems as defined by the ITU-R.
Flexible bandwidth: carriers of 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz and 20 MHz are standardized.
Support for cell sizes from tens of meters in radius (femto- and picocells) up to macrocells of 100 km radius.
Simplified architecture: the network side of E-UTRAN is composed only of eNodeBs.
Support for inter-operation with other systems (e.g. GSM/EDGE, UMTS, CDMA2000, WiMAX...).
Packet-switched radio interface (Wikipedia, 2011).
The LTE air interface scheduler is responsible for dynamically allocating DL and UL air interface resources among the
bearers appropriately while maintaining their desired QoS level in both DL and UL directions. In order to make a
scheduling decision, the LTE air interface scheduler uses the following information as input:
Radio conditions at the UE measured at the eNB and/or reported by the UE.
The state of the different bearers, such as uplink buffer status reports (BSR), which are required to support
QoS-aware packet scheduling, and the elapsed time.
The QoS attributes of bearers and packet forwarding parameters associated with the QCIs.
The interference situation in the neighboring cells. The LTE scheduler can try to control inter-cell interference
on a slow time scale. This improves the QoE of the MSs at the cell edge.
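The inputs listed above can be combined into a per-bearer scheduling metric, as in the toy example below. 3GPP leaves the scheduler implementation-specific, so the per-QCI weights and the metric itself are assumptions for illustration, not a standardized algorithm.

```python
# Toy downlink scheduling metric combining the scheduler inputs above:
# a preconfigured per-QCI weight, radio conditions (reported CQI), and
# head-of-line delay derived from the buffer state.
QCI_WEIGHT = {1: 4.0, 5: 6.0, 9: 1.0}  # hypothetical per-class weights

def metric(bearer):
    # Favor bearers with a good channel, a high class weight and old packets.
    return (QCI_WEIGHT[bearer["qci"]]
            * bearer["cqi"]                           # radio conditions
            * (1.0 + bearer["hol_delay_ms"] / 50.0))  # elapsed time

def pick_next(bearers):
    # Skip bearers with empty buffers (BSR = 0); schedule the highest metric.
    backlogged = [b for b in bearers if b["buffer_bytes"] > 0]
    return max(backlogged, key=metric)["name"] if backlogged else None

cell = [
    {"name": "voice", "qci": 1, "cqi": 7,  "hol_delay_ms": 40, "buffer_bytes": 320},
    {"name": "web",   "qci": 9, "cqi": 15, "hol_delay_ms": 5,  "buffer_bytes": 9000},
    {"name": "idle",  "qci": 9, "cqi": 15, "hol_delay_ms": 0,  "buffer_bytes": 0},
]
print(pick_next(cell))  # voice
```

Here the voice bearer wins despite a worse channel, because its QCI weight and head-of-line delay dominate; a pure channel-quality scheduler would have picked the web bearer.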
10.6. A New Approach for the Uplink Admission Control
UTRAN LTE is targeted to efficiently guarantee the quality of service (QoS) of services such as audio/video
streaming, gaming and voice over IP (VoIP). To provide QoS control, it is necessary that AC and packet scheduling (PS)
are QoS aware (K.I. Pedersen, 2006). The QoS-aware AC determines whether a new radio bearer should be granted or
denied access based on whether the required QoS of the new radio bearer can be fulfilled while guaranteeing the required QoS
of the in-progress sessions (V8.3.0, 2008).
Because of the use of an orthogonal multiple access scheme, the AC algorithms for WCDMA systems are not
suitable for LTE. A downlink AC algorithm providing QoS for IEEE 802.16e, which has a PHY-MAC similar to LTE's, is
proposed in (S.S. Jeong, 2005). But this downlink AC algorithm cannot be used in the uplink because of an additional
degree of freedom, i.e., the user transmit power, which is different for each user and varies with the transmission time
interval (TTI) due to power control.

In (M. Anas, 2008) the authors propose a new AC algorithm for LTE utilizing the fractional power control (FPC)
formula agreed in 3GPP [6]. This paper considers GBR as the only QoS criterion of the bearer, and each user is assumed
to have a single bearer.
10.6.1. Reference AC Algorithm
This AC algorithm decides to admit a new user if the sum of the GBR of the new user and of the K existing users is less
than or equal to a predefined average uplink cell throughput, C_avg, as expressed in the next formula:

GBR_new + Σ_{i=1..K} GBR_i ≤ C_avg

The problem of this AC algorithm is that it treats all the users equally and does not differentiate them based on
their location in the cell. Furthermore, the fixed value of C_avg does not represent the actual average uplink cell
throughput. For C_avg → ∞ this algorithm will admit all the users requesting admission, which is equivalent to the case of
no AC.
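The reference criterion is simple enough to state directly in code. The sketch below uses kbps for all rates and a hypothetical C_avg value; it is an illustration of the admission test, not of any deployed implementation.

```python
# Reference AC: admit the new user only if GBR_new plus the sum of the GBRs
# of the K existing users stays within a fixed, predefined average uplink
# cell throughput c_avg. All rates in kbps; values are illustrative.
def reference_ac(gbr_new, existing_gbrs, c_avg):
    return gbr_new + sum(existing_gbrs) <= c_avg

existing = [1000, 2000, 1500]              # K = 3 existing users
print(reference_ac(500, existing, 5000))   # True  (5000 <= 5000)
print(reference_ac(600, existing, 5000))   # False (5100 > 5000)

# With c_avg effectively infinite the check always passes, i.e. no AC:
print(reference_ac(10**9, existing, float("inf")))  # True
```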
10.6.2. Proposed AC Algorithm
The proposed AC algorithm checks whether the current resource allocation can be modified so as to admit the new user
while satisfying the GBR requirements of all the active users and of the new user. Hence, the admission criterion for the new
user is that the sum of the number of physical resource blocks (PRBs) per TTI required by the new user requesting
admission, N_new, and by the existing users is less than or equal to the total number of PRBs in the system bandwidth, N_total. This can
be expressed as:

N_new + Σ_{i=1..K} N_i ≤ N_total

where K is the number of existing users in the cell and N_i is the number of PRBs per TTI required by the i-th existing user. Hence, the AC problem is to calculate the required number of
PRBs per TTI of a user while satisfying its GBR requirement and transmit power constraint.
The tests of the authors in (M. Anas, 2008) showed that the FPC based AC algorithm is robust and tunes itself
inherently to the load conditions, unlike the reference AC, whose fixed threshold requires tuning. Hence, the FPC based AC is a good QoS-aware
AC algorithm for the LTE uplink. A similar AC design approach could also be used for other NextG standards that use
orthogonal multiple access techniques and FPC in the uplink.
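The PRB-based criterion can be sketched as below. Here a caller-supplied per-user rate-per-PRB estimate stands in for the FPC-based estimate of (M. Anas, 2008), so the numbers are assumptions for illustration; 25 PRBs corresponds to a 5 MHz system bandwidth.

```python
# Proposed AC sketch: estimate the PRBs/TTI each user needs to meet its GBR
# (from a per-user achievable rate per PRB, standing in for the FPC-based
# estimate) and admit the new user only if the total fits in N_total PRBs.
import math

def prbs_needed(gbr_kbps, kbps_per_prb):
    """PRBs/TTI required to satisfy a user's GBR at its current spectral
    efficiency; users near the cell edge need more PRBs for the same GBR."""
    return math.ceil(gbr_kbps / kbps_per_prb)

def proposed_ac(new_user, existing_users, n_total):
    # each user: (gbr_kbps, achievable kbps per PRB)
    total = sum(prbs_needed(g, r) for g, r in existing_users + [new_user])
    return total <= n_total

existing = [(1000, 300), (1000, 100)]   # a cell-center and a cell-edge user
print(proposed_ac((2000, 500), existing, n_total=25))  # True  (cell center)
print(proposed_ac((2000, 50),  existing, n_total=25))  # False (cell edge)
```

Unlike the reference algorithm, the same GBR request is accepted or rejected depending on the user's spectral efficiency, i.e. on its location in the cell, and the threshold N_total is a physical constant of the bandwidth rather than a tuned parameter.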
11. Comparison of WiMAX and LTE QoS
Here, we focus on a comparison of the QoS framework between LTE and IEEE 802.16e/IEEE 802.16m (WiMAX) at
the air interface:
QoS transport unit: The basic QoS transport unit in the IEEE 802.16e/IEEE 802.16m system is an SF, which is a
unidirectional flow of packets either UL from the MS/AMS or DL packets from the BS/ABS. The basic QoS transport in
LTE is a bearer between UE and the PDN-GW. All packets mapped to the same bearer receive the same treatment.
QoS scheduling types: There are five scheduling service types in IEEE 802.16e/IEEE 802.16m: UGS, ertPS, rtPS, nrtPS,
and BE. LTE supports GBR and non-GBR bearers. The GBR bearer is provided by the network with a
guaranteed service rate, and its mechanism resembles rtPS; the non-GBR bearer has no such requirement and performs like BE in
IEEE 802.16e/IEEE 802.16m.
QoS parameters per transport unit: Depending on the SF type, IEEE 802.16e/IEEE 802.16m can control
maximum packet delay and jitter, maximum sustained traffic rate (MSTR), minimum reserved traffic rate (MRTR), and
traffic priority. LTE MBR and GBR are similar to IEEE 802.16e/IEEE 802.16m MSTR and MRTR, respectively. However,
MBR and GBR are only attributes of GBR bearers, while in IEEE 802.16e/IEEE 802.16m even a BE SF can be rate limited
using its MSTR. Also, with 3GPP Release 8, GBR and MBR are set equal, while IEEE 802.16e/IEEE 802.16m allows the
operator to select independent values for MSTR and MRTR. On the other hand, LTE AMBR allows the operator to rate
cap the total non-GBR bearers of a subscriber.
QoS handling in the control plane: The SF QoS parameters are signaled in IEEE 802.16e/IEEE 802.16m via
DSx/AAI-DSx messages. In LTE the QCI and associated nine standardized characteristics are not signaled on any
interface. Network initiated or client initiated QoS are both supported in IEEE 802.16e/IEEE 802.16m systems.
Therefore, both operator managed service and unmanaged service can be supported. The flexible architecture gives
the mobile client opportunities for differentiation. LTE only supports network initiated QoS control.

QoS user plane treatment: The ARP parameter in LTE provides the following flexibilities to the operator:
Accept or reject establishment or modification of bearers during the call admission control decision based on
not only the requested bandwidth, available bandwidth, or number of established bearers, but also the
priority of the bearer
Selectively tear down bearers based on their priorities during an overload situation
12. 4G QoS
12.1. QoS Optimization Architecture
The 4G mobile networks will replace the existing mobile phone networks with an IP based network. With the arrival
of IPv6, every device in the world can easily get a unique IP address. This allows full IP based communications through
a mobile device. If 4G is deployed efficiently, it can solve many problems related to connection speed, performance,
connectivity, and end user experience. QoS guarantees are important when network capacity is insufficient,
especially for real-time streaming multimedia applications such as voice over IP, online games and IP telephony, since
these often require a fixed bit rate and are delay sensitive, and in networks where capacity is a limited resource, as
we observe in cellular data communication.
In the absence of network congestion, however, QoS mechanisms are not required. QoS is particularly
important in packet-switched telecommunication networks for traffic management, and describes the possible reservation
and control mechanisms. QoS is vital when network jitter and congestion increase, as in
digital media streaming applications, web TV, voice over IP, etc. (Xia Gao, 2004). In networks where the traffic
load is normal, QoS may not be as necessary, unless a congestion state appears that affects service availability, as
we observe in cellular mobile communication networks. In terms of services, applications that use VoIP, video
streaming, web, e-mail access and file transfer have completely different prerequisites, and the network should be
able to differentiate their service. Scalability concerns favor a differentiated services approach. This approach
rests on the assumption that requests are controlled at the borders of the network, and that end-to-end QoS assurance is
achieved by a concatenation of multiple managed entities. With such requirements, network resource control must be
under the control of the network service provider. It has to be able to control every resource, and to grant or deny
user and service access. This requirement calls for flexible and robust explicit connection admission control
mechanisms at the network edge, able to take fast decisions on user requests.
The proposed QoS architecture has been designed by taking into account mobility, QoS, and other relevant
issues such as jitter, queuing and bandwidth.
12.2. Components of the Architecture
The proposed QoS architecture is mainly composed of two principal components:
1. The Services Archives Unit denoted by SAU.
2. The Cumulative Services Archives Unit denoted by CSAU.
The proposed architecture is shown in the next figure.

FIGURE 29 - PROPOSED QOS ARCHITECTURE IN 4G

12.2.1. SAU - Service Archives Unit
The proposed SAU is a server machine with an IP based detailed database of user records and an application
program, covering a single access network. When a user gets connected to the network, the SAU traces the user's network ID,
matches it with the allocated IP address and, with the help of the application program, determines the types of services
(voice, multimedia, WAP, etc.) used during the communication session. The SAU thus creates a log of the services
used by the various customers. Later on, this log may be transmitted to the central processing machine,
termed CSAU, by means of a transmitter (for example Mobile IP) as shown in the figure. The proposed architecture
can be used to identify performance tuning/diagnosis issues through measurements of factors like
jams, network jitter, network congestion, etc.
12.2.2. CSAU - Cumulative Service Archives Unit
The CSAU is meant to receive the data from the SAUs through its receiver. After receiving it, the CSAU generates a
summary of the services used in the different access networks in the form of a graph. This generated graph summarizes
the services used, service usage time slots, breaks and re-logins, and network jitter and congestion states. Once
implemented, the proposed architecture will be vital in diagnosing QoS problems and hence optimizing the network
accordingly. The data flow diagram of the proposed analysis-unit-based 4G network is shown in the next figure:

FIGURE 30 - COMPLETE DATA FLOW DIAGRAM OF SAU ARCHITECTURE
Once embedded in the 4G network, the proposed structure can easily derive calculations for QoS enhancement
without disturbing the other concurrent processes. Moreover, the architecture places minimal load on the resources
of the 4G communication infrastructure, as the model is designed to have its own hardware and software
base to perform its operations.
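The SAU/CSAU pipeline described above can be sketched as a small data model: each SAU logs (network ID, IP, service) records for its access network and forwards them, and the CSAU aggregates per-service usage across networks. All record fields, class names and the aggregation shown are assumptions for illustration of the proposal, not part of any standard.

```python
# Minimal sketch of the proposed SAU/CSAU pipeline: SAUs log per-user
# service records for one access network each; the CSAU receives the logs
# and builds a cross-network usage summary.
from collections import Counter

class SAU:
    def __init__(self, network_id):
        self.network_id = network_id
        self.log = []

    def record(self, user_ip, service):
        # One entry per observed service use in this access network.
        self.log.append({"network": self.network_id,
                         "ip": user_ip, "service": service})

class CSAU:
    def __init__(self):
        self.records = []

    def receive(self, sau):
        # Stands in for the transmitter/receiver step (e.g. over Mobile IP).
        self.records.extend(sau.log)

    def usage_summary(self):
        # Per-service usage counts across all access networks.
        return Counter(r["service"] for r in self.records)

sau1, sau2 = SAU("net-A"), SAU("net-B")
sau1.record("10.0.0.1", "voice")
sau1.record("10.0.0.2", "multimedia")
sau2.record("10.1.0.9", "voice")

csau = CSAU()
csau.receive(sau1)
csau.receive(sau2)
print(csau.usage_summary()["voice"])  # 2
```

Note that the records carry only a network ID, IP and service type, matching the claim that the architecture never collects personal or confidential information.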
12.3. Discussion
As computer and telecommunication networks converge, there is a dire need to improve the
availability of uninterrupted, high-quality services to users. This requires architectures with huge capacities for
handling data transfer and service provision. Such architectures must be flexible enough to handle multiple types of
data acquisition requests. A network must keep a personalized, intelligent profile of user requirements so as to
facilitate users during network access. The suggested architecture fulfills the requirement of not disturbing the
Authentication, Authorization and Accounting mechanisms, as it fetches the user data after connectivity for both static
and dynamic users. In areas with heavy loads, 4G users and terminals can hand over between any of the
available technologies without breaking their network connections, and they maintain their QoS levels. In order for
the service provider to keep a track record of the user profile, a user is approached both inside and outside his home
network. If there are many service providers, they all keep exchanging information to keep track of the services
being used by their customers (Marques, 2003). In the proposed architecture all the network service discrepancies can
be removed without directly affecting 4G network user performance, because the structure
works as an attached module and does not affect the already running components of the 4G system. The proposed
architecture gathers information through bandwidth utilization and user demand factors. The data acquired by
the proposed architecture is purely related to QoS optimization. A main feature of the proposed QoS architecture is
that it never asks for any personal or confidential information. The type of data received determines the
nature of the user requirements: it identifies the services being utilized by a user on an access network, e.g. the
connection establishment time defines the time slots of service usage. All this information is vital
for calculating the trend of user needs, and it makes network optimization easier for the professionals, who can model
the network accordingly and derive a general trend of user-specific requirements. Once a measurement of user
requirements is achieved through this architecture, optimization can be directed in the right direction without
disturbing the user services, hence saving extra costs and effort.
The proposed SAU architecture has the capability and flexibility to fit into an already functional 4G network. It can
be customized according to the network usage priorities so as to considerably improve a network's QoS performance.
The concept will be refined in the future by a field trial with real users, after an initial test phase in controlled
environments (Aaqif Afzaal Abbasi, 2010).



13. References
[1-01] TECHNICAL WHITE PAPER: Long Term Evolution (LTE): A Technical Overview, Motorola Inc.
[1-02] E.Dahlman, S. Parkvall, J.Skold, P.Beming, 3G Evolution HSPA and LTE for Mobile Broadband 2nd
Edition Oct 2008, Academic Press.
[1-03] Technical Overview of 3GPP LTE, Hyung G. Myung. May 18, 2008.
[1-04] 3GPP LTE presentation, Kyoto, May 22nd 2007. 3GPP TSG RAN Chairman (Alcatel-Lucent).
[1-05] LTE and UMTS Terminology and Concepts, Chris Reece, Subject Matter Expert - 8/2009. Award
Solutions.
[1-06] Protocol Stack Testing for LTE, Effective test strategies can help transform UMTS into a cellular
wideband system, Christina Gener, Rohde & Schwarz. www.tmworld.com. 14-12-09
[1-07] LTE E-UTRAN and its Access Side Protocols, Suyash Tripathi, Vinay Kulkarni, and Alok Kumar,
Continuous Computing. http://www.ccpu.com/papers/lte-eutran. 14-12-09.
[1-08] 3GPP TS 36.300 V9.1.0 (2009-09), Technical Specification Release 9. www.3gpp.org. 14-12-09.
[1-09] LTE Whitepaper Santosh Kumar Dornal. http://wired-n-wireless.blogspot.com 26. 14-12-09.
[1-10] Agilent - 3GPP Long Term Evolution.
[1-11] Tejas Bendahare - LTE and WiMAX Comparison.
[1-12] Ericsson - LTE: The Evolution of Mobile Broadband.
[1-13] Ericsson - LTE: An Introduction.
[1-14] H. G. Myung - Technical Overview of 3GPP LTE.
[1-15] A. Larmo, M. Lindström, M. Meyer, G. Pelletier, J. Torsner, H. Wiemann, Ericsson Research - The
LTE Link-Layer Design.
[1-16] EventHelix.com Inc. - 3GPP LTE Channels and MAC Layer.
[1-17] Motorola - Long Term Evolution (LTE): A Technical Overview.
[1-18] EventHelix.com Inc. - 3GPP LTE Packet Data Convergence Protocol (PDCP) Sub Layer.
[1-19] Freescale - Long Term Evolution Protocol Overview.
[1-20] EventHelix.com Inc. - 3GPP LTE Radio Link Control (RLC) Sub Layer.
[1-21] Duong - 802.16 MAC layer: structure and QoS support.
