Long Term Evolution
What is LTE?
Long-Term Evolution (LTE) of the UMTS Terrestrial Radio Access and Radio Access
Network is aimed at commercial deployment in the 2010 timeframe. Goals for the
evolved system include support for high peak data rates, low latency, improved system
capacity and coverage, reduced operating costs, multi-antenna support, efficient support
for packet data transmission, flexible bandwidth operations and seamless integration with
existing 2G and 3G systems. LTE supports operations in Frequency Division Duplex
(FDD) and Time Division Duplex (TDD) modes to provide operators with deployment
flexibility regarding spectrum allocation. Support for Half Duplex FDD is also included.
LTE can deliver peak rates exceeding 100 Mbps in a 20 MHz channel (and up to
172 Mbps using 2x2 Multiple Input Multiple Output (MIMO)) using the Orthogonal Frequency
Division Multiple Access (OFDMA) radio interface in the downlink, and 50 Mbps (in a
20 MHz channel) using Single Carrier Frequency Division Multiple Access (SC-FDMA)
in the uplink. Developing in parallel, LTE and HSPA+ are expected to achieve similar
levels of spectral efficiency. A flexible and efficient user of spectrum, LTE supports
both TDD and FDD operation. Latency is expected to be less than 100 ms for call setup
and less than 5 ms in the radio access network (RAN) during data transfer.
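As a rough illustration of where such peak-rate figures come from, the sketch below works out the raw downlink symbol rate for a 20 MHz carrier with 64QAM; it deliberately ignores control, reference-signal and channel-coding overhead, which is why the quoted figures of around 100 Mbps and 172 Mbps come out lower than these raw numbers.

```python
# Rough, illustrative peak-rate arithmetic for the LTE downlink (normal cyclic
# prefix). These are raw modulation-symbol bit rates; control channels,
# reference signals and channel coding are ignored, so real peak rates are lower.

SUBCARRIERS_PER_RB = 12   # one resource block is 12 x 15 kHz = 180 kHz
RBS_IN_20MHZ       = 100  # 100 resource blocks fit in a 20 MHz carrier
SYMBOLS_PER_MS     = 14   # OFDM symbols per 1 ms subframe (normal CP)
BITS_PER_SYMBOL    = 6    # 64QAM carries 6 bits per modulation symbol

def raw_peak_mbps(mimo_layers: int = 1) -> float:
    """Raw modulation-symbol bit rate in Mbps, before any overhead."""
    res_per_ms = RBS_IN_20MHZ * SUBCARRIERS_PER_RB * SYMBOLS_PER_MS
    bits_per_ms = res_per_ms * BITS_PER_SYMBOL * mimo_layers
    return bits_per_ms / 1e3  # bits per ms -> Mbps

print(raw_peak_mbps(1))  # ~100.8 Mbps with a single antenna stream
print(raw_peak_mbps(2))  # ~201.6 Mbps with 2x2 MIMO, before overhead
```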
LTE Technology:
When the project to define the evolution of 3G networks was started, the following
targets were set by network operators as the performance design objectives. It was
against these objectives that the different solutions were developed by various
organizations and then proposed to 3GPP. 3GPP then conducted a study to consider the
proposals, evaluate the performance of each, and make a recommendation for the
way forward that would form the basis of LTE.
Latency:
- Transition time of less than 100 ms from a camped state, such as Release 6 idle
mode, to an active state such as Release 6 CELL_DCH.
- Transition time of less than 50 ms between a dormant state such as Release 6
CELL_PCH and an active state such as Release 6 CELL_DCH.
User Throughput:
- Downlink: average user throughput per MHz, 3 to 4 times Release 6 HSDPA
- Uplink: average user throughput per MHz, 2 to 3 times Release 6 Enhanced
Uplink
Spectrum Efficiency:
- Downlink: In a loaded network, target for spectrum efficiency (bits/sec/Hz/site), 3 to
4 times Release 6 HSDPA
- Uplink: In a loaded network, target for spectrum efficiency (bits/sec/Hz/site), 2 to 3
times Release 6 Enhanced Uplink
Mobility:
- E-UTRAN should be optimized for low mobile speeds from 0 to 15 km/h
- Higher mobile speeds between 15 and 120 km/h should be supported with high
performance
- Mobility across the cellular network shall be maintained at speeds from 120 km/h to
350 km/h (or even up to 500 km/h depending on the frequency band)
Coverage:
- The throughput, spectrum efficiency and mobility targets above should be met for 5 km
cells, and with a slight degradation for 30 km cells. Cell ranges up to 100 km should
not be precluded.
Complexity:
- Minimize the number of options
- No redundant mandatory features
So we can see that LTE refers to a new radio access technology that delivers higher data
rates (50-100 Mbps) and fast connection times. The technology solution chosen by
3GPP uses OFDMA access technology and MIMO, together with higher-order (64QAM)
modulation. LTE uses the same principles as HSPA (in existing Release 6 3GPP
networks) for scheduling of shared channel data, HARQ, and fast link adaptation
(AMC, adaptive modulation and coding). This enables the network to dynamically
optimise for the highest cell performance according to operator demands (e.g. speed,
capacity, etc.).
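To make the link adaptation (AMC) idea concrete, here is a minimal sketch of how a scheduler might pick a modulation and coding scheme from a reported channel quality. The thresholds and table entries are illustrative assumptions, not the 3GPP CQI-to-MCS tables.

```python
# Illustrative adaptive modulation and coding (AMC): pick a modulation order and
# code rate per user, per scheduling interval, from the reported channel quality.
# The thresholds below are made-up examples for illustration only.

# (snr_threshold_db, modulation, bits_per_symbol, code_rate)
MCS_TABLE = [
    (1.0,  "QPSK",  2, 0.33),
    (6.0,  "QPSK",  2, 0.66),
    (10.0, "16QAM", 4, 0.50),
    (14.0, "16QAM", 4, 0.75),
    (18.0, "64QAM", 6, 0.66),
    (22.0, "64QAM", 6, 0.85),
]

def select_mcs(snr_db: float):
    """Return the highest-rate entry whose SNR threshold the channel meets."""
    chosen = MCS_TABLE[0]
    for entry in MCS_TABLE:
        if snr_db >= entry[0]:
            chosen = entry
    return chosen

for snr in (3.0, 12.0, 25.0):
    thr, mod, bits, rate = select_mcs(snr)
    print(f"SNR {snr:5.1f} dB -> {mod}, {bits * rate:.2f} info bits/symbol")
```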
Downlink
• OFDMA multiplexing
• MIMO and transmit diversity
• MBMS (Multimedia Broadcast Multicast Service)
• Scheduling, link adaptation, HARQ and measurements as in 3.5G
Uplink
• Single Carrier FDMA access, with BPSK, QPSK, 8PSK and 16QAM modulation
• Transmit diversity
• Scheduling, link adaptation, HARQ and measurements as in 3.5G
• Random Access Procedures
• In principle, the peak throughput with code re-use (spatial multiplexing) is M times
the rate achievable with a single transmit antenna, where M is the number of antennas.
OFDMA is used to provide higher data rates than 3G through the use of a wider
transmission bandwidth. The 5 MHz channel is limiting for WCDMA data rates, and use
of a wider RF band (20 MHz) leads to group delay problems that limit the data rate.
OFDM therefore breaks the 20 MHz band down into many narrow bands (sub-channels)
that do not suffer from the same limitation. Each sub-channel is modulated at an
optimum data rate and the bands are then combined to give the total data throughput.
Algorithms select the suitable sub-channels according to the RF environment,
interference, loading, quality of each channel, etc. These are then re-allocated on a
burst-by-burst level (each sub-frame, or 1 ms). The OFDM frequencies in LTE have
been defined with a carrier spacing of 15 kHz. Each ‘resource block’ (which represents
an allocation of radio resource to be used for transmission) is a group of 12 adjacent
sub-carriers (therefore 180 kHz) with a slot length of 0.5 ms.
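The resource-grid arithmetic above can be checked with a short sketch; the bandwidth-to-resource-block mapping used below is the commonly quoted set of LTE channel bandwidths and is included here only as an illustration.

```python
# Sketch of the LTE downlink resource grid arithmetic described above:
# 15 kHz subcarriers, 12 subcarriers (180 kHz) per resource block, 0.5 ms slots.
SUBCARRIER_SPACING_HZ = 15_000
SUBCARRIERS_PER_RB = 12
RB_BANDWIDTH_HZ = SUBCARRIER_SPACING_HZ * SUBCARRIERS_PER_RB  # 180 kHz

# Commonly quoted LTE channel bandwidths and their resource-block counts.
CHANNEL_RBS = {1.4e6: 6, 3e6: 15, 5e6: 25, 10e6: 50, 15e6: 75, 20e6: 100}

for bw_hz, n_rb in CHANNEL_RBS.items():
    occupied = n_rb * RB_BANDWIDTH_HZ
    print(f"{bw_hz/1e6:4.1f} MHz channel: {n_rb:3d} RBs, "
          f"{occupied/1e6:5.2f} MHz occupied by subcarriers")
```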
MIMO is an abbreviation of “Multiple Input Multiple Output”. This is an antenna
technology, combined with signal processing, that can increase the capacity of a radio
link. In LTE, the user data is separated into 2 data streams, which are fed to 2 separate
TX antennas and received by 2 separate RX antennas, so the data is sent over 2 separate
RF paths. The algorithm used to split and then recombine the paths allows the system to
exploit the independence of these 2 paths (the RF losses and interference are not the
same on both) to achieve a higher data throughput than simply sending the same data on
2 paths. This is done by separating the data sets in both space and time. The received
signals are then processed to remove the effects of signal interference on each, thus
creating 2 separate signal paths that occupy the same RF bandwidth at the same time.
In perfect conditions, where each RF path is completely isolated from the other, this
gives a theoretical doubling of achievable peak data rates and throughput.
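As a rough illustration of how two streams sharing the same bandwidth can be separated, the sketch below sends two symbols over a random 2x2 channel and recovers them with a zero-forcing (channel inversion) receiver. Zero-forcing is only one of several possible MIMO detectors and is not tied to any particular LTE transmission mode; it is used here simply because it is the easiest to show.

```python
import numpy as np

# Toy 2x2 spatial multiplexing example: two symbols are sent at the same time,
# on the same frequency, from two TX antennas, and separated at the receiver by
# inverting the 2x2 channel matrix (zero-forcing). Real receivers use more
# robust detectors, and the channel must first be estimated from reference signals.
rng = np.random.default_rng(0)

H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # 2x2 fading channel
x = np.array([1 + 1j, -1 - 1j])                             # two QPSK symbols
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))

y = H @ x + noise             # what the two RX antennas observe
x_hat = np.linalg.inv(H) @ y  # zero-forcing: undo the channel mixing

print(np.round(x_hat, 2))     # close to the transmitted [1+1j, -1-1j]
```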
The protocol stack for LTE follows the same layered signalling model as is used in
3GPP W-CDMA networks today. In this model, the Non Access Stratum (NAS) sits
above Layer 3 and is served by the logical channels. These logical channels provide the
services and functions required by the higher layers (NAS) to deliver the applications
and services. The logical channels are mapped onto transport channels in Layer 2,
under the control of the RRC. The Layer 2 entities provide control and management of
the data flow, such as re-transmission, error control and prioritisation, and user data is
handled in Layer 2 by the Packet Data Convergence Protocol (PDCP) entity. The flow
of data over the air interface is then managed by the RLC and MAC entities together
with the PHY layer (Layer 1), where the transport channels are mapped onto the
physical channels that are transmitted over the air.
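The layering just described amounts to a mapping from logical channels (what is carried) down to transport and physical channels (how it is carried). The sketch below is a simplified, downlink-only illustration of that mapping for some of the channels named in this document; the full mapping in 3GPP TS 36.300/36.211 has additional cases, so treat this as a summary rather than the complete definition.

```python
# Simplified downlink channel mapping, logical -> transport -> physical,
# for a few of the channels discussed in this document. The real 3GPP mapping
# has more cases (e.g. BCCH system information blocks also travel on the
# DL-SCH), so this is illustrative only.
DL_CHANNEL_MAP = {
    "BCCH": ("BCH",    "PBCH"),   # broadcast system information
    "PCCH": ("PCH",    "PDSCH"),  # paging
    "CCCH": ("DL-SCH", "PDSCH"),  # common control
    "DCCH": ("DL-SCH", "PDSCH"),  # dedicated control (RRC connected)
    "DTCH": ("DL-SCH", "PDSCH"),  # dedicated user traffic
    "MCCH": ("MCH",    "PMCH"),   # MBMS control
    "MTCH": ("MCH",    "PMCH"),   # MBMS traffic
}

for logical, (transport, physical) in DL_CHANNEL_MAP.items():
    print(f"{logical:5s} -> {transport:6s} -> {physical}")
```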
LTE radio channels are separated into two types: physical channels and physical signals.
A physical channel corresponds to a set of resource elements carrying information
originating from higher layers, and is the interface defined between TS 36.212 and
TS 36.211. A physical signal corresponds to a set of resource elements used only by the
physical layer (PHY); it does not carry information originating from higher layers.
Based on the above architecture, the downlink consists of 5 physical channels and 2
physical signals.
Each subframe is assumed to be self-decodable, i.e. the BCH can be decoded from a
single reception, assuming sufficiently good channel conditions.
So we can see that both the uplink and downlink are composed of a shared channel (to
carry the data) together with its associated control channel. In addition, there is a
downlink common control channel that provides non-user-data services (broadcast of
cell information, access control, etc.).
Within the LTE protocol stack, the physical layer channels (described above) are
mapped through to the higher layers via the functions of the MAC and RLC layers of
the protocol stack. This is shown below for both the user plane and control plane data.
Here we can see the simplification of the network due to the SAE architecture
discussed previously. The eNB is responsible for managing the air interface and flow
control of the data, and the aGW is responsible for the higher layer control of user data
within PDCP and NAS services.
Control channels are used for transfer of control plane information only. The control
channels are:
• Broadcast Control Channel (BCCH)
A downlink channel for broadcasting system control information.
• Paging Control Channel (PCCH)
A downlink channel that transfers paging information. This channel is used when the
network does not know the cell location of the UE.
• Common Control Channel (CCCH)
A channel for transmitting control information between UEs and the network, used by
UEs that have no RRC connection with the network.
• Multicast Control Channel (MCCH)
A point-to-multipoint downlink channel used for transmitting MBMS control
information from the network to the UE, for one or several MTCHs. This channel is
only used by UEs that receive MBMS.
• Dedicated Control Channel (DCCH)
A point-to-point bi-directional channel that transmits dedicated control information
between a UE and the network. Used by UEs when they have an RRC connection.
Traffic channels are used for the transfer of user plane information only. The traffic
channels are:
• Dedicated Traffic Channel (DTCH)
A Dedicated Traffic Channel (DTCH) is a point-to-point channel, dedicated to one
UE, for the transfer of user information. A DTCH can exist in both uplink and
downlink.
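The logical channel definitions above can be captured compactly as data. The sketch below simply restates the list in code form, with direction and plane taken from the descriptions in the text; such a summary can be handy when writing protocol analysers or test tooling.

```python
from dataclasses import dataclass

# The logical channels listed above, restated as data. "plane" distinguishes
# control channels from traffic channels; "direction" is as described in the
# text (DL = downlink only, BI = bidirectional point-to-point).
@dataclass(frozen=True)
class LogicalChannel:
    name: str
    plane: str       # "control" or "traffic"
    direction: str   # "DL", "UL" or "BI"
    note: str

LOGICAL_CHANNELS = [
    LogicalChannel("BCCH", "control", "DL", "broadcast of system information"),
    LogicalChannel("PCCH", "control", "DL", "paging when the UE's cell is unknown"),
    LogicalChannel("CCCH", "control", "BI", "UEs without an RRC connection"),
    LogicalChannel("MCCH", "control", "DL", "MBMS control, point-to-multipoint"),
    LogicalChannel("DCCH", "control", "BI", "dedicated control, RRC connected"),
    LogicalChannel("DTCH", "traffic", "BI", "dedicated user data, UL and DL"),
]

for ch in LOGICAL_CHANNELS:
    print(f"{ch.name}: {ch.plane:7s} {ch.direction:2s}  {ch.note}")
```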