Software Networks: Virtualization, SDN, 5G and Security
Ebook · 377 pages · 3 hours

About this ebook

The goal of this book is to describe new concepts for the next generation of the Internet. The architecture is based on virtual networking using Cloud and datacenter facilities. The main problems concern 1) the placement of virtual resources for opening a new network on the fly, and 2) the urbanization of virtual resources implemented on physical network equipment. This architecture relies on mechanisms capable of automatically controlling the placement of all virtual resources within the physical network.

In this book, we describe how to create and delete virtual networks on the fly. Indeed, the system is able to create any new network with any kind of resource (e.g. virtual switches, virtual routers, virtual LSRs, virtual optical paths, virtual firewalls, virtual SIP-based servers, virtual devices, virtual servers, virtual access points, and so on). We show how this architecture is compatible with new advances in SDN (Software-Defined Networking), new high-speed transport protocols such as TRILL (Transparent Interconnection of Lots of Links) and LISP (Locator/Identifier Separation Protocol), NGN, IMS, new-generation Wi-Fi, and 4G/5G networks. Finally, we introduce the Cloud of security and the virtualization of secure elements (smartcards), which should fundamentally transform how the Internet is secured.

Language: English
Publisher: Wiley
Release date: Aug 5, 2015
ISBN: 9781119007968

    Book preview

    Software Networks - Guy Pujolle

    Introduction

    Currently, networking technology is experiencing its third major wave of revolution. The first was the move from circuit-switched mode to packet-switched mode, and the second from hardwired to wireless mode. The third revolution, which we examine in this book, is the move from hardware to software mode. Let us briefly examine these three revolutions, before focusing more particularly on the third, which will be studied in detail in this book.

    I.1. The first two revolutions

    A circuit is a collection of hardware and software elements, allocated to two users – one at each end of the circuit. The resources of that circuit belong exclusively to those two users; nobody else can use them. In particular, this mode has been used in the context of the public switched telephone network (PSTN). Indeed, telephone voice communication is a continuous application for which circuits are very appropriate.

    A major change in traffic patterns brought about the first great revolution in the world of networks, pertaining to asynchronous and non-uniform applications. The data transported for these applications make only very incomplete use of circuits, but are well suited to packet-switched mode. When a message needs to be sent from a transmitter to a receiver, the data for transmission are grouped together in one or more packets, depending on the total size of the message. For a short message, a single packet may be sufficient; however, for a long message, several packets are needed. The packets then pass through intermediary transfer nodes between the transmitter and the receiver, and ultimately make their way to the end-point. The resources needed to handle the packets include the memories in the nodes, the links between the nodes, and the sender and receiver themselves. These resources are shared between all users. Packet-switched mode requires a physical architecture and protocols – i.e. rules – to achieve end-to-end communication. Many different architectural arrangements have been proposed, using protocol layers and associated algorithms. In the early days, each hardware manufacturer had their own architecture (e.g. SNA, DNA, DECnet, etc.). Then, the OSI model (Open System Interconnection) was introduced in an attempt to make all these different architectures mutually compatible. The failure to achieve compatibility between hardware manufacturers, even with a common model, led to the re-adoption of one of the very first architectures introduced for packet-switched mode: TCP/IP (Transmission Control Protocol/Internet Protocol).

    The second revolution was the switch from hardwired mode to wireless mode. Figure I.1 shows that, by 2020, terminal connection should be essentially wireless, established using Wi-Fi or 3G/4G/5G technology. In fact, increasingly, the two techniques are used together, as they are becoming mutually complementary rather than competing with one another. In addition, when we look at the curve shown in Figure I.2, plotting worldwide user demand against the growth of what 3G/4G/5G technology is capable of delivering, we see that the gap is so significant that only Wi-Fi technology is capable of handling the demand. We shall come back to wireless architectures, because the third revolution also has a significant impact on this transition toward radio-based technologies.

    Figure I.1. Terminal connection by 2020

    Figure I.2. The gap between technological progress and user demand. For a color version of the figure, see www.iste.co.uk/pujolle/software.zip

    I.2. The third revolution

    The third revolution, which is our focus in this book, pertains to the move from hardware-based mode to software-based mode. This transition is taking place because of virtualization, whereby physical networking equipment is replaced by software fulfilling the same function.

    Let us take a look at the various elements which are creating a new generation of networks. To begin with, we can cite the Cloud. The Cloud is a set of resources which, instead of being held at the premises of a particular company or individual, are hosted on the Internet. The resources are de-localized, and brought together in resource centers, known as datacenters.

    The reasons for the Cloud’s creation stem from the low degree of use of server resources worldwide: only around 10% of servers’ capacity is actually used. This low figure derives from the fact that servers are hardly used at all at night-time, and see relatively little use outside of peak hours, which represent no more than 4-5 hours each day. In addition, the relatively low cost of hardware meant that, generally, servers were greatly oversized. Another factor which needs to be taken into account is the rising cost of the personnel needed to manage and control the resources. In order to optimize the cost of both resources and personnel, those resources need to be shared. The purpose of Clouds is to facilitate such sharing in an efficient manner.

    Figure I.3 shows the growth of the public Cloud services market. Certainly, that growth is impressive, but in the final analysis, it is relatively low in comparison to what it could have been if there were no problems of security. Indeed, as the security of the data uploaded to such systems is rather lax, there has been a massive increase in private Clouds, taking the place of public Cloud services. In Chapter 6, we shall examine the advances made in terms of security, with the advent of secure Clouds.

    Figure I.3. Public Cloud services market and their annual growth rate

    Virtualization is also a key factor, as indicated at the start of this chapter. The increase in the number of virtual machines is undeniable: in 2015, more than two thirds of the servers available throughout the world are virtual machines. Physical machines are able to host increasing numbers of virtual machines. This trend is illustrated in Figure I.4. In 2015, each physical server hosts around eight virtual machines.

    Figure I.4. Number of virtual machines per physical server

    The use of Cloud services has meant a significant increase in the data rates being sent over the networks. Indeed, processing is now done centrally, and both the data and the signaling must be sent to the Cloud and then returned after processing. We can see this increase in data rate requirement by examining the market of Ethernet ports for datacenters. Figure I.5 plots shipments of 1 Gbps Ethernet ports against those of 10 Gbps ports. As we can see, 1 Gbps ports, which are already fairly fast, are being replaced by ports that are ten times more powerful.

    Figure I.5. The rise in power of Ethernet ports for datacenters

    The world of the Cloud is, in fact, rather diverse, if we look at the number of functions which it can fulfill. There are numerous types of Clouds available, but three categories, which are indicated in Figure I.6, are sufficient to clearly differentiate them. The category which offers the greatest potential is the SaaS (Software as a Service) cloud. SaaS makes all services available to the user – processing, storage and networking. With this solution, a company asks its Cloud provider to supply all necessary applications. Indeed, the company subcontracts its IT system to the Cloud provider. With the second solution – PaaS (Platform as a Service) – the company remains responsible for the applications. The Cloud provider offers a complete platform, leaving only the management of the applications to the company. Finally, the third solution – IaaS (Infrastructure as a Service) – leaves a great deal more initiative in the hands of the client company. The provider still offers the processing, storage and networking, but the client is still responsible for the applications and the environments necessary for those applications, such as the operating systems and databases.

    Figure I.6. The three main types of Cloud

    More specifically, we can define the three Cloud architectures as follows.

    – IaaS (Infrastructure as a Service): this is the very first approach, with a portion of the virtualization being handled by the Cloud, such as the network servers, the storage servers, and the network itself. The Internet network is used to host PABX-type machines, firewalls or storage servers, and more generally, the servers connected to the network infrastructure;

    – PaaS (Platform as a Service): this is the second Cloud model whereby, in addition to the infrastructure, there is an intermediary software program corresponding to the Internet platform. The client company’s own servers only handle the applications;

    – SaaS (Software as a Service): with SaaS, in addition to the infrastructure and the platform, the Cloud provider actually provides the applications themselves. Ultimately, nothing is left to the company, apart from the Internet ports. This solution, which is also called Cloud Computing, outsources almost all of the company’s IT and networks.
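    To make this division of responsibility concrete, here is a minimal sketch in Python (purely illustrative: the layer names and the helper function are our own shorthand for the description above, not something defined in the book or by any Cloud provider) showing which layers each model leaves to the client company.

```python
# Illustrative responsibility split between Cloud provider and client,
# following the IaaS/PaaS/SaaS description above. Layer names are shorthand.

PROVIDER_MANAGES = {
    "IaaS": {"networking", "storage", "processing"},
    "PaaS": {"networking", "storage", "processing",
             "operating systems", "databases"},
    "SaaS": {"networking", "storage", "processing",
             "operating systems", "databases", "applications"},
}

ALL_LAYERS = PROVIDER_MANAGES["SaaS"]

def client_manages(model: str) -> set:
    """Layers that remain the client company's responsibility."""
    return ALL_LAYERS - PROVIDER_MANAGES[model]

for model in ("IaaS", "PaaS", "SaaS"):
    remaining = sorted(client_manages(model)) or ["nothing (only the Internet ports)"]
    print(f"{model}: client manages {', '.join(remaining)}")
```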

    Figure I.7 shows the functions of the different types of Cloud in comparison with the classical model in operation today.

    Figure I.7. The different types of Clouds

    The main issue for a company that operates a Cloud is security. Indeed, there is nothing to prevent the Cloud provider from scrutinizing the data, or – as much more commonly happens – the data from being requisitioned by the countries in which the physical servers are located; the providers must comply. The rise of sovereign Clouds is also noteworthy: here, the data are not allowed to pass beyond the geographical borders. Most states insist on this for their own data.

    The advantage of the Cloud lies in the power of the datacenters, which are able to handle a great many virtual machines and provide the power necessary for their execution. Multiplexing between a large number of users greatly decreases costs. Datacenters may also serve as hubs for software networks and host virtual machines to create such networks. For this reason, numerous telecommunications operators have set up companies which provide Cloud services for the operators themselves and also for their customers.

    In the techniques which we shall examine in detail hereafter, we find SDN (Software-Defined Networking), whereby multiple forwarding tables are defined, and only datacenters have sufficient processing power to perform all the operations necessary to manage these tables. One of the problems is determining the necessary size of the datacenters, and where to build them. Very roughly, there is a whole range of sizes, from absolutely enormous datacenters, with a million servers, to femto-datacenters, with the equivalent of only a few servers, and everything in between.
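    To give a flavor of what such forwarding tables look like, here is a deliberately simplified sketch in Python (the rule fields, priorities and actions are illustrative assumptions, not the API of any real SDN controller or switch): a controller installs match/action rules, and the switch merely applies the highest-priority rule matching each packet.

```python
# Simplified SDN-style forwarding: a controller installs flow rules
# (match -> action) and the switch applies the best match to each packet.
# Field names, priorities and actions are illustrative only.
import ipaddress
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlowRule:
    priority: int
    dst_prefix: str   # destination prefix to match, e.g. "10.1.0.0/16"
    action: str       # what to do on a match, e.g. "forward:port2"

@dataclass
class Switch:
    table: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        """The controller pushes a rule into the switch's forwarding table."""
        self.table.append(rule)
        self.table.sort(key=lambda r: -r.priority)

    def lookup(self, dst_ip: str) -> Optional[str]:
        """Return the action of the highest-priority rule matching dst_ip."""
        addr = ipaddress.ip_address(dst_ip)
        for rule in self.table:
            if addr in ipaddress.ip_network(rule.dst_prefix):
                return rule.action
        return None  # table miss: would normally be referred to the controller

sw = Switch()
sw.install(FlowRule(priority=10, dst_prefix="10.1.0.0/16", action="forward:port2"))
sw.install(FlowRule(priority=1,  dst_prefix="0.0.0.0/0",   action="drop"))
print(sw.lookup("10.1.2.3"))   # -> forward:port2
print(sw.lookup("192.0.2.1"))  # -> drop
```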

    I.3. Cloudification of networks

    The rise of this new generation of networks, based on datacenters, has an impact on energy consumption in the world of ICT. This consumption is estimated to account for between 3% and 5% of the total carbon footprint, depending on which study we consult. However, this proportion is increasing very quickly with the rapid rollout of datacenters and antennas for mobile networks. By way of example, a datacenter containing a million servers consumes approximately 100 MW. A Cloud provider with ten such datacenters would consume 1 GW, which is the equivalent of a unit of a nuclear power plant. This total number of servers has already been achieved or surpassed by ten well-known major companies. Similarly, the number of 2G/3G/4G antennas in the world is already more than 10 million. Given that, on average, consumption is 1500 W per antenna (2000 W for 3G/4G antennas but significantly less for 2G antennas), this represents around 15 GW worldwide.
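    These orders of magnitude are easy to verify with a few lines of arithmetic (the 100 W per server is simply the ratio implied by the figures quoted above, not a measured value):

```python
# Back-of-the-envelope check of the consumption figures quoted above.
servers = 1_000_000
datacenter_w = 100e6                         # ~100 MW for a million-server datacenter
print(datacenter_w / servers, "W/server")    # ~100 W per server (implied)
print(10 * datacenter_w / 1e9, "GW")         # ten such datacenters ~ 1 GW

antennas = 10_000_000                        # 2G/3G/4G antennas worldwide
avg_antenna_w = 1_500                        # average consumption per antenna
print(antennas * avg_antenna_w / 1e9, "GW")  # ~15 GW worldwide
```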

    Continuing in the same vein, the carbon footprint produced by energy consumption in the world of ICT is projected to reach 20% by 2025. Therefore, it is absolutely crucial to find solutions to offset this rise. We shall come back to this in the last chapter of this book, but there are solutions that already exist and are beginning to be used. Virtualization represents a good solution, whereby multiple virtual machines are hosted on a common physical machine, and a large number of servers are placed in standby mode (low power) when not in use. Processors also need to have the ability to drop to very low speeds of operation whenever necessary. Indeed, the power consumption is strongly proportional to processor speed. When the processor has nothing to do, it almost stops, and then speeds up depending on the workload received.
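    As a point of reference (this is a standard first-order model, not a formula given in the book), the dynamic power drawn by a CMOS processor is commonly approximated as

    \[ P_{\text{dyn}} \approx \alpha \, C \, V^{2} f \]

    where \(\alpha\) is the activity factor, \(C\) the switched capacitance, \(V\) the supply voltage and \(f\) the clock frequency; since lowering the frequency also allows the voltage to be lowered, slowing a processor down when it has little to do yields better-than-linear energy savings.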

    Mobility is another argument in favor of adopting a new form of network architecture. We can show that, by 2020, 95% of devices will be connected to the network by a wireless solution. Therefore, we need to manage the mobility problem. The first order of business is the management of multi-homing – i.e. being able to connect to several networks simultaneously. The word multi-homing stems from the fact that the terminal receives several IP addresses, assigned by the different connected networks. These multiple addresses are complex to manage, and the task requires specific mechanisms. Mobility also involves managing simultaneous connections to several networks. On the basis of certain criteria (to be determined), the packets can be separated and sent via different networks. They then need to be re-ordered when they arrive at their destination, which can cause numerous problems. Mobility also raises the issues of addressing and identification. If we use the IP address, it can be interpreted in two different ways: as an identifier, it enables us to determine who the user is, but an address is also required, to show where that user is. The difficulty lies in dealing with these two concepts simultaneously. Thus, when a customer moves sufficiently far to go beyond the subnetwork with which he/she is registered, it is necessary to assign a new IP address to the device. This is fairly complex from the point of view of identification. One possible solution, as we can see, is to give two IP addresses to the same user: one reflecting his/her identity and the other the location.
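    One way to picture this separation of identity and location (in the spirit of LISP, mentioned above; the sketch below is illustrative Python only, with made-up names and addresses, not an implementation of the actual protocol) is as a mapping system that translates a stable identifier into whatever locator the terminal currently has:

```python
# Illustrative identifier/locator separation: the identifier never changes,
# while the locator (topological address) is updated as the terminal moves.
# All names and addresses below are made up for the example.
from typing import Optional

mapping = {}  # identifier -> current locator

def register(identifier: str, locator: str) -> None:
    """Called when the terminal attaches to a (new) subnetwork."""
    mapping[identifier] = locator

def resolve(identifier: str) -> Optional[str]:
    """Correspondents look up the current locator before sending packets."""
    return mapping.get(identifier)

# The user keeps the same identifier across moves...
register("user-id-198.51.100.7", "locator-203.0.113.10")  # attached to network A
print(resolve("user-id-198.51.100.7"))

# ...and only the locator changes when the terminal roams to another network.
register("user-id-198.51.100.7", "locator-192.0.2.44")    # now attached to network B
print(resolve("user-id-198.51.100.7"))
```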

    Another revolution that is currently under way pertains to the Internet of Things (IoT): billions of things will be connected within the next few years. The prediction is that 50 billion things will be connected to the IoT by 2020. In other words, the number of connections will likely increase tenfold in the space of only a few years. The things belong to a variety of domains: 1) domestic, with household electrical goods, home health care, home management, etc.; 2) medicine, with all sorts of sensors both on and in the body to measure, analyze and perform actions; 3) business, with light level sensors, temperature sensors, security sensors, etc. Numerous problems arise in this new universe, such as identity management and the security of communications with the sensors. The price of identification is often set at $40 per object, which is absolutely incompatible with the cost of a sensor, which is often less than $1. Security is also a complex factor, because the sensor has very little power, and is incapable of performing sufficiently sophisticated encryption to ensure the confidentiality of the transmissions.

    Finally, there is one last reason to favor migration to a new network: security. Security requires a precise view and understanding of the problems at hand, which range from physical security to computer security, with the need to lay contingency plans for attacks that are sometimes entirely unforeseeable. The world of the Internet today is like a bicycle tire which is now made up entirely of patches (having been punctured and repaired multiple times), and every time an attack succeeds, a new patch is added. Such a tire is still roadworthy at the moment, but there is the danger that it will burst if no new solution is envisaged in the next few years. At the end of this book, in Chapter 7, we shall look at the secure Cloud, whereby, in a datacenter, a whole set of solutions is built around specialized virtual machines to provide new elements, the aim of which is to enhance the security of the applications and networks.

    An effective security mechanism must include a physical element: a safe box to protect the important elements of the arsenal, necessary to ensure confidentiality, authentication, etc. Software security is a reality, and to a large extent, may be sufficient for numerous applications. However, secure elements can always be circumvented when all of the defenses are software-based. This means that, for new generations, there must be a physical element, either local or remote. This hardware element is a secure microprocessor known as a secure element. A classic example of this type of device is the smartcard, used particularly prevalently by telecom operators and banks.

    Depending on whether it belongs to the world of business or public electronics, the secure element may be found in the terminal, near to it, or far away from the terminal. We shall examine the different solutions in the subsequent chapters of this book.

    Virtualization also has an impact on security: the power of the Cloud, with specialized virtual machines, means that attackers have remarkable striking force at their disposal. In the last few years, hackers’ ability to break encryption algorithms has increased by a factor of 5-6.

    Another important point which absolutely must be integrated in networks is intelligence. So-called intelligent networks have had their day, but the intelligence in this case was not really what we mean by intelligence in this field. Rather, it was a set of automatic mechanisms, employed to deal with problems perfectly determined in advance, such as a signaling protocol for providing additional features in the telephone system. Here, intelligence pertains to learning mechanisms and intelligent decisions based on the network status and user requests. The network needs to become an intelligent system, capable of making decisions on its own. One solution to help move in this direction was introduced by IBM in the early 2000s: autonomic. Autonomic means autonomous and spontaneous – autonomous in the sense that every device in the network must be able to independently make decisions with knowledge of the situated view, i.e. the state of the nodes surrounding it within a certain number of hops. The
