This document covers the application layer of computer networks: principles of network applications, client-server and peer-to-peer architectures, and processes and sockets. It also describes the services that transport-layer protocols can offer to applications, such as reliable data transfer, throughput and timing guarantees, and security. Popular client-server applications (the Web, email) and peer-to-peer applications (file sharing) are discussed.
3. Main topics in Module 5
• Principles of Network Applications
• The Web and HTTP
• Electronic mail in the Internet
• DNS - The Internet’s Directory service
3
Computer Networks - Prof Ashok Herur
4. Application Layer
• A collection of applications that are useful to the users!
• Networks exist to support these applications.
• Popular applications, in the 1970’s and 1980’s:
• Text email
• File transfer
• Remote access to computers
• In 1990’s:
• World Wide Web (www) – surfing, searching, ecommerce.
5. Application Layer
• After 2000:
• VoIP (Voice over IP)
• Videoconferencing – Skype, Facetime, Google Hangouts, ..
• YouTube
• Movies on Demand – Netflix
• Multiplayer Online games
• Social networking – Facebook, Instagram, Twitter
• Messaging Apps - WhatsApp, WeChat
• Payment Apps – Google Pay, PayTM
• Transport Apps – Ola, Uber
• Location-based Apps – Maps, traffic, services (shops, hotels, petrol pumps,..)
6. Principles of Network applications
• Network application programs should be capable of running on different end
systems, and be able to communicate with each other over the network.
• For example, in a Web application, there are two distinct programs that
communicate with each other:
• The Browser program running in the user’s host (laptop, smartphone, etc);
• The Web Server program running in the Web Server host.
• As another example, in a video-on-demand application, like Netflix, there is:
• A Netflix-provided program running in the user’s host (TV, smartphone);
• The Netflix Server program running in the Netflix Server host.
7. Network Application Architecture
• Network application architecture is one of the following two types:
• Client-server architecture
• Peer-to-Peer (P2P) architecture
• In a Client-server architecture, there is an always-on host, called the Server,
which services requests from many other hosts, called Clients.
• As an example, a Web Server services requests from many browsers
running on Client hosts.
• It responds by sending the requested object.
8. Client-Server architecture
• Here, the server has a fixed, well-known IP address.
• Since it is always on, and since it services many requests, the address is
fixed for quick access.
• Common applications using this architecture include the Web, FTP, email.
• Often, a single server is incapable of handling the volume of requests.
• There are multiple servers housed in a data centre.
• Further, there could be multiple data centres spread across the globe.
• Google (search engine) has 19 data centres across the globe.
9. P2P architecture
• Here, there is minimal (or no) reliance on dedicated servers.
• The application exploits direct communication between pairs of intermittently
connected hosts, called peers.
• These peers are not owned by the service provider but instead are desktops
and laptops owned by the end users (in homes and offices).
• A popular P2P application is the file-sharing application (BitTorrent, InShare,
Zapya, etc).
• This architecture is distributed, scalable and cost effective, since it does not
need server infrastructure (data centres) and server bandwidth.
10. P2P architecture
• Since there are no fixed servers, peers must rely on some method to locate
fellow peers.
• The most basic approach is a centralized directory where resources are
indexed on a central server, and peers query this server for a lookup to find the
peer with the desired resource, and then make a connection to the peer.
• BitTorrent uses a centralized directory server, calling it the Tracker.
• Note that while resource lookup is still client-server, the actual resource
transmission, which accounts for the bulk of the network capacity usage, is
P2P.
12. P2P architecture
• However, due to the decentralised structure, P2P applications face challenges
of:
• Security (implications arise from abusing the trust between peers,
including privacy and identity issues);
• Performance (issues arising from lack of congestion control);
• Reliability (difficult to authenticate the peer or the content).
13. Processes
• When applications communicate with each other, there are processes on both
sides that enable the communication (sending and receiving data).
• In applications that use the Client-Server architecture, like the Web, the
browser on the client is a client process and the one running on the server is
the server process.
• In P2P architecture, a process on a host can be a client process (while
requesting) as well as a server process (while responding to a request).
• In general, the one that initiates the communication with a request is
referred to as client process.
14. Socket
• A process sends message into, and receives messages from, the network
through a software interface called a socket.
• It is the interface between the Application layer and the Transport layer (in the
TCP / IP Reference model).
• It is also referred to as Application Programming Interface (API).
• The application developer has control of everything on the Application layer
side, but has little control on the Transport layer side:
• May only be able to specify the choice of transport protocol (TCP or UDP),
a few parameters like maximum buffer, maximum segment size, etc.
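The socket interface described above can be sketched in Python. This is a minimal toy example, not from the slides: a TCP echo server stands in for a real application server, and the port number and message are arbitrary choices for illustration.

```python
# A minimal sketch of the socket API: a toy TCP echo server and
# client on the local host. The process sends messages into, and
# receives messages from, the network through its socket.
import socket
import threading

PORT = 50007                     # an arbitrary unprivileged port
ready = threading.Event()

def echo_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP socket
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))    # socket address = IP address + port
    srv.listen(1)
    ready.set()                      # tell the client we are listening
    conn, _addr = srv.accept()       # wait for a client to connect
    conn.sendall(conn.recv(1024))    # echo the received message back
    conn.close()
    srv.close()

t = threading.Thread(target=echo_server)
t.start()
ready.wait()

# The client process is the one that initiates the communication.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", PORT))
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
t.join()
```

Note that the application code only chooses the transport protocol (here `SOCK_STREAM`, i.e. TCP) and the addresses; everything below the socket is handled by the Transport layer.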
16. Socket addressing
• A process on a host has to have a unique identity.
• Two pieces of information are needed here:
• The address of the host (IP address);
• An identifier that specifies the process on the host (Port number).
• The combination of the above two is the socket address.
• Popular applications have been assigned specific Port numbers by IANA:
• Web server – Port number 80
• Mail server (using SMTP protocol) – Port number 25.
17. Transport services available to Applications
18. Transport services available to Applications
• When a network application is developed, one must choose one of the
available Transport layer protocols.
• This is done by studying the services provided by those protocols, and
choosing the one that best suits the needs of the application.
• This is like choosing how to travel to a distant place – by car, bus,
train, plane, etc.
• Each one has different pros and cons, and none is the ideal mode of
transport for everyone.
19. Transport services available to Applications
• The services offered by the Transport layer protocols can broadly be classified
under the following heads:
• Reliable data transfer
• Throughput
• Propagation time (delay)
• Security aspects
20. Transport services – Reliable data transfer
• Reliability means that:
• Data that is corrupted due to noise should be detected and dealt with;
• Packets are not lost (due to buffer overruns or due to an error in the IP
Header);
• Packets are not delivered (to the Application) out of sequence.
• Many applications (Financial transactions, email, remote host access, etc) do
not tolerate the data loss mentioned above.
• Some loss-tolerant applications (notably multimedia applications such as
conversational audio / video) can tolerate some amount of data loss.
21. Transport services – Throughput
• Throughput is the rate at which the sending process can deliver bits to the
receiving process.
• The available throughput will fluctuate with time because:
• The bandwidth along the network path is shared with other sessions (of
differing throughput needs);
• Sessions come on, get completed and closed at different points in time.
• Can an application request a guaranteed throughput from the Transport layer
protocol?
• And does it need such a guarantee?
22. Transport services – Throughput
• Bandwidth-sensitive applications are those that are virtually useless in the
absence of the required throughput.
• Many multimedia applications fall in this category, though a few can adjust
the coding schemes and settle for a slightly lower throughput.
• Bandwidth-elastic applications are those that can make use of as much, or as
little, throughput as is available at the moment.
• Email, file transfer, Web transfer fall in this category.
23. Transport services – Propagation time (delay)
• A transport-layer protocol can also provide timing (delay) guarantees.
• Interactive real-time applications definitely need such a guarantee.
• Eg.: Internet telephony, videoconferencing, multiplayer games, etc.
• Even for non-real-time applications, a lower delay would always be preferable,
though a tight constraint is not placed.
24. Transport services – Security
• A transport protocol can provide applications with one or more security
services, like:
• End-to-end encryption;
• End-point authentication;
• Data integrity (a check to find out if the data has been tampered with).
• Note that the traditional TCP does not provide a facility for the above features.
• If they are required, the application can use an enhancement of TCP,
called Transport Layer Security (TLS).
• Note that TLS is not a third transport protocol (besides TCP and UDP); it
is implemented in the application layer, on top of TCP.
• The TLS code is included as part of the application itself (typically via
a library).
25. Transport services – Security
• TLS has its own socket API that is similar to the traditional socket API.
• When an application uses TLS, the sending process passes cleartext data to the
TLS socket.
• TLS in the sending host then encrypts the data and passes it on to the TCP
socket.
• The reverse happens on the receiver side.
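In Python, the parallel between the TLS socket and the ordinary TCP socket looks roughly like this. The hostname is a placeholder, and no connection is actually made here; this only shows how the application wraps its plain TCP socket in a TLS socket.

```python
# Sketch of the TLS socket API sitting on top of an ordinary TCP
# socket. We only wrap the socket and do not connect, so no TLS
# handshake takes place in this example.
import socket
import ssl

ctx = ssl.create_default_context()                       # TLS configuration
raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # plain TCP socket
tls = ctx.wrap_socket(raw, server_hostname="www.example.com")

# The application would now call tls.connect((host, 443)) and then
# tls.sendall(cleartext): TLS encrypts the data before handing it on
# to the underlying TCP socket.
is_tls = isinstance(tls, ssl.SSLSocket)
tls.close()
```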
26. Transport services – Requirements of network applications
28. The Internet
• It is a globally distributed network comprising many voluntarily interconnected
autonomous networks.
• No one person, company, organization or government runs the Internet.
• It operates without a central governing body, with each constituent network
setting and enforcing its own policies.
• Until the early 1990s, the internet was primarily used in academic circles for
transferring files, sending emails, logging in to remote hosts, etc.
29. The World Wide Web (www)
• The Web was invented by computer scientist Tim Berners-Lee, in 1989, while
working at CERN (known by its French acronym; the European Organization for
Nuclear Research).
• He was motivated to solve the problem of storing, updating, and finding
documents and data files in that large and constantly changing organization, as
well as distributing them to collaborators outside CERN.
• The Web was one of the first popular Internet applications.
30. The World Wide Web (www)
• The World Wide Web (WWW), commonly referred to as the web, is a vast and
interconnected network of digital information that is accessible through the
internet.
• It consists of a collection of web pages, documents, multimedia content, and
resources linked together using hyperlinks.
• The web allows users to access, share, publish, and interact with a diverse
range of content, including text, images, videos, audio, and interactive
applications.
31. Components of WWW
• Web pages are individual documents containing information, often presented
in HTML format, that can include text, images, multimedia, and links.
• These web pages are grouped together to form websites, which are hosted on
web servers and accessible via web browsers.
• Hyperlinks, often called links, are clickable elements within web content that
connect to other web pages, websites, or resources.
• Clicking on a hyperlink navigates the user to the linked content, enabling
seamless exploration across the web.
32. Components of WWW
• Uniform Resource Locator (URL) is a web address that specifies the location of
a specific resource on the web.
• URL consists of a protocol (such as HTTP or HTTPS), a domain name (e.g.,
www.wipro.com), and a path to the resource.
• Web browsers are software applications used to access and view web content.
They interpret HTML and other web technologies to render web pages in a
readable and interactive format for the user.
• Web servers are computers that store web content and respond to user
requests by sending the requested web pages and resources back to the user’s
browser.
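Python's standard library can split a URL into exactly these components; a small sketch (the URL itself is just an example):

```python
# Splitting a URL into the protocol, domain name and path described above.
from urllib.parse import urlparse

parts = urlparse("https://www.wipro.com/products/index.html")
scheme = parts.scheme    # the protocol: 'https'
host = parts.netloc      # the domain name: 'www.wipro.com'
path = parts.path        # the path to the resource: '/products/index.html'
```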
33. Components of WWW
• Hyper Text Transfer Protocol (HTTP) and HTTPS: HTTP is the protocol used for
transferring data between a web browser and a web server. HTTPS is a secure
version of HTTP that encrypts data to enhance security and privacy during data
transmission.
• Search engines index web content, making it searchable and discoverable for
users. They use algorithms to rank and present search results based on user
queries, enabling efficient access to relevant information.
• Search Engine Optimization (SEO) is the practice of optimizing websites to
improve their visibility in search engine results. By following SEO best
practices, website owners increase the chances of their sites appearing
higher in search rankings.
34. SEO
• When website owners implement the SEO strategies effectively, they send
signals to search engines that their website is valuable and relevant to specific
search queries.
• For example, if a website sells organic skincare products, they would want
to optimize their website for keywords related to organic skincare, such as
“natural skincare,” “chemical-free skincare,” or “organic beauty products.”
• By strategically placing these keywords in their website’s content and URLs,
the website owner increases the likelihood of their site ranking higher
when users search for those terms.
• Additionally, search engines consider other factors when determining website
visibility, such as the website’s loading speed, mobile-friendliness, and user
experience.
35. Web cache
• A web cache is a server computer located either on the public Internet or
within an enterprise that stores recently accessed web pages to improve
response time for users when the same content is requested within a certain
time after the original request.
• Most web browsers also implement a browser cache by writing recently
obtained data to a local data storage device.
• Enterprise firewalls often cache Web resources requested by one user for the
benefit of many users.
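A cache checks whether its stored copy is still fresh using the Conditional GET: it repeats the request with an If-Modified-Since header, and the server answers 304 Not Modified (with no entity body) if the object has not changed since then. A sketch of the two messages, with a hypothetical hostname and date (the header name and status code are the real HTTP ones):

```python
# The cache's revalidation request and the server's reply when the
# object has not changed since the cached copy was fetched.
conditional_get = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "If-Modified-Since: Wed, 09 Sep 2024 09:23:24 GMT\r\n"
    "\r\n"
)
not_modified = "HTTP/1.1 304 Not Modified\r\n\r\n"   # empty entity body

status_code = not_modified.split(" ")[1]
```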
36. Key components of a website
• HTML (Hypertext Markup Language): HTML determines how the different
elements are organized (defines its structure and content) and presented to
the visitors.
• CSS (Cascading Style Sheets): CSS is responsible for the visual presentation and
layout of a website. It adds style (font, spacing, etc) colour, and aesthetic
appeal to the web pages, making them visually appealing and engaging.
• JavaScript: JavaScript enhances the user experience by adding interactivity and
dynamic elements to web pages. It allows websites to respond to user actions,
such as clicking on buttons or scrolling through content.
37. HTTP
• Hypertext Transfer Protocol (HTTP) is an Application layer protocol for
distributed, collaborative, information systems like the World Wide Web,
where hypertext documents include hyperlinks to other resources that the
user can easily access, for example by a mouse click or by tapping the screen.
• HTTP/1 was finalized as version 1.0 in 1996. It evolved (as version 1.1) in 1997.
• Its secure variant named HTTPS is used by more than 85% of websites.
• HTTP/2, published in 2015, provides a more efficient version of HTTP/1.
• As of January 2024, it is used by 36% of websites and supported by almost all web
browsers (over 98% of users).
• HTTP/3 was published in 2022.
• It is now used on 28% of websites, and is supported by most web browsers.
38. HTTP
• HTTP functions as a Request - Response protocol in the Client-Server model.
• The client (process on a browser) submits an HTTP request message to the
server.
• The server, which provides resources such as HTML files and other content
returns a response message to the client.
• HTTP defines the structure of these messages, and how the messages are
exchanged.
• It uses TCP as the transport layer protocol for a reliable, end-to-end transfer.
39. HTTP – Persistent and non-persistent connections
• Because of the TCP handshake, each Request - Response exchange takes time
to initiate before the requested web page is transferred.
• Therefore, in many applications, where the client and the server interact for an
extended period of time, the TCP connection is kept “open” for a
(configurable) amount of time.
• This is called Persistent connection (default mode of operation)
• On the other hand, in non-Persistent connection mode, a new TCP connection
is opened for each request.
41. HTTP – Message format
• An HTTP request contains a series of lines that each end with a Carriage
return (cr) character (returns to the beginning of the line), followed by a
Line feed (lf) character (moves to the next line).
• The first line is called the Request line, and subsequent ones are called Header
lines.
• The Request line has 3 fields: Method field, the URL field and the HTTP version
field.
• The Method field is most commonly GET, but can also be PUT, DELETE, POST,
or HEAD.
• The GET method is used when the browser requests something that is identified in the
URL field.
• The PUT method allows a user to upload an object to a specific path on the server.
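Putting the pieces together, a typical GET request looks like this (the hostname and header values are illustrative). Note the cr-lf pair at the end of each line, and the blank line that terminates the header lines:

```python
# A sample HTTP request message with the fields described above.
request = (
    "GET /index.html HTTP/1.1\r\n"   # Request line: Method, URL, version
    "Host: www.example.com\r\n"      # host on which the object resides
    "Connection: keep-alive\r\n"     # ask for a persistent connection
    "User-Agent: Mozilla/5.0\r\n"    # browser type making the request
    "Accept-Language: en\r\n"        # language preference
    "\r\n"                           # blank line ends the header lines
)
method, url, version = request.split("\r\n")[0].split(" ")
```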
43. HTTP – Request Message format
• The DELETE method is used to delete an object that one had earlier PUT on the Web.
• The POST method is used when the user fills out a Form with more details for a specific
search (the Entity body is now used to describe the Request in detail, by including key
search words).
• The HEAD method is used by developers for debugging: the server responds
with an HTTP message but without the requested object.
• The Header lines contain information about various things, including:
• The host on which the object resides (for use of Web proxy caches);
• If the connection should be Persistent or Non-persistent;
• The browser type (Chrome, Firefox, Mozilla,..) that is making the request;
• Language preference (if the requested object is available in that language).
44. HTTP – Response Message format
• The first line is called the Status line, and subsequent ones are called Header
lines.
• The Status line indicates the HTTP version (/1, /1.1, /2 or /3), followed by the
code and phrase representing the status of the Request (found what was
requested, not found, not found in the specific language, etc).
• The Header part contains details like Time Stamp, Server type (Apache, etc),
length of the content (object) in bytes, the content type (Text /HTML, etc), the
Last-modified time, etc.
• The Entity body is a large part of the message and contains the requested
object itself.
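A matching response message might look like this (server name, dates and body are illustrative; the header names and status code are the real HTTP ones):

```python
# A sample HTTP response message: Status line, header lines, a blank
# line, then the entity body carrying the requested object itself.
body = "<html><body><h1>Hello!</h1></body></html>"
response = (
    "HTTP/1.1 200 OK\r\n"                              # Status line
    "Date: Tue, 18 Jun 2024 12:00:15 GMT\r\n"          # time stamp
    "Server: Apache/2.4\r\n"                           # server type
    "Last-Modified: Mon, 17 Jun 2024 09:30:00 GMT\r\n" # last-modified time
    f"Content-Length: {len(body)}\r\n"                 # object size in bytes
    "Content-Type: text/html\r\n"                      # content type
    "\r\n" + body
)
version, code, phrase = response.split("\r\n")[0].split(" ", 2)
```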
46. Cookies
• Cookies are small files of information that a web server generates and sends to
a web browser.
• Web browsers store the cookies they receive for a predetermined period of
time, or for the length of a user's session on a website.
• They attach the relevant cookies to any future requests that the user makes to
the web server.
• Cookies help inform websites about the user, enabling the websites to
personalize the user experience.
• For example, ecommerce websites use cookies to know what merchandise users have
placed in their shopping carts.
• In addition, some cookies are necessary for security purposes, such as authentication
cookies.
47. Cookies
• The customer has earlier shopped on
ebay and has a cookie 8734.
• Now, when he tries to buy on Amazon
(with an HTTP request), the server creates a
cookie 1678 and tells the client browser
to note it down (in a cookie file).
• All subsequent requests (within a
specified time) will contain this cookie
number in the header line.
• The cookie is stored in the database and
also tagged with the transactions.
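In message terms, the exchange above comes down to two header lines (the cookie number is taken from the example; the header names are the real HTTP ones, while the cookie name is illustrative):

```python
# First response from the server: it asks the browser to store the
# cookie it just created (1678 in the example above).
first_response_header = "Set-Cookie: session-id=1678"

# Every subsequent request from this browser carries the cookie back.
later_request_header = "Cookie: session-id=1678"

name, value = later_request_header.split(": ", 1)[1].split("=", 1)
```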
49. Electronic mail
• Most internet systems use Simple Mail Transfer Protocol (SMTP), an
application layer protocol, to transfer mail from one user to another.
• At a high-level view, the Internet email system has 3 major components:
• User agents
• Mail servers
• SMTP
• Microsoft Outlook, Apple Mail, Web-based Gmail, Gmail app on a smartphone
are some of the popular User Agents.
50. Electronic mail
• When a user has finished composing the mail, his / her User Agent sends
the mail to the user's own, always-on mail server, where it is placed in the
outgoing message queue.
• When its turn comes, the message is forwarded to the server of the
recipient, using the SMTP protocol (over a TCP connection).
• If it cannot be delivered for any reason, it retries after every 30 minutes.
• If it cannot be delivered within a set time (a day or two), the sender is
informed about the inability to deliver the message.
• When delivered, it is stored in the mailbox of the recipient (like a mailbox
in the physical Post Office).
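The server-to-server transfer is a simple command/reply dialogue over the TCP connection. A sketch of one such exchange (the addresses and reply texts are illustrative; the command names and 3-digit reply codes are the real SMTP ones):

```python
# The commands sent by the sending mail server, paired with the
# numeric replies returned by the receiving mail server.
dialogue = [
    ("HELO alice-mail.example.com", "250 Hello"),
    ("MAIL FROM:<alice@example.com>", "250 OK"),
    ("RCPT TO:<bob@example.org>", "250 OK"),
    ("DATA", "354 End data with <CRLF>.<CRLF>"),
    ("Subject: Hi\r\n\r\nHello Bob!\r\n.", "250 OK: queued"),  # message + lone dot
    ("QUIT", "221 Bye"),
]
reply_codes = [reply.split(" ")[0] for _cmd, reply in dialogue]
```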
52. Electronic mail
• When the recipient comes online, his / her user agent retrieves the message
from the mailbox in his / her server, using one of the following protocols:
• the earlier POP (Post Office Protocol), or
• the newer IMAP (Internet Message Access Protocol), or
• HTTP (when the recipient is using a web-based email or a smartphone app)
• When POP is used, the recipient can choose to keep a copy or to delete it from
the server after it is downloaded.
• Further, the mail server can only be accessed from one device.
• However, in the case of IMAP, the copy is always maintained for some
management functions, and can be accessed from multiple devices.
54. Domain Name System (DNS)
• Each one of us is known by our name and our Aadhaar number.
• Which one is more useful? And where?
• Just like humans, hosts on the Internet are also identified by a human-friendly
Hostname (of variable length) like www.google.com, but that will not provide
any information about where the host is located within the Internet.
• DNS is the “Internet’s Directory Service” that provides the location (IP address)
of the host for the transfer of the request or of the information.
• Routers effectively forward the packets using the fixed-length, hierarchical
IP address.
55. Domain Name System (DNS)
• The DNS is:
• A distributed database implemented in a hierarchy of DNS servers;
• An Application layer protocol that allows hosts to query the database;
• Also used by other Application layer protocols like HTTP and SMTP to translate
user-supplied host names to IP addresses.
• A simple design for DNS would have just one DNS server that contains all the
mappings; The problems here would be:
• Risk of failure of the server – everything crashes.
• Huge traffic volume.
• Distance between the querying host and the server.
• Maintenance – Updating changes to the allocated IP addresses.
56. Domain Name System (DNS)
• In order to deal with the scale of operations, the DNS uses a large number of
servers organised in a hierarchical fashion and distributed across the world.
• No single DNS server has all the mappings for all of the hosts in the
internet.
• Broadly, there are 3 classes of DNS servers:
• Root DNS servers;
• Top-level domain (TLD) DNS servers;
• Authoritative DNS servers.
57. Hierarchy of DNS servers
58. Hierarchy of DNS servers - Working
• When a DNS client wants to determine the IP address for a particular
hostname (say, www.amazon.com), it contacts one of the nearest Root DNS
servers (there are more than 1000 of them scattered across the world, managed
by 12 different organisations and coordinated by IANA).
• The Root DNS server returns the IP addresses of the TLD servers for the top-
level domain <.com>, which returns the IP addresses of an authoritative server
for amazon.com. This, in turn, returns the IP address for the host name www.
amazon.com
• For each of the top-level domains – .com, .edu, .org, .gov – and all the
country-level domains like .in, .uk, .fr, there is a TLD server (or a cluster
of them).
59. Hierarchy of DNS servers - Working
• These are maintained by various registry companies like Verisign Global
Registry Services, Educause, etc.
• Authoritative DNS servers:
• Every organisation with publicly accessible hosts must provide accessible
DNS records that map the names of the hosts to their IP addresses.
• These DNS records will be housed in the Authoritative DNS server of that
organisation.
• The organisation can choose to implement its own server or pay to have
these records stored in the Authoritative DNS server of a service provider.
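The chain of referrals can be sketched as a toy lookup over the three levels (all the server names and IP addresses here are made up; real resolution uses the DNS protocol, typically over UDP port 53):

```python
# Toy model of iterative resolution: each level of the hierarchy
# refers the query one step closer to the authoritative answer.
ROOT = {"com": "tld-com.example-dns"}                  # root -> TLD server
TLD = {"amazon.com": "auth.amazon.example-dns"}        # TLD -> authoritative
AUTHORITATIVE = {"www.amazon.com": "203.0.113.45"}     # name -> IP (made up)

def resolve(hostname):
    tld_label = hostname.rsplit(".", 1)[-1]      # 'com'
    domain = ".".join(hostname.split(".")[-2:])  # 'amazon.com'
    _tld_server = ROOT[tld_label]        # 1) ask a Root DNS server
    _auth_server = TLD[domain]           # 2) ask the TLD server
    return AUTHORITATIVE[hostname]       # 3) ask the authoritative server

ip = resolve("www.amazon.com")
```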
60. Interaction of various DNS servers
• The local DNS server shown here is
strictly not a part of the hierarchy of
DNS servers.
• Each ISP (residential or institutional)
has a local DNS server that is
physically close (to the residence) or
even within the LAN (institutional).
• The DNS server acts as a proxy for the
host and forwards the query into the
DNS server hierarchy.
61. DNS caching
• Whenever a local DNS server receives a DNS reply, containing the mapping of a
hostname to an IP address, it stores that in its local memory.
• This is called DNS caching.
• When another query, for the same hostname, arrives at the DNS server, it can
provide the desired IP address immediately.
• Since the mapping is not permanent, the cache is cleared regularly (each
entry is discarded after a period of time).
• The DNS server can also cache the IP addresses of TLD servers, thereby
allowing it to bypass the Root DNS servers (in the query chain).
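The local DNS server's cache can be sketched as a dictionary consulted before the (expensive) walk through the server hierarchy. The lookup function and the address it returns are placeholders; real caches also attach a time-to-live to each entry, after which it is discarded.

```python
# A toy DNS cache: answer from local memory when possible, otherwise
# query the hierarchy and remember the result.
cache = {}
queries_to_hierarchy = 0

def query_hierarchy(hostname):
    # Placeholder for the full root -> TLD -> authoritative lookup.
    global queries_to_hierarchy
    queries_to_hierarchy += 1
    return "203.0.113.7"                 # made-up address

def resolve(hostname):
    if hostname not in cache:            # cache miss: walk the hierarchy
        cache[hostname] = query_hierarchy(hostname)
    return cache[hostname]               # cache hit: answer immediately

first = resolve("www.example.com")
second = resolve("www.example.com")      # served from the cache
```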
62. DNS cache poisoning
• A DNS cache becomes poisoned or polluted when unauthorized domain names
or IP addresses are inserted into it.
• Occasionally a cache may become corrupted because of technical glitches or
administrative accidents, but DNS cache poisoning is typically associated with
computer viruses or other network attacks that insert invalid DNS entries into
the cache.
• Poisoning causes client requests to be redirected to the wrong destinations,
usually malicious websites or pages full of advertisements.
64. DNS cache flushing
• When troubleshooting cache poisoning or other internet connectivity
problems, the administrator of the computer / Local DNS server may wish
to flush (i.e. clear, reset, or erase) the DNS cache.
• Since clearing the DNS cache removes all the entries, it deletes any invalid
records too and forces your computer to repopulate those addresses the next
time you try accessing those websites.