
CN PYQs GTU


Summer 2024 Paper Solutions

Q1.

(a) Define delay, loss, and throughput in the context of packet-switching networks.
(03 Marks)

1. Delay: As a packet travels from one node (host or router) to the next node along its path, it
suffers several types of delay at each node: processing, queuing, transmission, and
propagation delay (see the formula after this list).
2. Loss: Packet loss occurs when packets are dropped during transmission due to network
congestion or errors. In a packet-switched network, if a router's buffer is full, incoming
packets may be discarded. Packet loss can lead to data retransmissions and impact the
quality of communication, especially in real-time applications.
3. Throughput: Throughput is the rate at which data is successfully transmitted over the
network, typically measured in bits per second (bps). It indicates how much data is delivered
from the sender to the receiver in a given amount of time and can be affected by network
congestion, bandwidth, and packet loss.
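For reference, the total nodal delay is simply the sum of these components (a standard formulation; here L is the packet length, R the link transmission rate, d the physical length of the link, and s the propagation speed):

d_{\text{nodal}} = d_{\text{proc}} + d_{\text{queue}} + d_{\text{trans}} + d_{\text{prop}}, \qquad d_{\text{trans}} = \frac{L}{R}, \qquad d_{\text{prop}} = \frac{d}{s}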

(b) Differentiate persistent HTTP and non-persistent HTTP. (04 Marks)

● Connection Handling: Persistent HTTP uses one TCP connection for multiple requests and
responses; non-persistent HTTP opens a new TCP connection for each request/response.
● Efficiency: Persistent HTTP is more efficient, reducing connection setup/teardown time;
non-persistent HTTP is less efficient, with more time spent opening and closing connections.
● Resource Usage: Persistent HTTP uses fewer server and client resources; non-persistent
HTTP uses more resources due to multiple connections.
● Network Load: Persistent HTTP lowers network load by keeping fewer connections open;
non-persistent HTTP increases network load with more connections, which can lead to congestion.
● RTT (Round-Trip Time): Persistent HTTP has lower RTT overhead since only one handshake
is needed for multiple requests; non-persistent HTTP has higher RTT due to repeated
handshakes for each request.
● TCP Connection Closure: Persistent HTTP closes the TCP connection after multiple requests;
non-persistent HTTP closes the connection immediately after each request.
(c) Explain the functions of the TCP/IP Protocol stack. (07 Marks)

The TCP/IP Protocol Stack is a set of layers that work together to enable data transmission over
the internet. Each layer has specific functions that contribute to the smooth and reliable transfer of
data between devices. Here’s a breakdown of the functions of each layer in the TCP/IP protocol
stack:

1. Application Layer

● Functions:
○ Provides services directly to end-users and applications.
○ Manages application-specific protocols like HTTP (for web browsing), FTP (for file
transfers), SMTP (for email), and DNS (for domain name resolution).
● Purpose: Ensures that the data is presented in a way that applications and users can
understand.
● Examples of Protocols: HTTP, FTP, SMTP, DNS.

2. Transport Layer

● Functions:
○ Provides reliable data transfer between devices.
○ Controls data flow using protocols like TCP (Transmission Control Protocol) and UDP
(User Datagram Protocol).
○ Manages error-checking, data integrity, and retransmission of lost data.
○ Breaks data into segments and reassembles them at the destination.
● Purpose: Ensures that data is delivered error-free, in sequence, and without loss.
● Examples of Protocols: TCP (reliable, connection-oriented) and UDP (fast,
connectionless).

3. Internet Layer

● Functions:
○ Handles logical addressing and routing of data packets between devices.
○ Uses IP addresses to identify source and destination devices.
○ Splits data into packets and ensures each packet reaches its destination via the best
possible route.
○ Manages fragmentation and reassembly of packets if they are too large for the
network.
● Purpose: Ensures packets are sent from the source to the correct destination across
multiple networks.
● Examples of Protocols: IP (Internet Protocol), ICMP (Internet Control Message Protocol),
ARP (Address Resolution Protocol).

4. Network Access Layer (Link Layer)

● Functions:
○ Defines how data is physically transmitted over network hardware like cables or
wireless signals.
○ Manages data framing, error detection, and MAC (Media Access Control) addresses
for device identification within the same network.
○ Interfaces with the physical hardware layer (e.g., Ethernet, Wi-Fi) to send data
packets over physical media.
● Purpose: Controls the delivery of data to devices on the same network and enables
communication over various physical media.
● Examples of Protocols: Ethernet, Wi-Fi, PPP (Point-to-Point Protocol).

Q.2

(a) Explain SMTP Protocol. (03 Marks)

● SMTP (Simple Mail Transfer Protocol) is a protocol used for sending emails over the
internet. It operates at the application layer of the TCP/IP protocol stack and ensures reliable
email transmission from the sender’s server to the recipient’s server.
● It uses TCP to reliably transfer email messages from client to server using port 25.
● It restricts the body (not just the headers) of all mail messages to simple 7-bit ASCII.
● SMTP does not use intermediate mail servers for sending mail.
● If the receiving end mail server is down, the message remains in the sending end mail server
and waits for a new attempt.

(b) Define the following socket function calls:

1. Connect
2. Bind
3. Listen
4. Send (04 Marks)

1. Connect:
○ Purpose: Establishes a connection between a client socket and a server socket.
○ Function: The connect() function is used by a client to initiate a connection to a
server using the server's IP address and port number.
○ Usage: This call is used in TCP connections where a reliable, two-way
communication channel is required.
2. Bind:
○ Purpose: Assigns an IP address and port to a socket.
○ Function: The bind() function associates a socket with a specific local IP address
and port number, making it identifiable on the network.
○ Usage: Often used by server applications to define the address and port they will
listen to for incoming client connections.
3. Listen:
○ Purpose: Prepares a bound socket to accept incoming connection requests.
○ Function: The listen() function tells the operating system to queue incoming
connection requests, allowing the server to accept them one at a time.
○ Usage: Used by servers to indicate they are ready to receive connections,
specifying a maximum number of pending connections.
4. Send:
○ Purpose: Sends data over a connected socket.
○ Function: The send() function is used to transmit data from one socket to another.
It sends data to the connected peer and returns the number of bytes sent.
○ Usage: Commonly used after a connection is established to send messages or data
between client and server.
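The four calls above fit together in a typical TCP server/client exchange; a minimal Python sketch (the address 127.0.0.1 and port 5000 are arbitrary example values):

import socket

# Server side: bind -> listen -> accept, then exchange data
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 5000))     # bind: attach a local IP address and port
server.listen(5)                     # listen: queue up to 5 pending connections
conn, addr = server.accept()         # accept one incoming connection
data = conn.recv(1024)
conn.send(b'Hello from server')      # send: transmit data over the connected socket
conn.close()
server.close()

# Client side (run in a separate process):
# client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# client.connect(('127.0.0.1', 5000))   # connect: initiate a connection to the server's IP/port
# client.send(b'Hello from client')
# client.close()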

(c) Explain the purpose of DNS and its role in translating domain names into IP
addresses. (07 Marks)

DNS (Domain Name System) is an internet service that translates human-readable domain names
(like www.amazon.com) into machine-readable IP addresses (like 192.0.2.1).

Purpose: People find it easier to remember alphabetic domain names than numeric IP addresses,
so DNS helps in making the internet user-friendly by allowing access to websites using names
instead of numbers.

How DNS Translates Domain Names into IP Addresses

1. User Request: When a user enters a domain name in their browser, a DNS query is initiated
to find the IP address associated with that domain.
2. Query Process:
○ The request is sent to a local DNS server (often managed by the user’s ISP).
○ If the local DNS server doesn’t have the answer, it forwards the request to root DNS
servers.
○ The root DNS servers direct the query to the relevant Top-Level Domain (TLD)
servers (e.g., .com or .org).
○ The TLD server then directs the query to the authoritative DNS server for the
specific domain, which holds the actual IP address.
3. IP Address Return: The authoritative DNS server returns the IP address to the local DNS
server, which then sends it to the user’s device.
4. Connection Establishment: With the IP address, the user’s device can now establish a
direct connection to the destination server, enabling data transfer.
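In practice an application hands this whole lookup to the system resolver, which performs the query chain described above; a short Python illustration (the domain is just an example and the returned addresses will vary):

import socket

# Ask the local resolver to translate a domain name into IP addresses
name, aliases, addresses = socket.gethostbyname_ex('www.amazon.com')
print(name, aliases, addresses)   # canonical name, alias list, list of IPv4 addresses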

OR

(c) How does recursive queries in DNS work? Explain the message format of DNS. (07
Marks)

Q.3

(a) What is multiplexing and demultiplexing in the transport layer? (03 Marks)

Multiplexing and Demultiplexing are processes in the transport layer that allow multiple
applications to send and receive data over a single network connection.
Multiplexing: At the sender’s side, multiplexing collects data from multiple applications and
combines them for transmission over a single network connection.
Demultiplexing: At the receiver’s side, demultiplexing takes incoming data from the network and
directs it to the correct application.

(b) What are the functions of the transport layer in the OSI reference model? How is
segmentation and reassembly performed in the transport layer? (04 Marks)

Functions of the Transport Layer

1. Reliable Data Transfer: Ensures data is transferred accurately and in order using protocols
like TCP (Transmission Control Protocol).
2. Flow Control: Manages the data flow rate between sender and receiver to prevent buffer
overflow at the receiver’s side.
3. Error Detection and Correction: Detects and corrects errors in data transmission, ensuring
data integrity.
4. Multiplexing and Demultiplexing: Allows multiple applications to share the same network
connection by using port numbers to identify data streams for specific applications.
5. Connection Management: Establishes, maintains, and terminates connections between
devices, as seen in connection-oriented protocols like TCP.

Segmentation and Reassembly in the Transport Layer

● Segmentation:
○ The transport layer breaks down large data messages from the application layer into
smaller, manageable segments for transmission.
○ Each segment is assigned a sequence number so the data can be reassembled
correctly at the destination.
● Reassembly:
○ At the receiver’s side, the transport layer uses the sequence numbers to reassemble
the segments into the original message in the correct order.
○ If any segments are missing or out of order, they can be requested for
retransmission.
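A toy Python sketch of segmentation and reassembly using sequence numbers (the segment size and message are arbitrary example values):

import random

def segment(message, size):
    # Break the message into (sequence_number, chunk) pairs
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments):
    # Sort by sequence number and concatenate to rebuild the original message
    return b''.join(chunk for _, chunk in sorted(segments))

segs = segment(b'transport layer segmentation demo', 8)
random.shuffle(segs)          # simulate out-of-order arrival
print(reassemble(segs))       # b'transport layer segmentation demo'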

(c) Differentiate between Congestion Control, Flow Control, and Error Control. (07
Marks)

● Purpose:
○ Congestion Control: Prevents network congestion by managing the flow of data into the network.
○ Flow Control: Prevents a fast sender from overwhelming a slow receiver.
○ Error Control: Ensures data integrity by detecting and correcting errors.
● Scope:
○ Congestion Control: Focuses on the entire network, managing traffic from multiple sources.
○ Flow Control: Focuses on the sender-receiver pair (end-to-end).
○ Error Control: Focuses on the data link or transport layer.
● How It Works:
○ Congestion Control: Adjusts the rate of data entering the network based on signs of congestion (e.g., packet loss, delay).
○ Flow Control: Adjusts the data flow based on the receiver's buffer capacity.
○ Error Control: Uses techniques like checksums, acknowledgments, and retransmissions.
● Key Mechanisms:
○ Congestion Control: TCP slow start, congestion avoidance, and window size adjustment.
○ Flow Control: Receiver's advertised window size (receive window).
○ Error Control: Checksums, parity checks, automatic repeat requests (ARQ).
● Common Protocols:
○ Congestion Control: Primarily managed by TCP (Transmission Control Protocol).
○ Flow Control: Provided by TCP; UDP (User Datagram Protocol) offers no flow control.
○ Error Control: Found in both TCP (transport layer) and data link layer protocols.
● Example Scenario:
○ Congestion Control: Multiple devices send data too quickly, causing router overload and packet loss.
○ Flow Control: A fast sender overwhelms a slow receiver's buffer, leading to dropped data.
○ Error Control: A corrupted packet is detected and retransmitted to ensure accuracy.

OR

(a) Enlist the advantages of virtual circuit over datagram. (03 Marks)
(b) Explain the leaky bucket protocol for congestion control. (04 Marks)

The Leaky Bucket Protocol is a congestion control technique used to regulate data flow in
networks, ensuring a steady transmission rate and preventing sudden data bursts that could
overwhelm the network.

How the Leaky Bucket Protocol Works

1. Bucket as a Buffer:
○ Imagine data packets entering a "bucket" that acts as a buffer.
○ Data flows into the bucket at varying rates but can only leave (be transmitted) at a
fixed, constant rate, similar to water leaking from a hole at the bottom of a bucket.
2. Constant Output Rate:
○ The protocol allows data to exit the bucket (be sent out) at a steady, pre-set rate,
regardless of the rate at which data enters.
○ This prevents sudden surges in data, which could lead to network congestion.
3. Overflow Control:
○ If the bucket fills up due to high incoming data, any additional data "overflows" and
is discarded. This prevents buffer overflow and data flooding, which could otherwise
disrupt the network.

The leaky bucket protocol manages data flow by buffering incoming data and releasing it at a fixed
rate, preventing network congestion by smoothing out traffic bursts and discarding excess data
when necessary.
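A simple Python sketch of the leaky bucket behaviour in discrete time ticks (the bucket capacity, leak rate, and arrival pattern are arbitrary example values):

def leaky_bucket(arrivals, capacity, leak_rate):
    # arrivals: amount of data arriving in each time tick
    # capacity: maximum data the bucket (buffer) can hold
    # leak_rate: fixed amount of data transmitted per tick
    level, sent, dropped = 0, [], 0
    for a in arrivals:
        if level + a <= capacity:
            level += a                            # data fits in the bucket
        else:
            dropped += a - (capacity - level)     # overflow is discarded
            level = capacity
        out = min(level, leak_rate)               # data leaves at a constant rate
        level -= out
        sent.append(out)
    return sent, dropped

print(leaky_bucket([0, 10, 0, 0, 4, 0], capacity=8, leak_rate=2))
# -> ([0, 2, 2, 2, 2, 2], 2): the bursty input is smoothed; 2 units overflow and are dropped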

(c) Explain the various fields of the TCP header. What are the advantages and
disadvantages of TCP? (07 Marks)
○ Source Port Number (16 bits): Identifies the port number of the application sending
the data on the source device.
○ Destination Port Number (16 bits): Identifies the port number of the application
receiving the data on the destination device.
○ Sequence Number (32 bits): Used to ensure data is received in the correct order. It
indicates the position of the first byte of data in this segment within the entire data
stream.
○ Acknowledgment Number (32 bits): If the ACK flag is set, this field contains the
next sequence number that the sender of the segment expects to receive.
○ Header Length (4 bits): Specifies the length of the TCP header in 32-bit words.
This is also known as the Data Offset field.
○ Reserved (6 bits): Reserved for future use and should be set to zero.
○ Flags (6 bits):
■ URG: Urgent pointer field is significant.
■ ACK: The acknowledgement field is significant.
■ PSH: Push function; data should be sent to the receiving application
immediately.
■ RST: Reset the connection.
■ SYN: Synchronize sequence numbers to initiate a connection.
■ FIN: No more data from the sender (finish the connection).
○ Window Size (16 bits): Specifies the size of the receive window, which is the buffer
space available for incoming data. It helps with flow control.
○ TCP Checksum (16 bits): Used for error-checking the header and data.
○ Urgent Pointer (16 bits): Points to the sequence number of the byte following
urgent data, if the URG flag is set.
○ Options (Variable length): Optional settings for TCP, such as maximum segment
size or window scaling.
○ Data (Variable length): Contains the actual data being transmitted. This is optional,
as a TCP segment could consist solely of control information.

Q.4

(a) What is the use of class D and Class E in IPv4 addressing? (03 Marks)

Class D:

● Purpose: Reserved for multicasting.
● Address Range: IP addresses from 224.0.0.0 to 239.255.255.255.
● Use: Class D addresses allow a single data packet to be sent to a group of hosts
simultaneously, making it useful for applications like video conferencing, live streaming, and
online gaming.

Class E:

● Purpose: Reserved for experimental and future use.
● Address Range: IP addresses from 240.0.0.0 to 255.255.255.254.
● Use: Class E addresses are not used for standard networking and are reserved for research
and experimental purposes. They are not available for assignment to end devices.

(b) Differentiate between classful and classless addressing in IPv4. (04 Marks)
● Address Classes: Classful addressing uses fixed IP address classes (A, B, C, D, E) with set
ranges and subnet masks; classless addressing doesn't use fixed classes and allows flexible
subnetting with custom subnet masks.
● Subnet Mask: Classful addressing has default subnet masks based on class (e.g., Class A uses
255.0.0.0); classless addressing allows custom subnet masks, giving more control over address allocation.
● Efficiency: Classful addressing can waste IP addresses due to fixed class sizes; classless
addressing is more efficient because addresses are assigned based on actual network needs.
● Routing: Classful addressing leads to larger routing tables since networks are managed in
fixed blocks; classless addressing produces smaller routing tables using CIDR (Classless
Inter-Domain Routing), which groups addresses.

(c) Explain the link state routing protocol. (07 Marks)

The Link State Routing Protocol is a type of routing protocol used to determine the best path for
data to travel across a network.

It’s often associated with Dijkstra’s algorithm, finds the shortest path from a source node to every
other node in the network

Key Components and Notations:

● c(x, y): Cost of the link between nodes x and y; ∞ if they are not direct neighbors.
● D(v): The current cost of the path from the source node to node v.
● p(v): Predecessor node along the path to v.
● N': Set of nodes whose least-cost paths from the source node are known.

Dijkstra’s Algorithm Process:


Initialization:
  N' = {u}
  for all nodes v
    if v adjacent to u
      then D(v) = c(u,v)
    else D(v) = ∞

Loop
  find w not in N' such that D(w) is a minimum
  add w to N'
  update D(v) for all v adjacent to w and not in N':
    D(v) = min( D(v), D(w) + c(w,v) )
    /* new cost to v is either old cost to v or known
       shortest path cost to w plus cost from w to v */
until all nodes in N'
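A minimal, runnable Python sketch of the same algorithm (the example topology at the bottom is hypothetical and only for illustration):

import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> dict of neighbor -> link cost c(x, y)
    D = {node: float('inf') for node in graph}   # D(v): current least-cost estimate
    p = {node: None for node in graph}           # p(v): predecessor on the path
    D[source] = 0
    pq = [(0, source)]                           # min-heap ordered by D(w)
    known = set()                                # N': nodes with known least-cost paths
    while pq:
        cost, w = heapq.heappop(pq)              # pick w not in N' with minimum D(w)
        if w in known:
            continue
        known.add(w)                             # add w to N'
        for v, c_wv in graph[w].items():
            if D[w] + c_wv < D[v]:               # D(v) = min(D(v), D(w) + c(w,v))
                D[v] = D[w] + c_wv
                p[v] = w
                heapq.heappush(pq, (D[v], v))
    return D, p

# Hypothetical example topology
graph = {
    'u': {'v': 2, 'w': 5, 'x': 1},
    'v': {'u': 2, 'w': 3, 'x': 2},
    'w': {'u': 5, 'v': 3, 'x': 3, 'y': 1},
    'x': {'u': 1, 'v': 2, 'w': 3, 'y': 1},
    'y': {'w': 1, 'x': 1},
}
print(dijkstra(graph, 'u'))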

OR

(a) Define Unicast, Multicast, and Broadcast. (03 Marks)

Unicast:

● Definition: Unicast is a one-to-one communication where data is sent from one sender to a
single specific receiver.
● Example: Sending an email to one recipient.

Multicast:

● Definition: Multicast is a one-to-many communication where data is sent from one sender
to multiple specific receivers within a group.
● Example: Streaming a live event to subscribers in a particular network group.

Broadcast:
● Definition: Broadcast is a one-to-all communication where data is sent from one sender to
all devices in a network.
● Example: Sending an update to all computers in a local network.

(b) Find the subnet mask value to create the following number of subnets in class A:
a) 2
b) 5
c) 16
d) 63 (04 Marks)

a) For 2 Subnets

● 2^1 = 2 → 1 bit needed for 2 subnets.
● Subnet Mask: 1 additional bit changes the subnet mask from /8 to /9.
● Result: Subnet Mask = 255.128.0.0 (or /9 in CIDR notation).

b) For 5 Subnets

● 2^3 = 8 → 3 bits needed for 5 subnets (since 2^2 = 4 is not enough).
● Subnet Mask: 3 additional bits change the subnet mask from /8 to /11.
● Result: Subnet Mask = 255.224.0.0 (or /11 in CIDR notation).

c) For 16 Subnets

● 2^4 = 16 → 4 bits needed for 16 subnets.
● Subnet Mask: 4 additional bits change the subnet mask from /8 to /12.
● Result: Subnet Mask = 255.240.0.0 (or /12 in CIDR notation).

d) For 63 Subnets

● 2^6 = 64 → 6 bits needed for 63 subnets (since 2^5 = 32 is not enough).
● Subnet Mask: 6 additional bits change the subnet mask from /8 to /14.
● Result: Subnet Mask = 255.252.0.0 (or /14 in CIDR notation).

Summary of Subnet Masks

Number of Subnets → Bits Needed → Subnet Mask (CIDR) → Subnet Mask (Dotted Decimal)

● 2 → 1 → /9 → 255.128.0.0
● 5 → 3 → /11 → 255.224.0.0
● 16 → 4 → /12 → 255.240.0.0
● 63 → 6 → /14 → 255.252.0.0
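A small Python sketch (an illustrative aid, not part of the required answer) that derives these masks from the required number of subnets, assuming the Class A default prefix of /8:

import math
import ipaddress

def class_a_subnet_mask(num_subnets, default_prefix=8):
    bits = math.ceil(math.log2(num_subnets))      # subnet bits needed
    prefix = default_prefix + bits                # new prefix length
    mask = ipaddress.ip_network(f'0.0.0.0/{prefix}').netmask
    return bits, prefix, mask

for n in (2, 5, 16, 63):
    bits, prefix, mask = class_a_subnet_mask(n)
    print(f'{n} subnets -> {bits} bits -> /{prefix} -> {mask}')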

(c) Explain the distance vector routing protocol. (07 Marks)

● The Distance Vector Routing Algorithm is a distributed, asynchronous approach based
on the Bellman-Ford equation.
● Distance Vector Routing is a routing algorithm used in computer networks to determine
the best path for data packets from source to destination.
● Each router maintains a table (called a routing table) that contains information about the
distance (in terms of hops) to reach each network in the system.
● Routers exchange this information with their neighbors to update their tables and make
decisions on the shortest path.

Process:

1. Each node waits for a change in local link cost or receives an update from a neighbor.
2. Recomputes its estimates.
3. If the distance to any destination changes, the node sends updates to neighbors.
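Each recomputation applies the Bellman-Ford equation D_x(y) = min over neighbours v of { c(x, v) + D_v(y) }. A minimal Python sketch of one update step at a node x (the costs and neighbour tables below are hypothetical; the distance to x itself is 0 and is not recomputed):

def dv_update(cost_to_neighbor, neighbor_vectors, destinations):
    # cost_to_neighbor: dict v -> c(x, v) for each directly connected neighbor v
    # neighbor_vectors: dict v -> {y: D_v(y)}, the last distance vector received from v
    # destinations: all destinations y (other than x) in the network
    my_vector = {}
    for y in destinations:
        my_vector[y] = min(
            cost_to_neighbor[v] + neighbor_vectors[v].get(y, float('inf'))
            for v in cost_to_neighbor
        )
    return my_vector

# Hypothetical example: node x with neighbors y and z
cost_to_neighbor = {'y': 2, 'z': 7}
neighbor_vectors = {'y': {'x': 2, 'y': 0, 'z': 1}, 'z': {'x': 7, 'y': 1, 'z': 0}}
print(dv_update(cost_to_neighbor, neighbor_vectors, ['y', 'z']))   # -> {'y': 2, 'z': 3}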
Q.5

(a) Explain the bit stuffing method with an example. (03 Marks)

● What is it?: A technique used to ensure that special bit patterns (like frame delimiters) don’t
accidentally appear in the data stream.
● Why is it used?: To prevent confusion between actual data and control signals.
● How it works: Whenever five consecutive 1's appear in the data, a 0 is automatically inserted
(stuffed) after them, so the flag pattern 01111110 can never appear inside the data.
● Example:
○ Original data: 01111110
○ After bit stuffing: 011111010
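A small Python sketch of the stuffing rule (illustrative only):

def bit_stuff(bits):
    # Insert a '0' after every run of five consecutive '1's
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # stuffed bit
            run = 0
    return ''.join(out)

print(bit_stuff('01111110'))  # -> '011111010', matching the example above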
(b) What is flow control? How does the Stop-and-Wait protocol perform flow control?
(04 Marks)

Flow Control: Flow control is a mechanism to prevent a fast sender from overwhelming a slow
receiver.

Stop-and-Wait Protocol performs flow control in the following way:

1. Sender-Receiver Interaction: In the Stop-and-Wait protocol, the sender transmits a single
frame of data and then waits for an acknowledgment (ACK) from the receiver before
sending the next frame.
2. Prevents Overflow: The sender does not send the next frame until the acknowledgment of
the current frame is received. This ensures that the receiver has processed the current data
before more data is sent, thus preventing the receiver from being overwhelmed.
3. Simple and Effective: The protocol is simple and effective for managing flow control in
low-throughput or unreliable networks, but it can be inefficient in high-latency networks, as
the sender is idle while waiting for the acknowledgment.

In summary, Stop-and-Wait uses a basic method of flow control by ensuring the sender does not
exceed the receiver's capacity to process data.
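A bare-bones sketch of the sender side in Python, assuming hypothetical send_frame() and wait_for_ack() primitives and an alternating 0/1 sequence number:

def stop_and_wait_send(frames, send_frame, wait_for_ack, timeout=1.0):
    # send_frame(frame, seq) transmits one frame; wait_for_ack(timeout) returns the
    # acknowledged sequence number, or None on timeout (both are assumed primitives).
    seq = 0
    for frame in frames:
        while True:
            send_frame(frame, seq)            # transmit exactly one frame
            ack = wait_for_ack(timeout)       # block until ACK or timeout
            if ack == seq:                    # ACK for this frame received
                break                         # only now move on to the next frame
        seq = 1 - seq                         # alternate 0/1 sequence number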

(c) Explain the Go-Back-N protocol. What is the limitation of it? How does the
Selective Repeat protocol overcome the limitation of the Go-Back-N protocol? (07
Marks)

Go-Back-N (GBN) Protocol

○ Concept: The sender can send multiple packets (up to a specific number, called the
"window size") without waiting for an acknowledgment for each one. If a packet is
lost or an error is detected, the sender goes back and retransmits all packets
starting from the lost or erroneous packet.
○ How It Works:
■ The sender maintains a window of packets it can send without waiting for
an acknowledgment.
■ For example, if the window size is 4, the sender can send packets 1, 2, 3, and
4 before waiting for an acknowledgment.
■ Each time a packet is sent, it is held in a buffer until it’s acknowledged by the
receiver.
■ The receiver sends cumulative ACKs for the last correctly received packet
in order. If packets 1 and 2 are received correctly, the receiver sends an ACK
for packet 2, acknowledging both packets 1 and 2.
■ If the sender doesn’t receive an acknowledgment for a packet within a
specific time (due to packet loss or error), it retransmits all packets in the
window starting from the unacknowledged packet.
○ Limitations:
■ Inefficiency with Errors: If a single packet in the window is lost or
corrupted, the sender has to retransmit all the packets in the window, even if
some packets were received correctly. This can lead to a lot of unnecessary
retransmissions, reducing the protocol's efficiency, especially in
environments with high error rates.
Selective Repeat (SR) Protocol
○ Concept: In Selective Repeat, the sender also sends multiple packets at once, but it
only retransmits individual packets that are lost or contain errors rather than
retransmitting the entire window.
○ How It Works:
■ Similar to Go-Back-N, the sender has a window size and can send several
packets without waiting for an acknowledgment.
■ The receiver sends an acknowledgment (ACK) for each packet it receives,
even if packets are received out of order.
■ If a packet is received out of order, the receiver buffers it until it can fill in any
gaps (e.g., if it receives packets 1 and 3 but is missing packet 2, it waits for
packet 2 and keeps packet 3 in the buffer).
■ If a packet is lost or contains an error, the sender only retransmits the
specific packet that was not acknowledged, rather than the entire window.
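A compact Python skeleton of the Go-Back-N sender's window logic (send_packet, start_timer, and stop_timer are assumed primitives supplied by the caller; this is an illustrative sketch, not a full implementation):

class GBNSender:
    def __init__(self, window_size, send_packet, start_timer, stop_timer):
        self.N = window_size
        self.base = 0              # oldest unacknowledged packet
        self.next_seq = 0          # next sequence number to use
        self.buffer = {}           # packets held until acknowledged
        self.send_packet = send_packet
        self.start_timer = start_timer
        self.stop_timer = stop_timer

    def send(self, data):
        if self.next_seq < self.base + self.N:        # window not yet full
            self.buffer[self.next_seq] = data
            self.send_packet(self.next_seq, data)
            if self.base == self.next_seq:
                self.start_timer()
            self.next_seq += 1
            return True
        return False                                  # window full: caller must wait

    def on_ack(self, ack_num):                        # cumulative ACK up to ack_num
        self.base = ack_num + 1
        if self.base == self.next_seq:
            self.stop_timer()
        else:
            self.start_timer()

    def on_timeout(self):                             # retransmit the entire window
        self.start_timer()
        for seq in range(self.base, self.next_seq):
            self.send_packet(seq, self.buffer[seq])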

OR

(a) Define the following types of assumptions in the MAC sub-layer:
a) Single channel
b) N Stations
c) Collision (03 Marks)

Single Channel:

● Assumes there is only one shared communication channel that all stations in the network
must use for data transmission. All stations have to access the same channel for sending
data, which can lead to conflicts (collisions) if multiple stations transmit simultaneously.

N Stations:

● Assumes that there are N stations or devices in the network, all of which must share the
same communication medium. These stations are competing for access to the channel, and
the MAC sub-layer manages how these stations access the shared medium efficiently to
avoid data collisions and ensure fair communication.

Collision:

● Assumes that when two or more stations transmit at the same time on the same channel,
their signals interfere with each other, causing a collision. The MAC sub-layer is responsible
for detecting and resolving these collisions, usually through retransmission protocols like
CSMA/CD (Carrier Sense Multiple Access with Collision Detection).

(b) How does two-dimensional parity check work for detecting errors in the data link
layer? (04 Marks)

Two-dimensional Parity check bits are calculated for each row, which is equivalent to a simple parity
check bit. Parity check bits are also calculated for all columns, then both are sent along with the
data. At the receiving end, these are compared with the parity bits calculated on the received data.

How It Works:

1. Data Matrix:
The data is arranged in a matrix format, where each row contains data bits. A parity bit is
added at the end of each row.
2. Row Parity:
For each row, a parity bit is added to ensure the number of 1's in the row (including the parity
bit) is either even or odd, depending on the chosen parity type.
3. Column Parity:
Parity bits are also added for each column to ensure that the number of 1's in the column
(including the column parity bit) is even or odd.
4. Final Parity Bit:
A final parity bit is placed at the bottom-right corner of the matrix to ensure overall parity for
the entire matrix, taking into account both row and column parity bits.
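A short Python sketch that builds the row and column parity bits for a block of data, assuming even parity (the data bits are arbitrary example values):

def two_d_parity(rows):
    # rows: list of equal-length lists of bits (0/1); even parity assumed
    with_row_parity = [r + [sum(r) % 2] for r in rows]            # append row parity bit
    col_parity = [sum(col) % 2 for col in zip(*with_row_parity)]  # column parity row
    return with_row_parity + [col_parity]

data = [
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
]
for row in two_d_parity(data):
    print(row)
# The receiver recomputes the same parities; a mismatch in one row and one
# column pinpoints (and allows correcting) a single-bit error.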

(c) How do Pure ALOHA and Slotted ALOHA protocols work? (07 Marks)

Pure Aloha Protocol

● It allows users to transmit whenever they have data to be sent.
● Senders wait to see whether a collision occurred (after the whole message has been sent).
● If collision occurs, each station involved waits a random amount of time then tries again.
● Systems in which multiple users share a common channel in a way that can lead to conflicts
are widely known as contention systems.
● Whenever two frames try to occupy the channel at the same time, there will be a collision
and both will be garbled.
● If the first bit of a new frame overlaps with just the last bit of a frame almost finished, both
frames will be totally destroyed and both will have to be retransmitted later.
● Frames are transmitted at completely arbitrary times.
● The throughput of the Pure ALOHA is maximized when the frames are of uniform length.
● The formula for the throughput of Pure ALOHA is S = G · e^(−2G).
● The throughput is maximum at G = 1/2, giving S = 1/(2e) ≈ 0.184, i.e., about 18.4% of the transmitted frames succeed.

Slotted ALOHA Protocol

● It was invented to improve the efficiency of pure ALOHA as chances of collision in pure
ALOHA are very high.
● The time of the shared channel is divided into discrete intervals called slots.
● The stations can send a frame only at the beginning of the slot and only one frame is sent in
each slot.
● If any station is not able to place the frame onto the channel at the beginning of the slot
then the station has to wait until the beginning of the next time slot.
● The formula for the throughput of Slotted ALOHA is S = G · e^(−G).
● The throughput is maximum at G = 1, giving S = 1/e ≈ 0.368, i.e., about 36.8% of the transmitted frames succeed.
● At this load, about 37% of the slots are empty, 37% carry successful transmissions, and 26% suffer collisions.
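A tiny Python check of these throughput formulas (for illustration only):

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)      # S = G * e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)          # S = G * e^(-G)

print(pure_aloha_throughput(0.5))    # ~0.184 -> ~18.4% at G = 1/2
print(slotted_aloha_throughput(1.0)) # ~0.368 -> ~36.8% at G = 1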
