Datacom Answers
1. When you access a Web site through a browser, you get a message
"Connecting to" followed by the IP address of the Web server. What is the use of
this address to you? Is it necessary that this address be displayed at all? Discuss.
Answer
When you access a Web site through a browser, the Domain Name System (DNS) of
your Internet service provider returns the IP address of the origin server on which
the resource is located. A TCP connection is then established between the client
and the origin server.
When you see the message "Connecting to" followed by the IP address of the Web
server, it is an indication that the DNS of your ISP has done its job.
If the DNS is down, you will not get this message and will not be able to access
the URL. Note that the DNS may be working, but you may still be unable to
access the resource if the origin server is down.
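The resolve-then-connect sequence can be seen in a short Python sketch (the host
name example.com and port 80 are placeholders, not part of the question):

    import socket

    # Step 1: ask DNS for the IP address -- the address the browser shows
    # in its "Connecting to <IP address>" message.
    ip_address = socket.gethostbyname("example.com")
    print("Connecting to", ip_address)

    # Step 2: open a TCP connection to the origin server at that address.
    with socket.create_connection((ip_address, 80), timeout=5) as conn:
        print("Connected to", conn.getpeername())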
2. Standardization of various protocols by the international standards bodies is
done through consensus: everyone has to accept the proposal, and only then does
it become a standard. For standardizing the OSI architecture, there were
two proposals: one based on six layers and the other based
on eight layers. Finally, seven layers were accepted (just the average of six and
eight, for no other reason!). Develop and suggest six-layer and eight-layer
architectures and study the pros and cons of each.
Answer
A six-layer architecture for computer communication can be obtained simply by
eliminating the session layer from the ISO/OSI architecture; session-layer
functionality is minimal and can be absorbed by the adjacent layers. An
eight-layer architecture can add a layer that provides security features,
running above the transport layer.
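In practice, such a security layer above transport resembles how TLS is layered
over TCP today. A minimal Python sketch, assuming a reachable host
(example.com is only a placeholder):

    import socket
    import ssl

    # A "security layer" above the transport layer: a plain TCP connection is
    # wrapped so that everything above it sees an encrypted, authenticated stream.
    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443), timeout=5) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
            print("Negotiated", tls_sock.version())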
The fields of the TCP header are:
1. Source Port: the 16-bit port number of the process that originated
the TCP segment on the source device.
2. Destination Port: the 16-bit port number of the process that is the
ultimate intended recipient of the message on the destination device.
3. Sequence Number
4. Acknowledgment Number
5. Data Offset: specifies the length of the TCP header in 32-bit words.
6. Reserved
7. Control bits: bits that are set to indicate the
communication of control information.
8. Window
9. Checksum
10. Urgent Pointer: used in conjunction with the URG control bit for
priority data transfer. This field contains the sequence number of the
last byte of urgent data.
11. Options: this variable-length field specifies extra TCP options, such as the
maximum segment size.
12. Padding
13. Data
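A short Python sketch of how the fixed 20-byte part of this header is laid out;
the parsing below is illustrative only and assumes a raw header in the bytes
object named segment. Options and padding (fields 11 and 12) follow only when
the data offset exceeds 20 bytes.

    import struct

    def parse_tcp_header(segment: bytes) -> dict:
        """Unpack the fixed 20-byte portion of a TCP header."""
        (src_port, dst_port, seq, ack,
         offset_reserved_flags, window, checksum, urgent_ptr) = struct.unpack(
            "!HHLLHHHH", segment[:20])
        return {
            "source_port": src_port,                                  # field 1
            "destination_port": dst_port,                             # field 2
            "sequence_number": seq,                                   # field 3
            "acknowledgment_number": ack,                             # field 4
            "data_offset_bytes": (offset_reserved_flags >> 12) * 4,   # field 5, in bytes
            "control_bits": offset_reserved_flags & 0x3F,             # field 7 (low 6 bits)
            "window": window,                                         # field 8
            "checksum": checksum,                                     # field 9
            "urgent_pointer": urgent_ptr,                             # field 10
        }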
that the sum of the application data rates is less than the capacities of each and
every link. Is some form of congestion control needed? Why?
Answer:
(a) A circuit-switched network would be well suited to the application described,
because the application involves long sessions with predictable smooth
bandwidth requirements. Since the transmission rate is known and the traffic is
not bursty, bandwidth can be reserved for each application session circuit with no
significant waste. In addition, we need not worry greatly about the overhead
costs of setting up and tearing down a circuit connection, which are amortized
over the lengthy duration of a typical application session.
(b) Given such generous link capacities, the network needs no congestion control
mechanism. In the worst (most potentially congested) case, all the applications
simultaneously transmit over one or more particular network links. However,
since each link offers sufficient bandwidth to handle the sum of all of the
applications' data rates, no congestion (very little queueing) will occur.
15. The following question is about propagation delay and transmission delay.
Consider two hosts, A and B, connected by a single link of rate R bps. Suppose that
the two hosts are separated by m meters, and suppose the propagation speed
along the link is s meters/second. Host A is to send a packet of size L bits to Host
B.
(a) Express the propagation delay, dprop, in terms of m and s.
(b) Determine the transmission time of the packet, dtrans, in terms of L and R.
(c) Ignoring processing and queueing delays, obtain an expression for the end-to-end delay.
(d) Suppose Host A begins to transmit the packet at time t = 0. At time t = dtrans,
where is the last bit of the packet?
(e) Suppose dprop is greater than dtrans. At time t = dtrans, where is the first bit of
the packet?
(f) Suppose dprop is less than dtrans. At time t = dtrans, where is the first bit of the
packet?
(g) Suppose s = 2.5 × 10^8 m/s, L = 100 bits, and R = 28 kbps. Find the distance m so
that dprop equals dtrans.
Answer
(a) The propagation delay in terms of m and s is:
dprop = m/s seconds
(b) The time taken to transmit L bits of data at a rate of R bps is:
dtrans = L/R seconds
(c) The end-to-end delay is the sum of all delays that the data encounters; ignoring
processing and queueing, it is the sum of the propagation and transmission delays only:
dtotal = m/s + L/R seconds
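For part (g), setting dprop = dtrans gives m = L·s/R. A quick Python check with the
given values:

    # Values from part (g): s = 2.5e8 m/s, L = 100 bits, R = 28 kbps.
    s = 2.5e8          # propagation speed, metres per second
    L = 100            # packet size, bits
    R = 28e3           # link rate, bits per second

    d_trans = L / R            # transmission delay, seconds (~3.57 ms)
    m = d_trans * s            # distance at which d_prop equals d_trans
    print(d_trans, m)          # ~0.00357 s, ~892857 m (about 893 km)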
32. Host A is sending an enormous file to Host B over a TCP connection. Over this
connection there is never any packet loss and the timers never expire. Denote the
transmission rate of the link connecting Host A to the Internet by R bps. Suppose
that the process in Host A is capable of sending data into its TCP socket at a rate
of S bps, where S = 10*R. Further suppose that the TCP receive buffer is large
enough to hold the entire file, and the send buffer can hold only 1% of the file.
What would prevent the process in Host A from continuously passing data to its
TCP socket at a rate of S bps? TCP flow control? TCP congestion control? Or
something else? Explain.
Answer
In this situation there is no danger of overflowing the receiver, since there is no
loss and acknowledgements are returned before timeouts. TCP congestion
control does not hold back the sender either. But Host A will not be able to pass
data into the socket continuously, because the send buffer (which holds only 1%
of the file) quickly fills up; writes into the socket then stall until TCP drains the
buffer at rate R, so it is the limited send buffer, not flow control or congestion
control, that throttles the process.
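A minimal sketch of that behaviour, assuming an already connected TCP socket
named sock: a blocking send returns only after the kernel has accepted the data
into the send buffer, so the application stalls whenever the buffer is full.

    import socket

    def push_file(sock: socket.socket, data: bytes, chunk: int = 64 * 1024) -> None:
        """Write data into a TCP socket; each sendall() blocks while the
        send buffer is full, pacing the application down towards rate R."""
        for start in range(0, len(data), chunk):
            sock.sendall(data[start:start + chunk])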
39. Suppose the IEEE 802.11 RTS and CTS frames were as long as the standard
DATA and ACK frames. Would there be any advantage to using the CTS and RTS
frames? Why?
Answer
=> Yes, they can still help with the hidden terminal problem. The RTS and CTS
frames make sure that a node A sending to B will not interfere with another
node C that is also attempting to reach B, even if A and C cannot hear each other.
==> However, the advantage is much smaller than with short RTS/CTS frames.
First, if a collision occurs while transmitting such a long RTS frame, one could
just as well have transmitted the DATA frame itself; the point of RTS/CTS is to
keep the frames that may collide short compared with the large amount of data
being protected. Second, the RTS/CTS exchange introduces delay and consumes
channel resources.
42. What is the essential difference between Dijkstra's algorithm and the
Bellman-Ford algorithm?
Answer
==> Dijkstra's algorithm requires all edge costs to be non-negative, whereas the
Bellman-Ford algorithm does not. Both are used to find shortest paths, so, for
example, either could be used to suggest shortest driving routes; a sketch of the
two follows.
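A compact Python sketch of both algorithms on a toy graph (the graph and its
costs are made up for illustration). The negative edge violates Dijkstra's
assumption, so only Bellman-Ford's result is guaranteed correct here:

    import heapq

    def dijkstra(graph, source):
        """Shortest-path distances from source; assumes non-negative edge costs."""
        dist = {node: float("inf") for node in graph}
        dist[source] = 0
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                      # stale heap entry
            for v, w in graph[u].items():
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    def bellman_ford(graph, source):
        """Shortest-path distances from source; tolerates negative edge costs
        (as long as there is no negative cycle)."""
        dist = {node: float("inf") for node in graph}
        dist[source] = 0
        for _ in range(len(graph) - 1):       # relax all edges |V|-1 times
            for u in graph:
                for v, w in graph[u].items():
                    if dist[u] + w < dist[v]:
                        dist[v] = dist[u] + w
        return dist

    g = {"a": {"b": 4, "c": 2}, "b": {"d": -3}, "c": {"b": 1}, "d": {}}
    print(bellman_ford(g, "a"))   # {'a': 0, 'b': 3, 'c': 2, 'd': 0}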
43. There is one kind of adaptive routing scheme known as backward learning. As
a packet is routed through the network, it carries not only the destination
address but also the source address, plus a running hop count that is incremented for
each hop.
Each node builds a routing table that gives the next node and hop count for
each destination. How is the packet information used to build the table? What
are the advantages and disadvantages of this technique?
Answer
==> Each node inspects the source address and hop count of the packets it sees:
if the hop count is lower than the value currently stored for that source, the node
records the incoming line as the next node and the hop count as the distance to
that source, so the routing table is learned "backwards" from observed traffic
(a small sketch follows). The popularity of adaptive routing of this kind is mainly
due to the fact that it improves the performance of the network and aids in
avoiding congestion. Its main disadvantages are the extra source-address and
hop-count fields carried in every packet, and the fact that entries are only as good
as the traffic already seen, so the table adapts slowly when a learned route degrades.
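A minimal sketch of the table-update rule, with hypothetical names (table, src,
line, hops) chosen only for illustration:

    def backward_learning_update(table: dict, src: str, line: str, hops: int) -> None:
        """Record that src is reachable back through line in hops hops,
        keeping the entry only if it beats the best route seen so far."""
        best = table.get(src)
        if best is None or hops < best[1]:
            table[src] = (line, hops)

    table = {}
    backward_learning_update(table, "H1", "line-2", 5)
    backward_learning_update(table, "H1", "line-3", 3)   # better route replaces the old entry
    print(table)   # {'H1': ('line-3', 3)}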
44. Consider a system using flooding with a hop counter. Suppose that the hop
counter is originally set to the diameter of the network. When the hop count
reaches zero, the packet is discarded except at its destination. Does this always
ensure that a packet will reach its destination if there exists at least one operable
path? Why or why not?
Answer
Yes. With flooding, all possible paths are used, so at least one copy of the packet
follows a minimum-hop path to the destination. Since the hop counter starts at the
network diameter, which is at least as large as the hop count of that path, the
counter cannot reach zero before that copy arrives.
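A toy Python simulation of hop-limited flooding on a small made-up topology
(a four-node chain whose diameter is 3):

    from collections import deque

    def flood_reaches(graph, src, dst, hop_counter):
        """Flood copies of a packet, decrementing the hop counter at each hop;
        a copy is discarded when the counter hits zero, unless it is at dst."""
        queue = deque([(src, hop_counter)])
        while queue:
            node, hops = queue.popleft()
            if node == dst:
                return True
            if hops == 0:
                continue                      # discard everywhere except the destination
            for neighbour in graph[node]:
                queue.append((neighbour, hops - 1))
        return False

    g = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
    print(flood_reaches(g, "A", "D", hop_counter=3))   # diameter is 3 -> True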
45. Why is it that when the load exceeds the network capacity, delay tends to
infinity?
Answer
Here is a simple intuitive explanation of why delay must go to infinity:
Suppose that each node in the network is equipped with buffers of infinite size
and suppose that the input load exceeds network capacity. Under ideal
conditions, the network will continue to sustain a normalized throughput of 1.0.
Therefore, the rate of packets leaving the network is 1.0. Because the
rate of packets entering the network is greater than 1.0, internal queue sizes
grow. In the steady state, with input greater than output, these queue sizes grow
without bound and therefore queuing delays grow without bound.
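A tiny numerical illustration of the argument (the rates are arbitrary, normalized
so that network capacity is 1.0):

    # With input rate above output rate, the backlog -- and hence the queueing
    # delay seen by a newly arriving packet -- grows without bound.
    arrival_rate = 1.2    # normalized offered load (> 1.0)
    service_rate = 1.0    # normalized throughput the network sustains
    backlog = 0.0
    for tick in range(1, 6):
        backlog += arrival_rate - service_rate
        delay = backlog / service_rate
        print(f"tick {tick}: backlog={backlog:.1f}, queueing delay={delay:.1f} ticks")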
46. What is the difference between backward and forward explicit congestion
signaling?
Answer
Backward: Notifies the source that congestion avoidance procedures should be
initiated where applicable for traffic in the opposite direction of the received
notification. It indicates that the packets that the user transmits on this logical
connection may encounter congested resources.
Forward: Notifies the user that congestion avoidance procedures should be
initiated where applicable for traffic in the same direction as the received
notification. It indicates that this packet, on this logical connection, has
encountered congested resources.
49. Explain the difference between slow FHSS and fast FHSS.
Answer
=> Slow FHSS transmits multiple signal elements per hop, i.e., the hop rate is
lower than the signal (symbol) rate. Fast FHSS makes multiple hops per signal
element, i.e., the carrier hops one or more times within each symbol.