In This Issue:
  From the Editor
  Starlink and TCP
  DNS Evolution
  Fragments
  Thank You
  Call for Papers
  Supporters and Sponsors

From the Editor

Internet access by means of Low Earth Orbit (LEO) satellites has become very popular in recent years, particularly in rural areas where alternative solutions are limited. We covered this technology in an article in our September 2023 issue (Volume 26, No. 2). The benefits of LEO systems include a much lower cost to launch and place the satellites into a low orbit, and a shorter Round Trip Time (RTT) as compared to solutions involving geosynchronous satellites. However, since LEO satellites move across the sky, a complex system of tracking and handoffs is deployed in order to provide continuous connectivity to the end user. In our first article, Geoff Huston examines the performance of Starlink from the point of view of the Transmission Control Protocol (TCP).

When I joined the Network Information Center (NIC) at SRI International in 1984, I was handed two Request For Comments (RFCs) describing the Domain Name System (DNS), and I was told that the DNS would soon be deployed across the Internet (mainly known as ARPANET and MILNET at the time). The NIC was still maintaining and publishing a host table in 1984, and it would take a couple of years before the DNS became fully operational. Our second article, also by Geoff Huston, looks at how the DNS has evolved in the last 40 years with various enhancements and extensions. The DNS is still one of the most active areas of work within the Internet Engineering Task Force (IETF).

Publication of this journal is made possible by the generous support of our donors, supporters, and sponsors. We also depend on your feedback and suggestions. If you would like to comment on, donate to, or sponsor IPJ, please contact us at ipj@protocoljournal.org

You can download IPJ back issues and find subscription information at: www.protocoljournal.org

ISSN 1944-1134

—Ole J. Jacobsen, Editor and Publisher
ole@protocoljournal.org
A View of Starlink from a Transport Protocol
by Geoff Huston, APNIC
Digital communications systems always represent a collection
of design trade-offs. Maximising one characteristic of a system
may impair other characteristics, and various communica-
tions services may offer different performance characteristics based
on the intersection of these design decisions with the physical char-
acteristics of the communications medium. In this article I’ll look at
the Starlink service[0,1], and how the Transmission Control Protocol
(TCP)—the transport-protocol workhorse of the Internet—interacts
with the Starlink service.
To start, it’s useful to recall a small piece of Newtonian physics from
some 340 years ago[2]. On the surface of the earth, assuming that you are
high enough to clear various mountains that may be in the way—and
also assuming that the earth has no friction-inducing atmosphere—if
you fire a projectile horizontally fast enough it will not return to the
earth, but head into space. There is, however, a critical velocity where
the projectile will be captured by the earth’s gravity and neither fall to
ground nor head out into space. That orbital velocity at the surface of the earth is some 28,500 km/hour (about 7.9 km/sec). The orbital velocity decreases with altitude, and at an altitude of 35,786 km above the surface of the earth the orbital velocity of the projectile relative to a point on the surface of the spinning earth is 0 km/hour. This is the altitude of a geosynchronous
equatorial orbit, where the object appears to sit at a fixed location in
the sky.
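These figures can be checked with Newton's formulas. A short Python sketch, using assumed standard values for the earth's gravitational parameter and mean radius, computes the circular orbital speed and period at a few altitudes:

from math import sqrt, pi

MU = 398_600.4418      # earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0      # mean earth radius, km

def orbital_speed_kmh(altitude_km):
    """Circular orbital speed v = sqrt(mu / r) at a given altitude."""
    r = R_EARTH + altitude_km
    return sqrt(MU / r) * 3600

def period_minutes(altitude_km):
    """Orbital period T = 2 * pi * sqrt(r^3 / mu)."""
    r = R_EARTH + altitude_km
    return 2 * pi * sqrt(r ** 3 / MU) / 60

print(f"surface:   {orbital_speed_kmh(0):,.0f} km/h")         # ~28,500 km/h
print(f"550 km:    {orbital_speed_kmh(550):,.0f} km/h")        # ~27,300 km/h
print(f"35,786 km: {period_minutes(35_786):,.0f} minutes")     # ~1,436 min

The geosynchronous period of roughly 1,436 minutes is one sidereal day, which is why such a satellite appears to sit still in the sky.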
Geosynchronous Services
Geosynchronous satellites were the favoured approach for the first
wave of satellite-based communications services. Each satellite could
“cover” an entire hemisphere. If the satellite was on the equatorial
plane, then it was at a fixed location in the sky with respect to the
earth, allowing the use of large antennas. These antennas could operate at a high signal-to-noise ratio, allowing the signal modulation to use a high density of discrete phase-amplitude points, which lifted the
capacity of the service. All these advantages have to be offset against the less-favourable aspects of the service, the most significant of which is latency: a signal must travel at least 35,786 km up to the satellite and the same distance back down, so each traversal adds roughly 240 ms of propagation delay, and the minimum round-trip time through the satellite approaches half a second.

This extended latency means that the endpoints need to use large buffers to hold a copy of all unacknowledged data, as the TCP protocol requires. TCP is a feedback-governed protocol that uses
ACK pacing. The longer the RTT the greater the lag in feedback, and
the slower the response from endpoints to congestion or to avail-
able capacity. The congestion considerations lead to the common
use of large buffers in the systems that drive the satellite circuits, which
can further exacerbate congestion-induced instability. In geosynchro-
nous service contexts, the individual TCP sessions are more prone to
instability, and they experience longer recovery times following loss events[3].
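To put a rough number on the buffer requirement, the amount of unacknowledged data a sender must hold is the bandwidth-delay product of the path. A small sketch, assuming an illustrative 100-Mbps service rate (the rate is an assumption, not a figure from this article):

rtt_s = 0.48                 # ~480 ms minimum RTT through a geosynchronous satellite
rate_bps = 100_000_000       # assumed 100 Mbps service rate, for illustration only
bdp_bytes = rate_bps * rtt_s / 8
print(f"bandwidth-delay product: {bdp_bytes / 1e6:.1f} MB of unacknowledged data")   # ~6.0 MB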
Low Earth Orbit Services
This group of orbital altitudes, from some 160 to 2,000 km, is collectively termed LEO[4]. The objective is to keep the orbit of the satellite high enough that it is not slowed by grazing the denser parts of the earth's atmosphere, but not so high that it loses the radiation
protection afforded by the Inner Van Allen belt. At a height of 550 km,
the minimum signal propagation delay to reach the satellite and return
to the surface of the earth is just 3.7 ms.
But these advantages come with their own issues. At a height of
550 km, an orbiting satellite can be seen from only a small part of the
earth. If the minimum effective elevation to establish communication
is 25 degrees of elevation above the horizon, then the footprint of the
satellite is a circle with a radius of 940 km, or a circle of area 2M km².
At this altitude, the satellite orbits with a relative speed of 27,000 km/
hour and it passes across the sky from horizon to horizon in less than
5 minutes. Some implications for the design of the radio component of
the service are evident. The satellites are close enough that there is no need for large dish antennas with mechanised steering arrangements, but this situation is not without its downsides. An
individual signal carrier might be initially received as a weak signal (in
relative terms), increase in strength as the satellite transponder and the
earth antenna move into alignment, and weaken again as the satellite
moves on. Starlink’s services use a phased-array arrangement with a
grid of smaller antennas on a planar surface, which allows the anten-
nas to be electronically steered by altering the phase difference between
each of the antennas in the grid. Even so, this arrangement is relatively
coarse, so the signal quality is not consistent, implying a constantly
variable signal-to-noise ratio as the phased-array antenna tracks each
satellite.
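The footprint and pass-time figures quoted above follow from a little spherical geometry. A short sketch, again using assumed standard earth constants, reproduces the 940-km footprint radius at a 25-degree minimum elevation and the sub-five-minute overhead pass:

from math import acos, cos, radians, pi, sqrt

MU, R = 398_600.4418, 6_371.0          # km^3/s^2 and km, assumed standard values

def footprint(alt_km, min_elev_deg):
    """Central angle and ground radius of the circle that sees the satellite."""
    e = radians(min_elev_deg)
    psi = acos((R / (R + alt_km)) * cos(e)) - e
    return psi, R * psi

def pass_minutes(alt_km, psi):
    """Time for a directly overhead pass to cross the visibility circle."""
    period_s = 2 * pi * sqrt((R + alt_km) ** 3 / MU)
    return period_s * (2 * psi) / (2 * pi) / 60

psi, radius = footprint(550, 25)
print(f"footprint radius: {radius:.0f} km")                       # ~940 km
print(f"overhead pass:    {pass_minutes(550, psi):.1f} minutes")   # under 5 minutes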
One way to see how this variability affects the service characteristics is to use a capacity-measurement tool to measure the service capacity at regular intervals. The results of such regular testing are shown in Figure 1. Here the test is a Speedtest measurement[5], performed on a 4-hourly basis for the period January 2024 through March 2024.
In Internet terms, ping[6] is a very old tool. At the same time, it is very useful, which probably explains its longevity. Figure 2 shows a
plot of a continuous (flood) ping across a Starlink connection from the
customer-side terminal to the first IP endpoint behind the Starlink
earth station.
The first major characteristic of this data is that the minimum latency
changes every 15 seconds. This change appears to correlate with the user being assigned to a different satellite, which implies that the
user equipment “tracks” each spacecraft for 15-second intervals. This
period corresponds to a tracking angle of 11 degrees of arc.
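A rough sanity check of that tracking angle, using the orbital figures quoted earlier (approximate values, for illustration only):

from math import degrees

speed_kms = 27_000 / 3600            # ~7.5 km/s ground-relative speed at 550 km
angular_rate = speed_kms / 550       # rad/s as seen from directly beneath the satellite
print(f"{degrees(angular_rate * 15):.0f} degrees of arc in 15 seconds")   # ~11-12 degrees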
The second characteristic is that loss events are seen to occur at times
of switchover between satellites (as shown in Figure 3), as well as
occurring less frequently as a result of obstruction, signal quality, or
congestion.
This classic window-based behaviour is called the Reno TCP control algorithm. Its use in today's Internet has been largely supplanted by the CUBIC TCP control algorithm[8], which uses a varying window-inflation rate that attempts to stabilise the sending rate just below the level at which network queues build up and, ultimately, packets are lost.
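The shape of that window-inflation behaviour is given by CUBIC's window function, defined in RFC 8312. A brief sketch, with an arbitrary illustrative window size at the last loss event:

C, BETA = 0.4, 0.7                   # RFC 8312 default constants

def cubic_window(t, w_max):
    """Congestion window (in segments) t seconds after a loss event.
    The window is cut to BETA * w_max, then grows along a cubic curve that
    flattens as it approaches w_max, the window size at the last loss."""
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)    # time taken to regain w_max
    return C * (t - k) ** 3 + w_max

w_max = 100.0                         # assumed window at the last loss event
for t in range(0, 11, 2):
    print(f"t={t:2d}s  cwnd={cubic_window(t, w_max):6.1f} segments")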
Obviously, as we’ve noted, the first two conditions do not hold for
end-to-end paths that include a Starlink component. The loss profile
is also different. There is the potential for congestion-induced packet
loss, as is the case in any non-synchronous packet-switched medium,
but an additional loss component can occur during satellite handover,
and other impairments can further affect the radio signal.
In this case, BBR (Bottleneck Bandwidth and Round-trip propagation time) has made an initial estimate of some 250 Mbps for the path bandwidth. This estimate appears to have been revised at second 14 to 350 Mbps, and then dropped to 200 Mbps 15 seconds later
for the final 10 seconds of this test. It is likely that these changes are the
result of BBR responding to satellite handover in Starlink.
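One way to picture this behaviour is BBR's bandwidth filter: the sender samples the delivery rate of acknowledged data and uses the maximum over a window of recent samples as its path-bandwidth estimate, so a rate change at a satellite handover shows up only after older samples age out. A minimal sketch with invented sample values:

from collections import deque

class BandwidthFilter:
    def __init__(self, window=3):                 # window shortened for illustration
        self.samples = deque(maxlen=window)

    def update(self, delivered_bytes, interval_s):
        self.samples.append(delivered_bytes / interval_s)
        return max(self.samples)                  # current bottleneck-rate estimate

bw = BandwidthFilter()
for rate_mbps in (250, 260, 350, 340, 200, 195):  # invented per-second samples
    est = bw.update(rate_mbps * 125_000, 1.0)     # Mbps to bytes per second
    print(f"estimate: {est / 125_000:.0f} Mbps")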
The same BBR test was performed at an off-peak time and had a very similar outcome (Figure 8).
GEOFF HUSTON AM, B.Sc., M.Sc., is the Chief Scientist at APNIC, the Regional
Internet Registry serving the Asia Pacific region. He has been closely involved with the
development of the Internet for many years, particularly within Australia, where he
was responsible for building the Internet within the Australian academic and research
sector in the early 1990s. He is author of numerous Internet-related books and was
a member of the Internet Architecture Board from 1999 until 2005. He served on
the Board of Trustees of the Internet Society from 1992 until 2001. At various times
Geoff has worked as an Internet researcher, an ISP systems architect, and a network
operator. E-mail: gih@apnic.net
DNS Evolution
by Geoff Huston, APNIC

The Domain Name System (DNS) is a crucial part of today's
Internet. With the fracturing of network address space as a
byproduct of IPv4 address rundown and the protracted IPv6
transition, the namespace of the Internet is now the defining attri-
bute that makes it one network. However, the DNS is not a rigid and
unchanging technology. It has changed considerably over the lifetime
of the Internet, and here I’d like to look at what has changed and what
has remained the same.
Some five years later, in 1983, RFC 882[2] defined a hierarchical name-
space using a tree-structured name hierarchy. It also defined a name server as a service that holds information about a part of the name hierarchy and can refer queriers to other name servers that hold information about lower parts of the hierarchy. The document also defined
a resolver that can resolve names into their stored attributes by
following referrals to find the appropriate name server to query,
and then obtaining this information from the server. RFC 883[3] de-
fined the DNS query and response protocol, a simple stateless protocol.
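The referral-following behaviour these documents describe can be sketched with the dnspython library. This is only an illustration: it assumes the well-known address of a.root-servers.net, asks for larger UDP responses so referrals keep their glue records, and ignores caching, CNAME chasing, IPv6, and referrals that arrive without glue (www.google.com is used below because its delegations do carry glue at each step):

import dns.message
import dns.query
import dns.rdatatype

ROOT = "198.41.0.4"                       # a.root-servers.net

def resolve(name, server=ROOT):
    """Follow referrals down the DNS tree until a server answers the query."""
    while True:
        query = dns.message.make_query(name, dns.rdatatype.A,
                                       use_edns=0, payload=1232)
        response = dns.query.udp(query, server, timeout=3)
        if response.answer:               # this server holds the answer
            return [rdata.address for rdata in response.answer[0]]
        # Otherwise the response is a referral: take a glue address from the
        # additional section and ask a server one level further down the tree.
        glue = [rdata.address
                for rrset in response.additional
                if rrset.rdtype == dns.rdatatype.A
                for rdata in rrset]
        if not glue:
            raise RuntimeError("referral without glue; a full resolver is needed")
        server = glue[0]

print(resolve("www.google.com"))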
Evolutionary Pressures
However, I think that such a perspective ignores a large body of refinement that has occurred in the DNS world. The DNS is by no means perfect: it can be extremely slow to resolve a name, and even slower to incorporate changes into its distributed data framework.
For a common and fundamental service that every user not only uses,
but implicitly relies upon, the DNS in practice is far from a paragon of
sound operational engineering.
DNS Privacy
The DNS is not what you might call a discreet protocol. By default, queries are made in the clear. The IP addresses of the querier, the server
being queried, and the name being queried are visible to any party that
is in a position to inspect DNS traffic. These parties include not only
potential eavesdroppers in the network, but also the operating system
platform that hosts the application making the DNS query, the recur-
sive resolver that receives the query, and any forwarding agent that
the recursive resolver uses. Depending on the state of the local cache
in the recursive resolver, the recursive resolver may need to perform
some level of top-down navigation through the nameserver hierarchy, asking an authoritative server at each level for the full original query name. The recursive resolver normally lists itself as the source of these
queries, so the identity of the original user is occluded, but the query
name is still visible.
There is some overhead to setting up a TLS session, and the most efficient use of DNS over TLS (DoT) is in the stub-to-recursive DNS environment,
where a single TLS session can be kept open and reused for subsequent
queries, amortizing the initial setup overheads across these queries.
The standard specification of DoT defines the use of TCP port 853,
which allows an onlooker to identify that DoT is being used and iden-
tify the two end parties by their IP addresses, but not the DNS queries
or responses.
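A single DoT query can be made with dnspython's TLS transport, as sketched below; 1.1.1.1 is used purely as an example of a public resolver that offers DoT, and a real stub resolver would keep the TLS session open and reuse it for later queries:

import dns.message
import dns.query

# The DNS payload is unchanged; only the transport differs: the query is sent
# inside a TLS session to TCP port 853 rather than in cleartext UDP.
query = dns.message.make_query("www.example.com", "AAAA")
response = dns.query.tls(query, "1.1.1.1", port=853, timeout=5)
for rrset in response.answer:
    print(rrset)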
You might think that a tool that allows the client to verify a DNS
response would be immediately popular. If the relationship between
the names that applications use and services and IP addresses that
are used at the protocol level is disrupted, then users can be readily
deceived. Yet, after close to three decades from its initial specification,
DNSSEC is still struggling to achieve mainstream adoption. Part of the
issue is that the strong binding of the DNS protocol to a UDP trans-
port causes a set of problems when responses bloat in size because of
attached signatures and keys. Another part of the issue lies in the care
and attention required to manage cryptographic keys and the unforgiv-
ing nature of cryptographic validation. And a large part of the problem is that once the Web began using TLS to verify the identity of a remote server, many did not consider the marginal benefit of DNSSEC in the DNS part of session creation to be worth the additional effort and cost of deploying it.
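From a client's perspective, the smallest visible piece of DNSSEC is the DO bit in the query and the AD flag in a validating resolver's response. A brief dnspython sketch, using 8.8.8.8 only as an example of a validating public resolver:

import dns.flags
import dns.message
import dns.query

# want_dnssec sets the DO bit, asking for RRSIG records along with the answer.
query = dns.message.make_query("example.com", "A", want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=3)
# A validating resolver sets the AD (Authenticated Data) flag when the answer's
# signatures chain back to the root trust anchor. The larger, signature-laden
# responses are exactly what strains DNS over UDP and forces fallback to TCP.
print("validated:", bool(response.flags & dns.flags.AD))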
This functional shift was further extended in the Service Binding and
Parameter Specification via the DNS (SVCB and HTTPS Resource
Records) specification, RFC 9460[11]. By providing more information
to the client before it attempts to establish a connection, these records
offer potential benefits to both performance and privacy. These
enhancements represent a shift in the design approach of the DNS,
where the prior use of DNS resource record types was to segment the
information associated with a DNS name, so that a complete collection
of information about a service name was obtained by making a set of
queries. The SVCB record effectively provides an “omnibus” response
to a service query, so that the client can gather sufficient information
to connect to a service with a single DNS transaction.
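Recent versions of dnspython can query the HTTPS record type directly. The sketch below uses cloudflare.com purely as an example of a zone known to publish such a record, and prints the bundled service parameters that would otherwise take several separate queries to collect:

import dns.resolver

answers = dns.resolver.resolve("cloudflare.com", "HTTPS")
for record in answers:
    # A single record can carry ALPN protocol hints, address hints, and ECH
    # configuration, so the client can start connecting after one transaction.
    print(record.priority, record.target, record.params)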
Delegation Records
One of the fundamental parts of the DNS data structure is the delega-
tion record, which passes the control of an entire subtree in the DNS
hierarchy from one node to another.
While this NS record has served the DNS since its inception, it has
a few limitations. The target of the delegation record is one or more
DNS server names, not their IP addresses.
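That limitation is easy to see with a couple of queries: the delegation carries server names, and their addresses must be learned separately (example.com is used purely as an illustration):

import dns.resolver

ns_records = dns.resolver.resolve("example.com", "NS")
for ns in ns_records:
    # Each target is a server *name*; resolving it to an address is a further
    # DNS lookup, unless the parent zone happens to supply glue records.
    addresses = dns.resolver.resolve(str(ns.target), "A")
    print(ns.target, [a.address for a in addresses])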
Many alternative naming systems in use today come bundled with the
specific applications that use them: a particular alternative naming sys-
tem is often tied to a corresponding application, and this application
often bypasses administrator-controlled settings and any preconfig-
ured DNS settings. For example, the Tor Project uses its own naming
system that bypasses traditional DNS resolution. Users can install the
Tor Browser, and it will use the Tor naming system for names ending
in .ONION, while forwarding any other names to the local DNS library.
The application developer chooses which naming system to use; users may not even know that they are using an alternative naming system, let alone understand its potential implications.
Conclusions
Only a completely moribund technology is impervious to change! As
digital technologies and services evolve, the demands placed on the
associated namespaces also evolve in novel and unpredictable ways.
The DNS is an interesting case in that so far it has been able to respond
to the evolving Internet without requiring fundamental changes to the
structure of its namespace, the distributed information model, or the
name-resolution protocol. Most of the evolutionary changes that have
been folded into the DNS to date have been undertaken in a way that
preserves backward compatibility, and the cohesion of the underlying
namespace has been largely preserved.
GEOFF HUSTON AM, B.Sc., M.Sc., is the Chief Scientist at APNIC, the Regional
Internet Registry serving the Asia Pacific region. He has been closely involved with the
development of the Internet for many years, particularly within Australia, where he was
responsible for building the Internet within the Australian academic and research sector
in the early 1990s. He is author of numerous Internet-related books, and was a mem-
ber of the Internet Architecture Board from 1999 until 2005. He served on the Board of
Trustees of the Internet Society from 1992 until 2001. At various times Geoff has worked
as an Internet researcher, an ISP systems architect, and a network operator.
E-mail: gih@apnic.net
______________________
Daniel Appelquist, W3C TAG co-chair
David Baron, former W3C TAG
Hadley Beeman, W3C TAG
Robin Berjon, former W3C TAG; former W3C HTML Activity Lead
Andrew Betts, former W3C TAG
Sir Tim Berners-Lee, inventor of the World Wide Web; founder & emeritus director, W3C
Tim Bray, former W3C TAG; Editor of XML (W3C), JSON (IETF)
Randy Bush, former IESG, former ISO/WG13
Dr. Brian E. Carpenter, former Group Leader, Communication Systems, CERN; former IAB chair; former ISOC BoT chair; former IETF chair
Vint Cerf, Internet Pioneer
David Conrad, former IANA general manager; former ICANN CTO
Martin Duke, former IESG
Dr. Lars Eggert, former IETF chair; former IRTF chair
David Jack Farber, former IAB; former ISOC BoT; former Chief Technologist USA FCC
Dr. Stephen Farrell, Trinity College Dublin; former IESG; former IAB
Demi Getschko, .br
Christian Huitema, former IAB chair
Geoff Huston, former ISOC BoT chair; former IAB
Erik Kline, IESG
Mallory Knodel, former IAB
Olaf Kolkman, former IAB chair
Konstantinos Komaitis, senior resident fellow, Internet Governance lead, Democracy and Tech Initiative, Atlantic Council
Chris Lilley, W3C Technical Director; former W3C TAG
Peter Linss, W3C TAG co-chair
Sangwhan Moon, former W3C TAG
Jun Murai, former IAB; WIDE Project founder; former W3C steering committee; former ISOC BoT
Mark Nottingham, former IAB; former W3C TAG
Lukasz Olejnik, former W3C TAG
Colin Perkins, IRTF chair
Pete Resnick, former IAB; former IESG
Alex Russell, former W3C TAG
Peter Saint-Andre, former IESG
David Schinazi, IAB
Melinda Shore, IRSG; former IAB
Robert Sparks, former IAB; former IESG
Lynn St. Amour, former Internet Society President and CEO; former UN IGF Multistakeholder Advisory Group chair
Andrew Sullivan, former IAB chair
Martin Thomson, W3C TAG; former IAB
Brian Trammell, IRSG; former IAB
Léonie Watson, W3C Web Applications Working Group Chair
Paul Wouters, IESG
_____________________
The Internet Protocol Journal is published under the “CC BY-NC-ND” Creative Commons
Licence. Quotation with attribution encouraged.
This publication is distributed on an “as-is” basis, without warranty of any kind either
express or implied, including but not limited to the implied warranties of merchantability,
fitness for a particular purpose, or non-infringement. This publication could contain technical
inaccuracies or typographical errors. Later issues may modify or update information provided
in this issue. Neither the publisher nor any contributor shall have any liability to any person
for any loss or damage caused directly or indirectly by the information contained herein.