Networking Notes - May 2021
Welcome to the May 2021 edition of the Networking Notes newsletter.
This newsletter gathers recent news about the evolution of the networking field. Its main objective is to keep students who have read the Computer Networking: Principles, Protocols and Practice ebook informed about how the field evolves.
You can subscribe to this newsletter.
The CNP3 ebook
The beta version of the third edition of Computer Networking: Principles, Protocols and Practice is an interactive ebook that includes various online exercises. These exercises are open source, like the entire ebook, and during the last week of August we will organize a hackathon during SIGCOMM’21 to create new open exercises for networking students. More details will be provided in the coming weeks.
For more than one year, the covid pandemic has impacted all aspects of our lives. A recent report published by the Broadband Internet Technical Advisory Group (BITAG) summarises some interesting data about how the usage of different types of networks has changed during the pandemic.
Link failures happen regularly in networks. Most are due to the failure of networking equipment. However, some have unexpected root causes. In Canada, beavers cut optical fibers and took down Internet service for 900 users.
Some countries have tried different approaches to control or censor social networks. The most recent example is Russia, which throttled Twitter. Throttling, i.e. reducing the bandwidth allocated to a service, is more subtle than simply blocking the IP addresses used by the social network, but it is not easy to do at a country level. Researchers have provided many details on how this throttling works. This is an interesting read for application developers or hacktivists who would like to circumvent such censorship.
DNS is a critical part of the Internet infrastructure, and poor performance of DNS servers/resolvers can hurt the performance of many enterprise or ISP networks. As these networks deploy new DNS features such as DNS over TLS (DoT) or DNS over HTTPS (DoH), it might be useful to verify the performance of their DNS servers/resolvers using benchmark tools such as DNS shotgun.
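DNS shotgun replays real traffic captures against a resolver at high rates. As a much smaller illustration of the kind of measurement involved, the sketch below hand-crafts a single DNS query with Python's standard library and times the resolver's answer; the resolver address and the queried name in the usage comment are placeholders, and this is in no way the tool's methodology.

```python
import socket
import struct
import time

def build_query(name, qtype=1, qclass=1):
    """Build a minimal DNS query packet (QTYPE=1 requests an A record)."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID (arbitrary)
                         0x0100,   # flags: standard query, recursion desired
                         1, 0, 0, 0)  # one question, no other sections
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

def time_query(resolver, name, timeout=2.0):
    """Send one query over UDP and return the response time in milliseconds."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        s.sendto(build_query(name), (resolver, 53))
        s.recvfrom(512)
        return (time.monotonic() - start) * 1000.0

# Example (requires network access):
# print(time_query("8.8.8.8", "example.org"))
```

A real benchmark would of course send many queries concurrently and look at the latency distribution, not a single sample.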
Commercial web servers use HTTP Cookies for various reasons. Some cookies simply encode user preferences on a single website. Unfortunately, HTTP Cookies have been overused to track users from one website to another, and some websites include trackers from dozens of companies. Companies that heavily rely on such HTTP Cookies have started to explore other ways to track users. A recent attempt is Google’s Federated Learning of Cohorts (FLoC). Google has already started to deploy this approach to some users despite concerns raised by privacy advocates like the EFF. You can check whether your Google Chrome browser uses this feature on https://amifloced.org/. Brave, a browser based on Chromium, has removed FLoC from its latest releases. WordPress also announced plans to block FLoC on WordPress sites.
In parallel, T-Mobile announced that it would sell the web usage data of its customers unless they opt out. Cellular providers are exploring new sources of revenue, but selling user usage data does not seem to be the right approach.
Web browsers collect statistics from many users. Mozilla releases some of the statistics collected by Firefox on its measurement dashboard. Looking at HTTP, it is interesting to observe that Firefox users rarely use HTTP/1.0. About 30% of the requests use HTTP/1.1, slightly less than 50% use HTTP/2.0, and already 20% use HTTP/3 over QUIC.
TLS is used by a large number of applications. In the coming years, one can expect most Internet traffic to be encrypted and authenticated using TLS. TCPLS goes one step further by closely integrating TCP and TLS. A recent blog post provides a nice introduction to TCPLS.
HTTPS has replaced HTTP on most public websites. Many of these websites use Let’s Encrypt, which has simplified the provisioning of TLS certificates. While HTTPS would also be very useful inside home networks, it can be difficult to obtain the required certificates there. A recent blog post describes how to use a reverse proxy to solve this problem.
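The general idea in such setups is to run a reverse proxy that holds a valid certificate (for instance obtained through Let’s Encrypt) and forwards decrypted traffic to devices that cannot manage certificates themselves. The nginx fragment below is a hypothetical sketch of this pattern, not the configuration from the blog post; the hostname, certificate paths and internal address are placeholders.

```nginx
server {
    listen 443 ssl;
    server_name device.home.example.net;  # placeholder name

    # Certificate obtained for the home domain via Let's Encrypt
    ssl_certificate     /etc/letsencrypt/live/home.example.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/home.example.net/privkey.pem;

    location / {
        # Forward decrypted requests to a device that only speaks HTTP
        proxy_pass http://192.168.1.50:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

The device itself never sees TLS; only the proxy needs to renew certificates, which is why this works well for appliances that cannot run an ACME client.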
The Datagram Congestion Control Protocol (DCCP) is a transport protocol that was designed to transport datagrams unreliably. DCCP is rarely used, but it is still included in operating systems such as Linux and thus available on a wide range of devices. This has apparently attracted attackers, as a recent report shows that there are DDoS attacks using DCCP packets.
IPv6 continues to be deployed and several studies analyze its deployment. A French website, lafibre.info, combines several of these data sources to provide per-country statistics on https://lafibre.info/ipv6/ipv6-pays/. Some countries manage to deploy IPv6 faster than others. The latest example is Saudi Arabia.
IPv6 routers do not fragment IPv6 packets that are too large to be transmitted over a link with a small MTU. Instead, the IPv6 protocol relies on a specific header to allow a host to fragment its packets before sending them. Unfortunately, it can be difficult to implement IPv6 fragmentation correctly. Despite more than a decade of experience with deployed IPv6, we still discover security issues in IPv6 stacks. A detailed analysis describes a recent bug in the Windows IPv6 stack.
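To see why fragmentation code is easy to get wrong, it helps to recall the arithmetic involved: offsets in the Fragment extension header are expressed in 8-byte units, every fragment except the last must carry a multiple of 8 payload bytes, and the 8-byte Fragment header itself eats into the MTU. The sketch below computes a fragment plan for a given payload; it assumes the simplest case where the unfragmentable part is only the 40-byte base IPv6 header.

```python
def fragment_plan(payload_len, mtu, unfrag_len=40):
    """Return (offset_in_8_byte_units, data_len, more_flag) per fragment.

    Assumes the unfragmentable part is just the 40-byte base IPv6 header.
    """
    # Data per fragment: MTU minus unfragmentable part minus the 8-byte
    # Fragment header, rounded down to a multiple of 8 bytes.
    capacity = (mtu - unfrag_len - 8) & ~7
    if capacity <= 0:
        raise ValueError("MTU too small to carry any fragment data")
    plan = []
    offset = 0
    while offset < payload_len:
        data_len = min(capacity, payload_len - offset)
        more = offset + data_len < payload_len
        plan.append((offset // 8, data_len, more))
        offset += data_len
    return plan

# A 3000-byte payload over the IPv6 minimum MTU of 1280 bytes:
# fragment_plan(3000, 1280)
# -> [(0, 1232, True), (154, 1232, True), (308, 536, False)]
```

Reassembly has to undo this arithmetic while tolerating overlapping, duplicated or reordered fragments, which is where implementation bugs tend to hide.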
While IPv6 fragmentation is a standard part of IPv6, some routers and firewalls block IPv6 fragments for security reasons, including the fear of fragment-based attacks or of bugs like the one above. A recent measurement study shows that there are large parts of the Internet where fragments do not go through. Another reason to avoid using IPv6 fragments…
Interdomain routing (BGP)
With the RPKI, ISPs are taking a first and important step to prevent the many problems caused by networks announcing BGP prefixes that they do not own. A recent blog post summarises some lessons learned from using the RPKI. This did not prevent a large prefix hijack by Vodafone. The MANRS (Mutually Agreed Norms for Routing Security) project, which encourages the deployment of routing security, also analyzes this hijack.
There are some situations where network operators have non-transient forwarding loops in their networks. A recent blog post argues that these loops could be exploited by attackers to amplify DDoS attacks.
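The amplification argument is simple arithmetic: a packet injected into a persistent forwarding loop keeps crossing the loop's links until its TTL or Hop Limit reaches zero, so a single attacker packet can translate into hundreds of transmissions on the looping links. A minimal back-of-the-envelope sketch of this reasoning (the parameter names are mine, not the blog post's):

```python
def loop_link_crossings(initial_ttl=255, hops_to_loop=0):
    """Number of times one packet is forwarded once it enters the loop.

    Each forwarding step decrements the TTL/Hop Limit by one, so the
    packet is forwarded once per remaining unit before being dropped.
    """
    return max(initial_ttl - hops_to_loop, 0)

# A packet sent with TTL 255 that reaches the loop after 10 hops is
# forwarded 245 more times, a roughly 245x amplification of the
# attacker's bandwidth on the looping links.
```

This is why the blog post treats even a short two-router loop as a potent amplifier: the amplification factor depends on the remaining hop limit, not on the size of the loop.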
On March 23rd, 2021, the London Internet eXchange Point (LINX) suffered from several problems on a part of its network. This caused huge traffic shifts. Emile Aben provides a detailed analysis of this event using public data.
BGP is a routing protocol where routers exchange incremental updates, i.e. they only advertise changes to their routing tables. In theory, if the origin of a BGP route ceases to announce this prefix, the route should eventually disappear from all Internet routers. Unfortunately, some such BGP routes, called BGP zombies, are not correctly withdrawn and remain in the routing tables of BGP routers in parts of the Internet. A recent blog post provides an interesting experiment showing that BGP zombies still exist and proposes a possible explanation.
Some organisations own large IPv4 prefixes that they do not advertise on the public Internet. In the early days of the Internet, large blocks of the IPv4 addressing space (221,828,864 addresses) were allocated to the US Army. Nobody really knows whether these IPv4 addresses were actually used, but network operators were very surprised to see an unknown company advertise these blocks of IPv4 addresses. Some argued that this block of addresses could be used to create another network telescope to track ongoing attacks. You can find additional information in a blog post and a magazine article.
With the deployment of new satellite-based services such as Starlink, OneWeb and others, we could have several satellite constellations above our heads in the coming years. Some of these low-orbit satellites are visible from the ground, provided that one looks in the right direction at the right time. A recent web application lets you plan your next satellite observation and leverages Street View to show you where to look.
Data transmission networks existed before the invention of computers and, later, computer networks. Besides the telegraphs and other early networks described in The early history of data networks, there were other types of networks. Companies and cities used pneumatic networks to quickly exchange written notes. In Paris, the pneumatic network created in 1866 spanned 15 km. French television celebrated the 100th anniversary of this network in 1966 with a three-minute movie.
Software and tools
The Linux TCP/IP stack is one of the most widely used TCP/IP implementations these days. The stack is complex and includes a large number of functions. When exploring the kernel code, it is often useful to be able to trace the functions that are executed to perform a specific task. ipftrace2 makes it possible to trace how specific packets are processed by the various functions of the Linux kernel.
Many users measure the performance of their cellular or home networks by using websites that try to serve large files as quickly as possible and then use short flows to report latency measurements. Unfortunately, these tests are not representative of most applications. Applications rarely need to exchange huge files. Many applications instead need a network that has enough available bandwidth to transfer the blocks of data they produce, with a low and stable latency. For many applications, spikes in latency are much more problematic than raw throughput as soon as the throughput is above a few tens of Mbps. A recent report argues for hybrid measurements, i.e. measurements of latency and latency variations while applications actually run.
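A hybrid measurement boils down to collecting RTT samples while the application's own traffic is flowing, and then looking at the latency distribution rather than only at the throughput. The helper below is a small illustrative sketch, not the report's methodology: it summarises a list of RTT samples with the mean, the worst case, and a simple jitter estimate that makes spikes visible.

```python
from statistics import mean

def summarize_rtts(rtts_ms):
    """Summarise RTT samples (in ms) taken while the application runs."""
    # Jitter as the mean absolute difference between successive samples,
    # which highlights latency spikes that a plain average would hide.
    jitter = mean(abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:]))
    return {
        "mean_ms": mean(rtts_ms),
        "max_ms": max(rtts_ms),
        "jitter_ms": jitter,
    }

# Samples with one spike: the mean looks fine, the jitter does not.
# summarize_rtts([10, 12, 11, 50, 10])
# -> {'mean_ms': 18.6, 'max_ms': 50, 'jitter_ms': 20.5}
```

A single 50 ms outlier barely moves the mean but dominates both the maximum and the jitter, which is exactly the kind of behaviour a bulk-download speed test never reports.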
Network congestion typically happens on some parts of the network, e.g. low-speed access links or overloaded peering links. When the network does not perform as expected, it can be useful for users to try to isolate where packet losses occur. Mtr is a variant of traceroute that continuously sends probes to locate losses along the path. A recent blog post describes some use cases for mtr.