HTTP/2 server push, the Mirage dataset of Android packet traces, preventing BGP route leaks, Passive Optical Networks

Server push is a feature of the HTTP/2 protocol that enables a server to send an HTTP/2 response before the client sends the corresponding request. This feature was included in the HTTP/2 specification to speed up web transfers by enabling servers to send objects that the clients will request later. In practice, it proved very difficult to deploy this feature on servers. Chrome disabled it two years ago and the Firefox developers also intend to remove this feature. If you want to experiment with server push, it remains supported by libcurl.

Read More

URL shorteners, QUIC amplification limit, Wi-Fi security, DHCP and DHCPv6 options, configuring a mail server on FreeBSD, BBR congestion control

URL shorteners are popular services. Many users rely on URL shorteners to post URLs that they consider too long. Compared to the original URL that is stable and precisely defined, the shortened URL depends on a third party that provides a service mapping the shortened URL to the real one. If you use a URL shortening service, there is a risk that the shortened URL might become unavailable once the company that manages the mapping service decides to stop it because it is not profitable anymore. Google’s URL shortener service will stop responding to shortened URLs on August 25th, 2025. If you used a https://goo.gl/* link in a publication, email or whatever, the mapping will be inaccessible. This is bad news for archives and the Internet history. Daniel Stenberg looked at the importance of the https://goo.gl/* URLs for the Linux kernel mailing list. He found more than 19,000 messages containing references to these URLs. If you use a URL shortener service, remember that it could stop its operation at any time…

Read More

Reflections on outages, evolution of the CDN market, location data

Links and routers fail, but routing protocols have been designed to cope with these failures and preserve connectivity. While most failures are caused by hardware issues or software bugs, some outages that affect networks and servers are caused by configuration errors. A small change in the configuration of one or a few devices can have a huge effect on a network. In some cases, configuration changes have caused a cascade of problems that brought entire networks down. Andree Toonk published a very interesting blog post where he discusses some of the lessons that can be learned from large outages.

Read More

Packet rate attacks, reflections on DNSSEC, routing incident for 1.1.1.1, lessons learned from the Rogers outage in 2022

A recent blog post from OVHCloud provides interesting information about recent DDoS attacks that targeted the OVH network. Until recently, attacks reaching 1 Tbps were rare. This is not the case anymore. During the first half of 2024, OVH saw a huge increase in the volume of attacks, with attacks of 1 Tbps and more happening almost once per day on average. Coping with such attacks requires plenty of available network and filtering capacity.

Read More

BGP routing policies, openDNS blocked, DNS history

The global Internet interconnects about 80,000 ASes through peerings with two main types of routing policies: customer-provider and shared-cost. In a blog post, Savvas Kastanakis presents the evolution of the Internet and its routing policies during the last twenty years. In 1998, the Internet gathered 3,549 ASes with 6,475 links; 13% of these links were shared-cost peering links. In 2023, 75,160 ASes had 494,508 links, 69% of which were shared-cost peering links. The blog post also highlights several other findings of the study such as the importance of selective announcements.
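
The customer-provider and shared-cost relationships translate into simple export rules, often called the Gao-Rexford conditions: routes learned from a customer are announced to everybody, while routes learned from a peer or a provider are only announced to customers. The sketch below is a toy model of these rules; the AS names and the topology are invented for illustration.

```python
# Toy model of the two classic BGP routing policies. A route learned
# from a customer is exported to all neighbors; a route learned from a
# shared-cost peer or a provider is only exported to customers.

def export_targets(learned_from, neighbors):
    """Return the neighbors to which a route should be announced.

    learned_from: 'customer', 'peer' or 'provider'
    neighbors: dict mapping neighbor name -> relationship with that neighbor
    """
    if learned_from == "customer":
        return sorted(neighbors)  # customer routes go to everybody
    # peer- and provider-learned routes only go to customers
    return sorted(n for n, rel in neighbors.items() if rel == "customer")

neighbors = {"AS1": "provider", "AS2": "peer", "AS3": "customer"}
print(export_targets("customer", neighbors))  # ['AS1', 'AS2', 'AS3']
print(export_targets("peer", neighbors))      # ['AS3']
```

These rules explain why a shared-cost peering link only carries traffic between the two peers and their respective customers.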

Read More

DDoS attacks, X11, SMTP and co packet captures, rotating MAC addresses, CIDR

Distributed denial of service attacks are attacks where hundreds, thousands and sometimes millions of compromised devices send packets to target servers to overload them. When faced with such attacks, server administrators have limited solutions. They can subscribe to CDN providers like Akamai or Cloudflare that have enough server capacity to filter attacks while still preserving the availability of the content. A second option is to subscribe to scrubbing services from large ISPs that can also help to filter DDoS attacks. Besides that, there are no real options for NGOs and organizations willing to remain independent. A blog post argues for innovative solutions to this problem.

Read More

Slow Internet, DMARC and co, DNS4ALL, a simple DNS resolver

A periodic reminder for protocol or website designers. While many of these designers use a recent computer with a good or very good Internet connection, some of their users rely on low-performance Internet connections or older computers. If you can optimize your protocol or website for these users as well, everybody will benefit. A recent blog post discusses this problem from the viewpoint of a researcher working in Antarctica, but there are many locations with a less powerful Internet connection on Earth.

Read More

Class E IPv4 addresses, DNSSEC, RPKI covers 50% of BGP prefixes, looking back at the last 50 years of TCP, consensus in Internet standards, 600,000 access routers destroyed in 3 days

The exhaustion of the IPv4 addressing space remains a problem in various parts of the Internet. With prices reaching about $10,000 for a /24 IPv4 subnet, network engineers explore various alternatives. One of them is to reuse blocks of IPv4 addresses that are not actively used. This includes the 240.0.0.0/4 (aka Class E) block of IPv4 addresses. This block was reserved for future use by RFC1112 and still remains reserved in the current table of IPv4 address allocations. In a blog post and a RIPE presentation, Ben Cartwright-Cox discusses the problem of reusing the 240.0.0.0/4 address block in detail. While such hacks can extend the IPv4 addressing space, deploying IPv6 everywhere remains the best solution to the address exhaustion problem.
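
Checking whether an address falls in the reserved Class E block is easy with the standard library; a minimal sketch:

```python
# Test whether an IPv4 address belongs to the reserved 240.0.0.0/4
# (Class E) block using only Python's standard library.
import ipaddress

CLASS_E = ipaddress.ip_network("240.0.0.0/4")

def is_class_e(addr: str) -> bool:
    return ipaddress.ip_address(addr) in CLASS_E

print(is_class_e("240.0.0.1"))  # True
print(is_class_e("8.8.8.8"))    # False
# the stdlib also knows the block is reserved
print(ipaddress.ip_address("240.0.0.1").is_reserved)  # True
```

Many IP stacks still refuse to configure or route such addresses, which is exactly the deployment obstacle the blog post discusses.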

Read More

Multipath TCP in the Linux kernel, beyond robots.txt, web performance, high frequency trading and the communication network of the Tudors

The implementation of Multipath TCP continues to be improved in the mainstream Linux kernel. With Multipath TCP, the packets that belong to a single connection can be exchanged over different paths. For example, a smartphone can use both Wi-Fi and cellular for a single connection. This enables seamless handovers for mobile applications and improves user experience. With Multipath TCP, a connection can use both IPv4 and IPv6 and dynamically select the best performing address family in real time. If you have not yet enabled Multipath TCP on your Linux hosts or servers, check https://www.mptcp.dev, the website of the Multipath TCP maintainers in the Linux kernel.
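
On a recent Linux kernel, an application opts in to Multipath TCP simply by creating its socket with the `IPPROTO_MPTCP` protocol (value 262). A minimal sketch, which falls back to plain TCP when the kernel does not support MPTCP or it is disabled:

```python
# Create a Multipath TCP socket on Linux. Recent Python versions expose
# socket.IPPROTO_MPTCP when the system headers define it; we fall back
# to the numeric value 262 otherwise. If the kernel lacks MPTCP support,
# socket() raises OSError and we fall back to regular TCP.
import socket

IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_stream_socket():
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # kernel without MPTCP support (or disabled via sysctl): plain TCP
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = open_stream_socket()
print("socket protocol:", s.proto)  # 262 with MPTCP, 0 (default TCP) otherwise
s.close()
```

The fallback mirrors how MPTCP was designed to be deployed: applications can request it opportunistically without breaking on older kernels.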

Read More

A solar eclipse influences Internet traffic, reverse traceroute, xzutils backdoor, page load times, http/2 attacks using continuation frames, Gigabit Ethernet links going down

Last week, a solar eclipse was visible in North America. Amazon engineers looked at some of the traffic data they collect to see whether their Internet traffic was affected by the solar eclipse. They expected an indirect effect as humans were looking at the eclipse instead of using the Internet. They observed a visible effect by looking at the traffic decrease per group of postal codes at ten-minute intervals. In case of doubt, this is another illustration of the massive amount of data that Amazon collects and uses…

Read More

Modern SMTP, Gigabit Ethernet, Google stops using route servers at IXPs, Green software, Cryptography with openssl, Ballad of UDP and TCP

The SMTP protocol is one of the oldest Internet protocols. It is still used to deliver email messages. However, due to the proliferation of spam, modern SMTP servers need to complement SMTP with several additional protocols to perform different types of checks on the messages that are sent. Large email providers like Gmail or Yahoo require these additional protocols, which puts up a barrier for companies willing to deploy and manage their own SMTP server. A recent blog post summarizes the requirements imposed by large email providers.
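
These additional checks are typically published as DNS TXT records. A DMARC policy, for example, is just a semicolon-separated list of tag=value pairs. The tiny parser below illustrates its structure; the record itself is a made-up example:

```python
# Parse a DMARC policy record into a dictionary of tag=value pairs.
# The record below is a fabricated example for illustration.
def parse_dmarc(record: str) -> dict:
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; rua=mailto:reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])    # quarantine: how receivers should treat failing mail
print(policy["rua"])  # mailto:reports@example.com: where to send reports
```

SPF records and DKIM selectors follow similar key-value conventions, which is why small mail server operators end up managing a surprising amount of DNS.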

Read More

AI and networking, Internet Protocol Journal, Slicing, Fiber cut, HTTP history

Artificial Intelligence techniques are applied to more and more use cases, including networking. The networking channel recently organized two interesting discussions on AI techniques. Jean-Philippe Vasseur gave a talk on Are we living the long-awaited AI revolution for Networking with Generative AI while Vishal Misra discussed Integrating LLMs in Computer Networking Education: can it be done? what are the challenges?. These two talks and the associated discussions are a good starting point to explore the interactions between AI and networking.

Read More

Improving TCP performance, eBPF in the Linux kernel, sending packets faster in go, time to repair undersea cables, TLS certificates, netlab and the NTP pool

TCP remains the most widely used reliable transport protocol. During the last years, QUIC has started to replace TLS over TCP for some applications such as HTTP/3. This is not the first time that new transport protocols are proposed to replace TCP. During the late 1980s, XTP and similar protocols aimed at being faster than TCP, which was already considered an old protocol. David Clark, Van Jacobson and their colleagues showed that TCP implementations could run much faster. They introduced a fast path, i.e. a part of the TCP implementation where the stack is optimized to process the next packet when it arrives in sequence. Improving the performance of TCP implementations is an ongoing effort. A recent set of patches proposed by Coco Li for the Linux kernel achieves 30-40% performance improvements on AMD processors by better exploiting their caches.
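
The fast-path idea (also known as header prediction) can be sketched in a few lines: if the incoming segment is exactly the next expected in-sequence data segment with no special flags, append it to the receive buffer and skip the general-purpose processing. The `Segment` class and the `slow_path` fallback below are invented for illustration, not the Linux implementation:

```python
# Sketch of TCP header prediction: handle the common in-sequence case
# cheaply and only fall back to the full state machine for everything
# else (out-of-order data, SYN/FIN/RST, etc.).
from dataclasses import dataclass

@dataclass
class Segment:
    seq: int
    payload: bytes
    flags: set

class Receiver:
    def __init__(self):
        self.rcv_nxt = 0          # next expected sequence number
        self.buffer = bytearray() # in-order received bytes

    def receive(self, seg: Segment) -> str:
        # fast path: in-sequence data carrying only an ACK flag
        if seg.seq == self.rcv_nxt and not seg.flags - {"ACK"}:
            self.buffer += seg.payload
            self.rcv_nxt += len(seg.payload)
            return "fast"
        return self.slow_path(seg)

    def slow_path(self, seg: Segment) -> str:
        # out-of-order or control segments need full processing
        return "slow"

r = Receiver()
print(r.receive(Segment(0, b"abc", {"ACK"})))   # fast
print(r.receive(Segment(10, b"xyz", {"ACK"})))  # slow
```

Since the vast majority of segments on a healthy connection arrive in sequence, optimizing this single branch pays off enormously.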

Read More

Text compression using large language models, Starlink as a cellular access network, the FreeBSD networking stack and Communications of the ACM

Data compression algorithms are widely used by Internet applications to reduce the amount of data that needs to be exchanged over the network. The Gzip format is one of these popular techniques. New compression schemes have been proposed during the last years. Fabrice Bellard explores a new approach that improves the compression ratio for text files. The ts_zip software uses a large language model to efficiently compress files. It achieves excellent compression ratios, but uses more memory and practically requires a GPU.
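
Such comparisons are usually expressed as a compression ratio: compressed size divided by original size, where smaller is better. A quick way to measure the classical gzip baseline that schemes like ts_zip are compared against:

```python
# Measure a gzip compression ratio (compressed size / original size,
# smaller is better) on a deliberately repetitive sample text.
import gzip

text = ("The quick brown fox jumps over the lazy dog. " * 100).encode()
compressed = gzip.compress(text, compresslevel=9)
ratio = len(compressed) / len(text)
print(f"original: {len(text)} bytes, gzip: {len(compressed)} bytes")
print(f"ratio: {ratio:.3f}")  # repetitive text compresses extremely well
```

An LLM-based compressor improves on this by predicting the next token instead of only exploiting literal repetitions, at the cost of far more computation.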

Read More

Memory safety, bloating web pages with JavaScript, future Internet eXchange Points, securing instant messaging and troubleshooting techniques

A large fraction of the clients and servers that we use on the Internet are programmed in the C or C++ programming languages. These languages allow programmers to interact with the low-level hardware and achieve good performance. Unfortunately, small errors in the handling of memory have caused huge security problems. More recent programming languages like Rust provide stronger protection when accessing memory and help avoid a range of security problems. The White House has published a report that encourages the entire software industry to adopt techniques to design and implement memory safe programs.

Read More

Design flaw in DNSSEC implementations, access network standards

The Domain Name System is one of the key protocols in today’s Internet. DNS maps names onto addresses. Twenty-five years ago, the IETF adopted DNSSEC to secure the DNS infrastructure. DNSSEC allows resolvers and DNS clients to authenticate DNS responses using public-key cryptography. Measurements indicate that about 31% of domains use DNSSEC. To cope with changes in the public keys used by DNS servers, DNSSEC allows a server to advertise different keys that it uses to sign its responses. When a resolver or a client receives a response, it must try to validate the response using any of the announced public keys. Unfortunately, German researchers have recently announced that this can be abused by attackers to consume a huge amount of CPU on DNS resolvers that validate DNSSEC responses. With specially crafted DNSSEC messages, they even managed to consume up to 16 CPU hours with a single DNS response. This is probably the worst problem that affects DNSSEC. Since the root of the problem is part of the DNSSEC specification, all DNSSEC implementations that validate DNSSEC messages are vulnerable to this attack. The attack requires a specially crafted DNS zone. The IETF will need to update the DNSSEC specifications and check whether this type of attack is possible on other related protocols.
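
The root of the problem is combinatorial: before a validating resolver can reject a response, it may have to try every announced key against every signature, and each attempt is an expensive public-key operation. The toy calculation below only illustrates this growth; the numbers are not the researchers' actual attack parameters:

```python
# Worst-case number of signature-validation attempts a resolver performs
# when it must try every announced key against every signature.
def validation_attempts(num_keys: int, num_sigs: int) -> int:
    return num_keys * num_sigs

print(validation_attempts(1, 1))      # 1: a typical zone
print(validation_attempts(10, 10))    # 100: already noticeable
print(validation_attempts(300, 300))  # 90000: a crafted response
```

Since each attempt involves a cryptographic verification, a single malicious response can keep a resolver's CPU busy for a very long time.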

Read More

TCP connect in details, deploying 5G, packet fragmentation, security attacks and traffic engineering

When a TCP client creates a connection, it must select an unused source port. On Linux, this is part of the connect system call. In theory, finding an unused source port is simple. In practice, finding this source port quickly without iterating on all established TCP connections is not so simple. A recent Cloudflare blog post describes in detail how the Linux connect system call works.
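
The kernel's port selection can be observed from user space: binding to port 0 asks the kernel to pick an unused ephemeral port, and `getsockname()` reveals its choice. `connect()` without a prior `bind()` performs the same selection internally:

```python
# Ask the kernel to pick an unused ephemeral source port by binding to
# port 0, then read back the port it selected.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))      # port 0 means "pick one for me"
addr, port = s.getsockname()  # the kernel's choice
print(f"kernel selected source port {port}")
s.close()
```

On Linux, the range the kernel chooses from is visible in `/proc/sys/net/ipv4/ip_local_port_range`; the Cloudflare post explains how the kernel finds a free port in that range quickly.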

Read More

OpenWrt routers, Media over QUIC, new Fiber optics record, Alice and Bob, Starlink lasers

The OpenWrt project provides a custom Linux distribution that is targeted at routers. The xDSL or cable router that your ISP gave you probably uses a variant of OpenWrt. The OpenWrt distribution runs on a wide range of routers, from very small ones to enterprise routers. Recently, the leaders of the project have announced that they are working on a reference router that would be commercially available. If you plan to experiment with open routers, this could become your preferred platform.

Read More

Secure shell over HTTP/3, Fortnite versus football, Cloudflare radar, contributing to Wireshark and latency

SSH is the standard method to access remote servers and configure network devices. SSH runs above TCP and can forward TCP connections. François Michel has explored how secure shells could be provided over QUIC instead of TCP. He found that it is much easier and more efficient to design secure shells over HTTP/3 and QUIC than directly over QUIC. His findings are available in a technical report and a prototype in Go.

Read More

Doom and L4S

The DOOM video game was released on December 10th, 1993. It is now 30 years old. Doom is a first-person shooter that uses 3D graphics. From a networking viewpoint, its main characteristic was that it was designed as a multiplayer game. Several computers attached to the same Ethernet LAN could participate in the same game. At that time, game developers had limited networking expertise and the Doom developers opted for IPX, a proprietary networking protocol developed by Novell, to exchange information about the positions of the players in the game. Doom used IPX broadcast frames to exchange this information, possibly assuming that all computers on the LAN would play Doom… This post provides Doom packets for the interested reader.

Read More

AMS-IX lost 8 Tbps of traffic due to a subtle problem with the LACP protocol, networking conferences

Network failures can have a huge impact and force large traffic movements. A [problem with the LACP protocol](https://www.ams-ix.net/ams/outage-on-amsterdam-peering-platform) on [AMS-IX](https://www.ams-ix.net), one of the largest Internet eXchange Points, caused a drop of more than 8 Tbps of traffic in Amsterdam. This was a serious event for the European Internet, as shown in the screenshot below.

Read More

Resilience, iperf, MIRAI, Encrypted ClientHello and new DNS records

Resilience is an important factor when evaluating Internet Service Providers. Unfortunately, it is not always easy to quantify the resilience of a given ISP by using measurements or information about the ISP. The resilience of an ISP depends on a wide range of factors and a small detail can sometimes significantly lower the resilience of an entire ISP. Often, these details are only exposed by catastrophic or unexpected events. This happened a few weeks ago when Optus, a major ISP in Australia, went offline for almost half a day. Several posts provide an analysis and attempt to explain the reasons for this outage: a detailed blog post by Kentik shows the impact on traffic and BGP, and a short article on LightReading points to a possible culprit.

Read More

Open BGP routers and the future of HTTP/3

The Border Gateway Protocol is the most important Internet routing protocol. It enables routers from different networks to exchange routes. There are two main BGP deployments: eBGP and iBGP. Two routers belonging to two different networks but connected via a direct link establish an external BGP (eBGP) session. This session runs above a TCP connection, usually on port 179. iBGP is used between routers that belong to the same Autonomous System.
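
The eBGP/iBGP distinction depends only on whether the two routers sit in the same Autonomous System. The helper below captures that rule; the AS numbers are arbitrary examples from the private range:

```python
# Classify a BGP session as eBGP or iBGP from the two AS numbers.
BGP_PORT = 179  # BGP sessions run over TCP, usually on this port

def session_type(local_as: int, peer_as: int) -> str:
    return "iBGP" if local_as == peer_as else "eBGP"

print(session_type(65001, 65002))  # eBGP: routers in different ASes
print(session_type(65001, 65001))  # iBGP: routers in the same AS
```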

Read More

New HTTP/2 RST attacks and Vint Cerf on 50 years of the Internet

New types of denial of service attacks often reveal new details about deployed protocols or their implementations. The new HTTP/2 RST attacks that affected Google, AWS and Cloudflare in August 2023 and have been disclosed recently provide interesting information about some details of the HTTP/2 protocol. Google and Cloudflare detailed the impact of this new attack in blog posts: Google’s blog and Cloudflare’s blog.

Read More

BGP errors, DMARC, video streaming, network design and IPv6 only networks

The Border Gateway Protocol is the most important routing protocol. BGP routers exchange routing messages over BGP sessions. These BGP sessions run on TCP connections between routers. An important characteristic of the BGP protocol is that, since it relies on incremental updates, if a router detects a problem in an incoming message, it must tear down the corresponding session and thus discard all the routes learned over this session. The BGP session is usually quickly restarted and the routes are reannounced.

Read More

SSH certificates, AWS IPv4 addresses, URLs, emojis and example.com

ssh is the standard technique to connect securely to distant computers. The first version of ssh was designed in the mid-nineties to replace the rsh, rlogin and telnet solutions. Its main advantage for many users is that it provides security protections for the session with a distant server. However, many ssh deployments rely on the Trust on First Use (TOFU) principle. During the first connection with a server, the client assumes that the public key announced by the server is valid and can be trusted. This key is then stored in the client cache and used to validate future connections with the same server. However, this is not the only way to authenticate servers: ssh also supports certificates, as TLS does. In a recent blog post, Mike Malone discusses the benefits of these ssh certificates and why they are deployed by large organizations.
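
Under TOFU, what a client actually caches and compares is the server's public key, usually summarized as a fingerprint. OpenSSH displays the SHA-256 digest of the raw public-key blob, base64-encoded without padding. A minimal sketch of that computation (the key bytes below are placeholder data, not a real key):

```python
# Compute an OpenSSH-style key fingerprint: "SHA256:" followed by the
# base64-encoded (unpadded) SHA-256 digest of the public-key blob.
import base64
import hashlib

def openssh_fingerprint(key_blob: bytes) -> str:
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode()

# placeholder bytes shaped like an ssh-ed25519 key blob, for illustration
fake_key = b"\x00\x00\x00\x0bssh-ed25519" + bytes(range(32))
print(openssh_fingerprint(fake_key))
```

Certificates remove the need to check fingerprints manually: the client only has to trust the certificate authority that signed the server's key.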

Read More

ECMP, IPv6 DNS and BGP failures

Equal Cost Multipath (ECMP) is a key feature of IP networks compared to other types of networks. With ECMP, routers can send the packets belonging to different flows over different paths, provided that these paths have the same cost. This technique is widely used to spread the load in large networks. Dip Singh describes in great detail in a recent blog post how routers spread packets from different flows over different paths.
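
The core of the technique is per-flow hashing: hash the five-tuple and use the result modulo the number of equal-cost next hops. All packets of a flow hash to the same value, so they follow the same path and arrive in order. Real routers use hardware hash functions; the sketch below uses the standard library for illustration:

```python
# Per-flow ECMP load balancing: hash the five-tuple and pick a next hop
# deterministically, so every packet of a flow takes the same path.
import hashlib

def pick_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

hops = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
flow = ("192.0.2.1", "198.51.100.7", "tcp", 49152, 443)
# the same flow always maps to the same next hop
print(pick_next_hop(*flow, hops))
print(pick_next_hop(*flow, hops))
```

Different flows spread roughly evenly over the available paths, while each individual flow avoids the reordering that per-packet load balancing would cause.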

Read More

Visualizing the radio spectrum

With Wi-Fi, cellular and satellite access networks, there is a growing fraction of the Internet traffic which is carried over radio waves. Although a detailed understanding of all the technology behind radio transmission is outside the scope of most networking courses for computer scientists, it is useful to have some basic understanding of the principles behind the transmission of radio signals.

Read More

A closer look at TCP Maximum Segment Sizes in the wild

A TCP connection always starts with the three-way handshake. The client sends a SYN packet that contains several TCP options, including the Maximum Segment Size (MSS) that announces the largest segment that the client agrees to receive. The server provides its own MSS in the SYN+ACK.
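
Applications can inspect the MSS through the `TCP_MAXSEG` socket option. Before the handshake, Linux reports the protocol default; after `connect()`, the option reflects the value agreed in the SYN / SYN+ACK exchange. A minimal sketch:

```python
# Read the TCP Maximum Segment Size of a socket with TCP_MAXSEG.
# On an unconnected socket this is the kernel's default value; after
# connect() it reflects the MSS negotiated during the handshake.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mss = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
print(f"MSS before connecting: {mss}")
s.close()
```

Measuring the MSS values observed in the wild, as the post does, reveals the diversity of MTUs (and tunnels) along Internet paths.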

Read More

BGP policies from a real network

The Border Gateway Protocol is probably the most important routing protocol. It allows ISPs and enterprise networks to exchange interdomain routes, but it can also be used inside enterprise or ISP networks. Much of BGP’s power resides in its flexibility and its ability to support various types of filters. Operators use these filters to prefer some routes over others and to meet many other objectives.

Read More

Shakespeare explains DNSSec

Artificial Intelligence tools like ChatGPT sometimes produce unexpected results. Users have tested ChatGPT on a wide range of tasks, from generating code in various programming languages to solving mathematical problems. ChatGPT can also produce more amusing results.

Read More

This could be the main motivation to deploy IPv6 on servers and in enterprise networks

The IPv6 deployment continues globally. The measurements carried out by Google and others indicate that IPv6 is now widely used in access networks, both fixed and wireless. Many home users use both IPv4 and IPv6 without knowing that there are two different versions of IP. In enterprise networks, IPv4 is still the dominant protocol and enterprises have been reluctant to deploy IPv6, with some exceptions.

Read More

Do we really need to accurately measure fiber access links ?

Internet Service Providers that serve end users like to use metrics to compare their service with their competitors. For many years, maximum throughput has been the metric of choice among ISPs. When dial-up links were popular, a 56 kbps connection made a difference compared with a 34 kbps one. The same applied to the early deployments of xDSL and cable networks.

Read More

The early history of Netnews

Netnews is a distributed bulletin board system that was deployed in the early 1980s. Netnews had an important influence on the Internet and served as a precursor of today’s social networks. Netnews was the platform of choice for major technical announcements. Linus Torvalds announced Linux on netnews, Marc Andreessen announced the first release of the Mosaic browser, …

Read More

Energy savings in cellular networks

Energy consumption is a major concern among network operators. While studies show that the total energy consumption of telecommunications networks tends to remain constant despite a growth in subscribers and an even higher growth in traffic, operators look at different solutions to minimize their energy consumption.

Read More

Predicting the future of mobile networks

Cellular and mobile networks play a growing role on today’s Internet with a growing fraction of the traffic produced or consumed in these networks. Every year, Ericsson publishes a mobility report that summarizes the recent statistics about these networks and makes some predictions.

Read More

ASCIIFlow - an interactive web site to produce ASCII drawings

The Request for Comments (RFC) series contains all the specifications of the Internet protocols. Since the publication of RFC1 on 7 April 1969, these documents have been published in ASCII format. Compared to word processing tools, ASCII has the main advantage that old documents remain easily readable on any platform fifty years later. Unfortunately, it also means that RFC authors sometimes need to struggle with ASCII art to prepare figures such as state machines, protocol messages, …

Read More

A good and scalable architecture always matters

Dynamic web sites often evolve from a simple and not necessarily efficient proof-of-concept to a larger system that needs to serve many customers. Getting dynamic web sites to scale is not always easy and HTTP is rarely the culprit. In a very interesting talk, Willy Tarreau, the author of HAProxy, explores several of these factors with a simple use case.

Read More

The BGP horror show

The Border Gateway Protocol (BGP) is probably the most important Internet routing protocol. It is used by more than 80k Internet Service Providers and enterprise networks of various sizes to exchange routing information. BGP enables all Internet routers to obtain routes to reach Internet destinations.

Read More

The QUIC specification was published one year ago, how is its deployment progressing ?

The web revolution started with the HTTP protocol. The first version of HTTP used a single TCP connection to transfer each HTML page. This was acceptable for the first web pages that contained only text, but became inefficient when JavaScript, images and videos were added to web pages. HTTP evolved, and the main versions are HTTP/1.1 and HTTP/2.0, which uses a binary format and supports multiple streams over a single TCP connection.

Read More

An introduction to 802.1x

Many Ethernet and Wi-Fi Local Area Networks (LAN) in enterprises use 802.1x to verify the user credentials before authorizing them to access the network. 802.1x defines a flexible set of protocols that support various forms of user authorization and authentication and can be used at scale.

Read More

Which ASes have the largest public peering capacity ?

Internet Service Providers (ISP) and content providers exchange traffic using private and public Internet peering links. On these peering links, they rely on the Border Gateway Protocol (BGP) to exchange routing information. Various websites provide information about the peering links used by these ISPs, notably PeeringDB. Anurag Bhatia published on Twitter an interesting list of the top 25 ISPs with the largest public peering capacity:

Read More

A closer look at the SHA secure hash function

Most of the security protocols used on the Internet, including TLS, ssh, IPSec and many others, use secure hash functions and cryptographic techniques. The designers of these security protocols know that they must support different hash and cryptographic functions because some of them could be declared insecure or, worse, broken after several years of analysis by cryptographers. The first important hash function was MD5, but it is now deprecated because it is not considered secure anymore. Most security protocols rely on SHA-1 and its descendants.
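
This algorithm agility is easy to see in practice: the same code can compute a digest with whichever hash function is still considered secure, which is why protocols negotiate the function instead of hard-coding one. A minimal sketch using the classic "quick brown fox" test string:

```python
# Compute the same message digest with several hash functions to
# illustrate algorithm agility. md5 and sha1 still compute a digest,
# but both are deprecated for security-sensitive use.
import hashlib

message = b"The quick brown fox jumps over the lazy dog"
for algorithm in ("md5", "sha1", "sha256"):
    digest = hashlib.new(algorithm, message).hexdigest()
    print(f"{algorithm:>6}: {digest}")
```

When cryptographers break one function, a protocol with agility only needs to negotiate a different one rather than be redesigned.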

Read More

Preparing for the 4th edition of Computer Networking - Principles, Protocols and Practice

The Computer Networking: Principles, Protocols, and Practice ebook continues to evolve in parallel with Internet technology. During the 2022-2023 academic year, UCLouvain experimented with a new way of organizing the teaching material. Instead of using a top-down or a bottom-up approach as most networking books do, we divided the course into two parts. The first part covers the protocols that are used by the endhosts while the network is considered as a black box. This part starts with the applications that the students already use. The second part focuses on the network infrastructure with the routing protocols and the local area networks.

Read More

The first fifty years of Ethernet

Ethernet is now the ubiquitous fixed Local Area Network technology. This technology was invented by Bob Metcalfe and David Boggs at Xerox’s Palo Alto Research Center (PARC) to connect the Alto workstations and the first laser printers. In 2021, Bob Metcalfe shared a Dropbox folder with several historical documents:

Read More

25th birthday of the Internet Protocol Journal

Students and readers of the Computer Networking: Principles, Protocols, and Practice ebook who want to stay up-to-date on the evolution of the networking technology should subscribe to the Internet Protocol Journal. The first issue of this journal was published 25 years ago. Initially distributed on paper, it is now mainly available as pdf files from https://ipj.dreamhosters.com/. Each issue contains very interesting tutorial articles that describe new protocols or discuss the evolution of networking technologies.

Read More

Automated evaluation of networking skills

Students and readers of the Computer Networking: Principles, Protocols, and Practice ebook often ask for exercises to enable them to evaluate their understanding of the different topics discussed in the ebook. Over the years, we have added various exercises on the INGInious platform. In the framework of their ongoing Master's thesis, Clément Linsmeau and Matthieu Leclercq have developed an INGInious extension that uses statistical techniques to evaluate the students using simple questions. Their extension suggests a series of about a dozen exercises to each student to evaluate his/her knowledge. The exercises that are selected depend on the answers given by each student to each exercise. A bright student will receive a set of challenging exercises to see whether he/she really masters the course while an average student will receive simpler exercises.

Read More

Interactive teaching in covid times

The covid crisis has forced most educators to reconsider how they interact with students using online tools instead of in-class discussions. Many university courses have been reorganised as video podcasts during which the professor explains his slides to passive students. Our spring semester starts in February and when it was clear that I would need to teach the networking course online, I thought about possible solutions to provide a better experience to the students. During the previous months, I had attended some remote lectures where the professor was basically explaining his slides while the students took notes and sometimes had to answer questions. I thought that there could be a better approach for the theoretical lessons and the exercise sessions.

Read More

Networking Notes - dialup modems

Today, most of the telephone traffic is carried over IP networks using Voice over IP technologies. A few decades ago, it was the opposite. The first data connections were made using modems that convert a binary signal into an acoustic signal which can be carried over regular telephone lines.

Read More

New networking notes

This blog aims at encouraging students to continue to explore the networking field after having followed their first networking course. Until November 2020, the blog mainly contained medium-sized articles that were published as time permitted.

Read More

A consolidated TCP specification

The Transmission Control Protocol (TCP) is one of the most important protocols used on the Internet. The first TCP specification was published in RFC793 in September 1981. During the last 39 years, TCP has been regularly improved without a revision of RFC793. In 2013, Wesley Eddy started to work on rfc793bis. After almost two years, the TCPM IETF working group decided to adopt this effort. Five years later, we now have a consolidated and updated version of RFC793.

Read More

ISP networks use diverse routers

Routers are an important part of the Internet infrastructure as they carry all the packets that we exchange. Several vendors sell these routers. Some vendors supply different types of routers while others focus on specialised ones such as access routers or backbone routers. Industry analysts often publish market studies that provide the market share of each vendor.

Read More

mTCP - a TCP stack running on a 35-year-old IBM PC Junior

The TCP/IP protocol suite has been implemented on a wide range of devices, ranging from embedded systems to supercomputers. Besides the classical TCP/IP stacks, there are specialized stacks that are used on specific devices. mTCP is an example of such a stack. This software runs on older PCs that still run MS-DOS or FreeDOS. The stack runs on a 35-year-old IBM PC Junior that is reachable from http://50.125.82.27:8088/mtcp/

Read More

A successful student project - analyzing popular websites

Most computer networking classes in universities include lab sessions and projects that enable students to learn how protocols are used in the real world. For almost a decade, I’ve asked the students who follow the networking course at UCLouvain to analyze one website. This project idea was suggested by Giuseppe Di Battista, who also teaches computer networking at Roma Tre University.

Read More

Celebrating the 50th anniversary of the Internet

It is difficult to exactly track the first steps of the networking technology that laid the foundation for today’s Internet, but most experts consider the ARPANET as the major ancestor of our current Internet. This network was funded by DARPA, the Advanced Research Projects Agency of the US Department of Defense. The first experiments started in the late 1960s with the installation of a few ARPANET nodes in US universities and labs.

Read More

A reminder on passwords

Passwords are used for a wide range of services. Every time I explain passwords to students, I strongly encourage them to never, ever design software that stores passwords as clear text inside a file. Storing clear-text passwords is a recipe for disaster. We’ve seen countless websites having their “secure” password file hacked. Since the publication of Password security: a case history, every computer scientist should know that passwords must be hashed before being stored in a file, even if the file is protected by strict permissions. This has been the default solution on Unix since the early days and has been adopted by all its derivatives.

Read More

As the Internet gets older, it's important for networking students to also study its history

Many network experts consider the beginning of the ARPANET network in the US as the first days of the global network that we call the Internet today. The first ARPANET nodes were installed during the last months of 1969. The Internet turns 50 this year. At the same time, the first networking researchers agreed to create a working group that later became ACM’s Special Interest Group on Data Communications (SIGCOMM). Many of the articles that have influenced the development of computer networks, and the Internet in particular, have been published by ACM SIGCOMM. Computer Communication Review recently published a special issue that includes technical articles summarising the evolution of computer networks. Several of these articles are particularly interesting for networking students:

Read More

Mapping IP addresses to AS numbers and countries

When network engineers analyze log files, collect packets or observe traceroute data, they sometimes want to know the AS that announces a given IP address. This information can be extracted from BGP routing tables or by using services such as RIPE RIS or RouteViews. There is now an interesting alternative with the https://iptoasn.com website, which provides both files containing the mappings between IP addresses and AS numbers and an API to retrieve this mapping.
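
The downloadable mapping files can be looked up locally. The sketch below assumes the files use tab-separated (range start, range end, AS number, country, AS name) records; the sample records are illustrative, not real announcements.

```python
import ipaddress

# Illustrative sample in the assumed TSV layout (not real data).
SAMPLE_TSV = """\
192.0.2.0\t192.0.2.255\t64496\tZZ\tEXAMPLE-AS
198.51.100.0\t198.51.100.255\t64497\tZZ\tDOC-AS
"""

def load_mappings(tsv_text):
    """Parse the TSV dump into (start, end, asn, country, name) tuples."""
    mappings = []
    for line in tsv_text.splitlines():
        start, end, asn, country, name = line.split("\t")
        mappings.append((int(ipaddress.ip_address(start)),
                         int(ipaddress.ip_address(end)),
                         int(asn), country, name))
    return mappings

def lookup(mappings, ip):
    """Return the (asn, country, name) announcing ip, or None."""
    addr = int(ipaddress.ip_address(ip))
    for start, end, asn, country, name in mappings:
        if start <= addr <= end:
            return asn, country, name
    return None

mappings = load_mappings(SAMPLE_TSV)
print(lookup(mappings, "192.0.2.42"))  # (64496, 'ZZ', 'EXAMPLE-AS')
```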

Read More

Many Internet hosts often phone home

While collecting some DNS packet traces to prepare new DNS exercises for the students, I was surprised to notice that my Linux host was regularly sending DNS requests for connectivity-check.ubuntu.com.

Read More

If your firewall only allows https, all applications will move to https

Many enterprise networks restrict the applications that users can use by blocking some TCP and UDP ports at the enterprise firewalls. This happens in campus networks as well. To cope with these restrictions, some applications, notably those running on smartphones, have moved to the well-known and usually open HTTP or HTTPS ports. Over the years, firewalls have evolved. Instead of simply looking at port numbers, most of today’s firewalls inspect the packets exchanged over a connection to ensure that HTTP is used on port 80 and TLS on port 443. If this is not the case, the connection is considered suspicious and blocked.

Read More

A Man in the Middle in Kazakhstan

In less than 20 years, Transport Layer Security (TLS) moved from a niche protocol that was only used by banks and e-commerce websites to almost a default solution. Today, a growing fraction of the Internet traffic is encrypted by using TLS or similar protocols. This encryption and the associated authentication improve the security of Internet users since attackers cannot observe or modify the packets that they exchange.

Read More

A closer look at modern Network Interface Cards

Network Interface Cards (NICs) play an important role in the protocol stack since they contain all the hardware functions that are required to transmit and receive packets. In the early days, NICs mainly implemented the physical layer and a fraction of the datalink layer (e.g. CSMA/CD for Ethernet or CSMA/CA for Wi-Fi). Over the years, a variety of functions have been added to NICs, starting with the computation of the datalink layer checksums and CRCs. They then became capable of fragmenting packets and even splitting large TCP segments into a series of IP packets. Some NICs can offload cryptographic computations for TLS or IPSec, and the latest generation of NICs is fully programmable.

Read More

Tracking the deployment of TLS 1.3

Transport Layer Security is a key part of the protocol stack. During the last years, a lot of effort has been invested in creating version 1.3 of this important protocol. TLS 1.3 RFC8446 was published in August 2018. In contrast with some protocols that are specified and then implemented, the TLS 1.3 implementations were written in parallel with the specification work, and several operational issues influenced the protocol design. When RFC8446 was published, several TLS 1.3 implementations were available and large companies quickly deployed it on clients and servers. On the server side, RedHat has enabled TLS 1.3 in their latest Linux distribution. On the client side, Apple enabled TLS 1.3 in March 2019 on both iOS and macOS.

Read More

How do different IPv6 hosts generate their addresses ?

One of the main advantages of IPv6 is that it uses 128-bit addresses. The 64 high-order bits of the address identify the subnet while the 64 low-order bits are reserved for the host identifier. This host identifier can be configured manually, allocated by DHCPv6 or auto-configured using Stateless Address Autoconfiguration (SLAAC). SLAAC evolved over the years. The first versions, RFC1971, RFC2462 and RFC4862, mainly computed the 64 low-order bits of the address from the MAC address of the end host. However, this utilisation of a stable identifier raised privacy concerns, as a host would use the same low-order bits in any IPv6 address that it generates RFC3041. Today’s stacks implement the privacy extensions defined in RFC4941 and generate random identifiers that are used as the 64 low-order bits of the IPv6 addresses that they generate.
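
As an illustration, the modified EUI-64 derivation used by the early SLAAC RFCs can be reproduced in a few lines: the 48-bit MAC is split in half, ff:fe is inserted in the middle and the universal/local bit is flipped. This is a sketch of the algorithm, not of any particular stack’s code.

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier of classical
    SLAAC: insert ff:fe in the middle of the 48-bit MAC address and
    flip the universal/local bit of the first byte."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                      # flip the universal/local bit
    iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    groups = [f"{iid[i] << 8 | iid[i+1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# The same MAC always yields the same interface identifier, which is
# exactly the privacy concern that RFC4941's random identifiers address.
print(eui64_interface_id("00:11:22:33:44:55"))  # 211:22ff:fe33:4455
```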

Read More

A closer look at spam

One of the benefits of the Internet is that it lowers the cost of communicating between distant users. Today, everyone takes for granted that it is possible to send an email or an instant message to any other Internet user. All these exchanges can be done at a very low cost, which has enabled a wide range of activities that would not have been possible without the Internet. Unfortunately, the ridiculously low cost of sending information to any Internet user has attracted a range of fraudulent users who send unsolicited commercial or phishing messages.

Read More

Details matter in protocol security

Securing network protocols remains a difficult task, as illustrated by the KNOB Attack that was recently announced on a wide range of Bluetooth devices. Bluetooth is a widely used short-range wireless technology that connects devices such as keyboards, mice or headphones to computers. It is also used to directly exchange data between mobile devices such as smartphones, and it is even possible to use a Bluetooth link to exchange IP packets. The development of Bluetooth started almost thirty years ago and the first devices appeared 20 years ago.

Read More

Strange TCP MSS values

TCP is today the most widely used transport protocol. It was defined in RFC793. It provides a connection-oriented bi-directional bytestream service on top of the unreliable IP layer. A TCP connection always starts with a three-way handshake. During this handshake, the client and the server can use TCP options to negotiate the utilisation of TCP extensions. These TCP options are also used to exchange a key parameter of a TCP connection: the Maximum Segment Size (MSS).
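
The MSS is directly tied to the MTU of the outgoing link: it is the MTU minus the IP and TCP headers. A small sketch, assuming headers without options:

```python
# The MSS announced in the SYN is the largest TCP payload per segment:
# the link MTU minus the IP header (20 bytes for IPv4, 40 for IPv6)
# and the 20-byte TCP header, both without options.
def mss(mtu: int, ip_version: int = 4) -> int:
    ip_header = 20 if ip_version == 4 else 40
    tcp_header = 20
    return mtu - ip_header - tcp_header

print(mss(1500))     # 1460 on a standard Ethernet link with IPv4
print(mss(1500, 6))  # 1440 with IPv6
print(mss(1280, 6))  # 1220 with the IPv6 minimum MTU
```

Values that deviate from these classical numbers often reveal tunnels or middleboxes on the path, which is what makes unusual MSS values interesting to investigate.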

Read More

Some packet captures with recent DNS extensions

The Domain Name System is one of the key protocols in today’s Internet as it allows clients to map names onto IP addresses. Most networking students probably see DNS as the typical request-response application running over UDP. To retrieve the DNS record corresponding to a given name, a client sends a DNS request to its resolver inside a single UDP packet. DNS was defined in RFC1035. Over the years, the protocol has evolved and several extensions have been added. Several IETF working groups have been chartered around the Domain Name System:

Read More

JMAP, a new protocol to retrieve emails

Email is one of the oldest applications on the Internet. Initially, emails were delivered between two Internet hosts by using the SMTP protocol, defined in RFC821 and updated several times since. SMTP is used to exchange emails between servers. With the proliferation of client devices such as PCs, and later laptops or smartphones, several protocols have been defined to enable clients to retrieve emails from servers. The most popular are POP and IMAP. These two protocols are classical ASCII-based protocols where clients and servers exchange commands encoded as one ASCII line over a TCP connection.

Read More

TFO deployment seems to grow

TCP Fast Open is a TCP extension that enables clients and servers to place data inside the SYN and SYN+ACK packets during the three-way handshake. This extension has been pushed by Google to speed up short transfers. It is defined in RFC7413 and its Linux implementation is described in a nice LWN.net article.

Read More

An IPv6 enabled whiteboard

The large size of IPv6 addresses enables unexpected use cases and nice demos. Using a Raspberry Pi, Markus Klock has designed an open IPv6 board where anyone can write short messages by sending IPv6 ICMP packets. The Raspberry Pi listens to the IPv6 prefix 2001:6b0:1001:105::/64 and captures any packet sent to this prefix. To write a message on the board, simply encode the ASCII characters as the low-order 64 bits of the address that you ping.
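
A hedged sketch of how such a message could be encoded: pack up to eight ASCII characters into the four low-order 16-bit groups of the address. The exact encoding expected by the board may differ; this is only an illustration of the idea.

```python
def board_address(prefix: str, message: str) -> str:
    """Encode up to 8 ASCII characters as the low-order 64 bits of an
    IPv6 address under the given /64 prefix (illustrative encoding)."""
    assert len(message) <= 8 and message.isascii()
    payload = message.encode().ljust(8, b"\x00")  # zero-pad to 64 bits
    groups = [f"{payload[i] << 8 | payload[i+1]:x}" for i in range(0, 8, 2)]
    return prefix + ":".join(groups)

# The address to ping in order to write "Hi!" on the board:
print(board_address("2001:6b0:1001:105:", "Hi!"))
```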

Read More

Some ebooks unfortunately disappear

For many readers, ebooks are a simple variant of the traditional books that we have been used to since Gutenberg. As a book owner, I am free to use the book as I want, share it with others, … Unfortunately, many ebooks are different. Companies like Amazon with the Kindle, Apple with iBooks, Adobe, Microsoft and others consider that an ebook is a software product that is subject to a license. The ebook license allows you to read the ebook, take notes, … However, these licenses are not always perpetual. In April 2019, Microsoft announced that they stopped their ebook business, probably because it was not profitable enough, and planned to stop the licensing servers for their ebooks in July 2019. Microsoft announced some refunds, but all ebook readers are now warned: their preferred ebook might stop being available due to a decision of the ebook publisher.

Read More

A simpler way to correctly configure TLS servers

The Transport Layer Security (TLS) protocol plays an increasingly important role in today’s Internet. It secures websites, but also mail servers and a wide range of other services. Like many security protocols, TLS can be configured in very different ways, and a minor change in a configuration file might have an important impact on the security of a TLS deployment. Another factor that contributes to the complexity of configuring TLS is that there are different implementations of this protocol that can be integrated in very different web servers. Each web server uses its own configuration file and different web servers use different parameters.

Read More

Documenting the format of robots.txt

In the early days of the web, crawlers that browse public web pages to build the indexes used by search engines received complaints from webmasters who did not agree to let these crawlers index their website or were concerned by the load that the crawlers put on their infrastructure. To cope with this problem, a simple text file called robots.txt was introduced, which webmasters can use to specify the parts of a website that can be crawled and by which crawler. Over the years, the robots.txt file evolved, but its format had never been formally specified despite its widespread usage. 25 years later, Google fills this gap by releasing an Internet draft that finally describes the format of this text file. It is likely that the IETF will discuss minor details of this document in the coming months before accepting it.

Read More

Do HTTP/2 or QUIC provide better performance than HTTP/1.1 in mobile networks

The performance of web protocols has been heavily optimised during the last decades. Given the importance of smartphones and mobile data networks, many websites try to offer the best performance for mobile users. Many factors can influence the performance of websites. One of the classical benchmarks is the Page Load Time (PLT), i.e. the time required to completely download a web page. Many studies have analysed the PLT of websites and proposed techniques to reduce it in specific scenarios. In a recently published paper, a group of researchers led by Mohammad Rajiullah analysed a large dataset of measurements performed in mobile networks throughout Europe using the MONROE testbed. They summarise their findings in a paper entitled Web Experience in Mobile Networks: Lessons from Two Million Page Visits that they recently presented at the Web Conference.

Read More

Internet measurements can reveal unexpected results

Measurements always reveal new insights or unexpected results about the system under study. This applies to a wide range of systems including the Internet. Two recent scientific articles provide unexpected results about two very different protocols: ICMP and BGP.

Read More

Recent networking notes

curl is one of the most versatile implementations of a wide range of application-layer networking protocols. It started as an open-source project twenty years ago and is now used on billions of devices. In a recent post on stackoverflow, Daniel Stenberg explains why he developed this project.

Read More

The RFC series turns 50

Most of the Internet protocols have been documented in Requests For Comments (RFCs). Initially, these documents were simply a set of notes that were exchanged among networking researchers. The first of these RFCs was published on April 7th, 1969, 50 years ago.

Read More

A nice visualisation of the Internet growth

During the last thirty years, computer networks, and the Internet in particular, evolved from a niche technology that was only known by a few scientists into a mainstream technology that affects a large fraction of our society. Thirty years ago, Ethernet was already deployed in universities and many of them started to be connected to the Internet with bandwidth that would be considered ridiculous today. In parallel, the number of users who are able to access the Internet has grown tremendously and the Internet is slowly being considered as important as electricity in many countries.

Read More

Disconnecting an entire country from the Internet

In the early days of the Internet, governments considered it a strange research experiment and did not really bother to try to understand how it worked. During the last years, governments all over the world have made more and more efforts to control its utilization. The number of laws that affect the Internet continues to grow and various governments restrict its utilization. It is impossible to list all government interferences that affect the Internet, but here are a few notable ones.

Read More

Have you seen f8:e0:79:af:57:eb ?

Wi-Fi and Ethernet adapters contain a unique MAC address that they use when exchanging frames in the LAN. These addresses are assigned by the IEEE to each manufacturer, which is supposed to configure each adapter with a unique address. Every time you use a laptop, smartphone, tablet or Wi-Fi equipped device, it sends frames with its unique MAC address. These MAC addresses do not leave the LAN where they are used, but they are used by services such as DHCP to allocate addresses. Some of these services log the MAC addresses that they have seen for security reasons.

Read More

Five Years at the Edge - Watching Internet from the ISP Network

The Internet is a dynamic system that continuously evolves. This evolution can be observed from several vantage points. A recent article, entitled Five Years at the Edge: Watching Internet from the ISP Network and co-authored by Martino Trevisan and his colleagues from Politecnico di Torino, provides an unusual and very interesting perspective on the evolution of Internet traffic. This paper was presented last week at CoNEXT 2018 and can be downloaded from the conference program page.

Read More

A first analysis of a TCP server

Networking students can learn a lot about Internet protocols by analyzing how they are actually deployed. For several years, Computer Science students at UCLouvain have analyzed different websites within their introductory networking course. This project considers several key Internet protocols, DNS, HTTP, TLS and TCP. In this post, we briefly analyze how TCP is used on some web sites as a starting point for these students.

Read More

A first analysis of a TLS server

Networking students can learn a lot about Internet protocols by analyzing how they are actually deployed. For several years, Computer Science students at UCLouvain have analyzed different websites within their introductory networking course. This project considers several key Internet protocols, DNS, HTTP, TLS and TCP. In this post, we briefly analyze how TLS is used on some websites as a starting point for these students.

Read More

A first analysis of an HTTP server

Networking students can learn a lot about Internet protocols by analyzing how they are actually deployed. For several years, Computer Science students at UCLouvain have analyzed different websites within their introductory networking course. This project considers several key Internet protocols, DNS, HTTP, TLS and TCP. In this post, we briefly analyze how HTTP is used on some websites as a starting point for these students.

Read More

A flipped classroom model allows students to better understand BGP

The Border Gateway Protocol (BGP) is an important protocol in today’s Internet. As such, it is part of the standard networking textbooks. At UCLouvain, timing constraints force me to explain BGP in two different courses. The students learn the basics of external BGP within the introductory networking course that is mandatory for all CS students. We mainly cover routing policies (customer-provider and shared-cost peerings) and the basics of eBGP with the utilisation of the AS-Path and the local-pref attribute. Some students register for the advanced networking course that covers BGP in more detail, MPLS, VPNs, multicast and other advanced topics.

Read More

Observing the DNS configuration

During their first networking course, each CS student at UCLouvain writes a four-page report that analyses the organisation of a popular website and the optimisations, or sometimes the errors, that the maintainers of this website have made when configuring their DNS, HTTP, TLS or TCP protocols. This project lasts one month and the students receive guidelines and suggestions every week on how to carry out their analysis. Here are a few examples which can be used to bootstrap the DNS analysis of such a website.

Read More

Encrypting Server Name Indication in TLS

A growing fraction of our webservers are now reachable via https instead of http. With the http scheme, all the information is transported in plain text, including the HTTP headers, cookies, web pages and other sensitive information. For many years, https, which combines http with Transport Layer Security, was restricted to sensitive websites such as those that require a password or e-commerce. During the last five years, the deployment of https changed significantly. Today, Mozilla’s telemetry reports that roughly 80% of the webpages downloaded by Firefox users are served over https.

Read More

Discussions on TCP timestamp

The TCP Timestamp option was proposed in RFC1323 in 1992, at the same time as the Window Scale option. There were two motivations for the initial TCP Timestamp option: improving round-trip-time estimation and protecting against wrapped sequence numbers (PAWS). By adding timestamps to each packet, it becomes easier to estimate round-trip-times, especially when packets are lost, because retransmissions of a packet carry different timestamps. The PAWS mechanism is less well understood. It is a direct consequence of the utilisation of 32-bit sequence numbers in TCP. TCP RFC793 was designed under the assumption that the IP layer guarantees that a packet will not live in the network for more than 2 minutes (the Maximum Segment Lifetime, MSL). TCP’s reliable transmission can be guaranteed provided that it does not reuse the same sequence number for different packets within MSL seconds. In 1981, with 32-bit sequence numbers, nobody thought that reusing the same sequence number within 2 minutes would become a problem. Today, this is a reality, even in wide area networks. PAWS RFC7323 solves this problem by using timestamps to detect spurious packets and prevent problems where old packets are delayed within MSL seconds. It took more than a decade to reach a significant deployment of RFC1323.
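
The need for PAWS follows from simple arithmetic: the 2^32-byte sequence space wraps ever faster as link speeds grow, as this small computation shows.

```python
# TCP reuses a sequence number after 2**32 bytes have been sent. At
# modern link speeds this happens well within the 2-minute Maximum
# Segment Lifetime, which is why PAWS is needed.
def wrap_time_seconds(bits_per_second: float) -> float:
    bytes_per_second = bits_per_second / 8
    return 2**32 / bytes_per_second

print(wrap_time_seconds(10e6) / 60)  # roughly 57 minutes at 10 Mb/s
print(wrap_time_seconds(1e9))        # roughly 34 seconds at 1 Gb/s
print(wrap_time_seconds(10e9))       # roughly 3.4 seconds at 10 Gb/s
```

At 10 Gb/s, a delayed segment from a previous wrap of the sequence space could easily reappear within the MSL; the timestamps let the receiver discard it.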

Read More

Learning TCP sockets in C

In a previous post, we described a first INGINIOUS exercise that enables students to check their understanding of the utilisation of the socket API with UDP. This API is more frequently used to interact with TCP. Interacting correctly with TCP is more challenging than interacting correctly with UDP. As TCP provides a reliable, connection-oriented, bytestream service, there are several subtleties that the students need to consider to write code that interacts correctly with the TCP socket API.
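
One of these subtleties is that recv() on a TCP socket may return fewer bytes than requested, since the bytestream has no message boundaries. The Python sketch below (the exercises use C, but the pitfall is identical) shows the classical read-exactly loop, with a socketpair standing in for a real client/server connection.

```python
import socket

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """TCP delivers a bytestream, so a single recv() may return fewer
    bytes than requested; loop until n bytes have been read."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        data += chunk
    return data

# A connected socketpair stands in for a real TCP connection.
a, b = socket.socketpair()
a.sendall(b"hel")        # the bytes may arrive in several pieces
a.sendall(b"lo")
print(recv_exact(b, 5))  # b'hello'
a.close(); b.close()
```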

Read More

Learning UDP sockets in C

Created in the early days of the TCP/IP protocol suite, the socket API remains the standard low-level API to interact with the underlying networking stack. Despite its age, it remains widely used and most networking students are exposed to it during their studies. Although more recent languages and higher-level APIs can simplify the interactions between applications and the networking stack, it remains important for students to understand its operation. At UCLouvain, we ask the students to write a simple transport protocol in C over UDP. This enables them to understand how to parse packets, but also how to manage timers and how to interact with the socket API.
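
In contrast with TCP, UDP preserves message boundaries: each recv() returns exactly one datagram, which is what makes packet parsing over UDP tractable. A minimal Python illustration (the course uses C, but the property is the same), using a Unix datagram socketpair to emulate two hosts:

```python
import socket

# Unlike a TCP bytestream, a datagram socket returns one message per
# recv() call, in order, regardless of the buffer size passed.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
a.send(b"first")
a.send(b"second")
print(b.recv(1024))  # b'first'  -- one datagram per call
print(b.recv(1024))  # b'second'
a.close(); b.close()
```

Note that AF_UNIX datagram socketpairs are a Unix-only convenience; over a real network, UDP datagrams can additionally be lost or reordered, which is exactly what the students’ transport protocol must handle.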

Read More

A closer look at dropbox

Dropbox is a very popular file sharing service. Many users rely on its infrastructure to store large files, perform backups or share files. Like other commercial services such as Apple’s iCloud or Microsoft’s OneDrive, Dropbox uses a proprietary protocol to exchange information between client applications and its servers. The most detailed description of Dropbox’s protocol was published in Inside Dropbox: Understanding Personal Cloud Storage Services. This paper appeared in 2012 and it is unfortunately very likely that Dropbox’s protocols and architecture have evolved since then.

Read More

Running your own ISP

Various types of careers are possible in the networking business. Some develop new applications, others deploy network services or manage enterprise networks. Most of the people who are active in the field work in established organisations that already have a running network. Some decide to create their own business or their own company. The same happens when considering Internet Service Providers. Most of the existing ISPs were created almost twenty years ago. While it is more difficult to launch an ISP business today than when the Internet was booming, there are still new ISPs that are created from scratch. In a series of two blog posts, Chris Hacken discusses many of the technical barriers that exist in this type of business. There are very few documents that describe those business, practical and operational issues.

Read More

Disabling IPv4

A recent post on twitter shared a Swedish website that briefly describes how to disable IPv4 on Windows (see below), Linux and MacOS.

Read More

Eliminating IP Spoofing

When IP routers forward packets, they inspect the destination address to determine the outgoing interface or the next-hop router towards the packet’s destination. Given this, a simple router does not need to look at the source address of the packets. The source address is mainly used by the destination to send the return packets, or by intermediate routers to generate ICMP messages when problems are detected. This assumption was true in the early days of the Internet and most routers only looked at destination addresses.
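
One classical countermeasure is strict unicast reverse-path forwarding (uRPF, RFC3704): a router accepts a packet only if its best route back to the packet’s source points to the interface on which the packet arrived. A sketch with an illustrative routing table:

```python
import ipaddress

# Illustrative routing table: prefix -> outgoing interface.
ROUTES = {
    ipaddress.ip_network("192.0.2.0/24"):    "eth0",
    ipaddress.ip_network("198.51.100.0/24"): "eth1",
}

def urpf_accept(source: str, in_interface: str) -> bool:
    """Strict uRPF check: does the best route back to the source
    leave through the interface the packet arrived on?"""
    addr = ipaddress.ip_address(source)
    matches = [n for n in ROUTES if addr in n]
    if not matches:
        return False  # no route back: likely a spoofed source
    best = max(matches, key=lambda n: n.prefixlen)  # longest-prefix match
    return ROUTES[best] == in_interface

print(urpf_accept("192.0.2.7", "eth0"))  # True
print(urpf_accept("192.0.2.7", "eth1"))  # False: spoofed source
```

Strict uRPF works well at the network edge; in the core, where routing is often asymmetric, only looser variants of the check can be deployed.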

Read More

TCP on planes and highspeed trains

Internet protocols continue to be used in a variety of scenarios that go beyond the initial objectives of the TCP/IP protocol suite. Two recent scientific articles provide insight into the performance of TCP/IP in challenging environments.

Read More

The end of plain DNS ?

The Domain Name System is one of the venerable Internet protocols, like IP or TCP. For performance reasons, the DNS protocol is usually used on top of UDP. This enables clients to send their DNS request in a single message, to which the servers reply in a single message as well. Both the request and the response are sent in plain text, which raises obvious security and privacy concerns. Many of these are documented in RFC7626. In a recent Usenix Security article, B. Liu et al. revealed that 259 of the 3,047 ASes where they could perform measurements used some form of DNS interception. The IETF has explored several solutions to secure the information exchanged between DNS clients and servers. RFC7858 and RFC8310 have specified solutions to transport DNS over TLS and DTLS. Some public resolvers already support these extensions. Apparently, Android P also supports it. Geoff Huston published an interesting blog post that compares different techniques to secure the DNS.

Read More

How to wire a network ?

There are many ways to wire Ethernet networks. When students create simple labs with a few cables, switches and hosts, they simply plug in any suitable cable and run their experiment. In real networks, a good wiring strategy can help to avoid lots of problems and the time lost debugging them.

Read More

Animals love computer networks

Networks are composed of cables and equipment whose normal utilisation is sometimes disrupted by animals that view them from a different angle than humans. Fiber optic cables that are laid under the sea to connect continents attract a variety of animals. Sharks can be attracted by the shape of the cables or the electromagnetic fields that they emit. One of these cable-biting sharks has even been caught by undersea surveillance cameras…

Read More

The end of plaintext protocols ?

Internet protocols have traditionally been clear-text protocols and many protocols like SMTP or HTTP could be tested by using a simple telnet session. This feature was very handy when testing or debugging protocol implementations. However, it is difficult to implement a correct parser for plaintext protocols and many of these parsers have suffered from bugs. Binary protocols have a more precise syntax and are thus easier to parse, at least when they do not contain lots of extensibility mechanisms. All Internet security protocols, including IPSec, TLS and ssh, are binary protocols. With the Snowden revelations, the IETF has strongly encouraged the utilisation of security protocols to counter pervasive monitoring, as explained in RFC7258.

Read More

Recent TCP pointers

Despite its age, TCP continues to evolve and the existing TCP implementations continue to be improved. Some recent blog posts provide useful information about the evolution of TCP in the wild.

Read More

Unusual File Systems

There is a wide variety of file systems that store files on remote servers. NFS is very popular in the Unix world while Samba allows Windows clients to store files on Unix servers. Besides those regular file systems, some networkers have developed special file systems that use or abuse popular Internet protocols. A first example is pingfs, a filesystem that relies on the ICMP request/response packets sent by the popular ping software to “store” information inside the network itself. To store a file, pingfs splits it into packets that are sent on a regular basis to remote hosts that return ICMP messages. The file is then “stored” as packets that are flying through the network, but the entire file does not reside on a disk somewhere.

Read More

Fixing incorrect IPv6 routing tables

Students sometimes have difficulties understanding how IPv6 static routes work. A typical exam question to check their understanding of IPv6 static routes is to prepare a simple network containing static routes that have been incorrectly specified. Here is a simple IPMininet example network with four routers and two hosts:

Read More

Exploring BGP with IPMininet

IPMininet supports various routing protocols. In this post, we use it to study how the Border Gateway Protocol operates in a simple network containing only BGP routers. Our virtual lab contains four routers and four hosts:

Read More

Playing with Ethernet Organisation Unique Identifiers

Ethernet remains the most widely used LAN technology. Since the invention of Ethernet in the early 1970s, the only part of the specification that remains unchanged is the format of the addresses. Ethernet was the first Local Area Network technology to introduce 48-bit addresses. These addresses, sometimes called MAC addresses, are divided into two parts. The high-order bits contain an Organisation Unique Identifier (OUI) which identifies a company or organisation. Any organisation can register an OUI from which it can allocate Ethernet addresses. Most OUIs identify companies selling networking equipment, but there are a few exceptions.
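
Extracting the OUI from a MAC address, and checking the locally-administered bit that marks addresses not allocated from a registered OUI, takes only a few lines:

```python
def oui(mac: str) -> str:
    """Return the 24-bit Organisation Unique Identifier, i.e. the
    three high-order bytes of the MAC address."""
    return mac.upper()[:8]

def is_locally_administered(mac: str) -> bool:
    """The second-lowest bit of the first byte marks addresses that
    were assigned locally rather than from a registered OUI."""
    return bool(int(mac.split(":")[0], 16) & 0x02)

print(oui("f8:e0:79:af:57:eb"))                      # F8:E0:79
print(is_locally_administered("f8:e0:79:af:57:eb"))  # False
print(is_locally_administered("02:00:00:00:00:01"))  # True
```

The OUI can then be looked up in the IEEE registry to identify the vendor of the adapter.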

Read More

Exploring static routing with IPMininet

In a previous post we have shown that IPMininet can be used to develop exercises that enable students to explore how IPv6 routers forward packets. We used a simple example with only three routers and very simple static routes. In this post, we build a larger network and introduce different static routes on the main routers. Our IPMininet network contains two hosts and five routers.

Read More

Observing IPv6 link local addresses

Link local addresses play an important role in IPv6 since they enable hosts that are attached to the same subnet to directly exchange packets without requiring any configuration. When an IPv6 host or router boots, the first thing that it tries to do is to create a link-local address for each of its interfaces. It is interesting to observe how those link-local addresses are used in a very simple network.

Read More

Alternatives to man pages

When I discovered Unix as a student, one of its most impressive features was the availability of the entire documentation through the man command. Compared with the other computers that I had used before, this online and searchable documentation was a major change. These Unix computers were also connected to the Internet, but the entire university had a few tens of kilobits per second of bandwidth and the Internet was not as interactive as it is today.

Read More

Experimenting with Mininet and IPv6 routes

When students discover IPv6, they usually start playing with static routes to understand how routing tables are built. At UCL, we've used a variety of techniques to let the students understand routing tables. A first approach is to simply use the blackboard and let the students analyse routing tables and explain how packets will be forwarded in a given network. This works well, but students often ask for additional exercises to practice before the exam. Another approach is to use netkit. netkit was designed by researchers at Roma3 University as an experimental learning tool. It relies on User Mode Linux to run Linux kernels as user-space processes. Several student labs were provided by the netkit authors. We have used it in the past, but the project does not seem to make progress anymore. A third approach is to use Mininet. Mininet is an emulation framework developed at Stanford University that leverages the namespace features of recent Linux kernels. With those features, a single Linux kernel can support a variety of routers and hosts interconnected by virtual links. Mininet has been used by various universities as an educational tool, but unfortunately it was designed with IPv4 in mind while Computer Networking: Principles, Protocols and Practice has focussed on IPv6.

Read More

TCP's initial congestion window

TCP’s initial congestion window is a key performance factor for short TCP connections. For many years, the initial value of the congestion window was limited to at most two segments (RFC2581). In 2002, RFC3390 allowed this value to be increased up to four segments. This conservative value was a compromise between starting TCP connections quickly and preventing congestion collapse. In 2010, Nandita Dukkipati and her colleagues argued in An Argument for Increasing TCP’s Initial Congestion Window for a larger initial value and demonstrated its benefits on Google servers. After the publication of this article, and a patch that brought this modification to the Linux kernel, it took only three years for the IETF to adopt the change in RFC6928.
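The initial windows specified by these two RFCs can be computed directly from the MSS; a minimal sketch of both formulas, with all values in bytes:

```python
def initial_window(mss: int, rfc6928: bool = True) -> int:
    """Initial congestion window in bytes.

    RFC3390: IW = min(4*MSS, max(2*MSS, 4380))
    RFC6928: IW = min(10*MSS, max(2*MSS, 14600))
    """
    if rfc6928:
        return min(10 * mss, max(2 * mss, 14600))
    return min(4 * mss, max(2 * mss, 4380))
```

With the common Ethernet MSS of 1460 bytes, RFC3390 yields 4380 bytes (three segments) while RFC6928 yields 14600 bytes (ten segments), enough to carry many small web responses in a single round-trip.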

Read More

We will eventually deprecate IPv4

IPv4 has been a huge success that goes beyond the dreams of its inventors. However, the IPv4 addressing space is far too small to cope with all the needs for Internet connected hosts. IPv6 is slowly replacing IPv4 and deployment continues. The plot below shows the growth in the number of IPv6 browsers worldwide.

Read More

Deploying new TCP options takes time

TCP is an extensible protocol. Since the publication of RFC793, various TCP extensions have been proposed, specified and eventually deployed. When looking at the deployment of TCP extensions, one needs to distinguish between the extensions that provide benefits as soon as one endpoint implements them and those that must be supported by both clients and servers to be actually used.
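The extensions in the second category are negotiated through TCP options, encoded in the type-length-value format defined in RFC793. A minimal parser for that encoding (the sample bytes in the test are a made-up option list containing an MSS option):

```python
def parse_tcp_options(data: bytes):
    """Parse a TCP options field into a list of (kind, value) pairs.

    Kind 0 ends the list, kind 1 (NOP) is a single padding byte,
    and every other option is TLV-encoded: kind, length, value.
    """
    opts, i = [], 0
    while i < len(data):
        kind = data[i]
        if kind == 0:              # End of Option List
            break
        if kind == 1:              # No-Operation (padding)
            opts.append((1, b""))
            i += 1
            continue
        length = data[i + 1]       # length covers kind and length bytes
        opts.append((kind, data[i + 2:i + length]))
        i += length
    return opts
```

Because both endpoints must recognise an option's kind for the negotiation to succeed, a new option only becomes useful once a critical mass of clients and servers support it, which explains the slow deployment discussed in the post.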

Read More

Even more bandwidth across the oceans

Optical fibers play a key role in Wide Area Networks. With very few exceptions, most of the links that compose WANs are optical fibers. As the demand for bandwidth continues to grow, network operators and large cloud companies continue to deploy new optical fiber links, both on land and across the oceans. The latest announcement came from Microsoft and Facebook. Together, they have commissioned a new optical fiber link between Virginia Beach, Virginia (USA) and Bilbao, Spain. The landing points chosen for this fiber are a bit unusual since many of the fiber optic cables that cross the Atlantic Ocean land in the UK for obvious geographical reasons. This new cable brings 160 Terabits/sec of capacity and adds diversity to the fiber routes between America and Europe. This diversity is beneficial against unexpected failures, but also against organisations that capture Internet traffic by tapping optical fibers, as revealed by Edward Snowden.

Read More

Using public-key crypto remains difficult

Pretty Good Privacy, released in 1991, was probably one of the first software packages to make public-key cryptography available to regular users. Until then, cryptography was mainly used by banks, soldiers and researchers. Public-key cryptography is a very powerful technique that plays a key role in securing the Internet. Despite its importance, deploying it to all Internet users remains difficult. The recent publication of the Adobe security team's private key on a public web page is one example of this difficulty, but by far not the only one.
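As an illustration of the principle behind such key pairs, here is textbook RSA with toy numbers (the classic p=61, q=53 example). This is insecure and only meant to show how a public key encrypts what only the private key can decrypt:

```python
# Textbook RSA with toy numbers -- insecure, for illustration only.
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent (published with n)
d = pow(e, -1, phi)        # private exponent (kept secret): 2753

def encrypt(m: int) -> int:
    """Anyone holding the public key (n, e) can encrypt."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of the private exponent d can decrypt."""
    return pow(c, d, n)
```

The whole security model collapses if d leaks, which is exactly what happened when the Adobe key ended up on a public web page: publishing a private key is equivalent to publishing every secret it protects.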

Read More

Networking notes for readers of Computer Networking - Principles, Protocols and Practice

Networking education has changed a lot during the last twenty years. When I was still a student, before the invention of the web, students learned mostly from the explanations of their professors and teaching assistants. Additional information was available in scientific libraries, but few students could access it. Today's students live in a completely different world. Computer networks, and the Internet in particular, have completely changed our society. Students have access to much more information than I could have imagined when I was a student. Wikipedia provides lots of useful information, and Internet drafts, RFCs, many scientific articles and open-source software are within the reach of all students, provided that they understand the basics that enable them to navigate through this deluge of information.

Read More

Beyond today's ad-supported web

The web was designed in the 20th century as a decentralised technique to freely share information. The initial audience for the web protocols were scientific researchers who needed to share scientific documents. HTTP was designed as a stateless protocol and Netscape added HTTP cookies to ease e-commerce. These cookies play a crucial role in today's ad-supported Internet. They have also enabled companies like Google or Facebook to collect huge amounts of data about the browsing habits of almost all Internet users in order to deliver targeted advertisements.
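A cookie is simply a name-value pair that the server sets through a Set-Cookie header and that the browser returns on subsequent requests. Python's standard http.cookies module can parse such a header; the header value below is made up:

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header value as a browser would store it.
cookie = SimpleCookie()
cookie.load("sessionid=abc123; Path=/; Secure; HttpOnly")

morsel = cookie["sessionid"]
# morsel.value holds the opaque identifier the server uses to
# recognise this browser on every later request; the Path attribute
# restricts which URLs the cookie is sent back to.
```

It is this opaque identifier, returned on every request and joinable across sites through third-party content, that turns a stateless protocol into the tracking substrate of the ad-supported web.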

Read More

A 100-hops IPv6 wireless mesh

IPv6 is used for a variety of services. Wireless mesh networks are networks where routers are interconnected by wireless links. This blog post describes such a large mesh network and reports several experiments conducted over it.

Read More