Do we really need to accurately measure fiber access links?
Internet Service Providers that serve end users like to use metrics to compare their service with that of their competitors. For many years, maximum throughput has been the metric of choice among ISPs. When dial-up links were popular, a 56 kbps connection made a clear difference compared with a 34 kbps one. The same applied to the early deployments of xDSL and cable networks.
This obsession with throughput encouraged vendors and operators to deploy new access technologies that are now capable of delivering several Gbps to the home. In parallel with this deployment, various techniques have been proposed to accurately measure the throughput of Internet connections: speedtest, fast.com, speed.cloudflare.com, nperf.com or Measurement Lab’s NDT.
These measurement techniques typically try to saturate the user’s access link to estimate the maximum throughput that can be reached. With speeds above 1 Gbps, this poses a challenge to the servers that perform these measurements, and Measurement Lab recently announced that it would introduce data transfer limits in NDT.
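To make the saturation idea concrete, here is a minimal sketch in Python of such a bulk-transfer measurement: it pulls data from a server for a fixed duration and divides the bytes received by the elapsed time. The test URL is a hypothetical placeholder; real tools such as NDT use dedicated servers, multiple parallel streams and a far more careful methodology.

```python
import time
import urllib.request

# Hypothetical endpoint; real speed tests run their own measurement servers.
TEST_URL = "https://example.com/large-test-file.bin"
CHUNK = 64 * 1024  # read in 64 KiB chunks


def measure_download_mbps(url: str, max_seconds: float = 10.0) -> float:
    """Estimate downstream throughput by saturating the link for a while."""
    total_bytes = 0
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        while time.monotonic() - start < max_seconds:
            chunk = response.read(CHUNK)
            if not chunk:  # server closed the stream before the deadline
                break
            total_bytes += len(chunk)
    elapsed = time.monotonic() - start
    # bytes -> bits, then divide by seconds and scale to Mbps
    return total_bytes * 8 / elapsed / 1_000_000


if __name__ == "__main__":
    print(f"Estimated throughput: {measure_download_mbps(TEST_URL):.1f} Mbps")
```

Even this toy version shows why multi-Gbps links strain measurement servers: to fill a 10 Gbps link for ten seconds, the server must push more than 10 GB of data for a single test.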
This is a first step, but in the long term we need to reevaluate how we assess the performance of an Internet access link. With access link bandwidth above 100 Mbps, very few users are able to fully utilize their access link with real applications. On a very high bandwidth access link, delay, packet losses and the reliability of the link become much more important metrics that need to be measured.
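Delay, for instance, can be estimated cheaply, without transferring gigabytes of data. The sketch below, again a simple illustration rather than any established tool’s method, times TCP connection establishment (one round trip) to a host of your choosing; dedicated measurement tools also track loss and jitter under load.

```python
import socket
import statistics
import time


def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Approximate the round-trip time by timing TCP handshakes."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        # The three-way handshake completes in roughly one round trip.
        with socket.create_connection((host, port), timeout=3):
            rtts.append((time.monotonic() - start) * 1000)
    return statistics.median(rtts)


if __name__ == "__main__":
    # example.com stands in for whichever server you want to probe.
    print(f"Median handshake RTT: {tcp_rtt_ms('example.com'):.1f} ms")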