Full Throttle: Comparing packet-level and DevTools throttling
When you're doing performance testing, one of the most important variables to consider is the connection type. The web is built on a set of very chatty protocols—there's a lot of back and forth between the browser and the server throughout the browsing experience. Each trip from the server to the browser, and vice versa, is subject to the limitations of the network in use: how much bandwidth is available, how high the latency is, how much packet loss there is, and so on.
When doing synthetic testing, results are only as good as the accuracy of the throttling being applied to the network—use throttling that is too optimistic or has fundamental limitations of accuracy, and you could find yourself drawing the wrong conclusions about potential bottlenecks and their impact.
WebPageTest uses something called packet-level network throttling. In other words, the throttling is applied to each individual packet. As far as approaches to throttling go, packet-level throttling is the gold standard for accuracy.
Recently, we added support for optionally running tests using DevTools throttling instead. We don't recommend using it, except for scientific purposes, but it does make it easy to compare and contrast the two approaches and see how they impact your results.
Differences in Throttling Approaches
DevTools throttling applies at the request level and operates between the HTTP disk cache and the connection layer of the browser. This means that any activity that occurs at the connection layer is out of reach for DevTools throttling. DevTools throttling won't have any effect on things like:
- TCP slow-start
- DNS resolutions
- TCP connection times
- Packet-loss simulation
- TLS handshake
On top of all of that, because of where it sits, DevTools throttling also means any network-level HTTP/2 prioritization won't be applied.
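To make that layering concrete, here's a back-of-the-envelope model of the two approaches for a single HTTPS request to a brand-new origin. This is a toy sketch with illustrative numbers, not how WebPageTest or DevTools actually implement throttling:

```python
# Toy model: where each throttling approach adds delay for one HTTPS
# request to a new origin. All numbers are illustrative, not measured.

RTT = 0.150          # simulated round-trip time, in seconds
BANDWIDTH = 1.6e6    # simulated bandwidth, in bytes per second

def packet_level_time(response_bytes):
    """Packet-level throttling: every round trip pays the simulated RTT,
    including DNS, the TCP handshake, and the TLS handshake (~2 RTTs),
    before the request itself is even sent."""
    dns = RTT
    tcp = RTT
    tls = 2 * RTT
    request = RTT + response_bytes / BANDWIDTH
    return dns + tcp + tls + request

def request_level_time(response_bytes, real_connection_cost=0.020):
    """Request-level (DevTools-style) throttling sits above the
    connection layer, so DNS/TCP/TLS run at the machine's real (fast)
    speed; only the request/response itself is delayed."""
    request = RTT + response_bytes / BANDWIDTH
    return real_connection_cost + request

css = 30_000  # a 30 KB render-blocking stylesheet
print(f"packet-level:  {packet_level_time(css):.3f}s")
print(f"request-level: {request_level_time(css):.3f}s")
```

Even in this crude sketch, the connection setup that request-level throttling skips accounts for over half a second of difference at a mobile-like RTT.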
There's also something called simulated throttling, which is what Lighthouse uses. Simulated throttling doesn't apply any throttling during the page load itself. Instead, Lighthouse runs a test without any throttling applied, then uses a set of adjustment factors to estimate how that page load would have looked over a slow connection.
Applying packet-level network throttling requires being able to affect the entire operating system's network connectivity, which is why a simulated approach makes a lot of sense for a tool like Lighthouse: in most cases it can't alter the machine's network stack, so simulation is its next best option.
To learn more about Lighthouse's simulated throttling approach, the Lighthouse team has written up some interesting analysis comparing and contrasting simulated throttling with DevTools throttling and WebPageTest's network throttling.
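The core idea behind simulated throttling can be sketched very roughly as follows. Lighthouse's real model is considerably more sophisticated (it replays the observed request graph), so treat this as a conceptual illustration only:

```python
# Conceptual sketch of simulated throttling: observe a request on a
# fast, unthrottled connection, then estimate its cost under target
# network conditions after the fact. Numbers are illustrative.

TARGET_RTT = 0.150        # target round-trip time, in seconds
TARGET_BANDWIDTH = 1.6e6  # target bandwidth, in bytes per second

def simulate_request(observed_round_trips, transfer_bytes):
    """Re-cost one observed request under the target conditions:
    each observed round trip is charged the target RTT, and the
    transferred bytes are charged at the target bandwidth."""
    latency_cost = observed_round_trips * TARGET_RTT
    transfer_cost = transfer_bytes / TARGET_BANDWIDTH
    return latency_cost + transfer_cost

# e.g. a request that needed 2 round trips and moved 80 KB:
print(f"estimated time under throttling: {simulate_request(2, 80_000):.3f}s")
```

The appeal is that nothing on the machine has to change; the trade-off is that accuracy now depends entirely on how well the adjustment model matches real network behavior.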
On the other hand, since packet-level throttling applies to the underlying network, the impact of packet-level throttling can be felt on each of those processes while also maintaining any network-level HTTP/2 prioritization. The result is that packet-level throttling is a much more accurate representation of real network conditions.
The impact might sound academic, but let's dive into some specific examples where the type of throttling may lead you to very different conclusions.
Minimizing the Impact of Third-Party Domains
One frequent bit of advice, particularly since HTTP/2 came along, has been to self-host third-party resources whenever you can. If you can't self-host them, then proxy them using a solution like Fastly's Compute@Edge, Cloudflare Workers, or Akamai EdgeWorkers, so that the time to connect to those other domains is handled at the CDN level, where it can likely happen much faster.
But just how big is the impact, really?
The following screenshot of a page loaded on an emulated Moto G4 over a 4G connection shows three different third-party domains, all serving up render-blocking resources.
In this case, we've applied DevTools throttling. Notice how the connection cost (TCP + DNS + TLS) for these resources doesn't seem particularly high:
- cdn.shopify.com: 18ms
- use.typekit.net: 37ms
- hello.myfonts.net: 18ms
Here are the same requests with the same settings, only this time we've applied WebPageTest's default packet-level throttling.
The connection costs are much more expensive:
- cdn.shopify.com: 550ms
- use.typekit.net: 549ms
- hello.myfonts.net: 545ms
If we were looking at the results with DevTools throttling applied, we might conclude the cost of the third-party domain is pretty light (what's 20-40ms, after all?) and, as a result, that any efforts to self-host those resources could be more work than it's worth.
With packet-level throttling, however, we see the reality: those connection costs are an order of magnitude more expensive, costing us around 550ms. Self-hosting here makes a lot more sense—an improvement of half a second or more in page load time is likely very worth the time and energy it would take to fix it.
Just to re-emphasize the point, notice how if we exclude the connection costs, the actual download times for these requests are pretty close. For example, without the connection costs, the CSS requested from Shopify takes 185ms to retrieve with packet-level throttling and 176ms to retrieve with DevTools throttling. That's because DevTools throttling is able to be applied at that stage of the request, so we see that throttling in action here. Matt Zeunert's article on throttling does a good job of highlighting this as well.
Masking the cost of redirects
Unpkg is a popular CDN for node modules (HTTP Archive data currently discovers it on 129,324 sites). If you want to pull in a library (like React, for example), you pass the package name, version number, and file path in the URL.
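The fully versioned URL format looks like this (the package, version, and file path below are illustrative):

```
https://unpkg.com/:package@:version/:file
https://unpkg.com/react@18.3.1/umd/react.production.min.js
```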
Alternatively, you can omit the version entirely, or use a semver range or tag. When that happens, unpkg will redirect the request to the latest matching version of the file.
For example, a request that specifies only a semver range will get back a 302 redirect pointing to the fully versioned URL of the latest matching release.
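To illustrate, a request and redirect pair might look something like this (the version unpkg resolves to changes over time, and the exact response headers here are approximate):

```
GET https://unpkg.com/react@18/umd/react.production.min.js

HTTP/1.1 302 Found
Location: /react@18.3.1/umd/react.production.min.js
```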
That redirect is a handy way to pull in the latest version of a library automatically, but it's also expensive. It means the browser has to first issue the request, wait for the response, process the redirect, and then issue a new request.
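Counting round trips makes the cost visible. The sketch below uses the same toy accounting as before, with illustrative numbers rather than anything measured from a real page:

```python
# Back-of-the-envelope cost of a redirect under packet-level
# throttling. All numbers are illustrative.

RTT = 0.150  # simulated round-trip time, in seconds

def direct_request():
    """One request/response exchange on a warm connection: one round trip."""
    return RTT

def redirected_request(new_connection=False):
    """The browser pays a full round trip to receive the 302, then
    issues the real request. If the redirect target lives on a domain
    that needs a new connection, DNS + TCP + TLS (~4 more round trips)
    get added on top."""
    redirect = RTT
    connection = 4 * RTT if new_connection else 0
    return redirect + connection + direct_request()

print(f"direct:                    {direct_request():.3f}s")
print(f"redirect, same connection: {redirected_request():.3f}s")
print(f"redirect, new connection:  {redirected_request(True):.3f}s")
```

A redirect at minimum doubles the latency cost of the request, and it gets far worse when a new connection is involved. Request-level throttling hides most of this, because the redirect round trips happen at the connection layer it can't see.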
With DevTools throttling, the impact looks minimal.
In the following screenshot of a truncated waterfall, the first group of requests (#12-14) all result in 302 redirects which trigger the actual requests (requests #27-29).
While this isn't ideal, the time it takes for those redirects looks pretty minimal—they all take under 20ms. DevTools throttling isn't applying any throttle to those redirects since they occur at the network level, so things don't look so bad. Based on what we see here, we might decide that eliminating the redirect is, at best, a minor improvement.
Here are the same requests on the same browser and network setting, but with packet-level throttling applied instead of DevTools throttling.
Now the redirects look much more expensive: instead of 17ms for the longest redirect, we're spending 1.4s!
The request order also changes: because packet-level throttling more accurately reflects connection costs, requests that carry those costs start downloading at different times than before.
Summing it up
Accurate network throttling is critically important when doing synthetic performance analysis, and a ton of work goes into getting it right. DevTools throttling tends to use more aggressive throttling factors to account for the fact that it's a little over-optimistic, and Lighthouse's simulated throttling has also been carefully calibrated against network-level throttling to make results as accurate as possible.
You don't need to be an expert in network throttling approaches to improve the performance of your site, but a basic understanding of how they work, and of the limitations of DevTools throttling in particular, can help you understand the differences in results across tools and, more importantly, help you avoid drawing the wrong conclusions about potential optimizations.
Tim Kadlec is the Director of Engineering for WebPageTest, a web performance consultant, and trainer focused on building a web everyone can use. He is the author of High Performance Images (O'Reilly, 2016) and Implementing Responsive Design: Building sites for an anywhere, everywhere web (New Riders, 2012). He writes about all things web at timkadlec.com (@tkadlec).