TLDR: if your site is Twitter, AirBnB, Apple, Spotify, Reddit, CNN, FedEx, or so many others then probably yes!
Appropriateness aside, sites built this way can suffer longer initial loading times due to the stepped nature of their content delivery, delayed requests for images and videos, and the time it takes an average device to process code after it's delivered.
Signs of Dependence
Let's look at some examples.
This WebPageTest filmstrip shows Twitter’s Explore page loading on a 4G connection on a mobile device in Chrome, at 1-second intervals.
Notice how the initial filmstrip keyframes display a loading image (the blue bird in this case), then an unpopulated placeholder page layout, and ultimately the real content. Ironically, that bird will register as the site's First Contentful Paint metric, but the actual page content will replace it much later. Apparently, humans aren't the only audience for visual loading tricks!
Here’s another example. This is AirBnB’s homepage loaded on a cable connection in Chrome on a desktop computer.
Here, the telltale “skeleton” loading screen is visible for about 5 seconds, and only after that page’s HTML is generated can the browser begin to discover and subsequently fetch the images that it will eventually populate the grid. Those grid images register as the site's Largest Contentful Paint (LCP) metric, one of Google's “Core Web Vitals”:
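That discovery delay is worth dwelling on. A browser's preload scanner can only request images that appear in the HTML it actually receives; images injected later by JavaScript are invisible to it until that JavaScript runs. Here's a toy model of that idea (a regex scan for illustration only, with made-up markup, not AirBnB's actual code):

```javascript
// Toy model of the browser's preload scanner: it can only discover and fetch
// images whose <img> tags are present in the HTML response itself.
function discoverableImages(html) {
  return [...html.matchAll(/<img[^>]+src="([^"]+)"/g)].map((m) => m[1]);
}

// Client-rendered page: the grid images only exist after app.js executes.
const clientRendered = '<div id="root"></div><script src="app.js"></script>';

// Server-rendered page: the same images are visible to the scanner immediately.
const serverRendered =
  '<div id="root"><img src="/photo1.jpg"><img src="/photo2.jpg"></div>';

console.log(discoverableImages(clientRendered)); // []
console.log(discoverableImages(serverRendered)); // [ '/photo1.jpg', '/photo2.jpg' ]
```

With the client-rendered markup, the image fetches can't even begin until the JavaScript has been downloaded, parsed, and executed, which is exactly the stepped delivery the filmstrips above show.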
Tradeoffs and How to Know When to Make a Change
Now, it’s very important to note that while the examples in this post helpfully display this pattern, the architectural decisions of these sites are made thoughtfully by highly-skilled teams. Web development involves tradeoffs, and in some cases a team may deem the initial performance impact of JS-dependence a worthy compromise for benefits they get in other areas, such as personalized content, server costs and simplicity, and even performance in long-lived sessions. But tradeoffs aside, it’s reasonable to suspect that if these sites were able to deliver meaningful HTML up-front, a browser would be able to render initial content sooner and their users would see improvements as a result.
And that situation tends to put us in a bind: it's one thing to suspect that a change will improve performance, and another thing to be able to see the impact for yourself. For many sites, the changes involved in generating HTML on the server instead of the client can be quite complicated, making them difficult to prototype, let alone change in production.
It's hard to commit to big, time-consuming changes when you don't know whether they will help...
Enter, WebPageTest Opportunities & Experiments
One of my favorite parts of WebPageTest's new Opportunities & Experiments feature is that it can diagnose this exact problem and reliably predict just how much a fix could improve performance.
Here it is on Twitter’s result:
…and on AirBnB’s respectively:
Those observations come free to any WebPageTest user as part of any test run, and we're constantly refining and adding more diagnostics to that page. In addition, a particularly novel companion to those observations will be offered to users with access to WebPageTest Experiments, which are part of the WebPageTest Pro plan.
Pro users who click that observation to expand it will be presented with the following experiment:
Once applied, that experiment will test the impact of delivering that site’s final HTML directly from the server at the start, allowing developers to understand the potential impact of making that change before doing any of the work!
The Mimic Pre-Rendered HTML Experiment
Like all WebPageTest experiments, this experiment works by testing the performance of making one (or many) changes to a live site mid-request using a special proxy server and comparing the performance of that test to an identical test that does not modify the site at all. These two groups of test runs are called the Experiment and the Control of a WebPageTest experiment, and we typically encourage users to run at least 3 of each to get a good median run. To make the comparison as fair as possible, WebPageTest runs both the experiment and the control through its experiments proxy server, either making changes on the fly or simply passing requests directly through, respectively. That last part is important because simply proxying a site can impact its performance at least in subtle ways, so it's best not to compare a proxied test to an original unproxied test. With Experiments, our aim is to ensure that the only difference we’re measuring between the experiment and the control is the optimization itself.
The changes that the Mimic Pre-Rendered HTML experiment makes come down to one interesting swap, using some special information collected in the original test. As of this summer, every test run on WebPageTest captures the final state of a page’s HTML (or, technically, its DOM) and stores it as part of a test's data. When the initial page is requested during the pre-render experiment, the proxy fetches that page’s stored final HTML and replaces the site's initial HTML response body with that final HTML text as it passes it along to the browser. While not always perfect, for many sites this experiment should reveal the potential performance benefits of an actual implementation in just one click.
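To make that swap concrete, here's a hypothetical sketch of the substitution step (this is not WebPageTest's actual code, and the response shape is an assumption for illustration):

```javascript
// Hypothetical sketch of the proxy's swap: replace the initial HTML response
// body with the final-DOM snapshot captured during the original test run.
function mimicPreRenderedHtml(response, finalDomSnapshot) {
  // If no snapshot was captured, pass the original response through untouched.
  if (!finalDomSnapshot) return response;
  return {
    ...response,
    headers: {
      ...response.headers,
      // The length header must describe the new body, or the browser could
      // truncate or over-read the response.
      'content-length': String(Buffer.byteLength(finalDomSnapshot, 'utf8')),
    },
    body: finalDomSnapshot,
  };
}

const original = {
  headers: { 'content-type': 'text/html' },
  body: '<html><body><div id="root"></div></body></html>',
};
const snapshot =
  '<html><body><div id="root"><h1>Loaded content</h1></div></body></html>';
const swapped = mimicPreRenderedHtml(original, snapshot);
console.log(swapped.body === snapshot); // true
```

The browser receiving the swapped response has no idea the HTML wasn't generated on the origin server, which is what makes the comparison meaningful.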
Predicting the Benefits of Serving Useful HTML
Let’s look at Twitter first. Running the Mimic Pre-Rendered HTML experiment on Twitter’s Explore page gives us the following initial results.
At first glance, we can see the huge, expected impact of meaningful HTML in the comparison video on the top right, where the page is fully populated with content at 3.4 seconds, down from the original time of 12 seconds.
Notably, a couple of metrics are slower in the experiment run: start render and First Contentful Paint. But that's only because the control site happens to render its bird image very early, and the experiment doesn't render its real content quite as soon as that bird appears.
A huge improvement! But it’s actually even huge...r
More good news! Just beneath the experiment results, WebPageTest added a note telling us that there were notable initial response time differences between the experiment and the control run. Specifically, the experiment took a little longer to arrive at Time To First Byte. This can happen with any experiment due to common network variance or inconsistent server response times, and sometimes it can highlight server issues worth looking into.
But with the pre-render HTML experiment, the variance is expected because the proxy task itself takes a little time to apply mid-flight, given that it requires making a request for that final HTML.
Delays like this that occur as a result of our proxy tasks are not useful in a comparison and they wouldn't likely exist in a real implementation of the technique on a live site. For that reason, whenever server timing varies by more than 100ms WebPageTest offers a link to view the experiment results with each run's first byte times ignored. With that link, we can see the metric differences more fairly, as if the experiment and control had delivered their initial HTML at the same moment.
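The adjustment itself is simple arithmetic. Illustrative sketch only (the numbers below are made up, not taken from the actual test results): subtracting each run's Time to First Byte compares the two runs as if both servers had delivered their initial HTML at the same moment.

```javascript
// Normalize a metric by ignoring each run's first-byte time, so proxy-induced
// TTFB variance doesn't skew the comparison.
function normalizedDelta(control, experiment, metric) {
  const controlValue = control[metric] - control.ttfb;
  const experimentValue = experiment[metric] - experiment.ttfb;
  return controlValue - experimentValue; // positive means the experiment is faster
}

// Hypothetical runs, in milliseconds: the proxy added ~600ms to the experiment's TTFB.
const control = { ttfb: 300, lcp: 10200 };
const experiment = { ttfb: 900, lcp: 1500 };
console.log(normalizedDelta(control, experiment, 'lcp')); // 9300
```

Without the normalization, the raw LCP difference would under-credit the experiment by exactly the extra time the proxy spent fetching the snapshot.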
Wow! Now that we've normalized the experiment's response time, we’re looking at a 9.32 second improvement in Largest Contentful Paint for new visits to that page on Chrome/mobile with a 4G connection speed.
Just for fun, here’s that experiment shown head-to-head in a real-time video (Note: this comparison video does not include the TTFB normalization above, so render times appear a little later than they ideally would).
Here’s the same experiment on desktop/Chrome as well, which is also dramatic, with an LCP more than 6 seconds earlier.
By now, we’ve probably done enough to be able to understand the impact of this optimization, but it would be possible to refine the experiment further to eliminate some unhelpful noise. For example, an artifact in this experiment’s results comparison shows that the experiment had 1 additional render-blocking request that was not present in the control. This is peculiar and likely the result of the final HTML snapshot containing link or script elements that were originally added dynamically (and thus non-blocking), yet appear to be render-blocking when viewed as static output. A quick glance at the experiment's request waterfall confirms that a Google account stylesheet is to blame, shown with a render-blocking indicator on row 2:
In a real implementation, that blocking request would not exist in the HTML at load time, so you may choose to refine the experiment further by removing it from the source. For now though, our result is so dramatic that we don't need to reduce the noise any further. Regardless, this situation is a helpful reminder to take these comparisons with a grain of salt. They are a good prediction of how an optimization would apply, but keep an eye out for artifacts that can sometimes skew the results.
Let's move on!
Experimenting on AirBnB
Running the Mimic Pre-Rendered HTML experiment on AirBnB’s homepage gives us the following results (adjusted for differences in proxy timing, once again).
How About a Few More Sites?
Here’s Spotify (with an almost 7-second improvement in LCP on Desktop Chrome cable):
Here’s Reddit (5.38s LCP improvement on 4G Mobile Chrome):
Here’s Apple.com (4.0s improvement in start render and LCP on 4G Chrome mobile):
Here’s FedEx.com (4 second faster start render on 4G Chrome mobile):
Here’s one from my favorite local ice cream shop (8.47s faster LCP on 4G Chrome mobile):
So many wins! As this post demonstrates, serving useful HTML up-front is faster than client-side generated HTML, often by a lot. And there are many other reasons useful HTML is better too! Accessibility is a big one. The moment a site becomes interactive, that is, when a page not only looks usable but actually is usable from a user-input perspective, is often the moment that a site becomes accessible to assistive technology like screen readers.
Okay... I'm convinced. How do I do it?
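One common approach (a minimal sketch, not the only way) is to render the page's content to an HTML string on the server, so the first response is already meaningful, and then let client-side JavaScript hydrate it for interactivity. Frameworks like Next.js, Remix, and Astro handle this for you; the hand-rolled function below just shows the shape of the idea, with a made-up `items` data structure:

```javascript
// Minimal server-side rendering sketch: produce meaningful HTML up-front so
// the browser can discover content and images in the very first response.
function renderPage(items) {
  const list = items
    .map((item) => `<li><img src="${item.image}" alt="${item.name}"></li>`)
    .join('');
  return `<!DOCTYPE html>
<html>
<body>
  <ul id="grid">${list}</ul>
  <script src="/hydrate.js" defer></script>
</body>
</html>`;
}

console.log(renderPage([{ name: 'Cabin', image: '/cabin.jpg' }]));
```

The `<img>` tags now sit in the initial HTML where the preload scanner can find them, and the deferred script can attach interactivity without blocking first render.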
Thanks for reading!
I hope this post makes it clear that serving meaningful HTML can be one of the absolute best things you can do for a site's performance. WebPageTest Experiments are designed to help us understand which changes are worth our effort before we do any of the work, and the “Mimic Pre-Rendered HTML” experiment is a particularly great example of that value.
The more information we have, the more informed decisions we can make!
Thanks for reading!
Scott Jehl is a Senior Experience Engineer at Catchpoint who cares about creating fast, compelling digital experiences that can be delivered to the broadest possible audience. He is the creator of the Lightning-Fast Web Performance Course, author of Responsible Responsive Design (A Book Apart, 2014), and co-author of Designing with Progressive Enhancement (New Riders, 2010). He is also a frequent speaker at web conferences around the world. More at scottjehl.com and @scottjehl.