Not Just a Pretty Face: Performance and the New Yahoo! Search

Today we announced the new Yahoo! Search Results Page, which ships with a wide array of rich new features. What you might be surprised to learn is that the new design is actually a little faster than the original. Through diligent use of modern performance techniques, we not only held the line on the total page size and number of HTTP requests, but we also made a number of improvements to the load time of the page. Now that you've seen the new search results page, let's walk through some of the performance considerations that shaped the new template.

Code Refactoring

Any sweeping design change like this is a great opportunity to refactor, and we took full advantage, rebuilding the Yahoo! Search page's HTML, CSS, and JavaScript foundation from scratch. If you think of the template as a shell that wraps the "10 blue links" in the center of the page, all the markup around the middle content well has been rewritten. This allowed us to get rid of old cruft and take advantage of quite a few new techniques and best practices, reducing core page weight and render complexity in the process.

As just one quick example of what's new, our search page now uses CSS image flipping. Rather than including separate images in our sprite for up and down arrows, we actually only include a down arrow. To generate an up arrow for all A-grade browsers, we use vendor-provided CSS hooks:

-moz-transform: rotate(180deg);
-webkit-transform: rotate(180deg);
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=2);

The actual byte savings are small, but every little bit counts, and this was relatively easy to implement.
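In context, the flipped arrow might look like the following. The class names and sprite offset here are hypothetical, not the production ones:

```css
/* Hypothetical sketch: the sprite contains only a down arrow. */
.arrow {
    background: url(sprite.png) no-repeat -16px 0; /* down arrow cell */
}

/* Flip the same sprite cell to get an up arrow — no extra image bytes. */
.arrow-up {
    -moz-transform: rotate(180deg);    /* Firefox */
    -webkit-transform: rotate(180deg); /* Safari, Chrome */
    filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=2); /* IE */
}
```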

We also took this opportunity to improve page structure and accessibility. As far as we're concerned, the idea that you have to create a separate experience for accessibility is a fallacy; we believe you can write accessible markup without hurting performance. A key improvement in the new design is simply better document structure using <h1>, <h2>, and <h3>, which enables screen readers to navigate the page more easily. We've also added better keyboard interactions, such as making sure the first tab key press takes you directly to the search box instead of hitting navigation links, and enabling CTRL-SHIFT-DOWN to jump past the header and sidebar and put focus on the first web result.
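A handler for that keyboard shortcut might be wired up roughly like this. The element id and function names are hypothetical sketches, not the production code:

```javascript
// Pure predicate: does this keyboard event ask to jump to the first result?
function isJumpToResults(evt) {
    return evt.ctrlKey && evt.shiftKey && evt.keyCode === 40; // 40 = down arrow
}

// In the page, a listener would use the predicate to move focus past the
// header and sidebar. (Guarded so the sketch also loads outside a browser.)
if (typeof document !== 'undefined') {
    document.onkeydown = function (evt) {
        evt = evt || window.event;
        if (isJumpToResults(evt)) {
            var first = document.getElementById('first-web-result');
            if (first) {
                first.focus();
            }
        }
    };
}
```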

Data URI Images

The new design incorporates several subtle, repeating gradients, which look great but can be absolute performance killers. To help alleviate this problem, we took advantage of a somewhat obscure feature, supported by all modern browsers, called data URI images. This technique enables you to embed the encoded data for individual images directly into your CSS. It has been around for a while, but only recently became widely supported enough to use in production.
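In the stylesheet, an embedded gradient looks something like this. The selector is hypothetical and the base64 payload is a truncated stand-in, not real image data:

```css
/* A 1px-wide gradient strip, base64-encoded and inlined in the CSS.
   No separate HTTP request is made for this image. */
.hd-gradient {
    background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...);
    background-repeat: repeat-x;
}
```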

Data URI images enabled us to avoid the extra sprite weight associated with repeating gradients, while at the same time improving perceived performance by avoiding the "pop-in" effect that you sometimes see with template images. In a traditional CSS file that refers to external images, the browser loads the CSS, parses the CSS, and starts rendering the page. Each image reference in the CSS spawns a new HTTP request. Depending on your connection speed, the page might have already rendered by the time the image returns, which makes the image appear to suddenly pop into the page. Data URI images helped us eliminate this pop-in effect entirely and significantly reduce the number of HTTP requests.

To maintain backwards compatibility, we provided a separate gradient-only sprite for IE6 and IE7. This means that those browsers encounter slightly worse performance than more modern browsers, but the net effect is still an overall win. Of course, managing a split code base is a little risky. Many sites prefer to do this at runtime, using conditional comments or other techniques. In our case, the overall difference is actually pretty small — our build tools push the right static resources to our CDN, and our frontend does browser sniffing and swaps in the right CSS file.
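For reference, the runtime alternative using conditional comments might look like this, with hypothetical file names: all browsers load the data URI stylesheet, and IE7 and below additionally load a sprite-based override.

```html
<link rel="stylesheet" href="core-datauri.css">
<!--[if lte IE 7]>
<link rel="stylesheet" href="core-sprites.css">
<![endif]-->
```

The trade-off is that older IE still downloads the data URI bytes it can't use, which is part of why we chose server-side swapping instead.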

Semantic Page Flushing

Rather than waiting until the server generates the entire page and then sending everything at once, we send the page to the client in three semantically meaningful chunks, which enables the browser to start rendering the page and requesting static resources more quickly.

  1. The first chunk includes the page header and search box, and is sent before we even request search results from the backend. This enables the browser to begin downloading static resources while our server is still processing the search request.
  2. The second chunk includes all the visible page content and ads, but no JavaScript. This enables the user to instantly begin scanning and interacting with their search results before the browser downloads and executes any JavaScript code.
  3. The final chunk includes the JavaScript that adds rich but non-critical functionality like Search Assist and Search Pad.
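The three-chunk flow above can be sketched as an async generator; in production, each yielded chunk is flushed over the HTTP connection as soon as it is produced. The render functions and backend call here are hypothetical stand-ins, not our actual templates:

```javascript
// Hypothetical stand-ins for the real template pieces.
function renderHeader() {
    return '<!DOCTYPE html><html><head>...</head><body><div id="hd">search box</div>';
}
function renderResults(results) {
    return '<ol>' + results.map(function (r) { return '<li>' + r + '</li>'; }).join('') + '</ol>';
}
function renderScripts() {
    return '<script src="assist.js"></script></body></html>';
}

// Chunk 1 is yielded before the backend is even asked for results, so the
// browser can fetch static resources while the backend is still working.
async function* renderSearchPage(query, fetchResults) {
    yield renderHeader();                     // chunk 1: header + search box
    const results = await fetchResults(query);
    yield renderResults(results);             // chunk 2: visible content, no JS
    yield renderScripts();                    // chunk 3: deferred JavaScript
}
```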

The net effect is that the user sees the page loading and can begin interacting with it much sooner. By sending the browser the info it needs to download static components as early as possible, we also reduce overall round-trip time.

Note that the old design also used semantic page flushing, but not quite as aggressively. The key difference is that in the previous design, the first chunk only included markup up to the <head>. By refactoring our backend logic, we were able to push the chunk boundary down into the <body> and include key visual components such as the page header and search box. Getting that visual markup down the wire early creates the perception that the page is loading, that "something is happening."

Lazy Loading

The core JavaScript and CSS used on the search results page (SRP) now load in two distinct chunks. The first chunk includes only the bare minimum CSS and JavaScript required to render 100% of search result pageviews, so that the base experience loads as quickly as possible. The second chunk includes additional (but heavy) functionality such as Search Assist and Search Pad. We also load additional chunks of CSS and JavaScript for shortcuts and other dynamic features only as necessary, ensuring that we never waste time loading code that we're unlikely to need.
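The on-demand loading can be sketched as a memoizing loader: each named chunk is fetched at most once, no matter how many features request it. The names and loader function here are hypothetical; the real loader would inject a <script> tag and resolve once it executes:

```javascript
// Cache of chunk-name -> Promise, so each chunk is requested at most once.
const loadedChunks = {};

function loadChunk(name, loader) {
    if (!loadedChunks[name]) {
        loadedChunks[name] = loader(name); // returns a Promise; cached for reuse
    }
    return loadedChunks[name];
}

// Usage sketch: only pay for Search Pad when the user actually invokes it.
// searchPadButton.onclick = function () {
//     loadChunk('searchpad', injectScript).then(startSearchPad);
// };
```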

As a search site, we can get away with heavy use of lazy loading because search usage is so focused: when users request a search page, they typically scan and click very quickly. As long as we make that core experience as fast as possible, we can defer other components until later. If your site has a different usage paradigm, you have to be more careful; you can't lazy load components that the user wants to interact with right away.

Designers and Engineers Agree: Performance First!

Beyond the technical considerations listed above, perhaps the most important factor was our philosophy that performance is everybody's problem, at all stages. Our frontend engineering team started thinking about and planning for performance even before the designs were past the rough draft stage. This enabled us to give early feedback to our User Experience Design (UED) department and work closely with them as they refined their designs.

Our designers had already taken into account many performance concepts they had learned over the years, such as sprite optimization. However, the new design uses gradients far more heavily than the previous design, which can get expensive — particularly if you stretch the gradients vertically or horizontally across the page.

Fortunately, our designers brought these considerations to us early, and we were able to brainstorm with them about how to use graphical components more efficiently. Once our designers understood some of the techniques we wanted to use and some of the limitations we had, they were able to knock out some absolutely gorgeous designs that still fit within our performance constraints. After those initial meetings, at each stage in the design our UED team would ping the performance engineers to ensure that we stayed within our performance targets. This close collaboration helped keep us from having to be reactive about performance.

In other words, new tricks and performance techniques only get you so far. Thanks to countless hours of hard work by individual designers and engineers, the new Yahoo! Search Page delivers far more functionality and design components in an even faster package. And we're still working hard with our designer colleagues to make the search experience even faster and more engaging over the coming weeks and months. If you have any questions about Yahoo! Search and frontend performance, we welcome your feedback!

Ryan Grove
Yahoo! Frontend Engineer

Stoyan Stefanov
Yahoo! Performance Engineer

Venkateswaran Udayasankar
Yahoo! Performance Engineer