
A Quick Survey of Website Performance Optimization Techniques and Methods


Introduction


To companies, a website is a business in and of itself. One that thrives and grows with a solid foundation, a resilient infrastructure, a deep understanding of its audience and its needs, a good suite of products, and an efficient way of showcasing, creating, and delivering them.

To users, a site is a product. A message. Part of a living ecosystem that’s a means and method of communicating and delivering this message. To make sure that content and services are effectively and efficiently served, it’s important to thoroughly understand what is being delivered, and how. In the end, a user sees only that outer layer; if what is communicated is not intuitively and quickly presented, they will move on to another site.

Whether creating a site from the ground up or improving one that is already established, there are many tools at a designer’s disposal to iteratively analyze delivery and fine-tune a user’s experience. Nothing should be taken for granted in terms of performance at any stage of a site’s development lifecycle; it’s important to stay on top of the evaluation curve even after deployment.

Sites like WonderNetwork, Loader, Load Impact, BlazeMeter, and Google PageSpeed provide external multi-platform emulators, site stress-testers, and performance analyzers, while server-side plugins for WordPress sites (perfmatters, CAOS, Query Monitor, WP Performance Analyzer, and F3, for example) assist in identifying bottlenecks and streamlining performance. On the client side, Google Chrome provides analytics in the ‘Network’ tab of its developer tools.

Once these issues have been identified, the question becomes: how does one fix them? Collected below are some ways to tackle these improvements and increase overall site responsiveness.

Image Optimization


According to the HTTP Archive (httparchive.org/interesting.php), as of March 1, 2018, roughly 78% of all content served (on average) was media, with nearly half coming from images alone. It’s apparent, then, that content management is a critical driver of a site’s performance. Proprietary software (e.g., TinyPNG and JPEGmini) is available to produce lightweight compression of images with little or no perceptible quality loss. These savings can be extended by taking advantage of responsive images via the HTML srcset and sizes attributes. Since there’s no reason to serve up large images for displays that are too small to accommodate them, media queries allow designers to pick and choose the right image for the job, controlling when and where each one is displayed to the user. By tamping down file sizes and dynamically choosing the optimal image size for the display, bandwidth usage is reduced and overall response times are decreased.
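As a quick sketch, a responsive image declaration might look like the following (file names and breakpoints are hypothetical examples):

```html
<!-- Hypothetical file names; widths in the srcset describe each file's intrinsic size -->
<img src="photo-800w.jpg"
     srcset="photo-400w.jpg 400w,
             photo-800w.jpg 800w,
             photo-1600w.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 50vw"
     alt="Product photo">
```

Given the slot size implied by sizes and the device’s pixel density, the browser selects the smallest candidate from srcset that still fills the slot sharply, so small screens never download the 1600-pixel file.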

Images, though, are not the only data delivered to a page; the HTML, CSS, and scripts necessary to render the site need to be transported as well. Whatever can be done to reduce the size of these files and increase the efficiency of their access will help the page run faster. Minification and gzip compression are two ways to accomplish this. Minification is a process by which unnecessary or redundant data (whitespace, newlines, comments, unused or redundant code, etc.) are stripped from HTML, JavaScript, and CSS. These resources can either be preprocessed or reduced on the fly; some utilities include HTMLMinifier, CSSNano, csso, and UglifyJS. Enabling gzip compression on the site’s server takes this even further: according to developers.google.com, gzip-ing data can reduce the size of transferred assets by as much as 90%.
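How gzip gets enabled depends on the server. As one illustration, a minimal sketch for nginx (directive values are reasonable defaults, not recommendations) might look like:

```nginx
# Sketch of enabling gzip in nginx; tune values for your own workload
gzip on;
gzip_comp_level 6;       # balance CPU cost against compression ratio
gzip_min_length 256;     # skip tiny files where savings are negligible
gzip_types text/css application/javascript application/json image/svg+xml;
```

Text-based assets (HTML, CSS, JS, SVG, JSON) compress well; already-compressed formats like JPEG and PNG should be left out, since re-compressing them wastes CPU for little gain. Apache offers the equivalent via mod_deflate.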

Render Blocking Resources


Another critical piece of the puzzle comes from understanding how the browser combines all of these elements (HTML, CSS, scripts, media, web fonts, etc.) to render the site. Anything that disrupts this orderly flow is referred to as a render-blocking resource. Fortunately, there are some basic structural changes that can be implemented to make loading more efficient. For CSS, reducing the size and number of files (minification and concatenation), properly placing style references (in the head of the page), and utilizing media queries all go a long way toward addressing this issue.

JavaScript benefits from the same procedures, though additional savings can be found by synchronizing its loading with other resources. Since pages load sequentially, timing is key, and that’s why it’s important to carefully consider how script references are handled, especially since they can take a long time to load. Unless a script is needed to set up behavior for other files down the road, the best place for script references is at the bottom of the page, just above the closing body tag. To ensure that script loading doesn’t block construction of the DOM (and that scripts have full access to all components on the page), two attributes, async and defer, can be added to script references. The async attribute allows a resource to continue loading in the background and then executes it as soon as it finishes loading. The only downside is when the sequence of scripts is essential for proper behavior; since smaller files finish loading first, scripts may execute in a different order than intended. If rigid script ordering is necessary, the defer attribute is the better option, since it ensures that scripts execute in the order specified in the page, after the document has been parsed.
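The two attributes can be mixed on the same page. A brief sketch (script names are hypothetical):

```html
<!-- Independent of page structure and other scripts: async is safe -->
<script async src="analytics.js"></script>

<!-- app.js depends on library.js, and both need the parsed DOM:
     defer downloads them in parallel but executes them in document order -->
<script defer src="library.js"></script>
<script defer src="app.js"></script>
```

In both cases the browser keeps parsing the HTML while the files download; the difference is purely in when, and in what order, the scripts run.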

Caching, HTTP Requests and Latency


Most sites are not static entities; the intent is for a user to stick around, and that usually means interacting with content. To that end, it’s important to look at the frequency and type of HTTP requests and see whether redundancy and latency can be addressed to improve the user’s experience. For mostly-static assets (i.e., ones that change less often than the user visits the site), browser caching is a key tool. Caching policy is communicated from the server through HTTP headers at page-load time, informing the browser about the ‘freshness’ of content, how and when to fetch new versions, and where to store them. Currently, there are four different types of cache header: cache-control, pragma, expires, and validators. Of these, cache-control and validators are the more modern, commonly used types. Cache-control allows every resource (not just the page) to define its own caching policy and provides details on the conditions and duration of storage. Validators, specifically ETags, allow for efficient resource updates by assigning unique tokens to assets that indicate whether they’ve changed since the last request; if they have, a download can proceed, and if they haven’t, the page can simply use the cached copy of the content.
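Put together, the exchange looks roughly like this (the resource name and token value are hypothetical):

```http
# First response: cache for a day, and tag this version of the asset
HTTP/1.1 200 OK
Cache-Control: public, max-age=86400
ETag: "33a64df5"

# After expiry, the browser revalidates with a conditional request
GET /styles.css HTTP/1.1
If-None-Match: "33a64df5"

# Unchanged: the server answers 304 with no body, and the cached copy is reused
HTTP/1.1 304 Not Modified
```

The 304 response is the payoff: only a few hundred bytes of headers cross the wire instead of the full asset.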

Infrastructure and CDNs


Lastly, giving the site the strength and versatility to serve its content is just as important as performance optimization. A fast web host is important for smaller sites, and engaging a content delivery network (CDN) for larger, content-heavy sites, even more so. CDNs reduce response latency by distributing static content (scripts, images, media files, etc.) to geographically diverse nodes, taking load off the site’s origin server. Additionally, many CDNs provide the optimizations noted above right out of the box (caching, gzip and file compression, content management, etc.), so a lot of the heavy lifting can be offloaded to them.

In the end, sites operate in a dynamic environment, and it’s important to continually stay ahead of the performance curve by monitoring and implementing solutions. The methods listed above offer a good starting point for exploring site optimization, with the ultimate goal of providing the best user experience possible.