Enhance your website performance

What makes web sites slow?

Whenever talk turns to the speed of web sites, the biggest trick usually advertised is to cut down on the file size of everything (this also leads to endless – and fruitless – discussions about the size of JavaScript libraries). In reality, there are many more factors that play a part in the initial response time of a web page:

  • The file size of the HTML document
  • The file size of the dependencies in the document (scripts, images, multimedia elements)
  • The complexity of the HTML (simpler pages are easier to render for the browser)
  • The speed of the connection of the user
  • The speed of third party servers as content may be pulled and included from them
  • The response time of the DNS servers resolving the domains and pointing you to these other servers
  • The responsiveness and speed of the visitors’ computer (how busy the machine is with other tasks, as that slows down the rendering in the browser)
  • The responsiveness of the server

These are the technical parts of the equation. Then there is also the human factor: visitors don’t consider a web page fully loaded until it has appeared, stopped “jumping around” and finished loading its images.

Things to do to make web sites faster

There are some well-known general best practices you can follow to overcome some of these technical and human factors and ensure a quickly responding web site:

  • Optimize all the HTML and dependencies as much as you can without losing quality (this can include stripping the HTML documents of any comments and superfluous line breaks, which should be part of the publication process – in order to keep sites maintainable you still need those in the source documents)
  • Reduce dependencies by using the least amount of file includes (collate several scripts into one include, use CSS sprite techniques to load all images at once)
  • Make sure that you don’t include third-party content straight from their servers: set up a script that caches RSS feeds locally and use that one instead (see the sketch after this list). The benefit is not only that you don’t have to deal with DNS delays but also that you are independent of the other server should it go down.
  • If possible, define dimensions for images and their container elements. This will ensure that the first rendering of the page will be correct and there won’t be any “jumping around” when the images are loading.
  • Include large dependencies such as massive scripts at the end of the document, as this means that the rest of the page gets shown before the browser loads them. Large JavaScript includes in the head of the document mean that the browser waits with rendering until they are loaded.
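
As promised above, here is a minimal sketch of such a local feed cache in JavaScript (Node.js); the feed URL, cache file name and cache lifetime are made up for the example:

// Rough sketch of caching a third-party feed locally (Node.js).
// The feed URL, cache file and lifetime below are hypothetical.
var fs = require('fs');
var https = require('https');

var FEED_URL = 'https://example.com/feed.rss';
var CACHE_FILE = 'feed-cache.xml';
var MAX_AGE_MS = 30 * 60 * 1000; // refresh the copy every 30 minutes

function serveFeed(callback) {
  // Use the cached copy if it is still fresh – no third-party request,
  // no DNS lookup, no dependency on the other server being up.
  if (fs.existsSync(CACHE_FILE)) {
    var age = Date.now() - fs.statSync(CACHE_FILE).mtimeMs;
    if (age < MAX_AGE_MS) {
      return callback(fs.readFileSync(CACHE_FILE, 'utf8'));
    }
  }
  // Otherwise fetch the feed once and store it for subsequent visitors.
  https.get(FEED_URL, function (res) {
    var body = '';
    res.setEncoding('utf8');
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      fs.writeFileSync(CACHE_FILE, body);
      callback(body);
    });
  });
}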

Best practices vs. special speed requirements

Unfortunately some of these tricks clash with what we consider best practices in web development. Cutting down on the number of included files, for example, impedes the maintainability of the product. In order to make it as easy as possible to maintain the look and feel of a site with different pages (home, articles, archive…) it does make sense to keep the different styles in their own includes and only add them to the pages that really use them. You could have one base CSS include and then one for the homepage, one for articles and so on.

The same applies to scripting – keeping methods that do the same job in their own JavaScript includes makes maintenance a lot easier, as you know immediately where to find a certain method without having to scan the whole script. Furthermore, adding scripts inside the body of the document is dirty, as it mixes the web development layers of structure and behaviour.

Luckily there are technical solutions for most of these problems.

Using single includes for several style sheets or scripts

One solution, written by Edward Eliot, is a PHP script that does the job of collating several scripts or CSS style sheets into a single file. In the case of JavaScript it even cuts down on the size of the script using Douglas Crockford’s JSMin. The script is dead easy to use and will cache the collated file for you until you change one of the files included in it. This means that your files are automatically packed and cached, and the include file is updated when you change them. You get the best of both worlds – maintainability and speed – without having to change anything by hand.
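
If PHP is not an option, the collate-and-cache idea itself is easy to replicate. The following is a rough JavaScript (Node.js) sketch of the same concept – it is not Eliot’s script, and the file names are made up for the example:

// Sketch: rebuild one combined include whenever a source file changes.
// 'base.js', 'homepage.js', 'articles.js' and 'all.js' are hypothetical.
var fs = require('fs');

var SOURCES = ['base.js', 'homepage.js', 'articles.js'];
var BUNDLE = 'all.js';

function bundleIsStale() {
  // Rebuild when the bundle is missing or any source is newer than it.
  if (!fs.existsSync(BUNDLE)) { return true; }
  var bundleTime = fs.statSync(BUNDLE).mtimeMs;
  return SOURCES.some(function (file) {
    return fs.statSync(file).mtimeMs > bundleTime;
  });
}

if (bundleIsStale()) {
  // Concatenate all sources into a single file; a minifier such as
  // JSMin could be run over the result here as well.
  var combined = SOURCES.map(function (file) {
    return fs.readFileSync(file, 'utf8');
  }).join('\n');
  fs.writeFileSync(BUNDLE, combined);
}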

Mission almost possible: tackling the onload problem

One other really big issue is that unless you embed your scripts in the body of a document, you have to wait until the document has finished loading before you can start them. This results in a slight delay and can cause problems.

The delay is caused by the way browsers load, parse and render documents. If you call your scripts with the onload event of the window, all of the following steps have to be finished first:

  • HTML is parsed
  • External scripts/style sheets are loaded
  • Scripts are executed as they are parsed in the document
  • HTML DOM is fully constructed
  • Images and external content are loaded
  • The page is finished loading

In a lot of cases this takes far too long, and the scripts need to run a lot earlier. Many clever web developers are tackling this issue, and every so often a new candidate solution gets released. Most JavaScript libraries have an onAvailable or onDocumentReady event handler that starts the script as soon as the relevant parts of the document are loaded rather than the whole lot, images included. In practical and admittedly hard-core testing with older browsers and operating systems, none of them turn out to be truly bullet-proof though. However, we are all on the case and with luck we’ll get there eventually.
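
If you want to try this without a library, a minimal sketch is to listen for the DOMContentLoaded event where it is supported and fall back to the load event elsewhere:

// Run init() as soon as the DOM is parsed, without waiting for images.
function init() {
  // ... set up the dynamic interface here ...
}
if (document.addEventListener) {
  // Fires once the document has been parsed; images may still be loading.
  document.addEventListener('DOMContentLoaded', init, false);
} else {
  // Older browsers without addEventListener: wait for the full page load.
  window.onload = init;
}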

For web applications, where premature activation of elements can result in a failure of the app, solving this is absolutely vital. If your problem is of a cosmetic nature, however, there might be a workaround.

Avoiding the on-load problem with on-demand pulling of content

Most cosmetic on-load issues are caused by overloading the document with far too much content. This could be massive amounts of text displayed in a tabbed interface, or a navigation that is four levels deep. With JavaScript enabled and executed without a glitch, this content can be navigated and displayed dynamically, in easily digestible chunks. Turn JavaScript off, however, and the whole unstyled document becomes a real pain to find your way through – and that is never a good plan. This extra content also adds unnecessarily to the page weight of the initial load.

The solution is to use JavaScript to load the content only when the clever interface can be offered to the user. Users without JavaScript would get a plain vanilla version that only has the most necessary elements and content.

Which technique you use to pull in this extra content will depend on what you are trying to include. The easiest option is to use a dynamically generated script tag. This is an old trick that was used to pull in large JavaScript data sets or scripts on the fly while the page was loading:


function pull(){
  // Create a new script element pointing to the large include...
  var s = document.createElement('script');
  s.type = 'text/javascript';
  s.src = 'largeJavaScriptBlock.js';
  // ...and add it to the head, which makes the browser load and run it.
  document.getElementsByTagName('head')[0].appendChild(s);
}
// Only start pulling once the rest of the page has been shown.
window.onload = pull;

This trick can also be used to include the output of APIs that support JSON, for example del.icio.us. As a JSON object is nothing but a chunk of JavaScript, you can pull it in with a generated script tag once the document has loaded and is displayed, and use it to replace an element’s content. The wrapper object Dishy allows you to do this easily. Another example is the unobtrusive Flickr badge, which uses the JSON output of Flickr to show your latest photos when JavaScript is available, but only a link to them when it is turned off.
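
In case you haven’t seen that pattern before, it looks something like the following sketch. The service URL, its callback parameter, the badge element and the shape of the returned data are all hypothetical – check the documentation of the API you actually use:

// Sketch: pulling JSON from a third party with a generated script tag.
// The URL, callback parameter, element ID and data format are made up.
function showData(data) {
  // The service wraps its JSON in a call to this function.
  document.getElementById('badge').innerHTML = data.items[0].title;
}
function pullData() {
  var s = document.createElement('script');
  s.type = 'text/javascript';
  s.src = 'https://api.example.com/latest?format=json&callback=showData';
  document.getElementsByTagName('head')[0].appendChild(s);
}
window.onload = pullData;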

In order to include non-JavaScript content you can use Ajax or AHAH or Hijax or whatever you want to call Ajax without the XML part! An example of this is the optional Ajax navigation, which goes even further: it only loads the more complex interface when the visitor asks for it.
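
A bare-bones sketch of that idea follows; the fragment URL and the target element are assumptions for the example:

// Sketch: pull an HTML fragment on demand (Ajax without the XML part).
// 'extra-content.html' and the 'extra' element are made up here.
function loadExtra() {
  var request = new XMLHttpRequest();
  request.open('GET', 'extra-content.html', true);
  request.onreadystatechange = function () {
    if (request.readyState === 4 && request.status === 200) {
      // Only now does the extra content enter the page.
      document.getElementById('extra').innerHTML = request.responseText;
    }
  };
  request.send(null);
}

Wired up as the click handler of a “show more” link, loadExtra() makes sure the extra content only ever travels over the wire when somebody asks for it.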

Imaging trickery

The last idea stems from a time that may just have been before you even started developing for the web! Netscape, the ill-fated (but IMHO at that time better) competitor to Internet Explorer during the browser wars, had a custom HTML attribute for images called ‘lowsrc’, which let you define an image of much smaller file size than the real one; it was loaded first and then covered by the real one as that arrived. This allowed you to give even users on ridiculously slow connections a preview of what was to come.

You can re-use that idea: rather than embedding large mood imagery in the page as it loads initially, use more stylized, lighter images that get replaced with the real ones once the page has loaded. Or you could go even further and only use background colours at first. You then use JavaScript and the DOM to load the real image when the document has finished loading and cover the preview with it.
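
A minimal sketch of such a swap – the ‘preview’ class name and the ‘-large’ naming convention are assumptions for the example:

// Sketch: replace lightweight preview images with the real ones
// once the page has loaded. The class name and the '-large'
// file naming convention are made up for this example.
window.onload = function () {
  var images = document.getElementsByTagName('img');
  for (var i = 0; i < images.length; i++) {
    if (images[i].className === 'preview') {
      // e.g. 'mood.jpg' becomes 'mood-large.jpg'
      images[i].src = images[i].src.replace('.jpg', '-large.jpg');
    }
  }
};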

This trick can also be immensely effective when you include lots and lots of smaller images from several servers (gravatars, for example), as these are not likely to be cached. Simply use a placeholder graphic initially and replace it with dynamically created images once the page has loaded.
