A Few More Notes on HTTP

Nov 05, 2007 15:20

Saturday, I attended BarCampMontreal3, which was quite fun. I figured that I should really practice my presentation skills, so Thursday, when I found out it was this Saturday (not the next one, as I had thought!), I had to find something to talk about.

I figured there would be a lot of web developers in the audience, and having noticed that a lot of web application platforms tend to disable many of the HTTP features that helped the web scale to the level it has today, I thought I could share a few tips on how to avoid busting bandwidth caps, deliver a better user experience, and generally avoid getting featured on uncov.

It was mostly well received (see the slides), although it felt a bit like a university lecture to some (maybe the blackboard Keynote theme didn't help, and I was also one of the few with a strictly educational presentation that was also technical). Marc-André Cournoyer writes that just one simple trick visibly improved his loading time, so it's not just for those who get millions of visitors! Since at least one person seemed to think it only matters at that scale, I guess I should clarify or expand on a few things...

When running a small web site, there are two things we are after: fast loading time, and keeping our bandwidth usage low (if you're small, you probably don't have the revenue to pay for big pipes).

The best thing possible is, of course, for your server not to get a request at all. This is actually quite easy to do: you tell the client for how long it can simply assume that the resource it asked for will not change. This is done by sending a Cache-Control header with a "max-age" directive, like this (the number is in seconds):

Cache-Control: max-age=3600

This used to be done with the "Expires" header in previous versions of HTTP, but since it takes an absolute date rather than a number of seconds, it is error-prone, and it is best avoided if you are generating these headers yourself (or you can use a well-known library to do it for you).
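To make this concrete, here is a minimal sketch (my own illustration, in Python, using only the standard wsgiref module) of a tiny application serving a stylesheet with a one-hour max-age:

    # Minimal sketch: serve a stylesheet with a one-hour max-age, so clients
    # (and shared caches along the way) can reuse it without asking again.
    from wsgiref.simple_server import make_server

    CSS = b"body { font-family: sans-serif; }"

    def app(environ, start_response):
        start_response("200 OK", [
            ("Content-Type", "text/css"),
            ("Content-Length", str(len(CSS))),
            ("Cache-Control", "max-age=3600"),  # one hour, in seconds
        ])
        return [CSS]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()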

The main problem with this approach is that we live in a fast-moving world, and we want things to be as up-to-date as possible. If the home page of a news site had the Cache-Control header I just gave, the load would be greatly diminished, but so would the usefulness of the site! But there are some things that do not change all that often: CSS and JavaScript files, for example.

But there is another approach that leverages caching without compromising freshness: cache validation. Here, the idea is that the web server gives out a small bit of information that the client can later use to validate its cache. If the client already has the resource, it can perform a "conditional GET", where the server only returns the data if the client's copy is deemed invalid. If the data cached by the client is still valid, the server replies with a "not modified" status code (304, if you need to know) and does not return any data. There is still the cost of a round-trip to the server, but this technique can cut down on bandwidth (as well as database usage, if you do it right) quite significantly.
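From the client's side, the exchange looks something like this small sketch (my own illustration, using Python's standard http.client module; the host, path and validator values are made up):

    # Rough sketch of a conditional GET: send back the validators remembered
    # from an earlier response, and only download the body if it has changed.
    import http.client

    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/style.css", headers={
        "If-Modified-Since": "Mon, 05 Nov 2007 10:00:00 GMT",  # from Last-Modified
        "If-None-Match": '"v42"',                              # from ETag
    })
    resp = conn.getresponse()
    if resp.status == 304:
        print("Not modified; reuse the locally cached copy (no body was sent)")
    else:
        body = resp.read()  # fresh content: store it, along with the new validators
        print("Got %d bytes" % len(body))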

This "small bit of information" can be either a last modification date, or an "entity tag" (ETag), which is literally a small string of your own choosing (note that both can be used at the same time, if you prefer). The last modification date is the one most people find the easiest to understand, but depending on your application, coming up with a last modification date could be difficult or less desirable. For example, a wiki application might only have a "latest version number" for a given wiki page, and would need a separate database query or an SQL join to get the modification date itself. In this case, the wiki application could use the version number as an entity tag to accomplish the same thing.
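As a small illustration of that, the validators themselves are cheap to produce; the wiki page version number and timestamp below are made up, of course:

    # Hypothetical wiki page data: a version number we can fetch cheaply,
    # and (maybe) a modification timestamp from another query.
    from email.utils import formatdate

    page_version = 42
    page_mtime = 1194256800  # Unix timestamp

    etag = '"%d"' % page_version                          # ETag: "42"
    last_modified = formatdate(page_mtime, usegmt=True)   # e.g. Mon, 05 Nov 2007 10:00:00 GMT

    headers = [("ETag", etag), ("Last-Modified", last_modified)]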

Cache validation is the most difficult part to implement, because it can require changing the structure of your application. What you need to do is split the handling of a request in two: the header part and the content part. In the header part, you generate the Last-Modified or the ETag (or both), then compare them with the ones sent by the client. If they match what the client sent, you can skip generating the content entirely and return a "304 Not Modified" response status instead. If they do not match, you keep going the normal way.
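Here is a sketch of what that split can look like (my own illustration, as a Python WSGI handler; current_version() and render_page() are just stand-ins for your own application code):

    # Sketch of the two-part handling: cheap header part first, expensive
    # content part only when the client's copy turns out to be stale.
    def current_version():
        return 42  # stand-in: e.g. one small database query

    def render_page(version):
        # stand-in for the expensive part (templates, joins, and so on)
        return ("<html><body>Wiki page, version %d</body></html>" % version).encode()

    def handle(environ, start_response):
        version = current_version()
        etag = '"%d"' % version

        if environ.get("HTTP_IF_NONE_MATCH") == etag:
            # The client already has this version: skip the content part entirely.
            start_response("304 Not Modified", [("ETag", etag)])
            return [b""]

        body = render_page(version)
        start_response("200 OK", [
            ("Content-Type", "text/html"),
            ("Content-Length", str(len(body))),
            ("ETag", etag),
        ])
        return [body]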

I heard that Ruby on Rails now has automatic support for ETags, which it generates by computing an MD5 digest of the rendered content. While this is better than nothing (it will definitely save on bandwidth), it is a bit brittle (if your page contains something like a "generated at" timestamp, say, the digest will change on every request), and by that point it has already expended all the effort of generating the page, only to throw it away at the end. Ideally, generating the Last-Modified or the ETag would only require a fraction of the effort of generating the whole page. But still, even this naive implementation can save you a significant amount of bandwidth!

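In Python terms, that naive whole-page digest looks roughly like this (a sketch of the idea, not of how Rails actually implements it; render_page() again stands in for your own rendering code):

    # Naive whole-page digest: the full rendering cost is paid on every request,
    # and only the sending of the body is skipped when nothing changed.
    import hashlib

    def render_page():
        return b"<html><body>...the whole, expensive page...</body></html>"  # stand-in

    def handle(environ, start_response):
        body = render_page()
        etag = '"%s"' % hashlib.md5(body).hexdigest()

        if environ.get("HTTP_IF_NONE_MATCH") == etag:
            start_response("304 Not Modified", [("ETag", etag)])
            return [b""]  # all that rendering work, thrown away (but bandwidth saved)

        start_response("200 OK", [
            ("Content-Type", "text/html"),
            ("Content-Length", str(len(body))),
            ("ETag", etag),
        ])
        return [body]
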
Another technique is simply to make your content smaller, so that less bandwidth is needed to send it. This can be done in various ways, ranging from minifying your CSS and JavaScript (for example, with Douglas Crockford's JSMin) to enabling on-the-fly compression on your web server.
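The compression part is little more than this sketch (using Python's standard gzip module, wrapped in a hypothetical helper; in practice, a server module such as Apache's mod_deflate will usually do this for you):

    # Sketch of on-the-fly gzip compression for clients that advertise support.
    import gzip

    def maybe_compress(environ, body, headers):
        accepted = environ.get("HTTP_ACCEPT_ENCODING", "")
        if "gzip" in accepted:  # a real implementation would parse q-values properly
            body = gzip.compress(body)
            headers.append(("Content-Encoding", "gzip"))
        headers.append(("Vary", "Accept-Encoding"))  # caches must keep both variants apart
        headers.append(("Content-Length", str(len(body))))
        return body, headers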

A very good resource to learn more about this is Yahoo!'s Exceptional Performance page, which has a list of rules to follow, and even an easy-to-use tool (based on the excellent Firebug) that tells you how your page is doing against those rules (their rule about ETags is a bit off, though: it usually only applies if you have a cluster of web servers, and it can be fixed in a better way than just turning them off). They in fact gave a presentation similar in spirit to mine at the last Web 2.0 Expo, of which they have a video on their site (slides). They even wrote a book on the subject!

syndicated, tech, scalability, barcampmontreal, barcampmontreal3, database, montreal, barcamp
