Improving site performance with cookie-free domains

By ACM on Sunday 25 April 2010 12:15
Category: Browsers

Tweakers.net implemented Lighttpd a few years ago as a means to off-load trivial static requests from our Apache servers. When we did that, we had to make a distinction in the URLs for those static elements (layout images, CSS, JavaScript) and decided to use a separate domain rather than a subdomain, allowing that domain to stay free of cookies. So we introduced 'tweakimg.net' instead of 'images.tweakers.net' or something like that.
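To illustrate why a separate domain matters more than a subdomain: a cookie scoped to '.tweakers.net' is sent along to the main domain and to every subdomain, but never to tweakimg.net. A minimal sketch of that behaviour using Python's cookie handling (the cookie name, value and the image URLs are made up for the example):

```python
# Minimal sketch (not our production code): a cookie scoped to '.tweakers.net'
# is attached to requests for the main domain and any subdomain, but never to
# requests for the separate tweakimg.net domain.
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()
jar.set_cookie(http.cookiejar.Cookie(
    version=0, name="session", value="abc123",          # made-up cookie
    port=None, port_specified=False,
    domain=".tweakers.net", domain_specified=True, domain_initial_dot=True,
    path="/", path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={},
))

for url in ("http://tweakers.net/",
            "http://images.tweakers.net/layout/logo.gif",  # hypothetical subdomain URL
            "http://tweakimg.net/layout/logo.gif"):        # the separate, cookie-free domain
    req = urllib.request.Request(url)
    jar.add_cookie_header(req)           # attaches the cookie only if the domain matches
    print(url, "->", req.get_header("Cookie"))
# The first two requests carry "session=abc123"; the tweakimg.net request carries None.
```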

A while later we also introduced the reverse proxy Varnish to further off-load "almost always static" content from our Apache servers, allowing content to be cached on that server's local disks and in memory, and further reducing both the number of requests handled by Apache and the number of reads from the NFS server. To further improve the user experience we, again, used a cookie-free domain. This time we introduced 'ic.tweakimg.net'.
This way we can trivially separate the requests over the three servers: we just write the correct URL in the HTML code and the user's browser does the rest. We don't have to do Layer 7 load balancing or similar tricks.
Obviously, the reverse proxy could also serve all that static content and thus save an extra DNS lookup, but having an additional hostname allows more parallel requests, so it probably doesn't matter much which we choose. It'll depend on the number of images loaded via ic.tweakimg.net, which can be quite a few on some pages, like a product listing or a forum topic with a lot of comments.

Yesterday, a user suggested that using cookie-free domains was completely useless. At first he didn't really seem to understand where the potential gains were made. In contrast to most optimizations, this one is aimed at the request rather than the response. By not having to send a bunch of cookies, the amount of data in the request can be reduced significantly.
In my personal case it will save 747 bytes, but for most, much less active, visitors it will probably be somewhere around 280 bytes. And if the cookies produced by Google Analytics aren't placed yet, it is further reduced to about 80 bytes. This is per request; to completely load the index page of tweakers.net you'll need 66 resources from tweakimg.net and ic.tweakimg.net.
For current broadband users in the Netherlands it is still very common to be limited to a 1 Mbit upload speed. So you'll end up uploading 16.5 KB of additional data, or in my case 49.3 KB. In a best-case scenario that is 0.165 seconds, or even 0.493 seconds, of wasted time.
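To put that arithmetic in one place, here's a quick sketch. The effective upload throughput of a 1 Mbit line is taken as roughly 100 KB/s, which is an assumption on my part (to account for protocol overhead) that matches the figures above:

```python
# Back-of-the-envelope check of the figures above. The ~100 KB/s effective
# upload rate for a 1 Mbit line is an assumption; with it, 747 bytes of cookies
# on each of 66 requests works out to the 49.3 KB and ~0.493 s mentioned above.
# The smaller cookie payload gives the 16.5 KB / 0.165 s figure the same way.
REQUESTS_PER_PAGE = 66            # resources loaded from tweakimg.net + ic.tweakimg.net

def extra_upload(cookie_bytes, requests=REQUESTS_PER_PAGE, upload_bytes_per_s=100_000):
    extra_bytes = requests * cookie_bytes      # cookie bytes repeated on every request
    return extra_bytes, extra_bytes / upload_bytes_per_s

extra_bytes, seconds = extra_upload(747)       # my personal cookie payload
print(f"{extra_bytes / 1000:.1f} KB extra upload, ~{seconds:.3f} s wasted")
```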

Obviously, transfer time doesn't scale linearly with request size; there are fixed and semi-fixed components to it. On the other hand, without cookies the request may shrink enough to fit within the 576-byte MTU some of our visitors' modems use, so in that case the gain can actually be super-linear.
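A rough sketch of that packet effect; the 400-byte cookie-less request is an assumed size rather than a measured one, and 40 bytes of IP/TCP headers are subtracted from each packet:

```python
# Rough illustration of the MTU argument, with assumed sizes (not measurements).
import math

def packets_needed(request_bytes, mtu=576, ip_tcp_overhead=40):
    payload_per_packet = mtu - ip_tcp_overhead     # usable bytes per 576-byte packet
    return math.ceil(request_bytes / payload_per_packet)

base_request = 400                                 # request line + headers without cookies (assumption)
print(packets_needed(base_request))                # fits in a single packet
print(packets_needed(base_request + 747))          # with my cookies it needs several packets
```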

Anyway, I took four different resources, a very small and a larger image and a small and a large CSS file, and benchmarked them using ab. My connection is a normal consumer ADSL line with 20 Mbit download and 1 Mbit upload.
I copied all the headers my browser sent to request those resources and varied the cookie size; ab was put in keep-alive mode and made 10 requests to the same resource per batch. The results are in the table further below; the times are in milliseconds and represent the average time per request.
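For illustration, each batch boiled down to an ab run roughly like the sketch below. I simply ran ab by hand, so this script, the URL and the synthetic cookie payloads (I haven't listed the exact sizes of the 'small' and 'large' cookies) are placeholders rather than the actual commands used:

```python
# Sketch only: drive ApacheBench (ab) with keep-alive, 10 requests per batch,
# and a Cookie header of a chosen size. URL and cookie sizes are placeholders.
import subprocess

URL = "http://ic.tweakimg.net/some/resource.png"        # placeholder resource
COOKIE_SIZES = {"none": 0, "small": 280, "large": 747}  # assumed sizes, for illustration

for label, size in COOKIE_SIZES.items():
    cmd = ["ab", "-n", "10", "-k"]                       # 10 keep-alive requests per batch
    if size:
        cmd += ["-H", "Cookie: filler=" + "x" * size]    # synthetic cookie payload
    cmd.append(URL)
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    for line in out.splitlines():                        # ab prints "Time per request: ... (mean)"
        if line.startswith("Time per request") and line.rstrip().endswith("(mean)"):
            print(label, "->", line.split(":", 1)[1].strip())
```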

| Resource | Cookies | 1st batch | 2nd batch | 3rd batch | 4th batch | Average | Difference |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Large image (22484 bytes) | None | 45.48 | 45.83 | 45.5 | 45.02 | 45.46 | |
| | Small | 53.27 | 47.58 | 54.07 | 54.42 | 52.33 | 6.87 |
| | Large | 57.33 | 56.83 | 57.48 | 57.38 | 57.25 | 11.8 |
| Small image (70 bytes) | None | 24.9 | 24.93 | 25.11 | 25.09 | 25.01 | |
| | Small | 27.42 | 27.55 | 27.68 | 27.06 | 27.43 | 2.42 |
| | Large | 31.3 | 31.43 | 31.47 | 31.28 | 31.37 | 6.36 |
| Large CSS (10010 bytes) | None | 33.2 | 33.61 | 33.6 | 33.49 | 33.47 | |
| | Small | 35.21 | 34.9 | 35.19 | 35.23 | 35.13 | 1.66 |
| | Large | 39.05 | 39.26 | 39.46 | 39.39 | 39.29 | 5.81 |
| Small CSS (989 bytes) | None | 26.16 | 26.11 | 26.35 | 26.19 | 26.2 | |
| | Small | 27.94 | 28.11 | 28.23 | 28.27 | 28.14 | 1.94 |
| | Large | 32.83 | 32.04 | 32.01 | 32.46 | 32.33 | 6.13 |


The last two columns contain the average of the batch averages and the difference with the no-cookie case. I later redid the first large image measurement, and it appears our servers vary a bit with that resource: in the rerun I saw increases of only 2.7 and 6.36 ms, so in the rest of the article I've ignored the larger 6.87 and 11.8 ms shown above.

As you can see, simply changing the cookie size can make quite a difference. The small cookies add at least 1.66 ms and in this case at most 2.71 ms to each request. With 66 requests, that results in 0.11 to 0.17 seconds of additional time. With the 5.81 ms to 6.36 ms for the larger cookies it's even 0.38 to 0.42 seconds extra. Although small, these numbers are actually starting to become perceivable.

So does it matter to have cookie-free domains for static content? Well, as with all these kinds of changes, it may help, and it varies per pageview. If all those resources are already in your browser's cache, it won't make much difference anymore. Perhaps the reduced request size allows the local storage to be slightly more efficient, but that's it. But if you hit F5, most responses are tiny "304 Not Modified" replies, and then the overhead of the cookies is a relatively large part of those request/response pairs.
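To make that last point concrete, here's a small sketch of such a refresh; the host, path and date are placeholders, not an actual resource of ours. A conditional GET that is answered with "304 Not Modified" carries no body at all, so the cookies in the request make up a relatively large share of the bytes that actually cross the line:

```python
# Sketch of a browser-style refresh: a conditional GET answered with
# "304 Not Modified" has an empty body, so request headers (cookies included)
# dominate the exchange. Host, path and date are placeholders.
import http.client

conn = http.client.HTTPConnection("tweakimg.net")
conn.request("GET", "/layout/logo.gif", headers={
    "If-Modified-Since": "Sun, 25 Apr 2010 00:00:00 GMT",  # pretend we cached it earlier
})
resp = conn.getresponse()
print(resp.status, resp.reason, "-", len(resp.read()), "body bytes")  # e.g. 304 Not Modified - 0 body bytes
conn.close()
```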

So it may help a little, and since we see no adverse effects, why not use a cookie-free domain rather than some subdomain to distinguish between the various servers? For the internal set-up it doesn't introduce any additional difficulty; Apache, Varnish and Lighttpd are all capable of handling this situation trivially.

We have introduced multiple effects with tweakimg.net and ic.tweakimg.net though, not just cookie-free domains. The additional domains allow a browser to open more parallel connections.
Furthermore, Lighttpd and Varnish are both slightly faster than Apache for these kinds of requests, and for many of the requests that are now handled by ic.tweakimg.net we skip some additional PHP processing. Besides, both are configured to be friendlier about keeping connections alive, introducing more potential gain while allowing us to tighten the configuration on Apache.
And on top of that, there is the gain of not having to upload that cookie data. Some additional caching layers (like local proxies) may profit a lot from this as well: responses to cookie-free requests can be served to other users too, whereas requests with cookies are normally considered private resources for a single user.