This worked about as well for me as it did for Firefox, when I implemented it a few years ago, after reading the YSlow presentations. I've said it three times on HN but I'll say it a fourth: watch one of the YSlow presentations, get their checklist, go down and tick off items on it. It is the easiest money you'll make in your life. Look at the improvements here: combining uncombined JS files and inlining CSS. These are tweaks that you can have coded, tested, and live in ten minutes or less.
As an aside: It is a little disturbing to me that I'm ahead of Mozilla on the Internet technology adoption curve.
Inspired by this comment, I just spent 8 hours re-optimizing one of my sites. I managed to cut loading time in half. We'll see if that has any effect on sales...
This would depend on your goals. If first-load speed is paramount -- for example, on the Firefox landing page or google.com -- then go with the inline CSS. If you anticipate (or, ahem, have measured) multi-page interactions from your users and can deal with a mild hit on the first page, minify, combine, and gzip into one CSS file.
I'm very much in group #2... everywhere except my landing pages.
The short answer is no. Even an extremely intense, CSS-heavy page might have, say, 16 kilobytes of CSS. After gzipping, that should turn into about 3 kilobytes. 3 kilobytes, inline with the rest of your content, is utterly negligible, and caching that would probably qualify as a micro-optimization, even though every instinct is telling you to put it in a nice, cacheable stylesheet.
It feels dirty not to cache it, but if you look at a download waterfall and see how much time that separate HTTP request takes (while blocking the display of the page), it is well worth it.
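If you want to sanity-check those gzip numbers on your own stylesheet, here's a quick sketch, assuming Node and a local style.css (the file name is a placeholder):

    // Rough sanity check of the numbers above: gzip a stylesheet and compare sizes.
    // Assumes Node.js; "style.css" is a placeholder for your real stylesheet.
    const fs = require('fs');
    const zlib = require('zlib');

    const css = fs.readFileSync('style.css');   // raw stylesheet bytes
    const gz  = zlib.gzipSync(css);             // roughly what Apache/nginx would send

    console.log('raw:     ' + css.length + ' bytes');
    console.log('gzipped: ' + gz.length + ' bytes');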
So what you would need is a bit of logic that applies your site-wide css to your landing page html on the server side and sends that to the browser, either on demand or by statically producing a transformed html page. Does anybody know about something like that? "html css server" are not really the terms to enter into a search engine if you want specific results...
There's probably not anything like that because it's a fairly rare situation. A particular company is only going to have one page like that, so it'd be easier just to write the custom CSS for that page.
No, you really just want to use inline CSS. Is it really worth all of the trouble just to cache 3 kilobytes of data (probably 20 milliseconds of download time)?
That is what I said, yes, except that I wanted a tool that does the inlining. But as another comment said, such a tool would probably not be worth the hassle anyway.
I'm not totally sure what you mean by tool. Whatever framework you're using surely can include an external file, and if not, even Apache itself can handle this via SSI. It really depends what your web app looks like.
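For what it's worth, the "bit of logic" from the grandparent doesn't have to be much. A minimal Node sketch, with hypothetical file names and a marker comment in the template:

    // Minimal sketch of server-side CSS inlining in Node.js; file names are placeholders.
    // landing.html contains a literal "<!-- INLINE_CSS -->" marker where the styles go.
    const fs = require('fs');
    const http = require('http');

    const css = fs.readFileSync('site.css', 'utf8');
    const template = fs.readFileSync('landing.html', 'utf8');

    // Do the substitution once at startup -- the "statically producing a
    // transformed html page" option mentioned above.
    const page = template.replace('<!-- INLINE_CSS -->', '<style>' + css + '</style>');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(page);
    }).listen(8080);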
Not if it's the first time that they've visited your site, and in the case that Mozilla are optimising for (a single landing page for people on IE who might want to switch to FF) the first impression counts. Apparently by quite a lot if 15% is anything to go by :)
It's clear that a benefit was seen. Moreover, it's clear that in other cases (shopping at Amazon, searching at Google), more responsive servers mean an increase in traffic, since people are free to shop / browse more, and it keeps people's interest.
In the case of a single site with a single product / purpose (ignoring Mozilla's other products), is there a generally agreed-on explanation for what is going on here? Do people really change their minds about something as big as switching web browsers because of a delay of one to two seconds? Are these people marginal users who are unlikely to actually finish installing it, or likely to abandon the browser after a single use?
I'd like a peek into the psychology of the marginal people in a scenario like this.
I often conclude that the people running websites are doofuses, and thus their product may be a no-good product, if the website doesn't meet the universal usability standard of speedy page views. That's a simple quality proxy heuristic.
Finally! It looks like someone actually did some simple math to determine if their results were statistically meaningful. I wish more people would include a section like this when they talk about conversion rates.
From the post: "Running a one-sided Student's t-test with a means difference of 14%, our experimental data yields a P value of 0.000051. This means that there is only a 0.0051% chance that we would obtain a 14% (or greater) improvement if the real effect wasn't at least this large."
Err... Student's t doesn't assign a probability to P(measured >= 14 | actual < 14); it assigns one to P(measured >= 14 | actual = 0), i.e., the chance of seeing a difference this large under the null hypothesis of no effect at all (plus an equal-variances assumption). So the post's gloss of the p-value isn't quite what the test actually says.
I figure that the actual effect is 16.05% +- 0.17%. And that's simply by assuming 145k is close enough to infinity, and for IE.html only.
That is, unless I'm mistaken about this, since I grok only the fairly basic stuff. Which is certainly plausible in and of itself.
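For concreteness, here's the back-of-the-envelope version I mean -- a plain normal-approximation interval for a download rate, with made-up counts (the real numbers would come from the post), assuming Node:

    // Back-of-the-envelope 95% interval for a download rate, normal approximation.
    // The counts below are made up for illustration; substitute the real ones.
    const downloads = 23300;    // hypothetical number of downloads
    const visitors  = 145000;   // sample size ("close enough to infinity")

    const p  = downloads / visitors;                // observed rate
    const se = Math.sqrt(p * (1 - p) / visitors);   // standard error of a proportion
    const z  = 1.96;                                // ~95% under the normal approximation

    console.log('rate: ' + (100 * p).toFixed(2) + '% +/- ' +
                (100 * z * se).toFixed(2) + '%');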
I don't know if this talk was ever made public, but I heard at Google that under some circumstances cutting a few hundred milliseconds of latency can double click-through, conversions, etc., or even more. I do occasionally see Google people encouraging web developers to make their pages faster at conferences and such. Personally I would be very concerned about the money I was losing if I had a web site where the 95th-percentile pageload for my paying customers took more than two seconds (measured from the client's perspective). I'd buy a bigger database machine, spread my servers to more geographical regions, whatever it took to try to bring that latency down.
Google's Marissa Mayer has presented their page-loading results a few times publicly: in one unplanned experiment, an extra ~.5 seconds of load time caused the number of user searches (and, by extension, clicks on ads and Google revenue) to decline by 20%. That was at the Web 2.0 Conference in 2006.
"I'd buy a bigger database machine, spread my servers to more geographical regions, whatever it took to try to bring that latency down."
Why make life hard? Buying a bigger database server costs actual money and time. Getting out the YSlow checklist and knocking two or three things off of it costs nothing and no significant amount of engineer time.
Gzip CSS/JS files: four lines in your Apache config, warm restart, done. Collapsing CSS/JS together: 16 characters (:cache => true) in Rails, warm restart, done. Spriting CSS images: open a web page, type a bit, copy/paste what it tells you into development, push to staging, verify it works, done.
Moreover, as compared to buying a DB or spreading your servers, these are virtually guaranteed to actually work. Many, many of the suggestions that come under the heading "performance improvements" do not actually work because they address things that are not problems. (For example, unless you have data which convincingly demonstrates differently, the vast majority of websites can assume that the web stack is not a problem. Optimizing your code within the web stack is generally a hideously expensive waste of time.)
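To give a sense of how small these jobs are, here's the "collapse your JS files" step as a throwaway Node script (the file names are hypothetical); in Rails the :cache => true option does this for you:

    // One-off build step: concatenate separate JS files into a single bundle so the
    // page makes one request instead of several. The file list is hypothetical.
    const fs = require('fs');

    const files = ['jquery.js', 'plugins.js', 'app.js'];
    const bundle = files
      .map(function (f) { return fs.readFileSync(f, 'utf8'); })
      .join(';\n');   // the stray semicolon guards against a file missing its trailing one

    fs.writeFileSync('bundle.js', bundle);
    console.log('wrote bundle.js from ' + files.length + ' files');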
And forget A/B testing -- Google's Website Optimizer provides full multivariate testing, FREE.
We paid $10k a month to license a hosted multivariate testing engine from Optimost just 3 years ago, and it was a piece of junk in comparison to that free tool...
My comment was more along the lines that a page that promotes your service should load reliably rapidly, because people who are kept waiting by page loading may not stay around to convert to a user of your product, even if it is free. The comparisons the Firefox team show with the webpages of Brand X products made me think "Ouch!" because I am a user of Firefox myself.
I've been using Firefox since ... well, since Mozilla 1.2. It makes me sad and nostalgic, but I have to admit that Chrome kicks ass at sheer page-rendering speed. I still use Firefox for testing and troubleshooting (Firebug is amazing), but I'm increasingly using Chrome for straight browsing.
Some of us are still using Firefox out of pure loyalty. I'm painfully loyal (that's why I'm still watching 24 and Heroes), but I'm thinking seriously about Chrome.
I have a firefox shirt and backpack, I can't turn back now!
But really, Chrome is nice (and I occasionally use it), but it is still lacking extensions that I use on my main machine. Some of them (such as NoScript) I hear won't be possible with Chrome's extension model.
The Mozilla people seem to be quite good with the stats. I think they've looked at the whole user "funnel" (i.e. from website to download to install to continued use) and focused on the parts where they can get the biggest wins. Note the blog title: Blog of Metrics
Though perhaps they've got a blind spot in that they're more comfortable making changes in desktop code, and so started there.
Mozilla had a lot of people focused on other areas who were trying to do ad-hoc analysis on the side. That is why our team was formed. The metrics team set about measuring and improving user interaction with certain critical pages like this IE landing page, and we discovered that this page hadn't received the same level of attention that certain other pages had with regard to optimization.
The front page loads with 10 HTTP requests on an empty cache, and doesn't have any user-perceptible blockage while waiting on elements. (Pulled this straight out of YSlow.)
Improvement is overkill, but if you wanted to:
1) Create images{1,2}.ycombinator.com and split y18.gif, s.gif, and grayarrow.gif, plus your JS and CSS files, across the three domains. This will cause older browsers to load them all in parallel. (The HTTP spec suggests having no more than 2 requests to any one domain open at a time, so you end up with stairstep loading graphs on browsers which are spec-compliant, such as IE6. Browsers which have a more pragmatic view towards compliance with published specifications, such as Firefox 3, will do 8 to 10 requests in parallel.)
Note the balancing act in that every domain you add is another DNS query that needs to get resolved once. Ideally you want to keep it to about 4 domains or less.
2) Put far-future Expires headers on your static assets. You can bust caches by using the Rails-y filename.js?timestampOfLastModificationOfCode method (sketch below).
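As a sketch of that second point, a hypothetical Node helper that appends the file's last-modified time as the cache-buster -- roughly what the Rails asset helpers do for you:

    // Sketch of the "filename.js?timestamp" trick in Node.js (helper name hypothetical).
    // Serve the asset with a far-future Expires header; the query string changes
    // whenever the file changes, so clients only re-fetch it then.
    const fs = require('fs');

    function scriptTag(path) {
      const mtime = Math.floor(fs.statSync(path).mtimeMs / 1000);   // last-modified time
      return '<script src="/' + path + '?' + mtime + '"></script>';
    }

    console.log(scriptTag('app.js'));   // e.g. <script src="/app.js?1271068800"></script>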
But, again, this site is probably the fastest one I use on a regular basis, even from braindead clients like my Kindle. I wouldn't guess the marginal improvement is worth the expensive engineer time.
This is good advice, but not optimal. Using multiple domains in this case will actually slow down the site because of the additional DNS lookups -- only do that if you are serving up a ton of images.
a) The CSS file should be inlined.
b) The images should be done away with entirely by including them as data URIs (see the sketch after this list).
c) For IE6-7 (data URIs not supported), the fallback image should be sprited.
d) For bonus points (if you want HN to be served nearly instantly, globally), splurge on an application-caching CDN provider like Akamai.
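In case the data-URI step sounds exotic, the encoding itself is tiny. A Node sketch, with the file name borrowed from the comment above:

    // Turn a small image into a data URI so it ships inside the HTML/CSS
    // instead of costing an extra request. The file name is just an example.
    const fs = require('fs');

    function toDataUri(path, mimeType) {
      const base64 = fs.readFileSync(path).toString('base64');
      return 'data:' + mimeType + ';base64,' + base64;
    }

    // Drop the result into CSS, e.g. background: url("data:image/gif;base64,...");
    console.log(toDataUri('grayarrow.gif', 'image/gif'));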
The first request is slow because lazy loading means news has to load stuff from disk. That's why the second request is much faster.
I could fix this by only loading stuff the moment I need to display it. Right now I load enough to generate 7 pages of threads, but people rarely click on the More link, so I'm dragging a lot of items into memory unnecessarily.
I'll try fixing this when I have time. Unfortunately I have to write a talk right now, and this is going to require turning some code inside out.
I'm not sure the "landing page" optimisation is all that relevant to HN. However:
Aside from the threads issue everyone else has mentioned, I've noticed that apart from the main HTML, the co2stats tracker is the only part of the site that doesn't have proper cache-control headers and thus incurs a query every time (resulting in a 200 OK, not a 304 Not Modified, say); worse, the javascript portion can also take a while to load (up to 2 seconds), even though the returned content never actually seems to change. The co2stats image also never is cached. You should probably prod them to fix that (they're YC funded, right?).
It depends how demanding the requests are. Just as one example, when I request the list of my own threads (to be sure to respond to replies I may have missed through regular browsing of the site), I OFTEN find that it times out. This is probably more likely to happen to a user with many subscribed threads than to one with few, but I see it happen a lot. I can hardly ever successfully request my list of submitted threads.
But perhaps that is just me. My use pattern here is at the extreme end of the distribution.
I don't consider myself a particularly frequent poster, but I see the same frequent timeouts when trying to load my threads page. It's the most important page for me personally, because I come here for the conversation with a highly intelligent group more than anything. Nothing else compares to HN's simplicity and community.
The comment history and user pages have been very slow lately. I guess it depends on how you define "bug," but at some point slow results cross the threshold, and I think we're there.
Not that I/we don't appreciate everything you do. I mean, this whole site is an act of charity so I guess I expect some bugs.
I'd call the combination of using server-side continuations for form handling (rather pointlessly) and the server needing to be restarted constantly (from memory leaks?) a pretty egregious bug. Forms shouldn't become invalidated because your software is unreliable.
When replying, often I'm returned to a random page instead of the one in the whence parameter. After making this comment, I was redirected to /threads?id=pg instead of to /item?id=1243793
The markup in comment threads is extremely broken in Mobile Safari -- font sizes change randomly, sometimes some of the vote arrows will randomly be huge or teeny-tiny.
Using Image() to generate GET requests for the vote links is at the very least stupid if not a bug; you should be making a POST using XHR -- and then you could get the new vote total in the response instead of mindlessly incrementing/decrementing the number in the DOM (rough sketch below).
There's the whole "Unknown or expired link" when you wait long enough before clicking "Next" (x?fnid=foo). I understand the cause behind it, but I prefer links that work.
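On the Image() voting point above, something like this as a sketch -- the endpoint path and the JSON response shape are made up, since I don't know what the server actually returns:

    // Sketch of voting via an XHR POST instead of new Image().src = voteUrl.
    // The "/vote" endpoint and the response format are hypothetical.
    function vote(itemId, direction, scoreElement) {
      const xhr = new XMLHttpRequest();
      xhr.open('POST', '/vote');
      xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
      xhr.onload = function () {
        if (xhr.status === 200) {
          // Let the server report the authoritative total instead of blindly
          // incrementing/decrementing whatever number happens to be in the DOM.
          const data = JSON.parse(xhr.responseText);
          scoreElement.textContent = data.score + ' points';
        }
      };
      xhr.send('id=' + encodeURIComponent(itemId) + '&dir=' + encodeURIComponent(direction));
    }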
For most of my uses the speed is fine. However, a few activities time out during most times of the day: looking at a user's submissions frequently times out, and occasionally checking my 'threads' does too.
People ALWAYS have something more important to do than wait for your site to load, always. It's not that your site is unimportant; it's that your users' personal lives are more important to each of those users. But don't take my word for this: it's a well-replicated finding of usability research on hypertext environments, going back before the development of the World Wide Web.