The front page loads with 10 HTTP requests on an empty cache, and there's no user-perceptible blocking while waiting on elements. (Pulled this straight out of YSlow.)
Improvement is overkill, but if you wanted to:
1) Create images{1,2}.ycombinator.com and split y18.gif, s.gif, grayarrow.gif, and your JS and CSS files across the three domains. This lets even older, spec-compliant browsers load them all in parallel. (The HTTP spec suggests having no more than 2 requests open to any one domain at a time, so you end up with stair-step loading graphs on browsers which are spec-compliant, such as IE6. Browsers which take a more pragmatic view of compliance with published specifications, such as Firefox 3, will do 8-10 requests in parallel.)
Note the balancing act: every domain you add is another DNS query that has to be resolved once. Ideally you want to keep it to about four domains or fewer.
2) Put far-future Expires headers on your static assets. You can bust caches by using the Rails-style filename.js?timestamp-of-last-modification method.
But, again, this site is probably the fastest one I use on a regular basis, even from braindead clients like my Kindle. I wouldn't guess the marginal improvement is worth the expensive engineer time.
This is good advice, but not optimal. Using multiple domains in this case will actually slow down the site because of the additional DNS lookups -- only do that if you are serving up a ton of images.
a) The CSS file should be inlined.
b) The images should be done away with entirely by inlining them as data URIs.
c) For IE6-7 (no data URI support), the fallback images should be sprited.
d) For bonus points (if you want HN served nearly instantly, globally), splurge on an application-caching CDN provider like Akamai.
The first request is slow because lazy loading means news has to load stuff from disk. That's why the second request is much faster.
I could fix this by only loading stuff the moment I need to display it. Right now I load enough to generate 7 pages of threads, but people rarely click on the More link, so I'm dragging a lot of items into memory unnecessarily.
I'll try fixing this when I have time. Unfortunately I have to write a talk right now, and this is going to require turning some code inside out.
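The fix described above is essentially a move from eager to on-demand loading. HN itself is written in Arc, so this is only a hedged JavaScript sketch of the idea, with invented names throughout:

```javascript
// Hypothetical item store. Instead of eagerly pulling 7 pages' worth of
// items into memory, wrap the disk read in a cache and fetch each item
// only at the moment a page actually needs to render it.
const cache = new Map();

function loadItem(id, readFromDisk) {
  if (!cache.has(id)) cache.set(id, readFromDisk(id)); // first touch hits disk
  return cache.get(id); // subsequent requests are served from memory
}

// Render one page of 30 stories; items beyond this page are never loaded.
function renderPage(ids, pageNum, readFromDisk, pageSize = 30) {
  return ids
    .slice(pageNum * pageSize, (pageNum + 1) * pageSize)
    .map((id) => loadItem(id, readFromDisk));
}
```

This also explains the slow-first/fast-second pattern: the first request pays the disk reads, the second is all cache hits.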
I'm not sure the "landing page" optimisation is all that relevant to HN. However:
Aside from the threads issue everyone else has mentioned, I've noticed that apart from the main HTML, the co2stats tracker is the only part of the site without proper Cache-Control headers, so it incurs a request every time (resulting in a 200 OK rather than, say, a 304 Not Modified). Worse, the JavaScript portion can take a while to load (up to 2 seconds), even though the returned content never actually seems to change, and the co2stats image is never cached either. You should probably prod them to fix that (they're YC funded, right?).
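Returning a 304 only requires honoring If-Modified-Since on the server. The decision rule is tiny; this is a sketch of the conditional-GET logic in general, not of co2stats's actual code:

```javascript
// Decide between 200 (send the full body) and 304 (client's cached copy
// is still current), per HTTP's If-Modified-Since conditional-GET rule.
function conditionalStatus(ifModifiedSince, lastModified) {
  if (!ifModifiedSince) return 200; // no validator sent: full response
  const clientTime = Date.parse(ifModifiedSince);
  const serverTime = Date.parse(lastModified);
  // Resource unchanged since the client's copy: skip the body entirely.
  return serverTime <= clientTime ? 304 : 200;
}
```

Pairing this with a Cache-Control: max-age header would avoid the repeat request altogether until the cache entry expires.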
It depends how demanding the requests are. Just as one example, when I request the list of my own threads (to be sure to respond to replies I may have missed through regular browsing of the site), I often find that it times out. This is probably more likely to happen to a user with many subscribed threads than to one with few, but I see it happen a lot. I can hardly ever successfully request my list of submitted threads.
But perhaps that is just me. My use pattern here is at the extreme end of the distribution.
I don't consider myself a particularly frequent poster, but I see the same frequent timeouts when trying to load my threads page. It's the most important page for me personally, because I come here for the conversation with a highly intelligent group more than anything. Nothing else compares to HN's simplicity and community.
The comment history and user pages have been very slow lately. I guess it depends on how you define "bug," but at some point slow results cross the threshold, and I think we're there.
Not that I/we don't appreciate everything you do. I mean, this whole site is an act of charity so I guess I expect some bugs.
I'd call the combination of using server-side continuations for form handling (rather pointlessly) and the server needing to be restarted constantly (from memory leaks?) a pretty egregious bug. Forms shouldn't become invalidated because your software is unreliable.
When replying, often I'm returned to a random page instead of the one in the whence parameter. After making this comment, I was redirected to /threads?id=pg instead of to /item?id=1243793
The markup in comment threads is extremely broken in Mobile Safari -- font sizes change randomly, and sometimes some of the vote arrows render huge or teeny-tiny.
Using Image() to generate GET requests for the vote links is at the very least stupid, if not a bug; you should be making a POST via XHR -- and then you could get the new vote total in the response instead of mindlessly incrementing or decrementing the number in the DOM.
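That suggestion might look something like the sketch below. The endpoint, field names, and response shape are all hypothetical (HN's real vote URL and parameters are not shown here); building the request separately from the XHR transport is just a design choice that keeps the logic testable outside a browser:

```javascript
// Build the vote request. Endpoint and field names are made up.
function buildVoteRequest(itemId, dir) {
  return {
    method: "POST",
    url: "/vote",
    body: `id=${encodeURIComponent(itemId)}&dir=${encodeURIComponent(dir)}`,
  };
}

// Browser-side transport: POST via XHR, then update the DOM from the
// server's authoritative total instead of blindly bumping the number.
function sendVote(itemId, dir, onTotal) {
  const req = buildVoteRequest(itemId, dir);
  const xhr = new XMLHttpRequest();
  xhr.open(req.method, req.url);
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.onload = () => onTotal(JSON.parse(xhr.responseText).total);
  xhr.send(req.body);
}
```

Using POST also keeps crawlers and link prefetchers from accidentally casting votes, which is the usual argument against GET for state-changing actions.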
There's the whole "Unknown or expired link" when you wait long enough before clicking "Next" (x?fnid=foo). I understand the cause behind it, but I prefer links that work.
For most of my uses the speed is fine. However, a few activities time out during most times of the day: looking at a user's submissions frequently times out, and occasionally checking my 'threads' page does too.