xzyfer's comments | Hacker News

The priority queue isn't the issue. In fact, the priority queue is what kept our first paint times from tanking, because browsers prioritised render-blocking resources ahead of images.

The issue was the variance in image size. An image that is significantly larger than the page average loads more slowly, since all images get an equal share of bandwidth (priority). Adding sharding wouldn't help: the client only has a fixed amount of bandwidth to share, and all images would still get the same share of it. Sharding could help if the bandwidth bottleneck were at the CDN, but that's rarely going to be the case.
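To make that concrete with made-up numbers: if 20 images share a 10 Mbit/s connection at equal priority, each gets roughly 0.5 Mbit/s, so a 2 MB outlier needs around 30 seconds while a typical 200 KB image finishes in about 3 (ignoring that bandwidth gets redistributed as images complete). The large image dominates visual completion even though every image was treated equally.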


Browsers do currently do this. H2 has two types of prioritisation: weighted and dependency-based.

All browsers implement weighted resource prioritisation and weight resources by content type. This is a holdover from what they do for HTTP/1 connections.

Firefox has dependency-based resource prioritisation. https://bitsup.blogspot.com.au/2015/01/http2-dependency-prio...

The spec purposely leaves how these heuristics should work to the implementor. Things will change, and implementations will diverge over time.

The server ultimately being in control means we can tell it which resources are important for specific pages, with full knowledge of the page.


Oh wow, that's cool. Do you know if servers currently support this? Would this mostly be useful at the network level, or do you think it would also be useful for trying to be more intelligent about scheduling?


It's hard to say for sure. Server implementations vary wildly, so make sure to test any implementation closely. I know from talking to CloudFlare that their implementation respects browser hints. It's also open source.


You read it correctly.

By not moving our render-blocking assets like CSS, JS and fonts over to HTTP/2, we rule out performance changes due to improvements in head-of-line blocking.

Our images were always on a separate hostname, so the DNS lookup overhead is the same. We also did some initial benchmarking and found the new CDN to be more efficient than the old one.


This is correct. Visual completion will not be achieved until all of the images within the viewport have downloaded.

However, progressive JPEGs could improve initial paint times. These pages are dynamic, so each one would have its own unique (although related) profile.


To the best of my knowledge, you are correct about how CloudFlare works. For context, this data was collected over a period of about a month on real production pages with significant traffic.

The edges were well and truly primed.


That said, CloudFlare doesn't manage caches on a per-account basis. Each PoP has a single LRU cache that's shared by all customers. In other words, even if you've primed it, your files may have been pushed out of the cache by a larger customer.

To know this hasn't happened, you really have to check the hit rate CloudFlare is reporting (for static files that rarely change, this should be at or near 100%)... and when you're doing side-by-side comparisons (like the speed index), you have to actually check the x-cache headers to verify that a cache miss hasn't occurred. Otherwise, you can't rule out that a significant portion of the traffic is being sent over HTTP/1.1 (because of cache misses).
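If it helps, here's a rough TypeScript sketch of that check (the asset URLs are placeholders; CloudFlare reports its cache result in the CF-Cache-Status header, other CDNs commonly use X-Cache):

    // Rough sketch: sample a handful of static asset URLs and report how
    // many were served from the edge cache. CloudFlare reports its cache
    // result in the CF-Cache-Status header; other CDNs commonly use X-Cache.
    const assetUrls: string[] = [
      'https://assets.example.com/app.css', // placeholder URLs
      'https://assets.example.com/app.js',
    ];

    async function reportCacheHits(urls: string[]): Promise<void> {
      let hits = 0;
      for (const url of urls) {
        const res = await fetch(url, { method: 'HEAD' });
        const status =
          res.headers.get('cf-cache-status') ?? res.headers.get('x-cache') ?? 'unknown';
        if (/hit/i.test(status)) hits++;
        console.log(url, status);
      }
      console.log(`edge cache hit rate: ${((hits / urls.length) * 100).toFixed(1)}%`);
    }

    reportCacheHits(assetUrls).catch(console.error);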


>Each PoP has a single LRU cache that's used for all customers.

Is this true for all tiers of paid accounts? Can someone from CF chime in here?


As recently mentioned by a CloudFlare employee in this post (https://news.ycombinator.com/item?id=11439582):

> We cache as much as possible, for as long as possible. The more requested a file, the more likely it is to be in the cache even if you're on the Free plan. Lots of logic is applied to this, more than could fit in this reply. But importantly; there's no difference in how much you can cache between the plans. Wherever it is possible, we make the Free plan have as much capability as the other plans.

This does not confirm the exact statement but at least points in this direction.


Thanks mate, glad you enjoyed it.


Totally. There are a bunch of ways to address the performance issue. As I alluded to at the end of the post, there are serious technology considerations when preprocessing so much image data.

We're currently looking at whether we can use IntersectionObserver for efficient lazy loading of images before they enter the viewport.
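For anyone curious, a minimal sketch of the idea, assuming a data-src convention (not necessarily what we'll ship):

    // Minimal lazy-loading sketch: images start out with a data-src
    // attribute (a convention assumed here) and only get their real src
    // once they approach the viewport.
    const lazyImages = document.querySelectorAll<HTMLImageElement>('img[data-src]');

    const observer = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target as HTMLImageElement;
        img.src = img.dataset.src ?? ''; // swap in the real source
        obs.unobserve(img);              // each image only needs this once
      }
    }, { rootMargin: '200px' });         // start loading a little before entry

    lazyImages.forEach(img => observer.observe(img));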


H2's single long-lived connection means it's a contender to replace websockets. As a bonus, you get to use HTTP semantics.
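For example, a long-lived streaming response over that one connection can stand in for a websocket in many cases. A rough sketch (the /events endpoint is made up):

    // Sketch only: consume a long-lived streaming HTTP response over the
    // browser's single H2 connection instead of opening a websocket.
    // The /events endpoint is hypothetical.
    async function consumeEventStream(url: string): Promise<void> {
      const response = await fetch(url);
      if (!response.ok || response.body === null) {
        throw new Error(`stream request failed: ${response.status}`);
      }
      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        console.log('chunk:', decoder.decode(value, { stream: true }));
      }
    }

    consumeEventStream('/events').catch(console.error);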


Server push at the edge is a problem atm. Current push semantics require the HTML document to say which resources to push. That's an issue if you're serving assets off a CDN domain.

Asset domains make less sense with H2 from a performance perspective, but there are still security concerns that need to be addressed.


Good point if you're using push for page content that varies, like images in the 99designs portfolio and gallery. That gets into dynamic caching territory.

As a first step, I'm focused on using push to cut the latency between TTFB and processing of render-blocking static assets. Serving those from the same domain as the base page, it should be easy for the origin to supply the edge with the list of resources to push, either in the HTML or with the `Link` response header. It also means my critical assets are no longer behind a separate DNS lookup.
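To sketch what I mean (paths and port are placeholders, not my actual setup), the origin can emit preload Link headers alongside the HTML so a push-capable edge knows what to push:

    import { createServer } from 'node:http';

    // Sketch only; paths and port are placeholders. The origin emits
    // preload Link headers alongside the HTML so a push-capable edge
    // knows which render-blocking assets to push; browsers treat the
    // same headers as preload hints.
    const server = createServer((_req, res) => {
      res.setHeader('Link', [
        '</css/app.css>; rel=preload; as=style',
        '</js/app.js>; rel=preload; as=script',
        '</fonts/brand.woff2>; rel=preload; as=font; crossorigin',
      ].join(', '));
      res.setHeader('Content-Type', 'text/html');
      res.end('<!doctype html><html>...</html>');
    });

    server.listen(8080);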

In the design gallery, this type of push approach could help you regain control of loading priority and get your fonts loading before that wall of images.


The priority queue isn't the issue. In fact, the priority queue is what kept our first paint times from tanking, because browsers prioritised render-blocking resources ahead of images.

The issue was the variance in image size. An image that is significantly larger than the page average loads more slowly, since all images get an equal share of bandwidth (priority).

We could further improve first paint times by pushing render-blocking resources, but we'd need to be serving those resources off the 99designs domain (with current push implementations). This opens us up to a class of security issues we avoid by having an asset domain, i.e. types of reflected XSS and serving cookies on assets.

For now we'll wait for the webperf working group to address the limitations with server push semantics.


Interesting note on the impact of image size variation on the queue, thanks for elaborating.

Serving those resources from the 99designs domain is worth a look. I considered the cookie and security trade-offs as well. I found H2 compressed cookies well enough to perform better than a separate cookieless domain for static assets, due to the DNS savings; DNS times can be bad at high percentiles. Reflected XSS is addressed with a Content Security Policy, but I'm fortunate to have a user base whose browsers support CSP well.
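For illustration only (the directives below are assumptions, not my actual policy), a restrictive CSP along these lines goes a long way against reflected XSS when assets share the app's domain:

    import { ServerResponse } from 'node:http';

    // Illustrative only; the directives below are assumptions, not the
    // policy discussed above. A restrictive CSP sent on every response
    // blunts reflected XSS when static assets share the app's domain.
    function applySecurityHeaders(res: ServerResponse): void {
      res.setHeader(
        'Content-Security-Policy',
        "default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'self'"
      );
    }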


There are discussions happening on how browsers can allow authors to provide resource prioritisation hints. I'm curious to see where they go.

We'd ideally like to be able to say: "prioritise the 10 images in the viewport". You can hack it together relatively efficiently using IntersectionObserver now, but support isn't great.

