It really is no surprise that Google is not interested in this, since Google does not suffer from any of the problems that using SRV records for HTTP would solve. The beneficiaries would be users who could more easily run their own web servers closer to the edges of the network, not the large companies that have CDNs and BGP AS numbers to fix these shortcomings the hard way. Google has already done the hard work of solving this problem for themselves – of course they want to keep the problem for everybody else.
This is going to bite them big time in the end, because Google got large by indexing the Geocities-style web, where everybody had their own web page on a very distributed set of web hosts. What Google is doing now only contributes to the centralization of the Web, to its conversion into Facebook, which will in turn kill Google, since they will then have nothing left to index.
They sort of saw this coming, but their idea of a fix was Google+ – trying to make sure that they were the ones on top. I think they are still hoping for this, which is why they won't enable a more decentralized web by adopting SRV records in HTTP/2.
You don't need SRV to have a decentralised web if you have working IPv6. Every site can have its own IP.
In both cases you have to re-deploy a lot of software, but the IPv6 rollout is at least already underway and has another driver in the form of the IPv4 shortage.
SRV records would help with load balancing and automatic failover – something that today requires serious hardware and routing clout to pull off, and that not even mid-sized companies can manage. With SRV, these capabilities would be available to everybody.
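To make that concrete, here is a rough sketch of what a client could do if HTTP consulted SRV records. This is hypothetical: no real HTTP client performs this lookup, and "example.com" is just a placeholder for any domain publishing _http._tcp records. The point is that the priority field gives you failover and the weight field gives you load balancing, with nothing but DNS on the server side.

    // Hypothetical sketch: an HTTP client resolving _http._tcp SRV records.
    // Real browsers do not do this; "example.com" is a stand-in domain.
    package main

    import (
        "fmt"
        "math/rand"
        "net"
        "sort"
    )

    // pickTarget returns one host:port from the SRV records for name.
    // Lowest Priority value is preferred; a real client would fall back to
    // higher-priority-value records only when everything in the preferred
    // group is unreachable (failover). Within the preferred group, the
    // choice is weighted random (load balancing). This sketch only picks
    // from the preferred group.
    func pickTarget(name string) (string, error) {
        _, srvs, err := net.LookupSRV("http", "tcp", name)
        if err != nil {
            return "", err
        }
        if len(srvs) == 0 {
            return "", fmt.Errorf("no SRV records for %s", name)
        }
        sort.Slice(srvs, func(i, j int) bool { return srvs[i].Priority < srvs[j].Priority })

        // Collect the records sharing the best (lowest) priority.
        best := srvs[0].Priority
        var group []*net.SRV
        total := 0
        for _, s := range srvs {
            if s.Priority != best {
                break
            }
            group = append(group, s)
            total += int(s.Weight)
        }

        // Weighted random pick within the preferred group.
        n := rand.Intn(total + 1)
        for _, s := range group {
            n -= int(s.Weight)
            if n <= 0 {
                return fmt.Sprintf("%s:%d", s.Target, s.Port), nil
            }
        }
        return fmt.Sprintf("%s:%d", group[0].Target, group[0].Port), nil
    }

    func main() {
        target, err := pickTarget("example.com")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("would connect to", target)
    }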
Of the various bizarrely negative posts this submission has earned, yours is the one that perplexes me most. How is HTTP/2 going to "bite" Google (even if we assume that Google is the sole initiator of the HTTP/2 spec, which it is far from)?
Is the web, right now, failing? Are those little sites somehow ceasing to exist? You seem to be positing that a new standard not supporting your pet feature is suddenly going to destroy the web, which seems a little hysterical.
Interesting to note that, as a relatively small shop, we've already seen huge benefits from SPDY (HTTP/2's precursor), because it allows us to serve from a single site in the US while still offering a much, much better experience for our users in Singapore (the benefits for high-latency connections are significantly greater than commonly reported).
I thought I expressed myself reasonably clearly. The web is currently slowly centralizing and turning into Facebook. Google could fight this trend by using SRV records in HTTP/2. If the trend continues instead, Google will eventually have nothing left in the open web to index, and will therefore get no value from selling ads in their search service. Google is thus doing itself a disservice by not using SRV records in HTTP/2.
I further speculated about the reasons Google might have for acting the way they do, i.e. not supporting SRV records. I proposed two: First, SRV records do not solve any problem which Google has not already solved for themselves, and they do not want anybody else to simply get the solution for free, since that would devalue their earlier investment. Second, Google assumed that they would be the one the Web centralized into; even after Facebook appeared, Google created Google+ and tried to push it as the centralization point, using every pressure point they could muster (YouTube, etc.) to make this happen. If they could pull that off, they too would not benefit from the more decentralized web that SRV records in HTTP/2 would enable.
A new version of HTTP is a perfect (and, in fact, the only) opportunity to introduce SRV record usage into the HTTP protocol. The fact that they have chosen not to do this, even though SRV is essentially made for this – an extension of the MX record system into a generalized system for any and all protocols – requires an explanation.
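For concreteness, hypothetical SRV records for HTTP might have looked like the following. The host names, weights, and the _http._tcp label are purely illustrative (HTTP/2 defines no such records); the format is the generic one from RFC 2782, shown next to the MX record it would generalize:

    ; hypothetical: SRV records for HTTP, RFC 2782 style
    _http._tcp.example.com.  3600 IN SRV 10 60 8080 host-a.example.net.
    _http._tcp.example.com.  3600 IN SRV 10 40 8080 host-b.example.net.
    _http._tcp.example.com.  3600 IN SRV 20  0    80 backup.example.org.
    ; the MX system this would generalize:
    example.com.             3600 IN MX  10 mail.example.net.

In this example the two priority-10 records would share traffic roughly 60/40, and the priority-20 record would only see traffic when both of them are down.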
https://news.ycombinator.com/item?id=8404788