It's pretty "crazy" that all software that handles UTC needs access to the internet (or software updates) every 6 months. You simply can't leave, say, a sensor somewhere for a year and expect it to record events in UTC.
Sure, you can record events in something other than UTC and convert them to UTC at a later time.
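For what it's worth, here's a minimal sketch of that approach, assuming the device logs TAI-like seconds and you maintain a leap-second table out of band; the table values below are illustrative, not current:

```python
# Sketch: record events against TAI-like seconds on the device, convert to UTC later.
# The leap-second table is illustrative only; a real one comes from the IERS bulletins
# (or a maintained file such as tzdata's leap-seconds.list).

# (TAI - UTC) offset in seconds, keyed by the device timestamp at which it took effect.
LEAP_TABLE = [
    (0,          36),   # hypothetical starting offset for this device
    (1483228836, 37),   # illustrative entry for a later leap second
]

def tai_to_utc(tai_seconds: float) -> float:
    """Convert device TAI-like seconds to UTC seconds using the table."""
    offset = LEAP_TABLE[0][1]
    for effective_from, off in LEAP_TABLE:
        if tai_seconds >= effective_from:
            offset = off
    return tai_seconds - offset

# Events recorded while offline can be converted once the table is brought up to date.
print(tai_to_utc(1483228900.0))
```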
Unless you have some fancy hardware running your clock, you've got bigger issues; as a general rule, you can't run a computer and expect its time to stay within +/- 1 second of UTC.
NTP exists so we don't have to own this hardware. You can easily expect to be within milliseconds of UTC if it's working properly and you're syncing with stratum 1 servers.
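As a rough illustration of how little is involved, here's a check of the local clock's offset via NTP, using the third-party ntplib package (pool.ntp.org is just an example server):

```python
# Query an NTP server and report the local clock's offset from it.
import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

# response.offset is the estimated local-clock offset from the server, in seconds.
print(f"offset from NTP server: {response.offset * 1000:.1f} ms")
print(f"stratum: {response.stratum}")
```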
The statement was that you need fancy hardware to get < 1 sec accuracy to UTC, which is patently false; network availability or not, you can easily get that level of accuracy without an atomic clock.
You need fancy hardware to get that level of accuracy in any way that doesn't also automatically give you leap second announcements, which was the problem we were talking about. NTP announces them, GPS announces them, the various longwave time stations announce them...
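For example, the NTP announcement rides in the leap indicator (LI) bits of every response, so even a raw query sees it; a minimal sketch (pool.ntp.org is again just an example server):

```python
# Read the leap indicator (LI) bits from an NTP response (RFC 5905).
import socket

LI_TEXT = {
    0: "no leap second scheduled",
    1: "last minute of the day has 61 seconds (leap second inserted)",
    2: "last minute of the day has 59 seconds (leap second deleted)",
    3: "clock unsynchronized (alarm condition)",
}

# 48-byte request: LI=0, version=3, mode=3 (client) packed into the first byte.
request = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, ("pool.ntp.org", 123))
    data, _ = sock.recvfrom(48)

leap_indicator = data[0] >> 6  # top two bits of the first byte
print(LI_TEXT[leap_indicator])
```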
Eh. That hardware costs $300 and requires a network connection too. If that's reasonable to do, is network access to distribute leap-second information really a problem?
(There may be cases where the answer is yes, but... we digress.)
Yes, they do. However, adding the second is not the default, so an announcement that one is not going to be added is not as interesting. Also, I doubt the "time library people" get their news from HN on a day it could happen.
On the other hand, the HN public is exactly the audience that should be acutely aware of leap second issues, because all programmers urgently need some basic knowledge of them. So it's a good idea to draw attention to this a few times per year.