
Originally time-to-live was to be measured in seconds, but it was modified to mean 'maximum number of hops to transit'.

This is RFC 760 (https://datatracker.ietf.org/doc/html/rfc760#page-11), Jan 1980, a document by Jon Postel.
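
For anyone who hasn't read the spec, the per-hop rule it describes boils down to something like this (my own Python sketch, not RFC text; the packet representation is just for illustration):

    def forward(packet):
        # RFC 760: every module that processes a datagram decreases TTL,
        # and a datagram whose TTL reaches zero must be destroyed.
        packet["ttl"] -= 1
        if packet["ttl"] <= 0:
            return None        # destroyed; never forwarded
        return packet          # handed on toward the next gateway

    print(forward({"ttl": 2}))   # {'ttl': 1} -- still alive
    print(forward({"ttl": 1}))   # None -- destroyed at this hop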

Maybe Don Hopkins can find out more about this; I'll page him.

edit2:

So, after some more searching, I found this article:

https://www.alertlogic.com/blog/where-is-ipv1-2-3-and-5/

It claims that the split happened in 1978:

"At this point, TCP and IP were split, with both being versioned number 3 in the spring of 1978."

But I can't find any protocol spec for IPv3.

edit3:

https://www.rfc-editor.org/ien/ien41.pdf

has the TTL field; that's June 1978, and it's IPv4.

IPv3 does not seem to have had a TTL field:

https://wander.science/articles/ip-version/



Here is a thread about TTL on the Internet History mailing list:

https://elists.isoc.org/pipermail/internet-history/2020-Sept...

A good summary from Jack Haverty:

https://elists.isoc.org/pipermail/internet-history/2020-Sept...

>My recollection, from the discussions when the IP4 header was being defined, was that TTL was included for two reasons:

>1/ we couldn't be sure that packets would never loop, so TTL was a last resort that would get rid of old packets. The name "Time To Live" captured the desire to actually limit packet lifetime, e.g., in milliseconds, but hops were the best metric that could actually be implemented. So TTL was a placeholder for some future when it was possible to measure time. TTL did not prevent loops, but it did make them less likely to cause problems.

>2/ TCP connections could be "confused" if an old packet arrived that looked like it belonged to a current connection, with unpredictable behavior. The TTL maximum at least set some limits on how long such a window of vulnerability could be open. Computers had an annoying tendency to crash a lot in those days, but took rather long times to reboot. So the TTL made it less likely that an old packet would cause problems. (Note that this did not address the scenario when a gateway crashed, was subsequently restarted possibly hours later, and sent out all the packets it still had in its queues.)

/Jack
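
To make point 1/ concrete: even if two gateways bounce a packet back and forth forever, each hop burns one unit of TTL, so the packet's lifetime is bounded. A toy simulation (my own sketch, with made-up gateway roles):

    def simulate_routing_loop(initial_ttl):
        # Packet ping-pongs between gateway A and gateway B until TTL expires.
        ttl, hops = initial_ttl, 0
        while ttl > 0:
            ttl -= 1          # one more trip around the loop
            hops += 1
        return hops

    print(simulate_routing_loop(64))   # 64 -- the loop is bounded, not forever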

https://elists.isoc.org/pipermail/internet-history/2020-Sept...

>I just realized that I should have also said:

>3/ Although TTL did not prevent loops, it was a mechanism that detected loops. When a packet TTL dropped to zero, an ICMP message (something like "TTL Time Exceeded") was supposed to be generated to tell the source of the failure to deliver. Gateways could also report such events to some monitoring/control center by use of SNMP, where a human operator could be alerted.

>4/ TTL was also intended for use with different TOS values, by the systems sending voice over the Internet (Steve Casner may remember more). The idea was that a packet containing voice data was useless if it didn't get to its destination in time, so TTL with a "fastest delivery" TOS enabled the sender to say "if you can't deliver this in 200 milliseconds, just throw it away and don't waste any more bandwidth". That of course wouldn't work with "time" measured in hops, but we hoped to upgrade soon to time-based measurements. Dave Mills was working hard on that, developing techniques for synchronizing clocks across a set of gateways (NTP was intended for more than just setting your PC's clock).

>I've noticed that researchers, especially if they're not involved in actually operating a network, often don't think about the need for tools to be used when things are not going well - both to detect problems and to take actions to fix the problem.

>Without that operational perspective, some of the protocol functions and header fields may not seem to be necessary for the basic operation of carrying user traffic. TTL is one such mechanism. Source route was another, in that it might provide a way to get control messages delivered to a misbehaving host/router/whatever that was causing routing failure. This involved using source routing as a somewhat "out of band" signal mechanism that might not be affected by whatever was going on at the time.

>All of this was considered part of the "Internet Experiment", providing instrumentation to monitor the experiment to see what worked and what didn't, and evolve into tools for use in dealing with problems in operational networks.

>At one point I remember writing, sometime in the mid 80s, a rather large document called something like "Managing Large Network Systems" for some government contract, where these kinds of tools were described. But I haven't been able to find it. Probably in a dusty warehouse somewhere....

>/Jack
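
Point 3/ is the mechanism traceroute was later built on: send probes with TTL = 1, 2, 3, ... and each expiry report identifies one hop along the path. Roughly (my own sketch; the helper and field names are made up, but type 11 / code 0 are the real ICMP values):

    def on_ttl_expiry(packet):
        # ICMP "Time Exceeded" (type 11), "TTL exceeded in transit" (code 0),
        # sent back to the original source of the expired datagram.
        return {"icmp_type": 11, "icmp_code": 0, "dest": packet["src"]}

    expired = {"src": "10.0.0.1", "dst": "192.0.2.7", "ttl": 0}
    print(on_ttl_expiry(expired))   # {'icmp_type': 11, 'icmp_code': 0, 'dest': '10.0.0.1'}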



