
I've seen no indication he doesn't think it's correct.

The Tilly-Odlyzko correction (the paper establishing it presents it as a "refutation" of Metcalfe's law) strikes me as far more sensible.

The cost function amendment is my own work.



Very cool. I think Tilly-Odlyzko did hint at cost interactions as well in their reference to spam. I'm curious why you agree with T-O on the n log(n) valuation of networks but judge that costs still grow quadratically. Surely not all negative interactions are equally negative to all members of a network? (An online bully doesn't bully everyone equally, etc.)


I've been kicking around the question of what the proper functional specification of the network cost function is. One rationalisation is that by specifying an absolute minimum cost function, you can set an upper bound on the size of the network.

The rationale for a constant cost function imposed, per node, on all nodes, is that simply by existing, any node imposes some cost on others. This reduces to a simple probability model: in any given cycle, there's some fixed probability that one node will interact negatively with any other. Summed over all ~n^2/2 pairs of nodes, that constant per-pair probability yields an expected total cost that grows quadratically.

A key realisation is that the negative interaction can be literally anything -- it's a lot easier to get things wrong than right. So odds are that any interaction with any other node will impose some cost.
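To make the upper-bound idea concrete, here's a minimal sketch of an n log(n) value term set against a quadratic per-pair cost term. The coefficients are purely illustrative, not empirical estimates; the point is only the shape: net value peaks, then goes negative, which caps viable network size.

```python
import math

def net_value(n, a=1.0, b=0.001):
    """Net network value: an n*log(n) benefit term (Tilly-Odlyzko)
    minus a quadratic cost term from pairwise negative interactions.
    a and b are illustrative coefficients, not measured values."""
    if n < 2:
        return 0.0
    benefit = a * n * math.log(n)
    cost = b * n * (n - 1) / 2  # every pair carries some expected cost
    return benefit - cost

# The quadratic term eventually dominates: net value peaks, then
# turns negative -- an upper bound on the size of the network.
best_n = max(range(2, 100_000), key=net_value)
```

With these toy coefficients the optimum lands around n ≈ 10,000, and net value is negative well before n = 50,000; different coefficients move the numbers but not the shape.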

Tossing this into a more concrete realm, I've been doing a lot of thinking on communications and media networks, and the rivalrousness of attention. (On which there's a fairly interesting history I'm just starting to scratch the surface of, going back to Herbert Simon and Alvin Toffler's Future Shock, as well as before.) In the realm of "fake news" or cognates (propaganda, misdirection, misinformation, distraction, etc.), there's a question of how much news the typical person can process in a given period -- a day, a week, a month, a year.

Most media publications have a top-10 story listing. I've found lists of the top-20 real and fake news stories posted around Facebook, and the drop-off in re-shares is 1-2 orders of magnitude from the top to the bottom of the list. Discussions of "the big lie" (see Hitler and Goebbels, Nazi Germany) make clear that a small number of key points is key. Major news providers (WashPo, NYTimes, WSJ) produce ~150 - 300 original items daily. Newswire services (Reuters, AP, Agence France-Presse) kick out about 2500 - 5000 pieces daily. Online media consumption (FB, mostly) is about 40-45 minutes daily, which you can divide amongst however many messages are processed (10? 100? -- that's roughly 4 minutes, or 25 seconds, per message). Super-email consumers (Stephen Wolfram, Walt Mossberg) manage about 150-300 messages/day. The NY Times comment moderation team handles somewhat short of 800 comments/day, presumably over some fraction of an 8-hour workday. Attention per message is time divided by messages.
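The per-message arithmetic above can be sketched directly. Everything here is back-of-envelope: the ~42-minute figure is the midpoint of the 40-45 minute range cited, and the message counts are the hypothetical 10 and 100 used above.

```python
# Back-of-envelope attention budget: daily media time divided by
# message count gives average attention per message.
DAILY_MINUTES = 42  # midpoint of the ~40-45 min daily figure above

def seconds_per_message(messages_per_day, daily_minutes=DAILY_MINUTES):
    """Average attention available per message, in seconds."""
    return daily_minutes * 60 / messages_per_day

# 10 messages/day -> ~4 minutes each; 100/day -> ~25 seconds each
```

The same division applies to the other figures cited: a moderator handling ~800 comments over part of an 8-hour day, or a heavy email user at 150-300 messages/day, gets under a minute per item either way.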

Which means that every message we see, regardless of quality, imposes some acquisition cost. And if we're on a network that streams and dumps messages at us without any filtering capability, then ... we're going to be overwhelmed.

And that's a minimum cost.

The network that retransmits the fewest bullshit, bogus, false, or irrelevant messages has the highest intrinsic attention value.

Now, on top of this, there are other costs, for which some variant of n log(n) metrics probably applies: there is a small subset of sources whose content is actively malicious. The capacity to filter *against* that is useful. And this shows up in all sorts of contexts -- small portions of any population are the overwhelming sources of crime, disruption, corruption, etc. The key problems are 1) identifying them and 2) doing something effective to limit their capacity to do harm.

(There are almost certainly second order and higher effects as well, that's ... another discussion.)

A few more examples of cost functions:

1. In workgroups, inter-member communications, coordination, resource conflict, management time, meeting time, etc., all impose costs. See Brooks, The Mythical Man-Month.

2. In cities, any number of problems arise from putting people in close proximity: noise, smells, disease, congestion, etc. By the 19th century, London, as an example, was killing off its inhabitants faster than they could breed. The only way to sustain (or grow) its population was by net in-migration from the countryside. I believe life expectancy upon moving to the city was on the order of 16 years. Installing a sewerage system (1850s) and freshwater supplies (roughly the same time) vastly improved on that. New York City saw similar problems, for which there's a Department of Public Health graphic showing net mortality from about 1800 through ~2005. The biggest declines were 1850 - 1910. Everything since has been at best modest.

3. Email. Classic instance of message-acquisition conflicts, but also of a failure to standardise on conventions and message formats. I find the medium all-but-unusable presently. And that's before considering the absolute lack of security or authentication involved.

4. Computers. If nothing else, heat accumulation limits the density and scale of systems.

5. Mammals. From the smallest (a ~2g shrew) to the largest (a 200-tonne blue whale) there's a tremendous range of scale, with sublinear metabolic scaling (Kleiber's law: metabolism roughly proportional to mass^3/4). But dumping heat becomes a problem. Ounce-for-ounce, the shrew has a far higher metabolism than the whale, but a whale's total metabolism is far higher -- on the order of a tractor-trailer rig. There've been some recent discoveries of odd organ-like features within the whale mouth, including a tremendous vascularisation of the tongue. I suspect that may be a means of ridding excess body heat, particularly during feeding activities, which are tremendously energy-intensive: ram-filling the mouth with water, then straining it out, for krill-feeders. The entire underside of the mouth balloons out massively during this process.

My theory is that in any dendritic or network structure, you can identify similar value and cost functions, though you might have to think for a while before turning them up. One of the more interesting questions for me is knowledge-as-network itself: the ability to come up with more individual models, and more complex models, creates value (better modeling, understanding, prediction, and control of reality), but, if my theory is correct, it also imposes some costs. Transfer, maintenance, and utilisation of those models -- and variance in the abilities of individuals (or AI systems?) to acquire, use, and adapt them -- may be part of that cost.


> I've seen no indication he doesn't think it's correct.

He explicitly told us this during a lecture. It was something to the effect of "I just made it up, it's an oversimplification, but it's good marketing."


OK, thanks. That is more substantial.

(And rings true.)



