Hacker News
Game Theory Calls Cooperation into Question (scientificamerican.com)
64 points by etr71115 on Feb 24, 2015 | 28 comments


Pack animals like dogs naturally discourage fighting amongst themselves [1]. Strife weakens the pack.

Consider the effort it takes one person to build a house, or the amount of effort it would take to build an MP3 player alone from scratch.

Game theory would have you believe the optimal way to win at poker is to booby trap the card table and rob your competition.

Another theory suggests altruism ultimately trumps selfishness within species: aggressive, warlike creatures would kill each other off ad infinitum, whereas cooperation ultimately strengthens individuals beyond the sum of their parts. [2]

All things considered in earthly biology, the cooperation amongst all the various living cells and organs that make up a single creature says something for the value of collaboration versus the race to the bottom of viral and parasitic behavior.

[1] https://www.youtube.com/watch?v=hstLdzCg6l8

[2] http://www.radiolab.org/story/103951-the-good-show/


I think the flaw in the thinking behind these studies that try to test altruism, and other behaviours that look "odd" according to Game Theory, is that we've become so far removed from the environment where these behaviours originally evolved.

The aberration is that we live in densely populated areas where selfishness is a possible behaviour.

Our social limit is estimated at around 500 people. Surnames developed around the 13th century in Britain; before that we used patronyms, meaning we could recognise people via association. This is still common in rural areas: my wife's grandfather likes to travel to car shows, and he's told people down in the US to just ask for him by name. He lives in a rural part of Canada, and sure enough, people reach him because everyone knows him.

Altruism works because, for the majority of human history, the people you chose to help or not to help were also the people who would sooner or later face the same decision about you. One instance of uncooperative behaviour would render you persona non grata. Your neighbour needs help thatching his roof after a storm: do you help? Game Theory keeps saying "no, because you could steal his land!" Reality says "yes, because if you don't help, then assuming he doesn't die from exposure (which he most likely won't), he won't help you round up your goats when your fence breaks, they'll eat all your crops, and you'll likely starve to death in the winter."

The simple fact is most people are willing to do "favours" with no questions asked and with no expectation of payment except "being owed one". The classic example is helping a friend move. There's no immediate reason to help someone move; you do it to earn the help in advance for when you need it, and when someone breaks that trust the response is normally "I'm never helping that asshole again."
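
For what it's worth, here's a minimal sketch (mine, not from the article, using assumed standard prisoner's-dilemma payoffs of 5/3/1/0) of why that works once interactions repeat: a "grim" player, who cooperates until burned once and then never again, does fine against other cooperators, while a pure defector gains exactly once and loses every round after.

    # Minimal iterated prisoner's dilemma sketch (assumed standard payoffs:
    # T=5, R=3, P=1, S=0). "Grim" mirrors the "never helping that asshole
    # again" rule: cooperate until the other player defects once, then
    # defect forever.

    PAYOFF = {  # (my move, their move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def grim(my_history, their_history):
        return "D" if "D" in their_history else "C"

    def always_defect(my_history, their_history):
        return "D"

    def play(strat_a, strat_b, rounds=100):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a = strat_a(hist_a, hist_b)
            b = strat_b(hist_b, hist_a)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(grim, grim))            # (300, 300): mutual favours pay off
    print(play(grim, always_defect))   # (99, 104): the defector wins once, then both lose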


Thank you for your interesting comment.

May I ask for your definition of altruism?

Helping someone else, in part because you recognize that you may someday need his/her help, seems like rational egoism to me.

And even if I thought I would never see someone again, I might help them if I thought they were virtuous, simply because I think virtue should be recognized and rewarded.


Game theory is ultimately a drastic oversimplification (a model) of human agency. It happens to coincide nicely with certain popular simplifications, but ultimately it's flawed. In situations where it tries to prove that either cooperation or competition is the superior behavior, it often sets up its actors in a way that begs the question: the answer is baked into how the variables are set up and how the agents are made to optimize them. I wouldn't take it too seriously as an authority on ethical questions. It's most useful as a model for reasoning about average outcomes in a limited set of situations; it tends to get into trouble elsewhere. [1]

[1]: http://www.amazon.com/The-Misbehavior-Markets-Financial-Turb...


I think you may be missing an aspect of this. The original result in game theory implied that systems can be globally cooperative even when individuals are selfish. This has been used to justify capitalism, much as the concept of evolution was used within Social Darwinism. If systems can be globally cooperative with selfish individuals, then we as individuals are doing our duty by being selfish. That is why a highly technical result within mathematics is so widely celebrated. This counter-result undermines the first one; granted, the first has been taken wholly out of context, but hopefully so will the second. I'm not holding my breath, though...


Cancerous cells flourish as well.


That only works in situations with very strong group selection. See The Tragedy of Group Selectionism: http://lesswrong.com/lw/kw/the_tragedy_of_group_selectionism...


How about "Cooperation Calls Game Theory into Question"?


"Data calls extremely simplistic model no one ever would have thought fully explained the domain into question"


I'm reminded of the 'cstross quip, "Libertarianism is like Leninism: a fascinating, internally consistent political theory with some good underlying points that, regrettably, makes prescriptions about how to run human society that can only work if we replace real messy human beings with frictionless spherical humanoids of uniform density."

EDIT: corrected the quote.


Right. In cases like this, you either a) have unmodeled dynamics or b) (much less likely) have found a superior strategy.

If b), try it out in the wild. If a), add assumptions that make the observed behavior optimal again, and see if those assumptions apply to the real world situation.


More like "Cooperation calls new game theory proposal into question but is compatible with the more widely accepted game theory proposals."


thank you.



Do these models take into account that it is sometimes beneficial for an individual to lose a game, and actually die in the process?

A howler monkey that warns others of approaching danger, but gets eaten in the process, may actually win genetically, if it saves the lives of its siblings, parents and offspring.
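
One standard way to formalize that intuition (my addition, not something the comment cites) is Hamilton's rule from kin selection: a self-sacrificing act is favoured when r*B > C, with r the genetic relatedness to the beneficiaries, B the benefit to them, and C the cost to the actor. A toy sketch with made-up numbers:

    # Hamilton's rule sketch (illustrative numbers only): an alarm call that
    # costs the caller its life can still be favoured if it saves enough
    # close relatives.

    def favoured(relatives, cost):
        # relatives: list of (relatedness r, offspring-equivalents saved b)
        inclusive_benefit = sum(r * b for r, b in relatives)
        return inclusive_benefit > cost

    # Assumed scenario: the call saves 3 siblings (r=0.5) and 2 offspring (r=0.5),
    # at the cost of the caller's own future reproduction (say 2 offspring-equivalents).
    print(favoured([(0.5, 3), (0.5, 2)], cost=2))  # True: 2.5 > 2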


So let's say we are playing a game of soccer.

One team is a single top pro player; the other team is 11 toddlers.

Who is going to win? Pretty sure it's the pro player. It's better to be one top player than 11 toddlers.

Now let's make it one top player versus 11 five-year-olds.

The top player will probably still win, and it's still better to be a top player than 11 five-year-olds.

Now let's take one top player and 11 fourteen-year-olds...

Game theory is never really about the game that is played but always about the players playing the game.



I would be highly interested in how this model incorporates appearances. In particular, if you can maintain the appearance of sharing heavily without actually sharing much, that can land you closer to the benefits of not sharing, without the downsides.


That is essentially what the research is about. The "extortion" algorithm seems like it is playing nice, but cheats as much as it can get away with.
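
For concreteness, here is a sketch (mine, with assumed standard payoffs of 5/3/1/0) of one of Press and Dyson's extortionate "zero-determinant" strategies for the iterated prisoner's dilemma. Against a fully cooperative opponent it still cooperates most of the time, yet by construction it claims three times the opponent's share of the surplus above the mutual-defection payoff.

    import random

    # Sketch of an "extortionate" memory-one strategy from Press & Dyson's
    # zero-determinant family (standard payoffs T=5, R=3, P=1, S=0).
    # The strategy p = (11/13, 1/2, 7/26, 0) is built so that
    # (my score - P) = 3 * (opponent score - P): it looks cooperative,
    # but takes three times the opponent's share of any gains above
    # mutual defection.

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    # Probability the extortioner cooperates, given last round's (mine, theirs).
    EXTORT = {("C", "C"): 11/13, ("C", "D"): 1/2,
              ("D", "C"): 7/26, ("D", "D"): 0.0}

    def play_vs_cooperator(rounds=200_000, seed=0):
        rng = random.Random(seed)
        last = ("C", "C")            # assume a cooperative first round
        my_total = their_total = 0
        for _ in range(rounds):
            me = "C" if rng.random() < EXTORT[last] else "D"
            them = "C"               # unconditional cooperator
            mine, theirs = PAYOFF[(me, them)]
            my_total += mine
            their_total += theirs
            last = (me, them)
        return my_total / rounds, their_total / rounds

    mine, theirs = play_vs_cooperator()
    print(round(mine, 2), round(theirs, 2))      # roughly 3.73 vs 1.91
    print(round((mine - 1) / (theirs - 1), 2))   # roughly 3.0: the extortion ratio

Against another extortioner (or a retaliatory strategy like tit-for-tat) both scores collapse toward the mutual-defection payoff; it only profits against opponents who keep accommodating it, which is roughly the sense in which it cheats as much as it can get away with.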


Hmm.... that makes sense. Though the naming is terrible. Extortion is a distinct thing from cheating. :(

edit: Well, I suppose saying it that way is terrible. It is cheating. It is not the same as all forms of covert cheating, though.


That's my life strategy, and I'm in very good company: politicians, bankers, the ultra-rich :)


Game Theory lost all credibility when it advocated nuking Russia pre-emptively. Besides, most game theory studies don't even consider the possibility of self-destructive action, let alone evaluate whether they're actually advocating such a thing.


All game theory results depend entirely on your utility functions. If you set:

(value of Russian civilian lives) = 0

(value of the world after nuking Russia) > (value of the world where Russia takes over) * (probability of Russia taking over)

Then game theory would suggest a first strike is the best solution.
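
A toy illustration of that point (my numbers, entirely made up): the same decision rule flips its recommendation depending only on how the outcomes are valued.

    # Toy expected-utility comparison (all numbers are invented to show that
    # the "optimal" action is driven entirely by the utility assignments).

    def best_action(value_after_first_strike, value_if_rival_takes_over,
                    value_of_status_quo, p_takeover):
        expected_no_strike = (p_takeover * value_if_rival_takes_over
                              + (1 - p_takeover) * value_of_status_quo)
        return "first strike" if value_after_first_strike > expected_no_strike else "don't strike"

    # Utility function A: the other side's civilian lives count for nothing,
    # so the post-strike world is valued fairly highly.
    print(best_action(value_after_first_strike=50,
                      value_if_rival_takes_over=-100,
                      value_of_status_quo=80, p_takeover=0.3))   # "first strike"

    # Utility function B: same probabilities, but the strike itself is
    # valued as a catastrophe.
    print(best_action(value_after_first_strike=-500,
                      value_if_rival_takes_over=-100,
                      value_of_status_quo=80, p_takeover=0.3))   # "don't strike"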


But most people took (value of the world after nuking Russia) to be quite low, because Russia would automatically launch a retaliation, and vice versa from Russia's perspective. With that utility function, game theory predicted the Mutually Assured Destruction standoff, which has so far turned out correct.


Did it, though? As far as I know, the whole MAD thing was based on precommitting to strike if struck, thereby negating the benefit of a first strike for both parties.
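
Roughly, the precommitment argument in toy form (illustrative payoffs only, not anyone's actual model): once retaliation is automatic, the expected payoff of striking first drops below the status quo, so neither side strikes.

    # Toy sketch of the precommitment argument (made-up payoffs).
    # Without a credible retaliation policy, a "successful" first strike
    # looks attractive; once both sides precommit to retaliate, striking
    # first means mutual destruction, so "don't strike" becomes the best reply.

    def payoff_of_striking(retaliation_precommitted):
        # Assumed numbers: a successful first strike is worth 50,
        # mutual destruction is worth -1000.
        return -1000 if retaliation_precommitted else 50

    PAYOFF_OF_NOT_STRIKING = 0   # tense but survivable status quo

    for precommit in (False, True):
        strike = payoff_of_striking(precommit)
        choice = "strike first" if strike > PAYOFF_OF_NOT_STRIKING else "don't strike"
        print(f"retaliation precommitted={precommit}: best move is to {choice}")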


Theories don't advocate things, only people do.


Whether a person's advocacy is consistent with the theory or not, however, is an important point. And as far as I know, Nash's claim that game theory supported a pre-emptive nuclear strike against the USSR was never claimed to be in any way flawed from a game-theoretic perspective.


No, it doesn't.




