Another approach is to explore the format through Apple's own tools for building dictionaries: the "Dictionary Development Kit" in Xcode's downloadable "Additional Tools" package, which includes documentation for the XML format and a bunch of scripts/binaries for building the bundle.
I wound up doing this a while ago for a similar toy project. After some poking around, it turned out that dictionary bundles are entirely supported by system APIs in CoreServices! The APIs are private, but Apple accidentally shipped a header file with documentation for them in the 10.7 SDK [1]. You can load a dictionary with `IDXCreateIndexObject()`, read through its indices with the search methods (and the convenient `kIDXSearchAllMatch`), and get pointers to its entry data with `IDXGetFieldDataPtrs()`.
It takes a bit of fiddling to figure out the structure (there are multiple indices for headwords, search keywords, cross-references, etc., and the API is a general-purpose trie library) and request the right fields, but those property lists in the bundle are there to help! (As the author of this article discovered, the entries are compressed and are preceded by a 4-byte length marker.)
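For the curious, here's a minimal sketch of unpacking one entry along those lines: read the 4-byte length marker, then decompress what follows. The little-endian byte order and the zlib codec are assumptions on my part for illustration, not confirmed details of the bundle format.

```python
# Hypothetical unpacking of one length-prefixed, compressed entry blob.
# Assumptions: 4-byte little-endian length, zlib-compressed payload.
import struct
import zlib

def read_entry(buf, offset=0):
    """Return (decompressed entry bytes, offset of the next entry)."""
    (length,) = struct.unpack_from("<I", buf, offset)  # 4-byte length marker
    start = offset + 4
    payload = buf[start:start + length]
    return zlib.decompress(payload), start + length
```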
I have memories of using the Dictionary Development Kit to create custom dictionaries (I remember creating one for medical jargon) about ten years ago. (At that time custom dictionaries were placed in ~/Library/Dictionaries, and system dictionaries in /System/Library/Dictionaries, not some obfuscated path like now.)
To find the Kit in question, simply Google "com.apple.TrieAccessMethod"; copies are easy to turn up online.
The definitive statement made by this article's headline isn't really supported by the evidence presented in the papers. Rather, the state of affairs seems to be that "loss aversion" has been the victim of incessant overgeneralisation. It's a very simple hypothesis about human behaviour that plays nicely into a lot of interesting (and therefore publishable) narratives. This has led people to blindly accept the general hypothesis of loss aversion without enough critical investigation of its manifestation. The authors don't really refute "loss aversion" (i.e. they don't present an alternative theory to explain the papers that purport to demonstrate "loss aversion"), but rather they refute the pop-psychology belief that it's a general principle of human behaviour.
That's a great summation. It seems as though there's confusion as to what constitutes loss aversion. IIRC, the original paper by Kahneman, Knetsch, and Thaler [0] talked about losing something you had. Meanwhile, the posted argument talks about whether someone is more or less likely to buy something if the price goes up or down. These are such different situations! The first is losing something you have, the second is deciding whether you want to trade some money for a thing.
"Loss aversion" as a cognitive bias is not just the desire to avert any loss.
If it really is a general cognitive bias, it will show up as a difference from expected statistics.
Imagine a held asset that has an even chance of going up or down. You'd expect to see about half of people sell it and half hold it. A cognitive bias would alter that ratio. If 75% of people sold it, and only 25% held it (despite even odds), you could say that there appears to be a bias at work.
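As a toy illustration of "a difference from expected statistics" (assuming scipy is available): how surprising would a 75/25 split be if the true choice were really a coin flip?

```python
# Hypothetical data: 75 of 100 holders sell an asset with even odds.
# Under an unbiased 50/50 model, how likely is a split this lopsided?
from scipy import stats

result = stats.binomtest(k=75, n=100, p=0.5)
print(result.pvalue)  # tiny (far below 0.05), so a systematic bias
                      # is a reasonable suspicion
```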
A great example of cognitive bias at work in the real world is the Monty Hall 3 door riddle. Most people get this wrong even though the math is not hard.
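If you want to check that math by brute force, here's a quick simulation (plain Python, just the standard rules of the puzzle):

```python
import random

def play(switch, trials=100_000):
    """Fraction of games won when always staying or always switching."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that hides a goat and isn't the player's pick.
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # ≈ 0.33
print(play(switch=True))   # ≈ 0.67
```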
But simply avoiding a predicted loss is not "loss aversion" as a cognitive bias. It's not even a bias at all; it's rational to avoid loss.
Are you reading the same article as me? As they mentioned:
> Loss aversion has been represented as a fundamental principle. Loss aversion is not understood as the idea that losses can or sometimes do loom larger than gains, but that losses inherently, perhaps inescapably, outweigh gains. For example, Kahneman, Knetsch, and Thaler (1990, p. 1326) describe loss aversion as "the generalization that losses are weighted substantially more than objectively commensurate gains." In a similar fashion, other researchers do not qualify the idea of loss aversion; Tversky and Kahneman (1986, p. S255) state that "the response to losses is more extreme than the response to gains;" and Kahneman and Tversky (1984, p. 342) state "the value function is … considerably steeper for losses than for gains."
The authors are refuting "loss aversion" as Kahneman et al. describe it.
> (i.e. they don't present an alternative theory to explain the papers that purport to demonstrate "loss aversion")
Why do they need to? The paper isn't about trying to explain when losses or gains are most impactful; the paper is about whether or not there's a clear tendency.
Big words don't count as evidence. The article brings absolutely no new information to the table. Heck, I think this article is clickbait.
The basic principle behind loss aversion is simple.
What's the primary motive behind an action: running away, or running towards? Prevention, or gain.
For instance: yesterday, an article about American child care was on HN.
American parents are acting primarily to PREVENT injury, discomfort or death of their children. That's action motivated by loss aversion.
Japanese and Maya parents still want safety for their children, but their kids' independence is a primary motivator for their actions. In other words, gain.
I think the author is confused about something. I want more money, and I don't want to lose what I have. Both feelings aren't mutually exclusive. However, at the point of decision I could be swayed more by greed or by fear.
If a site, seller or investment is shady, fear wins. I'll protect myself. If not, greed or gain could win in that instance.
I could be speeding down towards a party one moment, and a near miss could make me reconsider and slow down. Both modes occurred on the same journey. No wordplay by some clickbaity author would change that.
To me, loss aversion was amply illustrated by a "King of Cars" episode, a reality show at a car dealership. The manager would hand out $100 bills to the salesmen in the morning, with the proviso that if they sold a car that day, they got to keep the C note on top of their commission.
He'd found they worked much harder to retain the note once it was in their hands, than if he offered a bonus of $100 at the end of the day.
That's a good example, but there are many factors at play here. For instance:
- Loss aversion
- Trust (i.e. the manager believes in you): When we hear the word "bonus" we often think "that's something that happens 10% of the days". However, when the manager gives you the money at the beginning of the day they're saying "I think you can do this today. I might as well give it to you already." The manager very clearly shows that they believe in you, and they probably know what they're doing.
- The prize is visible: We know from many examples that humans become more motivated when they can physically see their prize. One part of this trick is that you have the note in your pocket. Maybe you even take it out a few times during the day.
There are a few ways to test which factor is most important. For instance, you would expect the trust factor to fade over time, because you'll realize that the manager gives you the note regardless of their faith that you can make it (there's nothing special about "this day" or "this employee"). You could also replace the $100 note with a more neutral coupon that says "$100 bonus". This makes the prize less visible, but we should still value it as $100. Or maybe there's a checkbox on a sheet inside the office which says "Tick off if bonus not reached". If the effect goes away, then the visibility factor is stronger than the loss-aversion factor.
This is my main beef with the pop culture around "loss aversion" (and other psychological terms): There are so many interesting things to discuss around it, but we so badly want to combine everything into one simple buzzword.
> The article brings absolutely no new information to the table.
Did you read the paper? It's not a paper that "brings new information to the table"; it's a paper which presents recent experiments and tries to show that there is little scientific evidence of loss aversion.
> The basic principle behind loss aversion is simple.
Huh? I don't understand what you're saying. You're saying that "loss aversion" is simple, but then you give examples of losses not being universally more impactful than gains? What you're describing here is exactly the point of the authors: both modes are important.
> … swayed more by greed or by fear.
And you should be aware that "loss aversion" is very careful not to talk about the psychological process behind it. Loss aversion is not about greed, fear or any feeling and/or instinct. Loss aversion is a measurable effect. None of the papers that claim that loss aversion is a general principle claims that "greed is stronger than love" or anything similar to that. In fact, they are very "chicken" and just shrug it away.
> Did you read the paper? … Huh? … you should be aware that …
I have no skin in this one, but I would like to call this out: remarks like these make an argument combative. They push people up a tree and make it hard to focus on the facts. Imagine user vezycash actually was swayed by your argument; how easy would it be for them to say, hey, you're right? Pretty hard after all those comments, because it ties their pride to their viewpoints and makes changing their point of view humiliating rather than enlightening. The conversation is now a battle, and admitting fault is losing face.
I’m calling this out now but by no means is it specific to you; it happens all the time. My request to anyone here is: please leave all those phrases out. “You should...”, “did you even...” etc. The argument works just as well without them. It makes it much easier for someone to say, hey, I guess you’re right! And isn’t that what we all want, in the end? ;)
Yeah, I see how this turns an argument combative and that wasn't my intention. I don't think there's anything wrong with vezycash's point of view, in fact I completely agree with most/all of his points :-)
There is something to be said about "Did you read the paper?" though. We have here a story where the author has published a rather long paper (59 pages) and done a substantial amount of research (citing over 80 other published papers). I don't expect everyone to read all of that, but I wish people were more upfront about whether they're talking generally about the topic or discussing the actual story.
Like, I honestly wonder "Did you read the paper?" not because I expect everyone to read the paper, but because it means we can have a more constructive discussion. If you haven't read the paper and are confused about what the author means, then I can try to find quotations that better explain the author's opinion. Or maybe we can discuss the general topic (ignoring the story).
Agreed, especially when their point is that something is self-evident, in the comments section of an article that specifically goes to great lengths to argue otherwise.
In this case, however, the person asked if the poster had read the underlying scientific paper - which is not the same as the linked popular press article. It's a legit question to help frame the discussion, though with tone issues that suggest a gentler way of asking would be helpful.
As complex as gravity and electricity are, the underlying principles are simple. Same with loss aversion. Money isn't the only or biggest motivator.
Take a good common example of loss aversion - admitting being wrong. Why do people find it difficult to admit that they are wrong?
What's at stake here? Reputation, respect, pride, even money.
100 scientists vs Einstein is a classic example of this. Pointless wars have been fought because someone wouldn't admit being wrong. The Iraq and Vietnam wars are good examples.
Your reaction to this issue is another.
Limiting loss aversion to just economic behavior betrays lack of understanding of the topic.
The arguments you made don't challenge what the article says one bit.
The loss aversion hypothesis is the hypothesis that given the choice between either of the following two scenarios:
- Having an object x and then risk losing it.
- Being offered an object x but risk not getting it.
people are more "motivated" by the first than by the second. The claim moreover is that:
- This is a universal motivator, which means it must explain "economic behavior" (which you mention) as well as anything else. The fact that -- as the article says -- people prefer *keeping* a stock which is just as likely to lose in value as to gain, is a *perfect* example to illustrate that it is *not* a universal motivator.
- That it is not rational. There are cases where losing something, like for instance money, is *truly* more damaging than gaining the equivalent amount of money. For example, if I lost $100,000 it would be much more devastating than if I gained $100,000 -- in this scenario, it's not a psychological *bias* but in fact a completely rational belief. This example is in the article. You can not use examples like this one to argue in favour of "loss aversion", because there would be no evidence of an irrational bias.
Also, the fact that you keep applying the "loss aversion" hypothesis as broadly as possible, to things like wars and arguments, suggests you're assuming it applies everywhere. What about the stock example, which is mentioned in the article? That ought to prove you wrong, no?
Loss aversion specifically refers to "people's tendency to prefer avoiding losses to acquiring equivalent gains". It's not just the general idea that people are averse to losses, it's about how people value gains and losses differently.
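To make "value gains and losses differently" concrete, here's the textbook prospect-theory value function as a sketch (the parameter values are the commonly cited Tversky & Kahneman 1992 estimates, quoted purely for illustration; this isn't code from the posted paper):

```python
# Prospect-theory value function: losses are scaled by lambda > 1,
# which is what "losses loom larger than gains" formalises.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

print(value(100))   # ≈ 57.5   subjective value of gaining $100
print(value(-100))  # ≈ -129.5 losing $100 "hurts" about 2.25x as much
```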
Thanks for the phrase "incessant overgeneralization" -- I didn't even realize I was looking for that. It seems that this is something the social sciences are inherently at risk of, given how close the topics are to our everyday lives.
In the hard sciences, for example, the problems that people work on are often less complex and more removed from everyday human life. I think this leads to much less over-generalization.
Over-generalization is common beyond the social sciences as well. It is often found in the basis given for false dichotomies, which are not uncommon in discussions of how to write software. It is also apparent in many of the claims made by clickbait titles.
Their argument reminds me of climate change “skeptics” arguments that there is some sort of institutional bias towards papers that support climate change as a theory, which therefore is why all the papers and all the evidence support the climate change theory.
Is there a term for this kind of logical fallacy? It's almost an ad hominem argument against an entire group.
It's not a logical fallacy, though. There is just no evidence for it, and we understand intuitively how far-fetched it is. But it's certainly _possible_ that institutional bias explains those results. It just happens not to be the case.
If we're looking for a general logical fallacy, it might be something like, "Using the mere fact of theoretical possibility as a way to justify unlikely beliefs, or as a counter-argument to strong evidence." I'd love to know if there's a term for that. It comes up everywhere.
> And people are not particularly likely to sell a stock they believe has even odds of going up or down in price (in fact, in one study I performed, over 80 percent of participants said they would hold on to it).
That's the problem with any of these social theories. Losing what? Gaining what? We're lacking clear definitions of terms. I know it's a trope, but it really is just so unscientific.
Yeah, it would be terrible if you read through 59 pages of well-cited, well-explained text and tried to understand what the author is saying. Might as well judge everything from one sentence in an online article written for popular audiences.
> He refuted himself right there, in the article.
That example is an instance of Status Quo Bias that is completely orthogonal to losses/gains: people prefer inactivity to activity. If you construct an experiment where doing nothing means avoiding the "loss" (e.g. keeping an item you already have) and doing something means pursuing the "gain" (e.g. obtaining a new item), then you would expect people to prefer the first choice either way. For loss aversion to be a general principle you need to decouple it from the status quo bias.
Also, the title doesn't claim the evidence refutes it, just that it doesn't support it. Recalls the Carl Sagan quote, "Absence of evidence isn't evidence of absence."
"Social science" isn't a singular amorphous blob, and these methods aren't uniformly accepted.
Online surveys are certainly becoming more popular as they are significantly cheaper to conduct than the alternatives, and yield publishable results that garner media attention. There are peer reviewers that will be sympathetic to these issues, regardless of the method's robustness.
However, there are others that would say this reeks of dredging (p-hacking) in a very murky pool of data. Their "scepticism" rarely makes the New York Times (or a bestselling book), though.
This is a very nice review, but in practice I've found the K-S test to be much less useful than it initially appears:
1. Failing to reject the null hypothesis is not the same as accepting the null hypothesis. That is, concluding "these data are from some distribution X" is spurious.
2. There's a 'sweet-spot' for the amount of data. If you have too few samples, it's very easy to fail to reject; and if you have too many, it's very easy to reject (the chart at the bottom of the "Two Sample Test" section illustrates this; see also the quick simulation after this list).
3. The question "are these data from some distribution X?" is usually too strong. It's usually more informative to ask "can these data be modelled with some distribution X?"
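To make point 2 concrete, a quick sketch (assuming numpy and scipy; the two normal distributions are arbitrary and differ only slightly):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

for n in (50, 500, 50_000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.05, 1.0, n)  # a genuinely (but barely) different distribution
    stat, p = stats.ks_2samp(a, b)
    print(f"n={n:>6}  KS statistic={stat:.4f}  p-value={p:.4f}")

# Typical outcome: at n=50 and n=500 we usually fail to reject the null
# even though the distributions really differ; at n=50,000 we almost
# always reject it, even though the difference is practically negligible.
```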
Agree with you on all three, but specifically for 1., can you think of pathological pairs of distinct distributions that the test would often fail to reject?
The article says it's poor at detecting differences in the tails and much better at differences in the medians. So that's where I'd start to find problems.
Playing with the tails makes all kinds of mistakes possible, but that seems like a criticism that would apply to any attempt to identify a distribution based on a sample.
One kilobyte per tick seems quite generous. All of the EURUSD tick data for 2013 from histdata.com (the source mentioned in the article) is only 515MB (~20GB for 40 years, ~824GB for 40 pairs).
I would say a "tick" comprises the timestamp (stored as a long int) and ten levels of the order book (bid price, ask price, bid size, ask size) each stored as a double-precision float or a long int, so that's
(1 + 4 * 10) * 8 = 328 bytes
per tick, so 1KB isn't far off. Obviously not every level changes on every tick, so there are opportunities for compression that can be significant.
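If you want to sanity-check that arithmetic (the field layout and packing here are my assumptions for illustration, not histdata.com's actual format):

```python
import struct

LEVELS = 10
# one int64 timestamp + 10 x (bid px, ask px, bid size, ask size) as float64
tick_format = "<q" + "dddd" * LEVELS
print(struct.calcsize(tick_format))  # 328 bytes per tick
```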
Note that the "tick data" from histdata.com gives you prices sampled every 1 second (so not every tick) for the top level of the order book, and doesn't give you any size information at all.
New Objective-C features are introduced with new versions of the compiler (and Xcode tools/SDK), but each feature may be deployed on different versions of iOS/OS X (depending on how much support they need from the underlying runtime/operating system). This table breaks it down: https://developer.apple.com/Library/mac/releasenotes/Objecti...
It looks like this is a re-invention of agvtool(1)---which ships with Xcode, will synchronise updates across all versioned targets in a project, and has options for differentiating marketing vs. development numbers and generating source files.
The node packing format he describes sounds a bit like a LOUDS tree [1], which stores the structure of a tree as a bit array (each node as a '1' for each child, plus a '0'---for a total of 2n-1 bits for a tree of n nodes), and the data in a separate packed array. It can't represent the node-deduplication (nodes with multiple parents), but I think it gives comparable compression: for the full word list of 3,213,156 nodes, the tree structure is 6,426,311 bits (0.76MB), plus 3,213,156 bytes of character data---for 3.83MB total.
The downside is that traversing the tree is a series of linear bit-counting operations---which can be painfully slow without a bit of pre-caching.
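For illustration, a toy LOUDS encoder along those lines (my own sketch, not the article's format): each node, visited in BFS order, emits one '1' per child followed by a closing '0', and the node labels go into a separate array in the same order.

```python
from collections import deque

def louds_encode(children, labels, root=0):
    """children: node -> list of child nodes; labels: node -> label."""
    bits, data = [], []
    queue = deque([root])
    while queue:
        node = queue.popleft()
        data.append(labels[node])
        for child in children.get(node, []):
            bits.append(1)          # one '1' per child
            queue.append(child)
        bits.append(0)              # close this node's child list
    return bits, data

# Tiny trie for "to"/"te": root -> 't' -> {'o', 'e'}
children = {0: [1], 1: [2, 3]}
labels = {0: "", 1: "t", 2: "o", 3: "e"}
bits, data = louds_encode(children, labels)
print(bits)  # [1, 0, 1, 1, 0, 0, 0] -> 2n-1 = 7 bits for n = 4 nodes
print(data)  # ['', 't', 'o', 'e']
```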
Something they don't mention until after you've signed up: the micro plan only lasts for two years. I assume any private repositories will become locked if you don't pay for a subscription after that (as with regular accounts).
In comparison with BitBucket (not to advocate, but they offer a comparable service): the restrictions they waive for academic accounts are waived permanently.
[1] https://github.com/phracker/MacOSX-SDKs/blob/master/MacOSX10...