Hacker News

I would absolutely love to see a double-blind randomized experiment with controls for YC applications. In one batch, randomly and secretly assign CONTROL or EXPERIMENT to each application (in whatever proportion you are comfortable with). Of the CONTROL batch, do regular reviews. For the EXPERIMENT batch, randomly accept the same percentage of groups as the control. Then have people outside of the reviewers try to label each group correctly at different stages and see how they do. These people could be: YC alums, YC staff not on the interview loop, VCs making investments, outside experienced CEOs, successful startup founders.

For full double-blindness you could even do interviews with the EXPERIMENT group, then use a program to automatically either use the interview result or not. This might be aggravating for reviewers, though, seeing their work wasted by random choices.
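The assignment and acceptance steps described above could be sketched roughly like this (a toy illustration only; the 50/50 split, the function names, and the acceptance mechanics are my assumptions, not anything YC actually runs):

```python
import random

def assign_arms(applications, experiment_fraction=0.5, seed=None):
    """Secretly tag each application CONTROL or EXPERIMENT.
    The 50/50 split is a placeholder proportion."""
    rng = random.Random(seed)
    return {app: ("EXPERIMENT" if rng.random() < experiment_fraction
                  else "CONTROL")
            for app in applications}

def select_batch(arms, reviewer_accepts, accept_rate, seed=None):
    """CONTROL arm: keep the reviewers' picks.
    EXPERIMENT arm: accept the same fraction, chosen uniformly at random."""
    rng = random.Random(seed)
    control = [a for a, arm in arms.items() if arm == "CONTROL"]
    experiment = [a for a, arm in arms.items() if arm == "EXPERIMENT"]
    accepted = [a for a in control if a in reviewer_accepts]
    accepted += rng.sample(experiment, round(accept_rate * len(experiment)))
    return accepted
```

Outside raters (alums, staff off the interview loop, VCs) would then try to label each accepted group's arm at various stages, without knowing the assignment.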



How can you really do any experiments on this kind of shit, when the real winning investments sit at the extreme head of a Zipf curve?

You get one or two huge winners out of hundreds, and they are so huge that they are bigger than all the others combined. The investors care most about getting these rare winners, and if YC does any kind of controlled study on who becomes these winners, I don't believe there will be any statistical significance to speak of.
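A toy simulation of that point: under a Pareto (power-law) outcome model, a batch's total return is dominated by one or two outliers, which is exactly what makes batch-level comparisons so noisy. All the numbers here (the tail parameter, the batch sizes) are invented for illustration:

```python
import random

def batch_outcomes(n_startups, tail=1.16, seed=None):
    """Simulate one batch's returns under a heavy-tailed (Pareto) model.
    A tail parameter near 1 means the top winner tends to dwarf the rest."""
    rng = random.Random(seed)
    return [rng.paretovariate(tail) for _ in range(n_startups)]

def winner_dominates(outcomes):
    """True when the single best outcome exceeds all the others combined."""
    top = max(outcomes)
    return top > sum(outcomes) - top

# Across many simulated batches, count how often the biggest winner
# beats the rest of the batch put together.
dominant = sum(winner_dominates(batch_outcomes(100, seed=s)) for s in range(200))
```

With returns this skewed, the variance of a batch total is enormous, so distinguishing a modest selection edge from pure luck would take far more batches than any real study could run.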


If there is no statistically significant difference, then for finding the rare winners, picking randomly works as well as doing interviews.

Since the winners are what they're after, they could omit the interview process.


That's not what "no statistical significance" means.


Sorry, statistically speaking, the difference in results between interviewing and not interviewing is insignificant.

I have a feeling that eventually there will be a Vanguard of startups that will massively outperform traditional VCs/incubators due to reduced management fees, similar to index funds vs. regular funds.


On the one hand, it's probably true that experts overweight their own ability to pick winners. My guess is that PG is overestimating the ability of YC staff to identify good patterns, simply because it's easy to draw conclusions from a few data points but hard to draw statistically significant conclusions that hold up out of sample.

However, I doubt that a startup index fund would outperform the best VCs and incubators, simply because good deal flow is key, whereas in publicly traded markets everyone has access to all deals. Having said that, I have no doubt that a VC index fund would outperform many of the VCs that are not that good.


There's also the question of how you define the index fund. The S&P is pretty unambiguous. How do you define which startups are in the index? By the time they're definable as a real business, they're no longer early-stage startups.


So pg's years of experience getting better-than-random results are irrelevant, and you'll only believe him if he runs a long experiment (one that would cost him lots of money if it's not just luck) to prove something he already believes?


It's unclear whether he's getting better-than-random results. You're assuming the conclusion.

It's well-known that the most successful startup in a group of startups is likely to earn more in profit than all the rest combined. We're here on HN talking about PG because YC has been the most successful of all incubators. However, there's always going to be one incubator that significantly outperforms all the others as a result of the distribution of startup success.

I don't know enough about how well PG's investments have done to make informed judgments, but only a very select few people are likely to have that info. I suspect those people have also done their due diligence to figure out if YC is a value-add compared to other incubators (it should be possible to do this without a convoluted double blind experiment), but anyone who's not them really can't say for sure that that's the case.


Generally, and especially with science, the way things work is that you prove something before believing it.


This is oversimplified. In many cases, it is the conjunction of a reasonably strong prior belief and the absence of preexisting "proof" that motivates a scientist (or mathematician) to try to prove something in the first place.

The key is to perform proper Bayesian updates in the face of evidence in either direction; if you do, as long as your prior wasn't totally insane it doesn't really matter where you started.
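As a minimal sketch of that point (the priors and likelihoods below are arbitrary illustration numbers): two observers who start with very different priors but update on the same evidence via Bayes' rule end up close together.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from Bayes' rule, for a single piece of evidence E."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# A skeptic and a believer observe the same evidence five times.
skeptic, believer = 0.1, 0.9
for _ in range(5):
    skeptic = bayes_update(skeptic, 0.8, 0.3)   # evidence favours H
    believer = bayes_update(believer, 0.8, 0.3)
# Their posteriors converge: where you started stops mattering much.
```

Each update multiplies the prior odds by the same likelihood ratio (0.8/0.3), so any sane starting point gets swamped by enough evidence.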


Thankfully for us, business isn't science.


Launching a successful startup is luck, BUT selecting entrepreneurs/startups who have luck is not random. You can definitely find some metrics to evaluate the current luck of a startup and expect that the founder has enough skill to keep maximizing luck in the future.


It'd be unfair to great startups in the experiment batch -- not that it's pg's job to make life fair for people, but it'd put a big dent in YC's reputation. Why spend time flying out there if there's a chance your interview doesn't even matter?


You could still do a pretty similar experiment. For instance, startups 1-50 get into YC as normal. Startups 51-100, together with 50 randomly selected startups, get grouped together and tagged as "the bubble group", meaning they were on the bubble of getting into YC. Pretty high honors. However, they don't get any access, consultation, etc. from YC itself. Then after a few months you unblind which were highly selected and which were random, to see who does better.


You could also accept groups with good interviews in the experiment batch, but not include them in the experiment.


But then they're effectively excluding the best-interviewing groups from the random experiment.


Can't you pick the best interviews, then run your experiment with random groups, as long as you mix in the data from the interview groups in such a way as to build a representative sample?


We would only be able to conclude that it's unfair to "great startups" if the experiment showed that random acceptances performed worse than the interview process.


If it were just luck then the companies rejected by YC should statistically perform as well as those selected for interviews which should perform as well as those offered spots in a given batch.


Sorry, not an expert and I have never been an entrepreneur so forgive me if I am misunderstanding, but doesn't YC give them money? In order for this statement to be true either (1) having startup seed money has no bearing on success or (2) YC gives the same money to people it rejects as to people it accepts.


The amount of capital provided by YC is trivial...a rounding error relative to the value of a successful company. But more relevant is that YC almost certainly has some meaningful data on the companies it rejects.

So PG's claim could very well be supported by proprietary information.


> The amount of capital provided by YC is trivial.

Never been accepted to YC, but I gather that the value of access to YC's demo day and a strong network of investors and alumni is far from trivial.


Trivial once the company is successful, but isn't the purpose of the money to ensure that the founders don't have to give up and get "proper" jobs before the company succeeds?


But then you're acknowledging that startup success is not solely dependent on luck...


I think you're interpreting the idea of luck incorrectly. Luck in this context is simply the phenomenon of being one of the startups with high success. To say that success depends solely on luck means that the success of any startup is simply rand(). That doesn't mean all startups are equally successful.


Giving selected startups money confounds two factors: the selection criteria and the extra funding. This means that any differences in the success rate can't simply be assigned to the selectors choosing wisely.


Unless being a part of YC adds value, which I suspect it does. It's impossible to really test this unless YC provided the same level of support and resources to other companies.


Depends on what "luck" entails. Success is measured by something; for the sake of argument, let's say acquisition price. But there have to be proximal factors; to be a bit reductive, no one is arguing that large companies pick randomly from the world of startups and pay them millions of dollars. Perhaps the most immediate proximal factor is number of users, or revenue, or... whatever. You wouldn't say, "if it were just luck, companies would perform just as well whether they had users or not".

So the question with luck is how far down the causal ladder you start considering things inputs (i.e. things which may or may not influence the outcome) instead of outputs (i.e. things which might be the result of those inputs, or of luck). So with YC: is getting or not getting into YC an input variable that we could check against success, or is getting a YC spot one of those proximal factors, with the luck just in getting that, at least in part?

So the "is it pure luck?" question isn't very well defined, but the GP's argument might still be useful in answering it. Moreover--larger questions of luck aside--it would be helpful in determining whether the YC selection process is actually useful. But probably only if they didn't tell anyone about it.



