Update: Matthew (the other cofounder) and I got Guesstimate to a stage we were happy with. After a good amount of work it seemed like several customers were pretty happy with it, but there weren't many obvious ways of making a ton more money on it, and we had worked through most of the requested/obvious improvements. We're keeping it running, but it's not getting much more active development at this time.
Note that it's all open source, so if you want to host it in-house you're encouraged to do so. I'm also happy to answer questions about it or help with specific modeling concerns.
Right now I'm working at the Future of Humanity Institute on some other tools I think will complement Guesstimate well. There was a certain point where it seemed like many of the next main features would make more sense in separate apps. Hopefully, I'll be able to announce one of these soon.
Are you able to apply global correlations to all the variates?
One of the triggers for the financial crisis in '08 was that the Monte Carlo pricers assumed the various risks were much less correlated than they actually were.
For example, they largely assumed that it was unlikely for many mortgages or underlying MBS securities to simultaneously default (low correlation). This is how many AAA rated CDO securities ended up trading at 50%+ discounts.
IMHO, any multivariate Monte Carlo analysis that doesn't show your sensitivity to correlation is essentially useless, since your answers may change completely.
In the second example model (https://www.getguesstimate.com/models/316), Fermi estimation for startups, you would expect many of the inputs (deals at Series A, B, C, amount raised per deal) in real life to be highly correlated with each other since they all depend on 'how well is VC in general doing right now?'
The final estimate of 'Capital Lost in Failed Cos from VC' has a range of 22B to 39B, which seems way too low. The amount of VC money lost during a crisis (like in '01) can easily be an order of magnitude more.
I'd definitely agree that correlations can be a really big deal, especially in very large models like that one.
Guesstimate doesn't currently allow for correlations as you're probably thinking of them. However, if two nodes are both functions of a third base node, then they will both be correlated with each other. You can use this to make somewhat hacky correlations in cases where there isn't a straightforward causal relationship.
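For illustration, here's a minimal Python sketch of that trick (my own example, not Guesstimate syntax, and the node names are made up): two nodes that each depend on a shared 'VC climate' node end up correlated with each other.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    # Shared base node, e.g. "how well is VC in general doing right now?"
    vc_climate = rng.normal(1.0, 0.3, n)

    # Two downstream nodes that are each a function of the shared node
    deals_series_a = vc_climate * rng.normal(100, 10, n)
    raised_per_deal = vc_climate * rng.normal(5, 1, n)

    # They inherit a strong positive correlation through the shared parent
    print(np.corrcoef(deals_series_a, raised_per_deal)[0, 1])  # roughly 0.7-0.8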
Implementing non-causal correlations in an interface like this is definitely a significant challenge. It could introduce essentially another layer to the currently 2-dimensional grid. It's probably the feature I'd most like to add, but so far the cost has been too high.
I think Guesstimate is really ideal for smaller models, or for the prototyping of larger models. However, if you are making multi-million dollar decisions with hundreds of variables and correlations, I suggest more heavyweight tools (either enterprise Excel plugins or probabilistic programming).
Thanks for explaining your thought process. I read your other replies and agree that many decisions are being made without any formal probabilistic model at all. There's a lot of value in sitting down and working out how things might be related to each other.
> where there isn't a straightforward causal relationship
One way to interpret a global pairwise correlation is simply that the person building the model is being systematically biased in one direction—either being too pessimistic or optimistic. This is a 'non-causal' relationship but often the biggest contributor to variance between the model and the real world.
Philosophically, this is a bit like the difference between 538's modeling approach and Princeton Election Consortium's for the 2016 election—the former gave Hillary a 2/3 chance of winning, while the latter ascribed a ~99% chance.
The risk of leaving modeling error out is that you'll end up with much more confidence than is called for—it feels very different to come up with a point estimate (I'll save $10k this year) vs. a tight range (I'll save 9k-11k this year), if the true range is much wider.
In the former case you know your point estimate may be very far off, but in the latter you may be tempted to rely on an estimate of variance that is too low.
> It could introduce essentially another layer to the currently 2-dimensional grid
You could probably get away with doing almost all of this automatically for the user, as long as they decide what the 'primary' output is (rough sketch after the list):
- For every input, calculate whether it's positively or negatively correlated with the output
- Apply a global rank correlation to all the inputs with all the standard techniques, flipping the signs found above as appropriate
- Report what the output range looks like with a significant positive correlation (usually the negative correlation case isn't as interesting)
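A rough Python sketch of that workflow, purely as an illustration: a made-up three-input model, a global rank correlation imposed via a Gaussian copula (one standard technique), with the sign flipped for the input that's negatively related to the output, and the 90% output interval reported with and without the correlation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 20_000

    def model(a, b, c):
        # Hypothetical model: output rises with a and b, falls with c
        return a * b / c

    def simulate(rho):
        # Global pairwise rank correlation rho between the inputs,
        # with c's sign flipped since it's negatively related to the output
        corr = np.array([[1.0,  rho, -rho],
                         [rho,  1.0, -rho],
                         [-rho, -rho, 1.0]])
        z = rng.multivariate_normal(np.zeros(3), corr, size=n)
        u = stats.norm.cdf(z)                      # correlated uniforms
        a = stats.lognorm(s=0.5, scale=100).ppf(u[:, 0])
        b = stats.lognorm(s=0.5, scale=5).ppf(u[:, 1])
        c = stats.lognorm(s=0.3, scale=2).ppf(u[:, 2])
        return model(a, b, c)

    for rho in (0.0, 0.6):
        y = simulate(rho)
        lo, hi = np.percentile(y, [5, 95])
        print(f"rho={rho}: 90% interval is roughly {lo:.0f} to {hi:.0f}")

At rho=0.6 the interval comes out noticeably wider than at rho=0, which is exactly the sensitivity you'd want to surface.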
You can arbitrarily (rank-)correlate any variables of any distribution using copulas as an intermediary.
So basically, draw from a correlated multivariate standard normal with the intended correlation. Transform the standard normal draws into quantiles via the standard normal CDF (this works because each marginal draw is standard normal). Now you have X and Y quantiles for your target distributions and you can draw from them.
The distributional transformations will slightly attenuate the correlation, and the choppier the distributions the more attenuation you'll get. Additionally, while you can do this for 3+ variables at once, there are constraints on the possible sigma matrix describing the n-dimensional correlations (it has to be positive semi-definite).
If the variables have no fixed distribution you can use the eCDF of the actual data, so it's even possible to import e.g. population and income data and produce a permutation that gives you the correlation desired.
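A minimal sketch of the empirical-data version (the 'population' and 'income' arrays below are placeholders I generated, not real data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Placeholder 'observed' samples with arbitrary marginal shapes
    population = rng.pareto(2.0, size=10_000) * 1e4
    income = np.exp(rng.normal(10, 0.8, size=10_000))

    # Correlated standard normals -> uniforms via the normal CDF ->
    # values via each empirical quantile function (inverse eCDF)
    rho = 0.7
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)
    u = stats.norm.cdf(z)
    pop_draw = np.quantile(population, u[:, 0])
    inc_draw = np.quantile(income, u[:, 1])

    # The rank correlation comes out close to rho, slightly attenuated
    r, _ = stats.spearmanr(pop_draw, inc_draw)
    print(r)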
I agree that it is fairly difficult to do this if you have arbitrary DGPs with complex interdependencies in them--if you correlate on variables A and B, then it is difficult to guarantee the observed correlation between f(A) and g(B) and vice versa--but still you can provide a lot of utility with the copula method.
In my experience, many consequential business decisions aren't even made with probability distributions, let alone probabilistic models with realistic correlations. I would generally encourage people who are comfortable with more advanced probabilistic systems to use them.
You can choose from a few distributions (normal, lognormal, uniform) in the main editor, or you can type many others using the function editor. The sidebar describes all of them.
I’ve used the product to “guesstimate” a few things like quality of life with a higher paying job with longer commute (not worth it!) and starting a business. Love how intuitive and clean the UI is and how it puts probability estimation at my fingertips, in simple, human language.
Great work; this looks awesome.
I am wondering what products would look like, if hardware engineers applied this to the modeling of future products.
At my startup, valispace.com, we currently only allow for simple propagation of worst-case values (Gaussian distributions or worst-case stacking), but I think that, especially for early design phases, this would be of huge help in foreseeing problems in complex projects early on.
Do you know of anyone using guesstimate for hardware engineering purposes?
When this was originally on HN, I was impressed and signed up as a paying subscriber. Over time, I noticed development had stopped and the tiny things that I used to work around started to annoy me more. It's still a neat platform but I cancelled my subscription last year.
On the question of "what are other ways of doing MC analysis", there are two approaches.
The first is to use Excel apps like Oracle Crystal Ball or @Risk. These are aimed at business analysts. They're pretty expensive, but also quite powerful.
The other option is to use probabilistic programming languages. Stan and PyMC3 are probably the best now, but hopefully some others will become much better in the next few years.
That said, this is a pretty small space. The main "business competitor" is probably people just using Google Sheets or Excel without distributions to make models.
Maybe it's just the short video and the FAQ, but I found it particularly difficult to find information about the distributions involved and how to choose between them.
I imagine there are a bunch of cases where the defaults would not work, like when you're trying to do error propagation (all normal distributions) or to compute interval arithmetic.
Is it the case that if you input a range which spans multiple orders of magnitude then you get a lognormal rather than a normal?
I might not be exactly the target audience, but I would appreciate a more in-depth explanation of the math and heuristics involved.
Generally, we recommend lognormal distributions for estimated parameters that can't be negative. This works when you span multiple orders of magnitude, though it's possible you may want an even more skewed distribution (which is unsupported).
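Roughly the kind of fit involved, as a small sketch: treating an entered range like '5 to 500' as a 90% interval of a lognormal (a simplification for illustration, not necessarily the exact internal code):

    import numpy as np
    from scipy import stats

    low, high = 5.0, 500.0   # entered range spanning two orders of magnitude

    # Treat [low, high] as the central 90% interval of a lognormal, so
    # log(low) and log(high) are the 5th/95th percentiles of a normal.
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)   # 1.645 ~ z for the 95th pct

    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    print(dist.ppf([0.05, 0.5, 0.95]))   # roughly [5, 50, 500]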
I may be able to make a much longer video introduction sometime soon.
I saw this a couple of years ago, when it was just a project. Now that there's a price, how did you guys decide on a price? How did you find your first customers? For a broadly applicable tool, how did you know where to start looking?
We initially had a lot of uncertainty on how to price it but wanted to experiment with more users rather than fewer, with the premise that if it were very successful we could scale up.
I think if I were to start again or spend much time restructuring it, I'd probably focus a lot more on enterprise customers. That would be quite a bit of work though, so I don't have intentions of doing that soon.
Does this permit Bayesian inference? It looks like graphical probabilistic programming (hooking up various distributions and performing inference), except the key missing component is the ability to observe values for any given distribution beyond the prior.
I'm developing a similar open-source app for statistical modeling and inference in the browser: https://statsim.com. You can create probabilistic models and then infer their parameters using algorithms such as MCMC or Hamiltonian Monte Carlo. The app is still in beta but it might be useful. Some models: https://github.com/statsim/models
I love that it was a no-BS signup and start using. Super clean and easy. It would be great to be able to show data on GIS as well - effectively showing the outcomes as geographic representations. I'll see if the data I was looking to work with today will work with this tool meaningfully.
I've used Palisade @Risk quite a bit, but for my use case, most of the time I feel like I'm taking a Lamborghini to the corner store. This is perfect for someone like me who is more of a "casual" estimator, modelling things with probability.
God dammit. This sort of thing pisses me off. Here I am, on vacation, waiting for my family to wake up. What better way to spend my time than to peruse HN. I happen upon something like this. Something so damned useful that I have no choice but to investigate.
You can copy & paste an array of samples and Guesstimate will sample from them. For instance, try pasting the following into the value field of a cell:
[1,1,1,2,2,2,2,3,3,3,3,3,4,4,4,4,4,7,7,7,7]
You can use tools like distshaper6 to generate arbitrary distributions, then copy the samples into Guesstimate.
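If you're curious what sampling from a pasted list amounts to, here's a rough sketch, assuming a straightforward resample with replacement (a simplified sketch, not a claim about the exact internals):

    import numpy as np

    rng = np.random.default_rng(0)
    data = [1,1,1,2,2,2,2,3,3,3,3,3,4,4,4,4,4,7,7,7,7]

    # Each Monte Carlo draw picks one of the pasted values at random
    samples = rng.choice(data, size=5000, replace=True)
    print(samples.mean(), np.percentile(samples, [5, 95]))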
I would love to see this idea translated into event planning/calendaring. Probabilistic party planning. I want to see what might be happening tonight in addition to what is definitely happening.
"If 5 people show up at my house tomorrow evening, I'll hold a poker night." 10 people were invited and 4 of them RSVP yes and 2 of them RSVP no. It looks like there's a 95% chance I'm holding a poker night tomorrow.
"The X team has a monthly meeting on the 1st, never fail. They haven't decided on the location yet, just that it's on the North Side." As the team members pick possible locations, the possible locations appear more distinct until one is chosen.
It wasn’t obvious from the landing page but can you link estimates from different models? It would be super cool to directly import variables and their estimates from other models.
Please do so. The UI is all open-source react, so you may be able to copy some components directly if you wanted. I'd be happy to help people out with this if you have requests.
I proposed writing something like this while working on DuPont's Encirca platform. Years later, there's still little to no adoption of these models in the farm IT field.
You can create them easily too -- you can name them individually, assign names from existing tables, and so on. You can have constants too; that is, they don't have to point to any cell [1]. It is a godsend when working with bigger tables that have lots of formulas.
Another thing I like is that you can do simple statistical reckoning with it. For my job, I often have to benchmark something several hundred times with or without a patch applied. It can be a bit difficult to put "on average x% faster" in context when the benchmark is noisy, but Guesstimate allows you to answer questions like "assuming somebody ran one run of this benchmark with the patch, and one run without it, what's the expected range of performance improvement that they'd see?" with the actual numbers that you get out of the benchmark:
https://www.getguesstimate.com/models/11850
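Roughly what that comes down to, as a sketch with made-up timings (not the numbers in the linked model):

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up benchmark runs (seconds); in practice you'd paste in the
    # actual sample lists from the repeated runs.
    baseline = rng.normal(10.0, 0.8, size=200)
    patched = rng.normal(9.3, 0.8, size=200)

    # "One run with the patch vs. one run without": pair random draws from
    # each set and look at the spread of the improvement they'd show.
    b = rng.choice(baseline, size=10_000)
    p = rng.choice(patched, size=10_000)
    improvement = (b - p) / b * 100     # percent faster on that paired run

    print(np.percentile(improvement, [5, 50, 95]))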
I think technically the mode is one of the middle buckets; it looks slightly taller to me.
Anyways, it's a histogram, so the x-axis is split into buckets. The bar all the way on the left is likely some range of hours from 0 to whatever the bucket size is.
I've heard of it being used in a few classes. There was one estimation session with one group of what I remember to be 8th-graders. Honestly, I really don't think you need to be great at statistics to understand the fundamental concepts.
It's ~2019, which means JS has been around for 23 years, so turn JS on or stop complaining -- and it's pretty absurd to comment on the laziness of a person based on their decision to use a de facto available technology that you chose to disable. Actually, I'd like to hear how you'd build this application without Javascript and have it meet users' expectations of modern web apps.
It sounds like two guys made this. It's hard for two guys to be experts enough to make a product like this and also happen to be up on the current state of making web sites.
I say this because I am working on a product in my spare time and I'm surprised to see how many different areas of expertise are required. (My web site is terrible. And good documentation is tough.)