This doesn’t look fixed to me (it’s possible I’m seeing an older cached version?). I no longer see negative numbers in the summary statistics, but you’re still dividing by quantities that involve normal distributions, and these have a small chance of being extremely small or even negative. That in turn means the expectation of the resulting distribution is undefined.
Empirically I think this is happening, because:
(i) the sampling seems unstable—refreshing the page a few times gives me quite different answers each time;
(ii) the “sensitivity” tool in Guesstimate suggests something funny is going on there (but I’m not sure exactly how this diagnostic tool works, so take with some salt).
To avoid this, I’d change all of the normal distributions that you may end up dividing by to log-normals.
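To illustrate the failure mode outside Guesstimate, here’s a minimal Python sketch (toy numbers, not taken from the model): dividing by a normal that puts mass near zero makes the Monte Carlo mean jump around between runs, while a log-normal denominator is strictly positive and well behaved.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5_000  # roughly Guesstimate's Monte Carlo sample count

def ratio_means(denominator_sampler, trials=10):
    """Recompute the mean of X / Y several times, as if refreshing the page."""
    means = []
    for _ in range(trials):
        x = rng.normal(10, 2, N)       # toy numerator
        y = denominator_sampler(N)
        means.append(np.mean(x / y))
    return np.array(means)

# Normal denominator: small but real chance of values near (or below) zero,
# so E[X/Y] is undefined and the sample mean is erratic across runs.
normal_means = ratio_means(lambda n: rng.normal(2, 1, n))

# Log-normal denominator with a similar median: strictly positive, stable mean.
lognormal_means = ratio_means(lambda n: rng.lognormal(np.log(2), 0.5, n))

print("normal denominator:    ", np.round(normal_means, 2))
print("log-normal denominator:", np.round(lognormal_means, 2))
```

The spread of the first list of "refreshes" is far larger than the second, matching the refresh-to-refresh instability described above.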
That part is now fixed, but it doesn’t look like it contributed meaningfully to the end calculation.
Okay, I’ve now done this.
Let me know if you think the model is better and I can update the post.
Re (i), that’s true: I believe Guesstimate uses a Monte Carlo method with 5K samples.
Re (ii), I don’t know how to read the sensitivity outputs well, but nothing looks weird to me. Could you explain?
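For concreteness on (i): if a node has finite variance, 5K samples should make refresh-to-refresh noise small and predictable (standard error of about sd/√N), so large jumps between refreshes are themselves evidence of a heavy-tailed or ill-defined quantity. A quick check in Python, using a toy log-normal node rather than the actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5_000  # assumed Guesstimate sample count

# Simulate 20 "page refreshes" of a well-behaved log-normal node.
refresh_means = [rng.lognormal(0.0, 1.0, N).mean() for _ in range(20)]

# CLT prediction: the mean should wobble by about sd / sqrt(N).
sd = np.sqrt((np.e - 1) * np.e)  # standard deviation of lognormal(0, 1)
predicted = sd / np.sqrt(N)

print(f"observed refresh noise:  {np.std(refresh_means):.4f}")
print(f"predicted (sd / sqrt N): {predicted:.4f}")
```

When the observed noise is much larger than this prediction, as with the divided-by-normal nodes, the sample mean is not converging.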
I think this has removed the pathology. There’s still more variation in this number, but that comes from greater uncertainty about the amount of senior staff time needed. If the decision-relevant question is “how many of these could we do sequentially?” then it’s appropriate to weight this uncertainty in this way.
Thanks. I updated the post accordingly.