Hmm, I’m not sure you need to make an argument about the direction of the bias. Maybe I should specifically mention imprecision due to potential irrationality instead?
The way I’m thinking about it is that 80K have used some frameworks to produce quantitative scores for how pressing each cause area is, and then ranked the cause areas by those point estimates.
But our imagined confidence intervals around the point estimates should be very large and presumably overlap for a large number of causes, so we should take seriously the idea that the ranking of causes would be different in a better model.
This means we need to take more seriously the idea that the true top causes are different to those suggested by 80K’s model.
Also, sorry, I accidentally used the word “important” instead of “pressing”! Will correct this.
I agree that ranking by point estimates can produce overconfidence about what the top causes are, but I’m not sure whether that’s actually their methodology or whether they’re using expected values where they should. Someone from 80K should probably clarify, assuming 80K still believes in their ranking enough to think anyone should use it?
Also agree with this sentence.
My issue is that the summary claims “probabilities derived from belief which aren’t based on empirical evidence [...] means that the optimal distribution of career focuses for engaged EAs should be less concentrated amongst a small number of “top” cause areas.” This is a claim that we should be less confident than 80K’s cause prioritisation.
When someone has a model, you can’t always say we should be less confident than their model without knowing their methodology, even if that model rests on “probabilities derived from belief which aren’t based on empirical evidence”. Otherwise you could build a meta-model which says their model is right 80% of the time and things are different in some random way the other 20% of the time. Then someone else takes your model and does the same thing, and the regress continues until everyone’s beliefs are just the uniform distribution over everything. So I maintain that the summary should mention something about using point estimates inappropriately, or about missing particular kinds of uncertainty; otherwise it’s claiming something that isn’t true in general.
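To make the regress concrete, here is a small sketch (with made-up cause names and numbers, not anything from 80K) of what repeatedly applying the “80% their model, 20% random” discount does to a set of beliefs: each round of discounting mixes the current distribution with the uniform one, and iterating drives any starting point estimates toward uniform.

```python
# Hypothetical cause areas and point estimates, purely for illustration.
causes = ["cause A", "cause B", "cause C", "cause D"]
uniform = [1 / len(causes)] * len(causes)

beliefs = [0.55, 0.25, 0.15, 0.05]  # someone's initial (confident) model

# Each iteration: the next person trusts the previous model 80% and
# mixes in 20% "it could be anything" (the uniform distribution).
for _ in range(50):
    beliefs = [0.8 * b + 0.2 * u for b, u in zip(beliefs, uniform)]

print([round(b, 3) for b in beliefs])  # ≈ [0.25, 0.25, 0.25, 0.25]
```

After n rounds the original estimates survive only with weight 0.8**n, so the regress washes out everything the original model said, which is why “just be less confident than the last model” can’t be a universally valid move.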
I think you’re interpreting my summary as:
“80K have a cause prioritisation model with wide confidence intervals around point estimates, but as individual EAs, our personal cause prioritisation models should have even wider confidence intervals around point estimates than 80K’s model does.”
What I meant to communicate is:
“80K have a cause prioritisation model with wide confidence intervals around point estimates, and individual EAs should 1) pay more attention to the wide confidence intervals in 80K’s model than they are currently and 2) have wide confidence intervals in their personal cause prioritisation model too.”