Sometimes I see criticisms of EA that argue, “Historically, groups of white people deciding the direction of the future hasn’t been great for groups who aren’t represented in that decision-making process.”
The responses I see to this are usually something like, “Don’t worry about it, we’re altruists.” But I feel like this would be a good opportunity to take the outside view and do some proper forecasting.
Can you elaborate on the criticism? There have been a ton of bad decisions made by all kinds of groups affecting all kinds of other groups who were not involved in the decision-making process. The most charitable argument I can come up with is something like this:
1. Group X has acted badly in some way.
2. EA is sufficiently similar to Group X.
3. Sufficiently similar groups are likely to act the same.
C: EA is likely to act badly in some way.
So Group X needs to be specified, and “white people” seems far too general.
I agree, except I think premise 1 needs to be something more like “Group X acts badly in about 80% of examples of Situation Y.”
I think the criticism tends to be something like “white people” or “rich white men”, which I agree is very vague. I’m really keen we get better at predicting how likely EA is to screw up in particular ways by finding a better reference class.
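To make the “outside view” point concrete, here is a minimal sketch of what reference-class forecasting could look like in practice. The class names and counts below are entirely invented for illustration; the only point is that the estimated base rate (the “80% of examples of Situation Y” figure above) can move a lot depending on which reference class you pick, which is why pinning down something sharper than “white people” matters.

```python
# Illustrative only: toy base-rate estimates under different reference classes.
# All class names and counts are hypothetical, made up for this example.

# Hypothetical reference classes: (cases examined, cases that "went badly")
reference_classes = {
    "any group steering others' futures": (200, 160),
    "philanthropic movements": (40, 18),
    "expert-led technocratic projects": (60, 33),
}

for name, (n, bad) in reference_classes.items():
    # Laplace-smoothed base rate: a uniform Beta(1, 1) prior updated on the counts.
    base_rate = (bad + 1) / (n + 2)
    print(f"{name}: estimated base rate of acting badly ~ {base_rate:.2f}")
```

The numbers are fake, but the exercise shows why the choice of reference class is doing most of the work: the same question (“how likely is EA to screw up in this way?”) gets very different answers depending on which comparison set you treat as relevant.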