Thanks for sharing! We have some differing views on this which I will focus on—but I agree with much of what you say and do appreciate your thoughts + engagement here.
Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply “Oh, you care for the future of all humans, and even animals? That’s suspicious – we’re definitely going to apply extra scrutiny towards you.” Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or whatnot. Yet somehow, “Are EAs following democratic processes, and why does their funding come from very few sources?” is made into a bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
It sounds like you have the impression that people criticising EA must think this is a larger issue than AI capabilities or widespread apathy, since they aren’t spending their time lobbying against those larger issues. But there might be other explanations for their focus—any given individual’s sphere of influence, tractability, personal identity, and other factors can all contribute here.
EAs who are serious about their stated goals have the most incentive of anyone to help the EA movement get its act together. The idea that “it’s important to have good institutions” is something EA owes to outsiders is what seems weird to me. Doesn’t this framing kind of suggest that EAs couldn’t motivate themselves to try their best if it weren’t for “institutional safeguards”?
“It’s important to have good institutions” is clearly something that “serious EAs” are strongly incentivised to act on. But people who have a lot of power, influence, and funding also face incentives to maintain a status quo that they benefit from. EA is no different, and people seeking to do good are not exempt from these kinds of incentives. EAs who are serious about their goals should acknowledge that they are subject to these incentives, as well as the possibility that one reason outsiders might be speaking up is that they think EAs aren’t taking the problem seriously enough. The value of the outside critic is NOT that EAs have some special obligation towards them (though if your actions directly impact them, then they are a relevant stakeholder worth considering), but that they are somewhat removed and may be able to provide insight into an issue that is harder to see when you are deeply surrounded by other EAs and people who are directly mission- or value-aligned.
What a depressing view of humans, that they can only act according to their stated ideals if they’re watched at every step and have to justify themselves to critics!
I think this goes too far; I don’t think this is the claim being made. The standard is just: “Would better systems and institutional safeguards better align EA’s stated ideals with what happens in practice? If so, what would this look like, and how would EA organisations implement these?” My guess is you probably agree with this, though?
Either way, I don’t think anyone in EA, nor “EA” as a movement, has any obligation to engage in great detail.
I guess if someone’s impression of EA was “group of people who want to turn all available resources into happiness simulations regardless of what existing people want for their future”
Nitpick: while I agree that this would be a strawman, it isn’t the only scenario in which outsiders might be concerned. There are also people who disagree with some longtermists’ vision of the future, and there are people who think EA’s general approach is bad; it could follow that those people will think $$ on EA causes are poorly spent and should be spent in [some different way]. There are also people who think EA is a talent drain away from important issues. Of course, this doesn’t bear on the extent to which EA is “obligated” to respond, especially because many of these takes aren’t great. I agree that there’s no obligation, per se. But the claim is “outsiders are permitted to ASK you to fix your problems”, not that you are obligated to respond (though subsequent sentences RE: “I can demand” or “you should” might be a source of miscommunication).
I guess the way I see it is something like this: EA isn’t obligated to respond to any outsider criticism. But if you want to be taken seriously by the outsiders who have these concerns, if you want buy-in from the people you claim to be working with and working for, and if you don’t want people at social entrepreneurship symposiums seriously considering questions like “Is the way to do the most good to destroy effective altruism?”, then it could be in your best interest to take good-faith criticisms and concerns seriously, even if the attitude comes across poorly, because such criticism likely reflects some barrier to achieving your goals. But I think there probably isn’t much disagreement between us here.