I’ve also updated over the last few years that having a truth-seeking community is more important than I previously thought—basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Therefore our comparative advantage will need to be truth-seeking.
I’m actually not sure about this logic. Can you expand on why EA having insufficient skill to “navigate power dynamics around AI” implies “our comparative advantage will need to be truth-seeking”?
One problem I see is that “comparative advantage” is not straightforwardly applicable here, because the relevant trade or cooperation (needed for the concept to make sense) may not exist. For example, imagine that EA’s truth-seeking orientation causes it to discover and announce one or more politically inconvenient truths (e.g. there are highly upvoted posts about these topics on EAF), which in turn causes other less truth-seeking communities to shun EA and refuse to pay attention to its ideas and arguments. In this scenario, if EA also doesn’t have much power to directly influence the development of AI (as you seem to suggest), then how does EA’s truth-seeking benefit the world?
(There are worlds in which it takes even less for EA to be shunned, e.g., if EA merely doesn’t shun others hard enough. For example, there are currently people pushing for EA to “decouple” from LW/rationality, even though there is very little politically incorrect discussion happening on LW.)
My own logic suggests that too much truth-seeking isn’t good either. Would love to see how to avoid this conclusion, but currently can’t. (I think the optimal amount is probably a bit higher than the current amount, so this is not meant to be an argument against more truth-seeking at the current margin.)
The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.
I agree that extreme truth-seeking can be counterproductive. But in most worlds I don’t think that EA’s impact will come from arguing for highly controversial ideas; and I’m not advocating for extreme truth-seeking like, say, hosting public debates on the most controversial topics we can think of. Rather, I think its impact will come from advocating for not-super-controversial ideas, but it will be able to generate them in part because it avoided the effects I listed in my comment above.
The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.
Thanks for the clarification. Why doesn’t this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)? I feel like maybe you’re adopting the framing of “comparative advantage” too much in a situation where the idea doesn’t work well (because the situation is too adversarial / not cooperative enough). It seems a bit like a country, after suffering a military defeat, saying “We’re better scholars than we are soldiers. Let’s pursue our comparative advantage and reallocate our defense budget into our universities.”
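To make the point about comparative advantage concrete, here is a minimal sketch with entirely made-up numbers (the “scholars”/“soldiers” parties, their productivities, and the 1:1 trade ratio are illustrative assumptions, not anything from this thread). It only shows the structural claim: specializing according to comparative advantage leaves both sides better off when the exchange actually happens, and leaves the specializer short of the good it stopped producing when the counterparty refuses to deal.

```python
# Toy model of comparative advantage, with and without trade.
# All numbers are hypothetical and chosen only to make the arithmetic easy.

# Output per unit of effort (each party has 1.0 unit of effort in total).
PRODUCTIVITY = {
    "scholars": {"research": 10.0, "defense": 2.0},
    "soldiers": {"research": 4.0, "defense": 8.0},
}

def produce(party: str, effort_on_research: float) -> dict:
    """Split one unit of effort between research and defense."""
    p = PRODUCTIVITY[party]
    return {
        "research": p["research"] * effort_on_research,
        "defense": p["defense"] * (1.0 - effort_on_research),
    }

# Case 1: no specialization -- each party covers its own defense.
balanced = {party: produce(party, 0.5) for party in PRODUCTIVITY}

# Case 2: specialize along comparative advantage *and* trade.
# Scholars do only research, soldiers do only defense, then they swap
# 3 units of research for 3 units of defense (a 1:1 ratio, which lies
# between the two parties' opportunity costs, so both gain).
scholars_spec = produce("scholars", 1.0)   # 10 research, 0 defense
soldiers_spec = produce("soldiers", 0.0)   # 0 research, 8 defense
traded = 3.0
with_trade = {
    "scholars": {"research": scholars_spec["research"] - traded, "defense": traded},
    "soldiers": {"research": traded, "defense": soldiers_spec["defense"] - traded},
}

# Case 3: same specialization, but the counterparty refuses to trade.
no_trade = {"scholars": scholars_spec, "soldiers": soldiers_spec}

if __name__ == "__main__":
    for label, allocation in [("balanced", balanced),
                              ("specialize + trade", with_trade),
                              ("specialize, no trade", no_trade)]:
        print(label, allocation)
```

With these numbers, both parties end up with more of both goods under “specialize + trade” than under “balanced”, but the scholars end up with zero defense under “specialize, no trade”, which is the failure mode the military analogy above is pointing at.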
Rather, I think its impact will come from advocating for not-super-controversial ideas, but it will be able to generate them in part because it avoided the effects I listed in my comment above.
Why doesn’t this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)?
Of course this is all a spectrum, but I don’t believe this implication, in part because I expect that impact is often heavy-tailed. You do something really well first and foremost by finding the people who are naturally inclined towards being some of the best in the world at it. If a community that was really good at power struggles tried to get much better at truth-seeking, it would probably still not do a great job of pushing the intellectual frontier, because it wouldn’t be playing to its strengths (and meanwhile it would trade off a lot of its power-seeking ability). I think the converse is true for EA.
I mean, why not? LessWrong “rationality” isn’t foundational to EA; it’s not even the accepted school of critical thinking.
For example, I personally come from the “scientific skepticism” tradition (think Skeptics Guide to the Universe, Steven Novella, James Randi, etc.), and in my opinion, since EA is simply scientific skepticism applied to charity, scientific skepticism is a much more natural basis for critical thinking in the EA movement than LW.