> The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.
I agree that extreme truth-seeking can be counterproductive. But in most worlds I don’t think that EA’s impact comes from arguing for highly controversial ideas; and I’m not advocating for extreme truth-seeking like, say, hosting public debates on the most controversial topics we can think of. Rather, I think its impact will come from advocating for not-super-controversial ideas, but it will be able to generate them in part because it avoided the effects I listed in my comment above.
> The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.
Thanks for the clarification. Why doesn’t this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)? I feel like maybe you’re adopting the framing of “comparative advantage” too much in a situation where the idea doesn’t work well (because the situation is too adversarial / not cooperative enough). It seems a bit like a country, after suffering a military defeat, saying “We’re better scholars than we are soldiers. Let’s pursue our comparative advantage and reallocate our defense budget into our universities.”
> Rather, I think its impact will come from advocating for not-super-controversial ideas, but it will be able to generate them in part because it avoided the effects I listed in my comment above.
> Why doesn’t this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)?
Of course this is all a spectrum, but I don’t believe this implication, in part because I expect that impact is often heavy-tailed. You do something really well first and foremost by finding the people who are naturally inclined to be some of the best in the world at it. If a community that was really good at power struggles tried to get much better at truth-seeking, it would probably still not do a great job of pushing the intellectual frontier, because it wouldn’t be playing to its strengths (and meanwhile it would trade off a lot of its power-seeking ability). I think the converse is true for EA.
This part seems reasonable.