Thanks Richard!
On the ‘hippies have too much agreeableness’ point—yes, you are totally right!!
On the ‘pinning down core int/a claims’ point: I agree that, in general, getting more precise about claims is good. But I have some caution around pushing to generate precise object-level claims that “define int/a”, in the sense that you would then have to believe those claims in order to count as part of it. One thing I feel towards EA is that it used to be about “the question” (how to do the most good), and created room for people to generate new answers to that question, but more recently it has become about “the answer” (this short list of career paths is how to do the most good). I don’t think that cultural/structural locking-in of those answers is good, because we might be missing crucial considerations that will only become clear in the future.
Yeah, I phrased it badly when I said that the movement should be pinning down claims. I’m not suggesting that you use these claims to define membership. Indeed, even the framing of your original post feels too “we are a group defined by believing the same things” for my taste (as compared with, say, “we’re some collaborators with similar intellectual/emotional/ethical stances”).
But I’m excited about you (and the others you mention in this post) writing about the things you personally think the EA worldview gets wrong, ideally engaging not just with how the movement turned out in practice but also with the broken philosophical assumptions that led to those practical mistakes.
As one example, EAs constantly use “value-aligned” as a metric for deciding who to ally with. But it seems pretty plausible to me that SBF was extremely value-aligned with most of the stated philosophical principles of EA. The problem was that he wasn’t value-aligned with the background ethics of society that EA mostly takes for granted. Understanding this deeply enough would, I think, lead you to reconceptualize the whole concept of “value-aligned” towards something more reminiscent of int/a (in a way that would then have implications for, e.g., which moral theories to believe, which alignment targets to aim AIs at, etc.).