My main criticism of this post is that it seems to implicitly suggest that “the core action relevant points of EA” are “work on AI or bio”, and doesn’t acknowledge that a lot of people don’t have that as their bottom line. I think it’s reasonable to believe that they’re wrong and you’re right, but:

1. I think there’s a lot that goes into deciding which people are correct on this, and only saying “AI x-risk and bio x-risk are really important” misses a bunch of stuff that feels pretty essential to my belief that x-risk is the best thing to work on.
2. This post seems to frame your pitch as “the new EA pitch”, and it’s weird to me that your framing omits that lots of people I consider EAs are left out in the cold by it.
This is a fair criticism! My short answer is that, as I perceive it, most people writing new EA pitches, designing fellowship curricula, giving EA career advice, etc., are longtermists and give pitches optimised for producing more people working on important longtermist stuff. This post was a reaction to what I perceive as a failure mode in such pitches, namely focusing on moral philosophy. I’m not really trying to engage with the broader question of whether this is a problem in the EA movement. Now that OpenPhil is planning to fund neartermist EA movement building, maybe this will change?
Personally, I’m not really a longtermist, but even from a neartermist lens I think it’s way more important to get people working on AI/bio stuff, so I’m pretty OK with optimising my outreach for producing more AI and bio people. That said, I’d be fine with low-cost ways to also mention ‘and by the way, global health and animal welfare are also things some EAs care about; here’s how to find the relevant people and communities’.
To the extent that you’re trying to draw the focus away from longtermist philosophical arguments when advocating for people to work on extinction risk reduction, that seems like a perfectly reasonable thing to suggest (though I’m unsure which side of the fence I’m on).
But I don’t want people casually conflating x-risk reduction with EA and relegating the rest of the community to a footnote:

1. I think it’s a misleading depiction of the in-practice composition of the community,
2. I think it’s unfair to the people who aren’t convinced by x-risk arguments, and
3. I think it could actually just make us worse at finding the right answers to cause prioritization questions.
I think there’s a lot that goes into deciding which people are correct on this, and only saying “AI x-risk and bio x-risk are really important” misses a bunch of stuff that feels pretty essential to my belief that x-risk is the best thing to work on.

Can you say more about what you mean by this? To me, ‘there’s a 1% chance of extinction in my lifetime from a problem that fewer than 500 people worldwide are working on’ feels totally sufficient.
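(For a rough sense of why that feels sufficient to me, here’s a crude back-of-envelope sketch. The population figure and the even-split framing are illustrative assumptions rather than anything claimed in this thread, and the arithmetic is only about scale and neglectedness; it says nothing about tractability.)

```python
# Crude back-of-envelope for the "1% extinction risk, fewer than 500 people
# working on it" intuition. All figures are illustrative assumptions.

world_population = 8_000_000_000   # roughly 8 billion people alive today (assumption)
extinction_probability = 0.01      # the 1% lifetime extinction risk cited above
people_working_on_it = 500         # "fewer than 500 people worldwide", taken at face value

# Expected deaths if the risk is realised, counting only people alive today
# (a deliberately neartermist framing that ignores future generations entirely).
expected_deaths = world_population * extinction_probability

# How thinly the problem is spread across the people currently addressing it.
# This captures scale and neglectedness only; it says nothing about whether a
# marginal worker can actually reduce the risk (tractability).
expected_deaths_per_worker = expected_deaths / people_working_on_it

print(f"Expected deaths (today's population only): {expected_deaths:,.0f}")
print(f"Expected deaths per current worker: {expected_deaths_per_worker:,.0f}")
# Expected deaths (today's population only): 80,000,000
# Expected deaths per current worker: 160,000
```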
It’s not enough to have an important problem: you need to be reasonably persuaded that there’s a good plan for actually making the problem better, for pushing that 1% lower. It’s not a universal view among people in the field that all, or even most, research that purports to be AI alignment or safety research actually decreases the probability of bad outcomes. Indeed, in both AI and bio it’s even worse than that: many people believe that incautious action will make things substantially worse, and there’s no easy road to identifying which routes are both safe and effective.
I also don’t think your argument is effective against people who already think they are working on important problems. You say, “wow, extinction risk is really important and neglected”, and they say, “yes, but factory farm welfare is also really important and neglected”.
To be clear, I think these cases can be made, but I think they are necessarily detailed and in-depth, and for some people the moral philosophy component is going to be helpful.
Fair point re tractability.
What argument do you think works on people who already think they’re working on important and neglected problems? I can’t think of any argument that doesn’t just boil down to one of those two.
I don’t know. Partly I think that some of those people are working on something that’s also important and neglected, and they should keep working on it, and need not switch.