My main criticism of this post is that it seems to implicitly suggest that "the core action-relevant points of EA" are "work on AI or bio", and doesn't seem to acknowledge that a lot of people don't have that as their bottom line. I think it's reasonable to believe that they're wrong and you're right, but:
I think there's a lot that goes into deciding which people are correct on this, and only saying "AI x-risk and bio x-risk are really important" is missing a bunch of stuff that feels pretty essential to my belief that x-risk is the best thing to work on,
this post seems to frame your pitch as "the new EA pitch", and it's weird to me to omit from your framing that lots of people I consider EAs are kind of left out in the cold by it.
This is a fair criticism! My short answer is that, as I perceive it, most people writing new EA pitches, designing fellowship curricula, giving EA career advice, etc., are longtermists, and give pitches optimised for producing more people working on important longtermist stuff. This post was a reaction to what I perceive as a failure in such pitches, namely focusing on moral philosophy. And I'm not really trying to engage with the broader question of whether this is a problem in the EA movement. Now that OpenPhil is planning to fund neartermist EA movement building, maybe this will change?
Personally, I'm not really a longtermist, but I think it's way more important to get people working on AI/bio stuff from a neartermist lens, so I'm pretty OK with optimising my outreach for producing more AI and bio people. Though I'd be fine with low-cost ways to also mention "and by the way, global health and animal welfare are also things some EAs care about; here's how to find the relevant people and communities".
I think to the extent you are trying to draw the focus away from longtermist philosophical arguments when advocating for people to work on extinction risk reduction, that seems like a perfectly reasonable thing to suggest (though I'm unsure which side of the fence I'm on).
But I don't want people casually equivocating between x-risk reduction and EA, relegating the rest of the community to a footnote.
I think it's a misleading depiction of the in-practice composition of the community,
I think it's unfair to the people who aren't convinced by x-risk arguments,
I think it could actually just make us worse at finding the right answers to cause prioritization questions.
I think there's a lot that goes into deciding which people are correct on this, and only saying "AI x-risk and bio x-risk are really important" is missing a bunch of stuff that feels pretty essential to my belief that x-risk is the best thing to work on
Can you say more about what you mean by this? To me, "there's a 1% chance of extinction in my lifetime from a problem that fewer than 500 people worldwide are working on" feels totally sufficient.
It's not enough to have an important problem: you need to be reasonably persuaded that there's a good plan for actually making the problem better, for lowering that 1%. It's not a universal view among people in the field that all, or even most, research that purports to be AI alignment or safety research is actually decreasing the probability of bad outcomes. Indeed, in both AI and bio it's even worse than that: many people believe that incautious action will make things substantially worse, and there's no easy road to identifying which routes are both safe and effective.
I also don't think your argument is effective against people who already think they are working on important problems. You say, "wow, extinction risk is really important and neglected" and they say "yes, but factory farm welfare is also really important and neglected".
To be clear, I think these cases can be made, but I think they are necessarily detailed and in-depth, and for some people the moral philosophy component is going to be helpful.
What argument do you think works on people who already think they're working on important and neglected problems? I can't think of any argument that doesn't just boil down to one of those.
I don't know. Partly I think that some of those people are working on something that's also important and neglected, and they should keep working on it, and need not switch.
Fair point re tractability.