I think there’s a lot that goes into deciding which people are correct on this, and just saying “AI x-risk and bio x-risk are really important” misses a bunch of stuff that feels pretty essential to my belief that x-risk is the best thing to work on
Can you say more about what you mean by this? To me, ‘there’s a 1% chance of extinction in my lifetime from a problem that fewer than 500 people worldwide are working on’ feels totally sufficient
It’s not enough to have an important problem: you need to be reasonably persuaded that there’s a good plan for actually making the problem better, for pushing that 1% lower. It’s not a universal view among people in the field that all, or even most, research that purports to be AI alignment or safety research actually decreases the probability of bad outcomes. Indeed, in both AI and bio it’s even worse than that: many people believe that incautious action will make things substantially worse, and there’s no easy way to identify which routes are both safe and effective.
I also don’t think your argument works on people who already think they are working on important problems. You say, “wow, extinction risk is really important and neglected”, and they say, “yes, but factory farm welfare is also really important and neglected”.
To be clear, I think these cases can be made, but I think they are necessarily detailed and in-depth, and for some people the moral philosophy component is going to be helpful.
Fair point re tractability
What argument do you think works on people who already think they’re working on important and neglected problems? I can’t think of any argument that doesn’t just boil down to one of those
I don’t know. Partly I think some of those people are working on something that’s also important and neglected, and they should keep working on it rather than switch.