It's not enough to have an important problem: you need to be reasonably persuaded that there's a good plan for actually making the problem better, the 1% lower. It's not a universal point of view among people in the field that all or even most research that purports to be AI alignment or safety research is actually decreasing the probability of bad outcomes. Indeed, in both AI and bio it's even worse than that: many people believe that incautious action will make things substantially worse, and there's no easy road to identifying which routes are both safe and effective.
I also don't think your argument is effective against people who already think they are working on important problems. You say, "wow, extinction risk is really important and neglected" and they say "yes, but factory farm welfare is also really important and neglected".
To be clear, I think these cases can be made, but I think they are necessarily detailed and in-depth, and for some people the moral philosophy component is going to be helpful.
Fair point re tractability
What argument do you think works on people who already think they're working on important and neglected problems? I can't think of any argument that doesn't just boil down to one of those.
I don't know. Partly I think that some of those people are working on something that's also important and neglected, and they should keep working on it, and need not switch.