“I think all of these considerations in-aggregate make me worried that a lot of current work in AI Alignment field-building and EA-community building is net-negative for the world, and that a lot of my work over the past few years has been bad for the world”
This admirably honest statement deserves more emphasis. As we know from medicine, international development, and every other field that runs RCTs, it is really, really hard—even when the results of your actions are right in front of you—to know whether you have helped someone or harmed them. There are simply too many confounding factors, too much selection bias, and so on.
The long-termist AGI stuff has always struck me as even worse off in this respect. How is anyone supposed to know that the actions they take today will have a beneficial impact on the world decades from now, rather than making things worse? And given the premises of AGI alignment, making things worse would be utterly catastrophic for humanity.