In some sense, I’m both happy and frustrated about the change.
I’m happy that EA recognized the importance of longtermism that is robust to distributional shift, and also recognized the importance of AI and AI alignment. It takes some boldness to accept weird causes like this. In some sense, EA invented a practical, distributionally robust longtermism for people, which amounts to inventing longtermism for people.
I also worry about the politicization of EA, even as I grudgingly admit that the fully non-political era is over by default, and that EA needs to gracefully recognize the new reality.
I think the reliability of theoretical arguments varies. Most run into problems when translated to the real world, but something like DALYs/QALYs is likely to work in practice. A similar point for AI is made in the Risks from Learned Optimization sequence here: https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB
I’m also frustrated at how little presence EA has in the entrepreneur space, and at how much we argue over the best thing to do before actually doing it. To put it another way, EA needs more executors, and 80,000 Hours should start prioritizing hiring start-up people. At the end of the day, EA needs to be able to actually do stuff and get results, not just become an intellectual hothouse.
EDIT: I no longer endorse growing the movement’s membership, because I now think that Eternal September issues, where a flood of new EAs permanently changes the culture, are a real risk, and there aren’t a lot of scalable opportunities right now.