Hey Zach. I’m about to get on a plane so won’t have time to write a full response, sorry! But wanted to say a few quick things before I do.
Agree that it’s not certain or obvious that AI risk is the most pressing issue (though it is 80k’s best guess & my personal best guess, and I don’t personally have the view that it requires fanaticism). And I also hope the EA community continues to be a place where people work on a variety of issues—wherever they think they can have the biggest positive impact.
However, our top commitment at 80k is to do our best to help people find careers that will allow them to have as much positive impact as they can. And we think that to do that, more people should strongly consider and/or try out working on reducing the variety of risks that we think transformative AI poses. So we want to do much more to tell them that!
In particular, from a web-specific perspective, I feel the website isn’t currently consistent with the possibility of short AI timelines, or with the possibility that AI poses not only risks from catastrophic misalignment but other risks too, and that it will probably affect many other cause areas. Given the size of our team, I think we need to focus our new content capacity on changing that.
I think this post I wrote a while ago might also be relevant here!
https://forum.effectivealtruism.org/posts/iCDcJdqqmBa9QrEHv/faq-on-the-relationship-between-80-000-hours-and-the
Will circle back more tomorrow / when I’m off the flight!
> Agree that it’s not certain or obvious that AI risk is the most pressing issue (though it is 80k’s best guess & my personal best guess

Yeah, FWIW, it’s mine too. Time will tell how I feel about the change in the end. That EA Forum post on the 80K–EA community relationship feels very appropriate to me, so I think my disagreement is about the application.