Thanks, this is a useful clarification. I think my original claim was unclear. If read as “very few people were thinking about these topics at the time DGB came out”, then you are correct.
What (I think) I had in mind was something like: “at the time DGB came out, it wasn’t the case that, say, more than 25% of any of the funding, person-hours, or general discussion squarely within effective altruism concerned the topics I mentioned, but now it is”.
I’m actually not fully confident in that second claim, but it does seem true to me.
AI alignment and existential risks have been key components from the very beginning. Remember, Toby worked for FHI before founding GWWC, and even from the earliest days MIRI was seen as an acceptable donation target to fulfill the pledge. The downweighting of AI in DGB was a deliberate choice for an introductory text.
Thanks, that’s useful to know.