I’ve strong upvoted this piece because I think the analysis looks really good and it taught me something new. I really liked all of it except for the last few paragraphs, which seemed to suggest long-termism should be ~all of EA’s focus rather than about half.
Hey Khorton, I didn’t mean to imply that. I think the last paragraphs still stand as long as you assume that we’ll want some of the core of EA to work on unusual causes, rather than 100%.