I think this post is interesting, though I'm quite unsure what my actual take is on whether this updated version is correct. I am worried about community epistemics in a world where we encourage people to defer on what the most important thing is.
It seems like there are a bunch of other plausible candidates for where the best marginal value-add is even if you buy AI x-risk arguments, e.g. s-risks, animal welfare, digital sentience, space governance, etc. I am excited about most young EAs thinking about these issues for themselves.
How much weight do you give the outside-view consideration here: you suggested a large shift in the EA community's resource allocation and then changed your mind a year later, which indicates exactly the kind of uncertainty that motivates more diverse portfolios?
I do think your point that people underrate problem importance relative to personal fit on the current margin seems true, though. Tangentially, my guess is that the overall EA cause portfolio (for both financial and human capital allocation) is too large.
Yep, that all sounds right to me re: not deferring too much and thinking through cause prioritization yourself, and also that the portfolio is too broad, though these are somewhat in tension.
To answer your question, I’m not sure I update that much on having changed my mind, since I think that if people had listened to me and done AISTR, this would have been a better use of time, even for a governance career, than basically anything besides AI governance work (and of course there’s a distribution within each of those categories for how useful a given project is; lots of technical projects would’ve been more useful than the median governance project).