I agree with Caleb that theoretical AIS, infinite ethics, and rationality techniques don’t currently seem to be overprioritized. I don’t think there are all that many people working full-time on theoretical AIS (I would have guessed fewer than 20). I’d guess less than 1 FTE on infinite ethics. And not a ton on rationality, either.
Maybe your point is more about academic or theoretical research in general? I think FHI and MIRI have both gotten smaller over the last couple of years and CSER’s work seems less theoretical. But you might still think there’s too much overall?
My impression is that there’s much more of a supply of empirical AI safety research and, maybe, theoretical AI safety research written by part-time researchers on LessWrong. My impression is that this isn’t the kind of thing you’re talking about though.
There’s a nearby claim I agree with, which is that object level work on specific cause areas seems undervalued relative to “meta” work.
Academic-like research into interesting areas of AI risk is far easier to get funded by many funders than direct research into, say, vaccine production pipelines.
My guess is that this has less to do with valuing theory or interestingness over practical work, and more to do with funders prioritizing AI over bio. Curious if you disagree.
First, yes, my overall point was about academic and theoretical work in general, and yes, as you pointed out, it largely relates to how object-level work on specific cause areas is undervalued relative to “meta” work. But I tried to pick more concrete areas and organizations because I think being concrete was critical, even though it was nearly certain to draw more contentious specific objections. And perhaps I’m wrong, and the examples I chose aren’t actually overvalued, though that was not my impression. I also want to note that I’m more concerned about trajectory than about current numbers: putting aside intra-EA allocation of effort, if all areas of EA continue to grow, many will still get less attention than they deserve at a societal level, but I think the theoretical work should grow less than other areas, and far less than it seems poised to grow.
And as noted in another thread, regarding work on infinite ethics and other theoretical work, I got a very different impression at the recent GPI conference, though I clearly have a somewhat different view of what EAs work on than many others do, since I never manage to attend EAG. (Which they only ever hold over the weekend, unfortunately.) Relatedly, on rationality techniques, I see tons of people writing about them, and have seen people with general funding spending lots of time thinking and writing about the topic. I will agree there has been less of this recently, but (despite knowing people who looked for funding) no one seems interested in funding more applied work on building rationality techniques into curricula, or even analysis of what works.
Lastly, on your final point, my example was across domains, but I do see the same pattern when talking to people about funding for theoretical work on biosafety compared to applied policy or safety work. I am hesitant to give specific examples, though, because the ones I would provide are things other people have applied for funding on, whereas the two I listed were things I directly worked on and sought funding for.