I claim that if you look at funding for the EA organizations that are viewed as central (and again, GPI, FHI, CSER, and MIRI are all on that list), the emphasis on academic and intellectual work becomes clearer. I would claim the same is true for what types of work are easy to get funding for. Academic-like research into interesting areas of AI risk is far easier to get funded by many funders than direct research into, say, vaccine production pipelines.
I don’t see how this is a response to the comment. I think there is approximately 1 FTE working on infinite ethics in EA. If infinite ethics is indeed, as you said in the main post, one of the four most interesting topics in the whole of EA, and approximately no-one is working on it, this is evidence that interestingness is not an important source of bias in the community.
Moreover, according to your argument we can know that fewer people should be working on infinite ethics in EA merely by knowing that the topic is interesting. This is very implausible.
Or take theoretical AI safety. I take it that your argument is that some (how many?) people should stop doing this work and we can know this only by virtue of knowing that the work is interesting. I can think of many arguments for not working on AI safety, but the fact that it is interesting seems a very weak one. I think interesting academic research on AI is easy to fund because the funders think (a) there is a decent chance we will all die due to AI in the next 20 years, and (b) this research might have some small chance of averting that. I find it hard to see how the fact that the research is interesting is an important source of bias in the decision to fund it.
GPI, FHI, CSER and MIRI are a small fraction of overall funding in EA. CSER doesn’t get any EA money, and I think the budgets of FHI and GPI are in the low millions per year, compared to hundreds of millions of dollars per year in EA spending.
I agree with your other points. But I think it’s inaccurate to model the bulk of EA efforts, particularly in longtermism, in terms of funding (as opposed to e.g. people).
“we can know that fewer people should be working on [each area I listed]”
I think you misread my claim. I said that “the number of people we need working on them should probably be more limited than the current trajectory”—EA is growing, and I think that it’s on track to put far too much effort into theoretical work, and will send more people into academia than I think is optimal.
“I take it that your argument is that some (how many?) people should stop doing this work”
I had a section outlining what the concrete outcome I am advocating for looks like.
To address the question about AI safety directly, my claim is that of the many people interested in doing this work, a large fraction should at least consider doing something a step more concrete—as a few concrete examples, ML safety engineering instead of academic ML safety research, or applied ML safety research instead of mathematical work on AI safety, or policy activism instead of policy research, or public communication instead of survey research. And overall, I think that for each, the former is less prestigious within EA, and under-emphasized.
I think the implications of your argument are (1) that these areas get too much interest already, and (2) that these areas will get too much interest in the future unless we make extra efforts relative to today, perhaps motivated by your post.
(1) doesn’t seem true of the areas you mention, and this is particularly clear in the case of infinite ethics, where there is only 1 FTE working on it. To give an instructive anecdote, the other person I know of who was working on this topic in her PhD (Amanda Askell) decided to go and work for OpenAI to do AI policy work.
The point also seems clear in relation to rationality tools, given that the main org working on that (CFAR) doesn’t seem to operate any more.
There is more attention to theoretical AI stuff and to EA criticism. Taking your high-level EA criticism as an example, this is exclusively a side-hustle for people in the community spending almost all of their time doing other things. It is true that criticism gets lots of attention in EA (which is a strength of the community in my view) but it’s still a very small fraction of overall effort.
And the fact that these topics are interesting seems like a very weak steer as to how many resources should go into them.
I’m explicitly saying that (1) is not my general claim—almost everything is under-resourced, and I don’t think we want fewer people in any of these areas, but given limited resources, we may want to allocate differently. My point, as I tried to clarify, was (2).
Regarding infinite ethics, it came up in several different presentations at the recent GPI conference, but I agree it’s getting limited attention, and on the other points, I don’t think we disagree much. Given my perception that we barely disagree, I would be interested in whether you would disagree with any of my concrete suggestions at the end of the post.
I know you are only claiming (2), but my point is that your argument implies (1). Simply put, if there is a genuine bias towards interesting but not impactful work, why would it only kick in in the future, and not already, after more than 10 years of EA?
If your claim is (2) only, this also seems false. The trajectory for infinite ethics is maybe 2-3 FTE working on it in 5 years, or something like that? The trajectory for rationality tools seems to be that basically no-one will be working on that in the future; interest in that topic is declining over time.
I agree with the last section apart from the last paragraph; I think theoretical philosophy and economics are very important. I also think we have completely different reasons for accepting the conclusions we do agree on. I have not seen any evidence of an ‘interestingness bias’, and it plays no role in my thinking.
First, biases are far more critical in the tails of distributions. For example, suppose we should optimally have 1% of humans alive today work on ML-based AI safety and 0.01% of humanity work on mathematical approaches to AI risk, or 0.001% work on forecasting timescales and 0.0000001% work on infinite ethics. If the interestingness heuristic leads to people doing 50x as much work as is optimal on the second area in each pair, the first ten thousand EAs won’t end up overinvesting in any of them, but over time, if EA scales, we’ll see a problem.
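To make the tail dynamic concrete, here is a minimal sketch of that arithmetic in Python. The optimal fractions and the 50x over-weighting are the illustrative numbers above; the assumption that the community splits its members in proportion to these (biased) weights is mine, purely for illustration:

```python
# A toy model of the tail argument. All numbers are the illustrative ones
# from the comment above; the proportional-split allocation is a hypothetical
# simplification, not a claim about how EAs actually choose careers.
WORLD_POP = 8e9

# area -> (optimal fraction of humanity, over-weighting from interestingness)
AREAS = {
    "ML-based AI safety":     (1e-2, 1),
    "mathematical AI risk":   (1e-4, 50),  # "interesting": worked on 50x too much
    "forecasting timescales": (1e-5, 1),
    "infinite ethics":        (1e-9, 50),  # "interesting": worked on 50x too much
}

def allocation(ea_size: int) -> dict[str, float]:
    """Split ea_size people across areas in proportion to the biased weights."""
    weights = {area: frac * bias for area, (frac, bias) in AREAS.items()}
    total = sum(weights.values())
    return {area: ea_size * w / total for area, w in weights.items()}

for ea_size in (10_000, 1_000_000, 10_000_000):
    print(f"\nCommunity size {ea_size:,}:")
    for area, headcount in allocation(ea_size).items():
        optimum = AREAS[area][0] * WORLD_POP  # globally optimal headcount
        flag = "OVER" if headcount > optimum else "under"
        print(f"  {area:24s} {headcount:12,.1f} FTE vs optimum {optimum:13,.0f} ({flag})")
```

Under these made-up numbers, nothing is over-invested at ten thousand members, but by ten million members the two “interesting” areas overshoot their global optima (by roughly 4x each) while the concrete ones remain under-resourced, which is the sense in which the bias only bites as the community scales.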
On the specific topics, I’m not saying that infinite ethics is literally worthless; I’m saying that even at 1 FTE, we’re wasting time on it. Perhaps you view that as incorrect on the merits, but my claim is, tentatively, that it’s already significantly less important than a marginal FTE on anything else on the GPI agenda.
Lastly, I think we as a community are spending lots of time discussing rationality. I agree it’s no-one’s full-time job, but it’s certainly a lot of words every month on LessWrong, and then far too little time actually creating ways of applying the insights, as CFAR did when building their curriculum, albeit not at all scalably. And the plan to develop a teachable curriculum for schools and groups, which I view as almost the epitome of the applied side of increasing the sanity waterline, was abandoned entirely. So we have done, and are doing, lots of interesting theory and writing on the topic, and much too little of concrete value. (With the slight exception of Julia’s book, which is wonderful.) Maybe that’s due to something other than the fact that the topic was interesting to people, but having spent time on it personally, my inside view is that it’s largely the dynamic I identified.
A clarification: CSER does get some EA funds (a combination of SFF, SoGive, BERI in-kind support, and individual LTFF projects), but likely 1/3 or less of its budget at any given time. The overall point (all of these are a small fraction of overall EA funds) is not affected.
I’ll just note that lots of what CSER does is much more policy-relevant and less philosophical compared to the other orgs mentioned, and it’s harder to show impact for more practical policy work than it is to claim impact for conceptual work. That seems to be part of the reason EA funding orgs haven’t been funding as much of its budget.
(I think CSER has struggled to get funding for some of its work, but this seems like a special case, so I don’t think it’s much of a counterargument.)
I think if this claim is true, it’s less because of motivated reasoning or the status of interesting work, and more because object-level research is correlated with a bunch of things that make it harder to fund.
I still don’t think I actually buy this claim, though; it seems, if anything, easier to get funding to do prosaic alignment or strategy-type work than theory (for example).
I agree with Caleb that theoretical AIS, infinite ethics, and rationality techniques don’t currently seem to be overprioritized. I don’t think there are all that many people working full-time on theoretical AIS (I would have guessed fewer than 20). I’d guess less than 1 FTE on infinite ethics. And not a ton on rationality, either.
Maybe your point is more about academic or theoretical research in general? I think FHI and MIRI have both gotten smaller over the last couple of years and CSER’s work seems less theoretical. But you might still think there’s too much overall?
My impression is that there’s much more of a supply of empirical AI safety research and, maybe, theoretical AI safety research written by part-time researchers on LessWrong. But this isn’t the kind of thing you’re talking about, I think.
There’s a nearby claim I agree with, which is that object-level work on specific cause areas seems undervalued relative to “meta” work.
“Academic-like research into interesting areas of AI risk is far easier to get funded by many funders than direct research into, say, vaccine production pipelines.”
My guess is that this has less to do with valuing theory or interestingness over practical work, and more to do with funders prioritizing AI over bio. Curious if you disagree.
First, yes, my overall point was about academic and theoretical work in general, and yes, as you pointed out, in large part this relates to how object-level work on specific cause areas is undervalued relative to “meta” work. But I tried to pick more concrete areas and organizations because I think that being more concrete was critical, even though it was nearly certain to attract more contentious specific objections. And perhaps I’m wrong, and the examples I chose aren’t actually overvalued, though that was not my impression. I also want to note that I’m more concerned about trajectory than about current numbers: putting aside intra-EA allocation of effort, if all areas of EA continue to grow, I think many get less attention than they deserve at a societal level, but the theoretical work should grow less than other areas, and far less than it seems poised to grow.
And as noted in another thread, regarding work on infinite ethics and other theoretical work, I got a very different impression at the recent GPI conference, though I clearly have a somewhat different view of what EAs work on compared to many others, since I don’t ever manage to go to EAG. (Which they only ever hold over the weekend, unfortunately.) Relatedly, on rationality techniques, I see tons of people writing about them, and have seen people with general funding spending lots of time thinking and writing about the topic, though I will agree there is less of that recently. But (despite knowing people who have looked for funding) no-one seems interested in funding more applied work on building rationality techniques into curricula, or even analysis of what works.
Lastly, on your final point, my example was across domains, but I do see the same when talking to people about funding for theoretical work on biosafety compared to applied policy or safety work. But I am hesitant to give specific examples, because the ones I would provide are things other people have applied for funding on, whereas the two I listed were things I directly worked on and sought funding for.