I don’t see how this is a response to the comment. I think there is approximately 1 FTE working on infinite ethics in EA. If infinite ethics is indeed, as you said in the main post, one of the four most interesting topics in the whole of EA, and approximately no-one is working on it in EA, this is evidence that interestingness is not an important source of bias in the community.
Moreover, according to your argument we can know that fewer people should be working on infinite ethics in EA merely by knowing that the topic is interesting. This is very implausible.
Or take theoretical AI safety. I take it that your argument is that some (how many?) people should stop doing this work and we can know this only by virtue of knowing that the work is interesting. I can think of many arguments for not working on AI safety, but the fact that it is interesting seems a very weak one. I think interesting academic research on AI is easy to fund because the funders think (a) there is a decent chance we will all die due to AI in the next 20 years, and (b) this research might have some small chance of averting that. I find it hard to see how the fact that the research is interesting is an important source of bias in the decision to fund it.
GPI, FHI, CSER and MIRI are a small fraction of overall funding in EA. CSER doesn’t get any EA money, and I think the budgets of FHI and GPI are in the low millions per year, compared to hundreds of millions of dollars per year in EA spending.
I agree with your other lines. But I think it’s inaccurate to model the bulk of EA efforts, particularly in longtermism, in terms of funding (as opposed to e.g. people).
“we can know that fewer people should be working on [each area I listed]”
I think you misread my claim. I said that “the number of people we need working on them should probably be more limited than the current trajectory”—EA is growing, and I think that it’s on track to put far too much effort into theoretical work, and will send more people into academia than I think is optimal.
“I take it that your argument is that some (how many?) people should stop doing this work”
I had a section outlining what the concrete outcome I am advocating for looks like.
To address the question about AI safety directly, my claim is that of the many people interested in doing this work, a large fraction should at least consider doing something a step more concrete—as a few concrete examples, ML safety engineering instead of academic ML safety research, or applied ML safety research instead of mathematical work on AI safety, or policy activism instead of policy research, or public communication instead of survey research. And overall, I think that for each, the former is less prestigious within EA, and under-emphasized.
I think the implications of your argument are (1) that these areas get too much interest already, and (2) these areas will get too much interest in the future, unless we make extra efforts relative to today, perhaps motivated by your post.
(1) doesn’t seem true of the areas you mention, and this is particularly clear in the case of infinite ethics, where there is only 1 FTE working on it. To give an instructive anecdote, the other person I know of who was working on this topic during her PhD (Amanda Askell) decided to go and work for OpenAI to do AI policy stuff.
The point also seems clear in relation to rationality tools given that the main org working on that (CFAR) doesn’t seem to operate any more.
There is more attention to theoretical AI stuff and to EA criticism. Taking your high-level EA criticism as an example, this is exclusively a side-hustle for people in the community spending almost all of their time doing other things. It is true that criticism gets lots of attention in EA (which is a strength of the community in my view) but it’s still a very small fraction of overall effort.
And the fact that these topics are interesting seems like a very weak steer as to how many resources should go into them.
I’m explicitly saying that (1) is not my general claim—almost everything is under-resourced, and I don’t think we want fewer people in any of these areas, but given limited resources, we may want to allocate differently. My point, as I tried to clarify, was (2).
Regarding infinite ethics, it came up in several different presentations at the recent GPI conference, but I agree it’s getting limited attention, and on the other points, I don’t think we disagree much. Given my perception that we barely disagree, I would be interested in whether you would disagree with any of my concrete suggestions at the end of the post.
I know you are only claiming (2), but my point is that your argument implies (1). Simply, if there is a genuine bias towards interesting but not impactful work, why would it only kick in in the future, rather than having done so already after >10 years of EA?
If your claim is (2) only, this also seems false. The trajectory for infinite ethics is maybe 2-3 FTE working on it in 5 years or something? The trajectory for rationality tools seems like basically no-one will be working on that in the future; interest in that topic is declining over time.
I agree with the last section apart from the last paragraph—I think theoretical philosophy and economics are very important. I also think we have completely different reasons for accepting the conclusions we do agree on. I have not seen any evidence of an ‘interestingness bias’, and it plays no role in my thinking.
First, biases are far more critical in the tails of distributions. For example, if we should optimally have 1% of humans alive today work on ML-based AI safety and 0.01% of humanity work on mathematical approaches to AI risk, or 0.001% work on forecasting time scales, and 0.0000001% work on infinite ethics, but the interestingness heuristic leads to people doing 50x as much work as is optimal on the second area in each pair, the first ten thousand EAs won’t end up overinvesting in any of them—but over time, if EA scales, we’ll see a problem.
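To spell out the arithmetic behind that, here is a minimal sketch of the dynamic. The optimal share and the 50x factor here are purely illustrative assumptions (not the figures above, and not estimates of anything):

```python
# Minimal sketch (illustrative numbers only): a fixed multiplicative bias toward
# a niche area is invisible while a movement is small, but produces a clearly
# visible surplus of people once the movement is large.

OPTIMAL_SHARE = 1e-6  # assumed optimal fraction of the movement's labor on the niche topic
BIAS = 50             # assumed overinvestment factor from the interestingness heuristic

for movement_size in (10_000, 100_000, 1_000_000):
    optimal_fte = OPTIMAL_SHARE * movement_size
    biased_fte = BIAS * optimal_fte
    surplus = biased_fte - optimal_fte
    print(f"{movement_size:>9,} people: optimal {optimal_fte:.2f} FTE, "
          f"biased {biased_fte:.1f} FTE, surplus {surplus:.1f} FTE")
```

With those made-up numbers, the biased allocation rounds to nobody at 10,000 people but becomes a roughly 49 FTE surplus at a million, which is the sense in which the bias only shows up once the movement grows.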
On the specific topics, I’m not saying that infinite ethics is literally worthless; I’m saying that even at 1 FTE, we’re wasting time on it. Perhaps you view that as incorrect on the merits, but my claim is, tentatively, that it’s already significantly less important than a marginal FTE on anything else on the GPI agenda.
Lastly, I think we as a community are spending lots of time discussing rationality. I agree it’s no-one’s full-time job, but it’s certainly a lot of words every month on LessWrong, and then far too little time actually creating ways of applying the insights, as CFAR did when building their curriculum, albeit not at all scalably. And the plan to develop a teachable curriculum for schools and groups, which I view as almost the epitome of the applied side of increasing the sanity waterline, was abandoned entirely. So we’re doing, and have done, lots of interesting theory and writing on the topic, and much too little of concrete value. (With the slight exception of Julia’s book, which is wonderful.) Maybe that’s due to something other than the fact that the topic was interesting to people, but having spent time on it personally, my inside view is that it’s largely the dynamic I identified.
A clarification that CSER gets some EA funds (combination of SFF, SoGive, BERI in kind, individual LTFF projects) but likely 1⁄3 or less of its budget at any given time. The overall point (all these are a small fraction of overall EA funds) is not affected.
I’ll just note that lots of what CSER does is much more policy relevant and less philosophical compared to the other orgs mentioned, and it’s harder to show impact for more practical policy work than it is to claim impact for conceptual work. That seems to be part of the reason EA funding orgs haven’t been funding as much of their budget.