I know you are only claiming (2), but my point is that your argument implies (1). Simply put: if there is a genuine bias towards interesting but unimpactful work, why would it only kick in in the future, rather than having already shown up after more than 10 years of EA?
If your claim is (2) only, it also seems false. The trajectory for infinite ethics is maybe 2–3 FTE working on it in 5 years, or something like that? The trajectory for rationality tools looks like basically no one working on them in the future; interest in that topic is declining over time.
I agree with the last section apart from the last paragraph—I think theoretical philosophy and economics are very important. I also think we have completely different reasons for accepting the conclusions we do agree on. I have not seen any evidence of an 'interestingness bias', and it plays no role in my thinking.
First, biases are far more critical in the tails of distributions. For example, suppose we should optimally have 1% of humans alive today work on ML-based AI safety versus 0.01% on mathematical approaches to AI risk, or 0.001% work on forecasting timescales versus 0.0000001% on infinite ethics. If the interestingness heuristic leads people to do 50x as much work as is optimal on the second area in each pair, the first ten thousand EAs won't end up visibly overinvesting in any of them—but over time, if EA scales, we'll see a problem.
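To make the tail arithmetic concrete, here is a minimal sketch. It assumes (purely for illustration—the original paragraph gives fractions of humanity, not of the movement) that the percentages can be read as target shares of the community's labor, and applies the hypothetical 50x over-allocation from the heuristic:

```python
# Hypothetical optimal labor shares per area, taken from the text above.
# Reading them as shares of the EA community's labor is an assumption
# made for illustration only.
optimal_share = {
    "ML-based AI safety": 1e-2,
    "mathematical AI risk": 1e-4,
    "forecasting timescales": 1e-5,
    "infinite ethics": 1e-9,
}
BIAS = 50  # hypothesized interestingness over-allocation factor

for community_size in (10_000, 10_000_000):
    print(f"Community of {community_size:,}:")
    for area, share in optimal_share.items():
        optimal_fte = share * community_size
        biased_fte = BIAS * optimal_fte
        print(f"  {area}: optimal {optimal_fte:.6g} FTE, "
              f"with 50x bias {biased_fte:.6g} FTE")
```

The point the numbers illustrate: at 10,000 people, even a 50x over-allocation to the thinnest tail (infinite ethics) is a tiny fraction of one FTE and so invisible, whereas at much larger community sizes the same multiplicative bias turns into whole careers misallocated.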
On the specific topics: I'm not saying that infinite ethics is literally worthless, I'm saying that even at 1 FTE, we're wasting time on it. Perhaps you view that as incorrect on the merits, but my tentative claim is that it's already significantly less important than a marginal FTE on anything else on the GPI agenda.
Lastly, I think we as a community are spending lots of time discussing rationality. I agree it's no one's full-time job, but it's certainly a lot of words every month on LessWrong—and then far too little time actually creating ways of applying the insights, as CFAR did when building their curriculum, albeit not at all scalably. And the plan to develop a teachable curriculum for schools and groups, which I view as almost the epitome of the applied side of raising the sanity waterline, was abandoned entirely. So we have done lots of interesting theorizing and writing on the topic, and produced much too little of concrete value. (With the slight exception of Julia's book, which is wonderful.) Maybe that's due to something other than the topic's interestingness, but having spent time on it personally, my inside view is that it's largely the dynamic I identified.