First, biases are far more critical in the tails of distributions. For example, suppose the optimal allocation is 1% of humans alive today working on ML-based AI safety versus 0.01% of humanity working on mathematical approaches to AI risk, or 0.001% working on forecasting timescales versus 0.0000001% working on infinite ethics. If the interestingness heuristic leads people to do 50x as much work as is optimal on the second area in each pair, the first ten thousand EAs won’t end up overinvesting in any of them—but over time, if EA scales, we’ll see a problem.
On the specific topics, I’m not saying that infinite ethics is literally worthless, I’m saying that even at 1 FTE, we’re wasting time on it. Perhaps you view that as incorrect on the merits, but my claim is, tentatively, that it’s already significantly less important than a marginal FTE on anything else on the GPI agenda.
Lastly, I think we as a community are spending lots of time discussing rationality. I agree it’s no one’s full-time job, but it’s certainly a lot of words every month on LessWrong, and then far too little time actually creating ways of applying the insights, as CFAR did when building their curriculum, albeit not at all scalably. And the plan to develop a teachable curriculum for schools and groups, which I view as almost the epitome of the applied side of raising the sanity waterline, was abandoned entirely. So we’ve done lots of interesting theory and writing on the topic, and produced much too little of concrete value. (With the slight exception of Julia’s book, which is wonderful.) Maybe that’s due to something other than the topic being interesting to people, but having spent time on it personally, my inside view is that it’s largely the dynamic I identified.