Previously I had wondered whether effective altruism was a “prediction-complete” problem—that is, whether learning to predict things accurately should be considered a prerequisite for EA activity (if you’re willing to grant that the far future is of tremendous importance). But the other day it occurred to me that it might be sufficient to simply be well-calibrated. If you really are well-calibrated, and it really does appear that when you say something is 90% probable it actually happens 90% of the time, then you don’t need to know how to predict everything. It should be sufficient to look for areas where you currently assign a 90% probability to an intervention being a good thing, and then focus your EA activities there.
(There’s a flaw in this argument if calibration is domain-specific: being well-calibrated in the domains where you’ve tested yourself may not imply being well-calibrated about which causes are good.)
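To make the calibration claim concrete, here is a minimal sketch of how you might check it against a log of past predictions. The function name and the example log are purely illustrative, not anything from the original argument: it just bins predictions by stated probability and compares each bin against how often those things actually happened.

```python
from collections import defaultdict

def calibration_report(predictions):
    """Group (stated_probability, outcome) pairs by stated probability
    and compare each stated probability with the observed frequency."""
    bins = defaultdict(list)
    for prob, happened in predictions:
        bins[round(prob, 1)].append(happened)
    for prob in sorted(bins):
        outcomes = bins[prob]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {prob:.0%}: happened {observed:.0%} of the time "
              f"({len(outcomes)} predictions)")

# Hypothetical prediction log: (probability assigned, whether it happened)
log = [(0.9, True), (0.9, True), (0.9, False), (0.9, True),
       (0.6, True), (0.6, False), (0.6, True)]
calibration_report(log)
```

If the 90% bin comes out near 90%, that is the sense of “well-calibrated” the argument relies on.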