would switch to donating to GiveWell and working on AI alignment if they changed their beliefs on factual propositions about the scale and tractability of various problems.
Even this is an addition to your claimed values: the concept of tractability as understood in EA is only a proxy for impact, and often a bad one, since it fits poorly in cases where the effect is non-linear. For example, if you're a communist who believes the world would be much better after the revolution, you will obviously have zero effect most of the time, but then a large one if and when the revolution comes.
This exactly exemplifies the way that unconnected ideas creep in when we talk about being EA-aligned, even when we think it only means those two central values.