To some significant extent, I just think choice of cause area is quite personal. Some people are longtermists, some aren’t. Some people think it’s good to reduce x-risk, some don’t etc.
I think I might agree with what you actually mean, but not with a natural interpretation of this sentence. “Quite personal” sounds to me like it means “subjective, with no better or worse answers, like how much someone likes How I Met Your Mother.” But I think there may be “better or worse answers” here, or at least more and less consistent and thought-out answers. And I think what’s going on is not simply subjectivity.
Instead, I’d say that choices of broad cause area seem to come down to a combination of factors like:
One’s implicit and explicit moral/normative/decision-theoretic views
E.g., would increasing the number of lives that occur be a good thing, if those lives are happy?
E.g., should people just maximise expected value? Even in Pascalian situations?
One’s implicit and explicit epistemological views
E.g., how much to trust inside- vs outside-views, chains of theoretical arguments vs empirical data, etc.
One’s implicit and explicit empirical views, sometimes on questions where it’s unusually hard to get evidence (meaning one must rely more on priors)
E.g., are we at the hinge of history?
The common assumptions, norms, social environments, etc. one is exposed to
I think people can explicitly discuss and change their minds about all of these things.
But that seems (a lot?) harder than explicitly discussing and changing one’s mind about priorities within a broad cause area. And I think this is partly because this between-cause-area stuff involves more differences in moral views, priors, etc. I think this is the sort of thing you might be gesturing at with “To some significant extent, I just think choice of cause area is quite personal”?
To return from my tangent to the post at hand: The activities involved in the stages of this model:
might inform relevant empirical views, but that effect might be limited by those views sometimes being unusually dependent on priors
might usually have little to say about relevant moral/normative/decision-theoretic views and relevant epistemological views.
And I think that that’s one way to explain why this model might not have much to say about between-cause-area decisions.
You’re right. “Personal” wasn’t the best choice of word; I’m going to blame my 11pm brain again.
I sort of think you’ve restated my position, but worded it somewhat better, so thanks for that.