(There’s a lot more I might want to say about this, and also don’t take the precise 80% too seriously, but FWIW:)[1]
When we do cause prioritization, we’re judging whether one cause is better than another under our (extreme) uncertainty. To do that, we need to clarify what kind of uncertainty we have, and what it means to do “better” given that uncertainty. That, in turn, requires reflecting on questions like:
“Should we endorse classical Bayesian epistemology (even as an ‘ideal’)?” or
“How do we compare actions’ ‘expected’ consequences, when we can’t conceive of all the possible consequences?”
You might defer to others who’ve reflected a lot on these questions. But to me it seems there are surprisingly few people who’ve (legibly) done so. E.g., take the theorems that supposedly tell us to be (or at least to “approximate”) classical Bayesians. I’ve seen very little work carefully spelling out why & how these theorems tell either ideal or bounded agents what to believe, and how to make decisions. (See also this post.)
I’ve also often seen people who are highly deferred-to in EA/rationalism make claims in these domains that, AFAICT, are straightforwardly confused or question-begging. Like “precise credences lead to ‘better decisions’ than imprecise credences” — when the whole question of what makes decisions “better” depends on our credences.
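To make the “question-begging” worry concrete, here’s a toy sketch of my own (none of these numbers or payoffs are from the original argument; they’re made up for illustration): if an imprecise credence is modeled as a *set* of admissible probabilities, then which action has higher expected value can flip depending on which member of the set you consult, so saying precise credences “lead to better decisions” already presupposes a standard for scoring decisions, which is the thing in dispute.

```python
# Toy illustration (hypothetical payoffs and probabilities, my own example):
# with an imprecise credence in an event E, represented as a set of candidate
# probabilities, the expected-value ranking of two actions can flip across
# members of the set.

def expected_value(p_event: float, payoff_if_event: float, payoff_otherwise: float) -> float:
    """Expected value of an action under a single precise probability for E."""
    return p_event * payoff_if_event + (1 - p_event) * payoff_otherwise

# Action A is a gamble on E; action B is safe either way.
action_a = (10.0, -5.0)   # (payoff if E, payoff if not-E)
action_b = (2.0, 2.0)

# An imprecise credence in E, modeled as a set of probabilities the agent
# regards as admissible ("somewhere between 0.3 and 0.7").
credal_set = [0.3, 0.7]

for p in credal_set:
    ev_a = expected_value(p, *action_a)
    ev_b = expected_value(p, *action_b)
    better = "A" if ev_a > ev_b else "B"
    print(f"P(E)={p}: EV(A)={ev_a:+.1f}, EV(B)={ev_b:+.1f} -> {better} looks better")

# Printed output:
# P(E)=0.3: EV(A)=-0.5, EV(B)=+2.0 -> B looks better
# P(E)=0.7: EV(A)=+5.5, EV(B)=+2.0 -> A looks better
```

Neither verdict is the “better decision” full stop; which one looks better depends on how you handle the imprecision, which is exactly the question the quoted claim assumes away.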
Even if someone has legibly thought a lot about this stuff, their basic philosophical attitudes might be very different from yours-upon-reflection. So I think you should only defer to them as far as you have reason to think that’s not a problem.
Much of what I write here is inspired by discussions with Jesse Clifton.