I figure I fall into the “skeptical EA” camp, so let me try defending it. :) It’d be good to make progress on this issue so I appreciate your engaging with it! Here I’ll consider your two key steps in post 2.
To take your first step first:
“Highly skeptical EAs think you need strong evidence to prove that something works / is true / will happen.”
This would be self-refuting, as you say. But I don’t think most have quite as strong a position as that. After all, ‘provenness’ is a matter of degree. It’s more that we’re relatively negative about speculative activities.
It’s also worth distinguishing between:
(1) charities which definitely have a 1% chance of doing an enormous amount of good. For example, a charity which’d definitely do 101 times as much good as AMF if a 100-sided die were rolled and came up 100.
(2) charities which may have a 1% chance of doing an enormous amount of good, but lack robust evidence for this. E.g. they have no track record, no precedent, no broad expert/academic endorsement. But a back-of-the-envelope calculation including some very rough guesses suggests that they have a 1% chance of doing 101 times as much good as AMF.
I’d give to (1) over AMF, but not (2).
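To make the arithmetic behind that choice explicit, here’s a minimal sketch in units where AMF’s expected good per dollar is 1. Only the 1% and 101x figures come from the example above; the pessimistic prior and the 0.2 weight on the back-of-the-envelope guess are purely illustrative assumptions of mine, just one way a skeptic might discount (2):

```python
# Minimal sketch of the comparison above, in units where AMF's expected
# good per dollar = 1. Only the 1% and 101x figures come from the comment;
# the prior and the 0.2 weight on the guess are illustrative assumptions.

amf_value = 1.0
value_if_success = 101.0

# Charity (1): the 1% probability is known for certain (the die roll).
p_known = 0.01
ev_charity_1 = p_known * value_if_success        # 1.01 > 1, so (1) beats AMF

# Charity (2): the 1% figure is only a rough back-of-the-envelope guess,
# so a skeptic shades it toward a more pessimistic prior.
p_guess, p_prior, weight_on_guess = 0.01, 0.001, 0.2
p_adjusted = weight_on_guess * p_guess + (1 - weight_on_guess) * p_prior
ev_charity_2 = p_adjusted * value_if_success     # ~0.28 < 1, so AMF beats (2)

print(f"(1): {ev_charity_1:.2f}  (2): {ev_charity_2:.2f}  AMF: {amf_value:.2f}")
```

The point is just that a known 1% chance of 101x clears the bar in expectation, whereas once an unsupported 1% guess is shaded toward a skeptical prior it can easily fall below it.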
To consider your third step:
“But the claim that it’s higher expected value to do the best ‘proven’ thing rather than a speculative/unproven thing (that on its face looks important, neglected and tractable) is itself unproven to a high standard of evidence.”
True. This would be a reductio ad absurdum of the claim that we should only ever believe ‘proven’ propositions (which we could perhaps define as those which an ideally rational agent in our position would have >90% credence in). But ‘skeptical EAs’ rarely claim anything so implausible.
The best epistemic approach on these issues is clearly far from proven, so we have no choice but to pick our best bet (adjusting for our uncertainty). It could still be the case, without inconsistency, that the best epistemic approach is to rate relatively speculative activities lower than the average EA does.
“Indeed, it’s a very hard claim to ever prove and would require a very large project involving lots of people over a long period of time looking at the average return on e.g. basic science research.”
This sort of look at the historical track record of different epistemic approaches does indeed seem the best approach. You’re right that the correct answer is far from 90% proven.
“(In my view, we don’t really know at the moment and should be agnostic on this issue.)”
If by ‘agnostic’ you mean ‘completely neutral’ (which you may very well not?) then I disagree. Some approaches seem better than others, and we should take our best bet.
“they have no track record, no precedent, no broad expert/academic endorsement.”
One problem is that interventions which do have these things are usually not neglected, i.e. you get higher tractability but lower neglectedness.
Since both matter, it becomes unclear where on the spectrum to focus.
It seems unlikely you want either 100% neglectedness or 100% tractability (because then you’d be ignoring the other factor), so my guess is somewhere in the middle.
I think this speaks in favor of looking at areas on the edge of consensus, where there’s emerging but not fully decisive evidence.
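One way to see why the answer is probably in the middle is a toy model. The linear trade-off and the product form below are my own illustrative assumptions, not anything from the post: if tractability falls as neglectedness rises, and marginal impact is roughly the product of the two, the best point sits in the interior rather than at either extreme.

```python
# Toy model of the neglectedness/tractability trade-off (the linear form
# and the product are illustrative assumptions, not an empirical claim).

def marginal_impact(neglectedness: float) -> float:
    """Tractability falls as neglectedness rises; impact is their product."""
    tractability = 1.0 - neglectedness
    return neglectedness * tractability

candidates = [i / 100 for i in range(101)]        # neglectedness from 0% to 100%
best = max(candidates, key=marginal_impact)
print(f"best neglectedness under this toy model: {best:.2f}")  # 0.50
```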
As you note, the next step is: “what’s the robust evidence that AMF beats (2)?”
I think your response, though, is the right one overall: it’s a difficult judgement call made with little evidence, and we have to make our best bet.
I think people should be modest about their confidence in focusing on projects that look like (1), AMF, or (2). It wouldn’t take a lot of evidence to convince me one way or another, and I would advocate a mixed strategy between them among the community today.
By agnostic, I just mean thinking there’s a decent chance (10%+) any of these could be the best approach for someone, and so not using this difference as the key issue on which you judge projects.
That’s much more plausible than total neutrality! I agree that there’s no theoretical argument (that I know of) for thinking that (2) is very likely to be worse than AMF. So it all depends on what the best available candidates for (2) are. Perhaps people could make progress by discussing a real world example of (2). (Ideally this wouldn’t be an org anyone has ties to, to allow for especially neutral discussion of it.)