As you mentioned elsewhere, “cause agnosticism” feels like an epistemic state, rather than a behavior. But even putting that aside: It seems to me that one could be convinced that labor is more useful for one cause than it is for another, while still remaining agnostic as to the impact of those causes in general.
Working through an example, suppose:
I believe there is a 50% chance that alternative proteins are twice as good as bed nets, and a 50% chance that they are half as good. (I will consider this a simplified form of being maximally cause-agnostic.)
I am invited to speak about effective altruism at a meat science department.
I believe that the labor of the meat scientists I’m speaking to would be ten times as good for the alternative protein cause if they worked on alternative proteins than it would be for the bed net cause if they worked on bed nets, since their skills are specialized towards working on AP.
So my payoff matrix is:
Talk about alternative proteins, which will get all of them working on AP: ½×2×10 + ½×½×10 = 12.5
Talk about bed nets, which will get all of them working on bed nets: ½×2×1 + ½×½×1 = 1.25
Talk about EA in general, which I will assume results in a 50% chance that they will work on alternative proteins and a 50% chance that they work on bed nets: ½×2×(½×10+½×1) + ½×½×(½×10+½×1) = 6.88
I therefore choose to talk about alternative proteins.
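The arithmetic above can be checked with a short sketch. It assumes the simplified model in the example: bed net labor counts as 1 unit, the AP cause is either twice or half as good as bed nets with equal probability, and the expected cause-quality factor multiplies the labor multiplier produced by each talk.

```python
# Sketch of the payoff calculation in the example above.
# Assumption: payoffs are in units of one bed-net worker's labor.

# Two equally likely worlds: AP is 2x as good as bed nets, or half as good.
worlds = [(0.5, 2.0), (0.5, 0.5)]  # (probability, relative value of AP)

# Labor multiplier produced by each talk:
#   AP talk      -> everyone works on AP (10x multiplier)
#   bed net talk -> everyone works on bed nets (1x)
#   generic talk -> a 50/50 split between the two
labor = {
    "alternative proteins": 10.0,
    "bed nets": 1.0,
    "EA in general": 0.5 * 10.0 + 0.5 * 1.0,
}

def payoff(talk):
    """Expected payoff: sum over worlds of P(world) x quality x labor."""
    return sum(p * q * labor[talk] for p, q in worlds)

for talk in labor:
    print(f"{talk}: {payoff(talk):.2f}")
```

Running this reproduces the three payoffs (12.5, 1.25, and 6.88 after rounding), so the ranking, and hence the choice to talk about alternative proteins, is unchanged by the agnostic 50/50 prior over cause quality.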
It feels like this choice is entirely consistent with me maintaining a maximally agnostic view about which cause is more impactful?
Thanks for the example. I agree that there’s something here which comes apart from cause-agnosticism, and I think I now understand why you were using “cause-general”.
This particular example is funny because you also switch from a cause-general intervention (talking about EA) to a cause-specific one (talking about AP), but you could modify the example to keep the interventions cause-general in all cases by saying it’s a choice between giving a talk on EA to (1) top meat scientists, (2) an array of infectious disease scientists, or (3) random researchers.
This makes me think there’s just another distinct concept in play here, and we should give the two things separate names.