I agree that that comment is highly relevant to this discussion.
I also agree with Schubert that CEA is a paradigm example of a cause-general organization, and I don’t think the things you’re discussing in your post are really about giving up cause-generality.
I think you are saying something like: running EA Global is cause-impartial (because the “resource” of attending EAG is impartial to the attendee’s cause), but not cause-general because attendance cannot be transferred between causes. Is that correct?
No, I’d say it’s cause-general (because the “resource” of attendance is not specific to the attendee’s cause), but not cause-agnostic if you take into account the cause areas people support in making admissions decisions. (You could hypothetically have a version of EAG where admission decisions were blind to attendees’ cause areas; in this case it would be cause-agnostic.) Some content at EAG is also cause-general, while other content is cause-decided.
Thanks, could you give a specific example?
Sorry, I wrote that based on overall impressions without checking back on the details, and I was wrong in some cases.
Curating content that’s about particular causes probably is giving up on cause-generality (but definitely giving up cause-agnosticism). Different admission bars for events isn’t giving up cause-generality. Cause-specific events could still be cause-general (e.g. if you had an event on applying general EA principles in your work, but aimed just at people interested in GHD), but in practice may not be (if you do a bunch of work that’s specifically relevant for that cause area).
Not sure I understand this. Schubert’s definition:
Cause-general investments have a wide scope: they can affect any cause.
A GHD event about general EA principles does not seem like it “can affect any cause.”
Or, I guess there is some trivial butterfly-effect sense in which everything can affect everything else, but it seems like a GHD conference has effects which are pretty narrowly targeted at one cause, even if the topics discussed are general EA principles.
My read: an event about general EA principles, considered as a resource, is, as Schubert puts it, cause-flexible: it could easily be adapted to be specialized to a different cause. The fact that it happens to be deployed, in this example, to help GHD, doesn’t change the cause-flexibility of the resource (which is a type of cause-generality).
I guess you could say that it was cause-flexible up until the moment you deployed it, and then it stopped being cause-flexible. I think it’s still useful to be able to distinguish cases where it would have been easy to deploy it to a different cause from cases where it would not; and since we have cause-agnostic vs cause-decided to talk about the distinction at the moment of commitment, I am trying to keep “cause-general” to refer to the type of resource rather than the way it’s deployed.
(I don’t feel I have total clarity on this; I’m more confident that there’s a gap between how you’re using terms and Schubert’s article than I am about what’s ideal.)
Different admission bars for events isn’t giving up cause-generality.
Why not? I think we agree (?) that EAG in its current form is ~cause-general. If we changed it so that the admission bar depends on the applicant’s cause, isn’t that making it less cause-general?
Less cause-agnostic (which is a property of what causes you’re aiming at), not less cause-general (which is a property of the type of resource which is being deployed; and remains ~constant in this example).
Ahhh ok, I think maybe I’m starting to understand your argument. I think you are saying something like: the “resources being deployed” at an event are things like chairs, the microphone for the speaker, the carpet in the room, etc. and those things could be deployed for any cause, making them cause general.
In my mind the resources being deployed are things like the invites, the agenda, getting speakers lined up, etc. and those are cause-specific.
I would maybe break this down as saying that the Marriott or whoever is hosting the event is a cause-general resource, but the labor done by the event-organizing team is cause-specific. And usually I am speaking to the event-organizing team about cause-generality, not the Marriott, which is why I was implicitly assuming that the relevant resources would become cause-specific, though I understand the argument that the Marriott’s resources are still cause-general.
Thanks, I think this is helpful towards narrowing in on whether there’s a disagreement:
I’m saying that the labour done by the organizing team in this case is still cause general. It’s an event on EA principles, and although it’s been targeted at GHD, it would have been a relatively easy switch early in the planning process to decide to target it at GCR-related work instead.
I think this would no longer be the case if there was a lot of work figuring out how to specialise discussion of EA principles to the particular audience. Especially if you had to bring a GHD expert onboard to do that.
Great, two more clarifying questions:
You say that the labor is cause-general because it could counterfactually have been focused on another cause, but would you say that the final event itself is cause-general?
Would you say that a donation which is restricted to only be used for one cause is cause-general because it could counterfactually have been restricted to go to a different cause?
I think figure 2 in Schubert’s article is important for my conception of this.
On question 1: I think that CEA has developed cause-general capacity, some of which (the cause-flexible) is then being deployed in service of different causes. No, I don’t think that the final event is cause-general, but I don’t think this undercuts CEA’s cause-generality (this is just the nature of cause-flexible investments being eventually deployed).
On question 2: I don’t think the donation itself is cause-general, but I’d look at the process that produced the donation, and depending on details I might want to claim that was cause-general (e.g. someone going into earning to give).
ok, if we agree that the final event is not cause-general, then I’m not sure I understand the concern. Are you suggesting something like: where I say “community building projects must either: … or break cause-generality” I instead say “… or be targeted towards outputs (e.g. events) that break cause-generality”?
Hmm. I suppose I don’t think that “break cause-generality” is a helpful framing? Like there are two types of cause-general capacity: broad impact capacity and cause-flexible capacity. The latter is at some point going to be deployed to a specific cause (I guess unless it’s fed back into something general, but obviously you don’t always want to do that).
On the other hand your entire post makes (more) sense to me if I substitute in “cause-agnostic” for “cause-general”. It seemed to me like that (or a very close relative) was the concept you had in mind. Then it’s just obviously the case that all of the things you are talking about maybe doing would break cause-agnosticism, etc.
I’m very interested if “cause-agnostic” doesn’t seem to you like it’s capturing the important thing.
As you mentioned elsewhere, “cause agnosticism” feels like an epistemic state, rather than a behavior. But even putting that aside: It seems to me that one could be convinced that labor is more useful for one cause than it is for another, while still remaining agnostic as to the impact of those causes in general.
Working through an example, suppose:
I believe there is a 50% chance that alternative proteins are twice as good as bed nets, and a 50% chance that they are half as good. (I will consider this a simplified form of being maximally cause-agnostic.)
I am invited to speak about effective altruism at a meat science department.
I believe that the labor of the meat scientists I’m speaking to would be ten times as good for the alternative protein cause if they worked on alternative proteins than it would be for the bed net cause if they worked on bed nets, since their skills are specialized towards working on AP.
So my expected payoffs are:
Talk about alternative proteins, which will get all of them working on AP: ½ × 2 × 10 + ½ × ½ × 1 = 10.25
Talk about bed nets, which will get all of them working on bed nets: ½ × 2 × 1 + ½ × ½ × 1 = 1.25
Talk about EA in general, which I will assume results in a 50% chance that they will work on alternative proteins and a 50% chance that they work on bed nets: ½ × 2 × (½ × 10 + ½ × 1) + ½ × ½ × (½ × 10 + ½ × 1) ≈ 6.88
I therefore choose to talk about alternative proteins.
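For concreteness, the three expected values can be checked with a few lines of Python. This is just a sketch that mirrors the arithmetic of the example; the probability, cause multipliers, and productivity figures are the assumptions stated above, not anything more principled.

```python
# Sketch of the expected-value arithmetic from the example above.
# p is the assumed probability that alternative proteins (AP) are twice
# as good as bed nets; otherwise AP are assumed half as good.
p = 0.5

# Productivity assumptions from the example: the meat scientists' labor
# is worth 10 units on AP work and 1 unit on bed-net work.
ap_productivity = 10
bn_productivity = 1

# Talk about AP (everyone works on AP), mirroring 1/2*2*10 + 1/2*1/2*1:
ev_ap_talk = p * 2 * ap_productivity + (1 - p) * 0.5 * bn_productivity

# Talk about bed nets (everyone works on bed nets), mirroring 1/2*2*1 + 1/2*1/2*1:
ev_bn_talk = p * 2 * bn_productivity + (1 - p) * 0.5 * bn_productivity

# General EA talk: assumed 50/50 split between AP and bed-net work,
# mirroring 1/2*2*(1/2*10 + 1/2*1) + 1/2*1/2*(1/2*10 + 1/2*1):
mixed = 0.5 * ap_productivity + 0.5 * bn_productivity
ev_ea_talk = p * 2 * mixed + (1 - p) * 0.5 * mixed

print(ev_ap_talk, ev_bn_talk, ev_ea_talk)  # 10.25 1.25 6.875
```

Under these assumptions the AP talk dominates, even though the credences about the two causes remain perfectly symmetric.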
It feels like this choice is entirely consistent with me maintaining a maximally agnostic view about which cause is more impactful?
Thanks for the example. I agree that there’s something here which comes apart from cause-agnosticism, and I think I now understand why you were using “cause-general”.
This particular example is funny because you also switch from a cause-general intervention (talking about EA) to a cause-specific one (talking about AP), but you could modify the example to keep the interventions cause-general in all cases by saying it’s a choice between giving a talk on EA to (1) top meat scientists, (2) an array of infectious disease scientists, or (3) random researchers.
This makes me think there’s just another distinct concept in play here, and we should name the things apart.