Meta note: I feel a vague sense of doom about a lot of questions on the EA Forum (contrasted with LessWrong), namely that questions end up focused on "how should EA overall coordinate?", "what should be the top causes?" and "what should be part of the EA narrative?"
I worry about this because I think it's harder to think clearly about narratives and coordination mechanisms than it is about object-level facts. I also have a sense that the questions are often framed in a way that is trying to tell me the answer rather than help me figure things out.
And often I think the questions could be reframed as empirical questions without the "should" and "we" frames, which (a) would be easier to reason about, and (b) would remain approximately as useful for helping people coordinate.
"Is X a top cause area?" is a sort of weird question. The whole point of EA is that you need to prioritize, and there are only ever going to be a smallish number of "top causes". So the answer to any given "Is X a top cause?" is going to be "probably not."
But it's still useful to curiously explore cause areas that are underexplored. "What are the tractable interventions in [this particular cause]?" is a question you can explore without making it about whether it's one of the top causes overall.
I also think this pattern suggests something is going wrong. I'm guessing a lot of it is that people feel a need to justify posts as on-topic. If they post a thing because it seems interesting, confusing, exciting, etc., they're likely to get challenged about why the post belongs on the EA Forum.
This means that EAs can’t talk about ideas and areas unless either (a) they’ve already been sufficiently well-explored by EAs elsewhere (e.g., in an 80K blog post or an Open Phil report) that there’s a pre-existing consensus this is an especially good thing to talk about; or (b) they’re willing to make the discussion very meta-oriented and general. (“Why don’t EAs care more about reducing rates of medical error?”, as opposed to “Hey, here’s an interesting study on things that mediate medical error rates!”)
This seems OK iff the EA Forum is only intended to intervene on a particular part of the idea pipeline — maybe the idea is for individuals and groups to explore new frontiers elsewhere, and bring them to the EA Forum once they’re already well-established enough that everyone can agree they make sense as an EA priority. In that case, it might be helpful to have canonical locations people can go to have those earlier discussions.
EA is more concerned with capital allocation than LessWrong is, so this doesn't seem surprising.
Being a “top cause area” is basically synonymous with “put EA capital towards this thing.”
"What are the tractable interventions in [this particular cause]?" is a question you can explore without making it about whether it's one of the top causes overall.
At root, we’ll only want to explore tractable interventions in cause areas that are plausible candidates for EA capital allocation, so I don’t think this framing sidesteps the issue.
I have more thoughts, but they're sufficiently off-topic for this post that I'll probably start a new thread about them.
Please do expand this into a top-level post if you are able to!
I didn't write a top-level post, but I sketched out some of the relevant background ideas here. (I'm not sure whether they answer your particular concerns, but you can ask more specific questions there if you have them.)