Future debate week topics?
1. Global health & wellbeing (including animal welfare) vs global catastrophic risks, based on Open Phil’s classifications.
2. Neartermism vs longtermism.
3. Extinction risks vs risks of astronomical suffering (s-risks).
4. Saving 1 horse-sized duck vs saving 100 duck-sized horses.
I like the idea of going through cause prioritization together on the EA Forum.
Me too! The two broad categories of ideas I’ve had are basically:
1. Key cause-prio debates, especially ones that have played out over many years and many posts but haven’t really been summarised or focused into one place (like those you list).
2. Debates about tactics/methodology. For example: “We should invest more heavily in animal sentience research than in corporate campaigns.” That’s a rough example, but the idea would be to run a debate where people have to do fairly fine-grained cost-effectiveness thinking before they can vote. I doubt we would get as much engagement, but the engagement may be particularly valuable if the question is well posed.
5. The value of something like how EA looks to outsiders? That seems to be the consideration behind multiple points (2, 4, 7, and 8) in this which was upvoted, and I saw it come up other times this debate week (for example here) as a reason against the animal welfare option.
(I personally think that compromising epistemics for optics is one way movements … if not die, then at least become incrementally more of a simulacrum, no longer the thing they were meant to be. And I’m not sure such claims are always honest, or whether they can secretly function to enforce the relevance of public attitudes one shares without needing to argue for them.)