I’m having trouble figuring out how to respond to this. I understand that it’s kind of an academic exercise to see how cause prioritization might work out if you got very very rough numbers and took utilitarianism very seriously without allowing any emotional considerations to creep in. But I feel like that potentially makes it irrelevant to any possible question.
If we’re talking about how normal people should prioritize...well, the only near-term cause close to x-risk here is animal welfare. If you tell a normal person “You can either work to prevent you and everyone you love from dying, or work to give chickens bigger cages, which do you prefer?”, their response is not going to depend on QALYs.
If we’re talking about how the EA movement should prioritize, the EA movement currently spends more on global health than on animal welfare and AI risk combined. It clearly isn’t even following near-termist ideas to their logical conclusion, let alone long-termist ones.
If we’re talking about how a hypothetical perfect philosopher would prioritize, I think there would be many other things they would worry about before they got to long-termism. For example, does your estimate for the badness of AI risk include that it would end all animal suffering forever? And all animal pleasure? Doesn’t that maybe flip the sign, or multiply its badness by an order of magnitude? You very reasonably didn’t include that because it’s an annoying question that’s pretty far from our normal moral intuitions, but I think there are a dozen annoying questions like that, and that long-termism could be thought of as just one of that set, no more fundamental or crux-y than the others for most people.
I’m not even sure how to think about what these numbers imply. Should the movement put 100% of its money and energy into AI risk, the cause ranked most efficient here? Or should it only do that up until the point where the low-hanging fruit has been picked and something else becomes most effective? Are we sure we’re not already at that point, given how much trouble LTF charities report having in finding new things to fund? Does long-termism change this, because astronomical waste is so vast that we should be desperate for even the highest fruit? Is this just Pascal’s Wager? These all seem like questions we have to have opinions on before concluding that long-termism and near-termism have different implications.
I find that instead of having good answers to any of these questions, my long-termism (such as it is) hinges on an idea like “I think the human race going extinct would be extra bad, even compared to many billions of deaths”. If you want to go beyond this kind of intuitive reasoning into real long-termism, I feel like you need to do extra work to answer the questions above, and in general that work isn’t being done.
Thanks for your response. I agree that the goal should be trying to hold the conference in a way that’s best for the world and for EA’s goals. If I were to frame my argument more formally, it would be something like—suppose that you reject 1000 people per year (I have no idea if this is close to the right number). 5% get either angry or discouraged and drop out of EA. Another 5% leave EA on their own for unrelated reasons, but would have stayed if they had gone to the conference because of some good experience they had there. So my totally made up Fermi estimate is that we lose 100 people from EA each time we run a closed conference. Are the benefits of the closed conference great enough to compensate for that?
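To make the arithmetic explicit, here is that back-of-the-envelope estimate written out as a tiny Python sketch. The figures are the same made-up ones as above, and the variable names are just mine for illustration; nothing here is meant to be accurate.

```python
# A minimal sketch of the Fermi estimate above, using the comment's own
# made-up numbers. None of these figures are claimed to be accurate.

rejected_per_year = 1000     # guessed number of people rejected per year
p_drop_out_upset = 0.05      # share who get angry or discouraged and drop out of EA
p_would_have_stayed = 0.05   # share who leave for unrelated reasons but would have
                             # stayed after a good conference experience

people_lost = rejected_per_year * (p_drop_out_upset + p_would_have_stayed)
print(people_lost)  # 100.0 people lost each time a closed conference is run, on these assumptions
```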
I’m not sure, because I still don’t understand what those benefits are. I mentioned in the post that I’d be in favor of continuing to have a high admissions bar for the networking app (or maybe just sorting networkers by promise level). You write that:
I think maybe our crux is that I don’t understand this impulse, beyond the networking thing I mentioned above. Is the concern that the unpromising people will force promising people into boring conversations and take up too much of their time? That they’ll disrupt talks?
My understanding is that people also sometimes get rejected from EAGx, and that there is no open-admission conference. Is this correct?