We wanted to focus on a specific and somewhat manageable question related to AI vs. non-AI cause prioritization. You’re right that it’s not the only important question to ask. If you think the following claim is true - ‘non-AI projects are never undercut but always outweighed’ - then it doesn’t seem like an important question at all. I doubt that claim holds generally, for the reasons presented in the piece. When deciding what to prioritize, there are also broader strategic questions that matter - how money and effort are being allocated by other parties, what your comparative advantage is, etc. - that we don’t touch at all here.
If you think the following claim is true - ‘non-AI projects are never undercut but always outweighed’
Of course I don’t think this. AI definitely undercuts some non-AI projects. But “non-AI projects are almost always outweighed in importance” seems very plausible to me, and I don’t see why anything in the piece is a strong reason to disbelieve that claim, since this piece is only responding to the undercutting argument. And if that claim is true, then the undercutting point doesn’t matter.
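To keep these two apart, one rough way to write the distinction down (my notation, not the piece’s): let $V(P)$ be the all-things-considered value of a non-AI project $P$. Then

$$\text{undercut:}\quad V(P \mid \text{AI}) \ll V(P \mid \text{no AI}), \qquad \text{outweighed:}\quad V(P) \text{ unchanged, but } V(\text{AI work}) \gg V(P).$$

An undercut project loses value in absolute terms once you condition on AI; an outweighed project keeps its value but loses the marginal-dollar comparison.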
When you say you doubt that claim holds generally, is that because you think the weight on the AI side isn’t actually that high, or because you think AI may make the other project substantially more important too?
I’m generally pretty sceptical about the latter - something which looked like a great idea before accounting for AI will generally not look substantially better after accounting for AI. By default I would assume any such claim is false unless given strong arguments to the contrary.
I think there are probably cases of each. For the former, there might be some large interventions in areas like factory farming or climate change (i) that could have huge impacts and (ii) for which we don’t expect AI to be particularly efficacious or impactful.
For the latter, here are some cases off the top of my head. Suppose we think that if AI is used to make factory farming more efficient and pernicious, it will be via X (idk, some kind of precision farming technology). Efforts to make X illegal look a lot better after accounting for AI. Or, right now, making it harder for people to buy ingredients for biological weapons might be a good bet but not a great one. It reduces the chances of biological weapons somewhat, but knowledge about how to create them is the main bottleneck. If AI removes that bottleneck, then those projects look a lot better.
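To make ‘look a lot better’ concrete, here’s a minimal expected-value sketch with invented numbers (mine, not the piece’s): let $p$ be the probability that AI removes the knowledge bottleneck, and let $V_0$ and $V_1$ be the project’s value with the bottleneck intact and removed, respectively. Then

$$\mathrm{EV} = (1 - p)\,V_0 + p\,V_1.$$

With illustrative figures $p = 0.3$, $V_0 = 1$, $V_1 = 20$, we get $\mathrm{EV} = 0.7 \cdot 1 + 0.3 \cdot 20 = 6.7$, roughly seven times the no-AI baseline. The numbers are made up; the point is just that a large $V_1$ can dominate even at a moderate $p$.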