I think this argument is pretty wrong for a few reasons:
It generalizes way too far… for example, you could say “Before trying to shape the far future, why don’t we solve [insert other big problem]? Isn’t the fact that we haven’t solved [other big problem] bad news about our ability to shape the far future positively?” Of course, our prospects would look more impressive if we had solved many other big problems. But I think it’s an unfair and unhelpful test to pick a specific big problem, notice that we haven’t solved it, and infer that we need to solve it first.
Many, if not most, longtermists believe we’re living near a hinge of history and might have very little time remaining to try to influence it. On those views, waiting until we had first ended factory farming would forgo a huge fraction of the remaining time to make a difference.
You say “It is a stirring vision, but it rests on a fragile assumption: that humanity is capable of aligning on a mission, coordinating across cultures and centuries, and acting with compassion at scale.” but that’s not exactly true; I don’t think longtermism rests on the assumption that the best thing to do is to try to directly cause that right now (see the hinge of history link above). For example, I’m not sure how we would end factory farming, but it might require, as you allude to, massive global coordination. In contrast, creating techniques to align AIs might require only a relatively small group of researchers, and a small group of AI companies adopting research that it is in their best interests to use. To be clear, there are longtermist-relevant interventions that might also require global and widespread coordination, but they don’t all require it (and the ones I’m most optimistic about don’t require it, because global coordination is very difficult).
Related to the above, the problems are just different, and require different skills and resources (and shaping the far future isn’t necessarily harder than ending factory farming; for example, I wouldn’t be surprised if cutting bio x-risk in half turned out to be much easier than ending factory farming). Succeeding at one is unlikely to be the best preparation for succeeding at the other.
(I think factory farming is a moral abomination of gigantic proportions, I feel deep gratitude for the people who are trying to end it, and I dearly hope they succeed.)
I think the post isn’t clear about the distinction between the stances “it would make the far future better to end factory farming now” and “the only path by which the far future is net positive requires ending factory farming”, or more generally about how much of the claim that we should try to end factory farming now is motivated by “if we can’t do that, we shouldn’t attempt longtermist interventions because they will probably fail” vs. “if we can’t do that, we shouldn’t attempt longtermist interventions because they are less valuable, since the EV of the future is worse”.
Anyway, working to cause humans to survive requires (or at least is probably motivated by) thinking the future will be better if they do. Not all longtermism is about that (see e.g. s-risk mitigation), and those parts are also relevant to the hinge of history question.
I am saying that aligning AI is in the best interests of AI companies, unlike the situation with ending factory farming and animal agriculture companies, which is a relevant difference. Any AI company that could align its AIs once and for all for $10M would do it in a heartbeat. I don’t think they will do nearly enough to align their AIs given the stakes (so in that sense, their incentives are not humanity’s incentives), but they do want to align them at least a little.