That was in reference to both humanity and the EA movement, but it’s trivially true for the EA movement itself.
Assuming EAs have any kind of directed impact at all, most of them want to reduce extinction risk in order to get humanity to the stars.
We all know what that means for the total amount of future suffering. And yes, there will be some additional “flourishing” or pleasure/happiness/wellbeing, but it will not be optimized. It will not outweigh all the torture-level suffering.
People like Toby Ord may invoke happiness as a rationalization for causing more suffering, but most of them never actually endorse optimizing for it. People in EA generally gain status by decrying the technically optimal solutions to this particular optimization problem. There are exceptions, of course, like Michael Dickens above. But I'm not even convinced such people are doing their own values a favor by endorsing the EA movement at this point.
Uh, what? Since when?