I’m not sure who is saying longtermism is an alternative to EA, but it seems a bit nonsensical to me, as longtermism is essentially the view that we should focus on positively influencing the longterm future in order to do the most good. It’s therefore quite clearly a school of thought within EA.
Also, I have a minor(ish) bone to pick with your claim that “Longtermism says to calculate expected value while treating lives as morally equal no matter when they occur. Longtermists do not discount the lives of future generations.” Will MacAskill defines longtermism as follows:
“Longtermism is the view that positively influencing the longterm future is a key moral priority of our time.”
There’s nothing in this definition about expected value or discounting. I’ll plug a post I wrote which explains that it has been suggested one can reach a longtermist conclusion using a decision theory other than maximising expected value, just as one may still reach a longtermist conclusion even if one discounts future lives.
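To make the discounting point concrete, here is a minimal sketch in my own notation (not from the post or from MacAskill): suppose we apply a pure time discount rate $\delta$ per generation, and an intervention affects $N_t$ lives in generation $t$, each valued at $v$. Its discounted value is roughly

$$V \;=\; \sum_{t=0}^{T} \frac{N_t \, v}{(1+\delta)^{t}}.$$

Even with $\delta > 0$, the future terms can still dominate whenever $N_t$ is large relative to $(1+\delta)^t$ — for instance, if the potential future population vastly exceeds the present one — so a positive discount rate does not by itself rule out a longtermist conclusion.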