leading to around 200 million extra lives in expectation for each $100 spent. By contrast, the best available near-term-focused interventions save approximately 0.025 lives per $100 spent (GiveWell 2020).
What does ‘longtermism’ add beyond the standard EA framework of maximizing cost-effectiveness? It seems like a regular EA would support allocating funding to the intervention that saves more lives per dollar.
They argue, however, that longtermism goes through even if you accept person-affecting views:
Nevertheless, the case for strong longtermism holds up even on these views. [...] We can also affect the far future by (for example) guiding the development of artificial superintelligence
Is the difference between valuing “saving” lives that already exist (or are likely to exist) versus creating new lives (or making it possible for others to create them)? Perhaps that’s the main distinction in the deep assumptions/values.