Although, they argue that longtermism goes through even if you accept person-affecting views:
Nevertheless, the case for strong longtermism holds up even on these views. [...] We can also affect the far future by (for example) guiding the development of artificial superintelligence
Is the distinction between valuing “saving” lives that already exist (or are likely to exist) versus creating new lives (or making it possible for others to create them)?
Perhaps that’s the main fault line in the underlying assumptions and values.