My impression is that there is somewhat of a split on this issue. Note also that “person affecting” could involve caring about helping future people as well, to the extent they are likely to exist … just not caring about making more of them.
I have not seen any surveys specifically on this issue. In the 2019 EA Survey, about 70% of respondents said they were utilitarian, but this doesn't necessarily imply the total population view.
I think the fact that most EA donations go to present global health speaks somewhat against the majority being strictly total populationist. The limited advocacy for pronatalist policies may also be evidence against it.
But I'd really like to see this surveyed directly, with both abstract and concrete questions about (e.g.) whether EAs would be willing to make current/certain-to-exist people less happy in order to create more happy people (and vice versa).
In a context where we could have a cosmically vast future if we avoid x-risk today, advocating for a few more people on Earth tomorrow is totally missing the point. Pronatalism only makes sense for a total utilitarian who completely forgets anything longtermist. If you were a total utilitarian but had "here be dragons" on your map of the long-term future, then pronatalism would make sense.
I agree that pronatalism might be of small consequence (to the total utilitarian who thinks the future could be vast and happy) relative to avoiding extinction risk.