I imagine a longer analysis would include factors like:
1. If transformative AI arrives within 10 to 50 years, it could take over the inventing afterwards.
2. If humans do it, I expect that a very narrow slice of the population will be responsible for the relevant scientific innovations. So instead of considering only the blanket policies [increase the population everywhere] or [decrease the population everywhere], we could consider more nuanced ones. Relatedly, if one wanted to help animal welfare via eventual scientific progress, I'd expect [pro-natalism] to be an incredibly ineffective way of doing so.
I don't think anyone here is trying to use pronatalism to improve animal welfare. The crux for me is whether pronatalism is net-negative, neutral, or net-positive, and its marginal impact on animal welfare matters for that question. That said, the total scale of animal suffering dwarfs whatever positive or negative impact pronatalism might have on it.