I think no one here is trying to use pronatalism to improve animal welfare. The crux for me is more whether pronatalism is net-negative, neutral, or net-positive, and its marginal impact on animal welfare seems to matter in that case. But the total impact of animal suffering dwarfs whatever positive or negative impact pronatalism might have.
I think Richard is right about the general case. It was a bit unintuitive to me until I ran the numbers in a spreadsheet, which you can see here:
Basically, yes, assume that meat eating increases with the size of the human population. But the scientific effort towards ending the need to eat meat also increases with the size of the human population, assuming marginal extra people are just as likely to go into researching the problem as the average person. Under a simple model the two exactly balance out, as you can see in the spreadsheet.
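Here's a minimal sketch of the kind of model I mean (the numbers are placeholders, not the ones from the spreadsheet): total meat eaten before cultured meat arrives comes out the same regardless of population size.

```python
# Minimal sketch of the simple model (placeholder numbers, not the spreadsheet's).
# Assume R research-units are needed to invent cultured meat, each person contributes
# r research-units per year, and each person eats m meat-units per year.

R = 1_000_000   # research needed to invent cultured meat
r = 0.01        # research contributed per person per year
m = 1.0         # meat eaten per person per year

for population in (1_000_000, 2_000_000, 10_000_000):
    years_to_invention = R / (population * r)
    meat_eaten_before_invention = population * m * years_to_invention  # = m * R / r
    print(population, years_to_invention, meat_eaten_before_invention)

# A larger population eats more meat per year but also invents cultured meat sooner,
# and the two effects cancel exactly: total meat eaten is m * R / r in every case.
```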
I just think real life breaks the simple model in ways I have described below, in a way that preserves a meat-eater problem.
Right: in that simple model, each extra marginal average person decreases the time taken to invent cultured meat at the same rate as they contribute to the problem, and there's an exact identity between those rates. But there are complicating factors that I think work against assuring us there's no meat-eater problem:
An extra person starts eating animals from a very young age, but won't start contributing to the solution until they're intellectually developed enough to make a contribution (21 years to graduate from an undergraduate degree, 25-30 to get a PhD). This effect is sketched numerically after this list.
There’s a delay between when they invent a solution and when meat eating can actually be phased out, though perhaps that’s implicitly built into the model by the previous point
I do concede that the problem is mitigated somewhat because if we expect cultured meat to take over within the lifetime of a new person, then their harm (and impact) is scaled down proportionately, but the intrinsic hedonic value of their existence isn’t similarly scaled down.
But it doesn’t sound as simple as just “there’s no meat-eater problem”.
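To make that first point concrete, here's a variation on the earlier sketch (placeholder numbers again): the marginal person eats meat from birth but only adds research effort after a delay, and the exact cancellation breaks.

```python
# Variation on the earlier sketch: the marginal person eats from birth but only starts
# contributing research after a delay of A years. Placeholder numbers throughout.

R = 1_000_000   # research needed to invent cultured meat
r = 0.01        # research contributed per contributing person per year
m = 1.0         # meat eaten per person per year
N = 1_000_000   # baseline population
A = 25          # years before the marginal person starts contributing

baseline_years = R / (N * r)
baseline_meat = N * m * baseline_years  # = m * R / r
assert baseline_years > A  # otherwise cultured meat arrives before they'd contribute anyway

# With one extra person, research accrues at N*r for the first A years,
# then at (N+1)*r until the remaining research is finished.
years_with_extra = A + (R - N * r * A) / ((N + 1) * r)
meat_with_extra = (N + 1) * m * years_with_extra

print("extra meat caused by the marginal person:", meat_with_extra - baseline_meat)
# With A = 0 the difference is ~0 (the exact identity above); with A > 0 it comes out
# to roughly m * A, i.e. the delay years' worth of that person's own meat eating.
```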
Ok, I missed the citation to your source initially because it wasn't in your comment when you first posted it. The source does say insect abundance is lower in land converted from natural space to agricultural use. So what I said about increased agricultural use supports your point rather than mine.
Yes I think so.
Great point! Though I think it's unclear what the impact of more humans on wild terrestrial invertebrate populations is. Developed countries have mostly stopped clearing land for human living spaces. I could imagine that a higher human population could induce demand for agriculture and increased trash output, which could increase terrestrial invertebrate populations.
Pro-natalist success would cause so much animal suffering it is not even a net-positive cause area
Reviving this old thread to discuss the animal welfare objection to pro-natalism, which I think is changing my mind on pro-natalism. I'm a regular listener to Simone and Malcolm Collins's podcast. Since maybe 2021 I've gone on an arc: from fairly neutral, to strongly pro-natalist, to pro-natalist but not rating it as an effective cause area, and now I'm entering a fourth phase where I might reject pro-natalism altogether.
I value animal welfare, and at least on an intellectual level I care equally about animals' welfare and humanity's. For every additional human we bring into existence, at a time in history when humans have never eaten more meat per capita, you will get, on expectation, years or (depending on their diet) perhaps even hundreds of years of animal suffering induced by the additional consumer demand for meat. This is known as the meat-eater problem, but I haven't seen anyone explicitly connect it to pro-natalism yet. It seems like an obvious connection to make.
There are significant caveats to add:
This is not an argument against the value of having your own kids, whom you then raise with appropriate respect for the welfare of other sentient creatures. While you can't control their choices as adults, if you raise them right, the expected suffering they cause will be substantially reduced, potentially enough to make having them a net positive choice. However, pro-natalism as a political movement aimed at raising birthrates at large will likely cause animal suffering that outweighs the value of the human happiness it creates.
In the long term, we will hopefully invent forms of delicious meat, like cultured meat, that do not involve sentient animal suffering. The average person might still eat some farmed meat at that point, but with delicious cultured meat options available, public opinion may come to support appropriate welfare standards for farmed animals, such that those animals' lives are at least net positive. When that happens, pro-natalism might make more sense. But we don't know when cultured meat will arrive. Widespread adoption could be several decades away, in a slower AGI-timeline world, or in one where some cultural or legal turn prevents adoption even if it is technically possible.
I anticipate some people will argue that more humans will make the long-term future go well because, in expectation, this will create more people going into the long term. I think this is a reasonable position, but I don't find it convincing because of the problem of moral cluelessness: there is far too much random chaos (in the butterfly-effect sense of the term) for us to have any idea what the effect of more people now will be on the next few generations.
I might make a top level post soon to discuss this, but in the meantime I’m curious if you have any clear response to the animal welfare objection to pro-natalism.
For US opportunities, consider entering the US diversity visa lottery before November 5, 2024. It's free and easy!
Reducing global AI competition through the Commerce Control List and Immigration reform: a dual-pronged approach
You can just widen the variance of your prior until it is appropriately imprecise, so that the variance of your prior reflects the amount of uncertainty you have.
For instance, perhaps a particular disagreement comes down to the increase in p(doom) deriving from an extra 0.1 °C of global warming.
We might have no idea whether 0.1 °C of warming increases p(doom) by 0.1% or 0.01%, but be confident it isn't 10% or more.
You could model the distribution of your uncertainty with, say, a beta distribution with b = 100.
You might wonder, why b=100 and not b=200, or 101? It’s an arbitrary choice, right?
To which I have two responses:
You can go one level up and model the beta parameter on some distribution of all reasonable choices, say, a uniform distribution between 10 and 1000 (see the sketch after these two responses).
While it is arbitrary, I claim that avoiding estimating expected effects because we can't make a fully non-arbitrary choice is itself an arbitrary choice. This is because we are acting in a dynamic world where opportunities can be lost every second, and no action is still an action: the action of foregoing the counterfactual option. So by avoiding assigning any outcome value, and acting accordingly, you have implicitly, and arbitrarily, assigned an outcome value of 0. When there's some morally relevant outcome we can only model with somewhat arbitrary statistical priors, doing so nevertheless seems less arbitrary than just assigning an outcome value of 0.
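Here's a minimal sketch of what going one level up could look like. Only the Uniform(10, 1000) hyperprior comes from the point above; a = 1 and the specific quantities printed are placeholder choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Going one level up: rather than committing to a single Beta(a, b) prior over the
# increase in p(doom), treat b itself as uncertain across the range of choices I'd
# consider reasonable (Uniform(10, 1000), as above). a = 1 is a placeholder of mine.
a = 1.0
b = rng.uniform(10, 1000, size=100_000)   # hyperprior over the beta parameter
p_increase = rng.beta(a, b)               # implied prior over the increase in p(doom)

print("expected increase in p(doom):", p_increase.mean())
print("probability the increase is 10% or more:", (p_increase >= 0.10).mean())
```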
This leaves me deeply confused, because I would have thought a single (if complicated) probability function is better than a set of functions: a set of functions doesn't (by default) include a weighting amongst its members.
It seems to me that you need to weight the probability functions in your set according to some intuitive measure of their plausibility, according to your own priors.
If you do that, then you can combine them into a joint probability distribution, and then make a decision based on what that distribution says about the outcomes. You could go for EV based on that distribution, or you could make other choices that are more risk averse. But whatever you do, you’re back to using a single probability function. I think that’s probably what you should do. But that sounds to me indistinguishable from the naive response.
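A minimal sketch of what I mean, with made-up weights and distributions standing in for whatever set of functions you actually hold:

```python
import numpy as np

rng = np.random.default_rng(0)

# A set of candidate probability functions over some outcome, each with an intuitive
# plausibility weight. The weights and distributions here are made up for illustration.
candidates = [
    (0.5, lambda n: rng.beta(1, 100, n)),
    (0.3, lambda n: rng.beta(1, 200, n)),
    (0.2, lambda n: rng.beta(2, 100, n)),
]

# Combine them into one joint (mixture) distribution by sampling each component in
# proportion to its weight, then decide based on that single distribution.
n = 100_000
weights = np.array([w for w, _ in candidates])
counts = rng.multinomial(n, weights / weights.sum())
samples = np.concatenate([draw(k) for (_, draw), k in zip(candidates, counts)])

print("expected value under the combined distribution:", samples.mean())
# A more risk-averse rule could instead look at, say, np.quantile(samples, 0.95).
```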
The idea of a “precise probability function” is in general flawed. The whole point of a probability function is you don’t have precision. A probability function of a real event is (in my view) just a mathematical formulation modeling my own subjective uncertainty. There is no precision to it. That’s the Bayesian perspective on probability, which seems like the right interpretation of probability, in this context.
As Yann LeCun recently said, “If you do research and don’t publish, it’s not science.”
With all due respect to Yann LeCun, in my view he is as wrong here as he is dismissive about the risks from AGI.
Publishing is not an intrinsic and definitional part of science. Peer-reviewed publishing definitely isn't; it has only been the default for the last several decades to half a century or so. It may not be the default in another half century.
If Trump still thinks AI is “maybe the most dangerous thing” I would be wary of giving up on chances to leverage his support on AI safety.
In 2022, individual EAs stood for elected positions within each major party. I understand there are Horizon fellows with both Democrat and Republican affiliations.
If EAs can engage with both parties in those ways, added to the fact that the presumptive Republican nominee may be sympathetic, I wouldn't give up on Republican support for AI safety yet.
Ha, I see. Your advice might be right, but I don't think "consciousness is quantum". I wonder if you could say what you mean by that?
Of course I've heard that before. When I've heard people say it in the past, it's been advocates of free-will theories of consciousness trying to propose a physical basis for consciousness that preserves the indeterminacy of decision-making. Some objections I have to this view:
Most importantly, as I pointed out here: consciousness is roughly orthogonal to intelligence, so your view shouldn't give you reassurance about AGI. We could have a formal definition of intelligence, and causal instantiations of it, without any qualia, any what-it's-like-to-be subjective consciousness, existing in the system. There is also conscious experience with minimal intelligence, like experiences of raw pleasure, pain, or observing the blueness of the sky. As I explain in the linked post, consciousness is also orthogonal to agency or goal-directed behavior.
There's a great deal of research about consciousness. I described one account in my post, and Nick Humphrey does go out on a limb more than most researchers do. But my sense is that most neuroscientists of consciousness endorse some account roughly equivalent to Nick's. While some (though not all, or even a majority) would probably concede the hard problem remains, based on what we do know about the structure of the physical substrates underlying consciousness, it's hard to imagine what role "quantum" effects would play.
It fails to add any sense of meaningful free will, because a brain that makes decisions based on random quantum fluctuations doesn't in any meaningful way have more agency than a brain that makes decisions based on pre-determined physical causal chains. While a [hypothetical] quantum-based brain does avoid being pre-determined by physical causal chains, it is now just determined by random quantum fluctuations instead.
Lastly, I have to confess a bit of prejudice against this view. In the past it has often been proposed so naively that it seems like people are just mashing together two phenomena that no one fully understands and proposing they're related because ???? The only thing the two have in common, as far as I know, is that we don't understand them. That's not much of a reason to believe in a hypothesis that links them.
Assuming your view was correct, if someone built a quantum computer, would you then be more worried about AGI? That doesn’t seem so far off.
Elliot has a phenomenally magnetic personality and is consistently positive and uplifting. He’s generally a great person to be around. His emotional stamina gives him the ability to uplift the people around him and I think he is a big asset to this community.
TLDR: I’m looking for researcher roles in AI Alignment, ideally translating technical findings into actionable policy research
Skills & background: I have been a local EA community builder since 2019. I have a PhD in social psychology and wrote my dissertation on social/motivational neuroscience. I also have a BS in computer science and spent two years in industry as a data scientist building predictive models. I'm an experienced data scientist, social scientist, and human behavioral scientist.
Location/remote: Currently located on the West Coast of the USA. Willing to relocate to the Bay Area for sufficiently high remuneration, or to Southern California or Seattle for just about any suitable role. Would relocate to just about anywhere, including the US East Coast, Australasia, the UK, or China, for a highly impactful role.
Availability & type of work: I finish teaching at the University of Oregon around April, and if I haven't found something by then, I will be available again in June. I'm looking for full-time work from then on, or part-time work in impactful roles for an immediate start.
Resume/CV/LinkedIn:
LinkedIn
Email/contact: benjsmith@gmail.com
Other notes: I don't have a strong preference among cause areas and would be highly attracted to roles reducing AI existential risk, improving animal welfare or global health, or improving our understanding of the long-term future. I suspect my comparative advantage is in research roles (broadly defined) and in data science work; technical summaries for AI governance or evals work might be a comparative advantage.
Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety
But I would guess that pleasure and unpleasantness aren't always because of the conscious sensations; rather, both can have the same unconscious perceptions as a common cause.
This sounds right. My claim is that there are all sorts of unconscious perceptions and valenced processing going on in the brain, but all of that is only experienced consciously once there's a certain kind of recurrent cortical processing of the signal, which can loosely be described as "sensation". I mean that very loosely; it can even include memories of physical events or semantic thought (which you might understand as a sort of recall of auditory processing). Without that recurrent cortical processing modeling the reward and learning process, all that midbrain dopaminergic activity probably does not get consciously perceived. Perhaps it does, indirectly, when the dopaminergic activity (or lack thereof) influences the sorts of sensations you have.
But I’m getting really speculative here. I’m an empiricist and my main contention is that there’s a live issue with unknowns and researchers should figure out what sort of empirical tests might resolve some of these questions, and then collect data to test all this out.
Fair enough.
My central expectation is that the value of one more human life created is roughly even with the amount of nonhuman suffering that life would cause (based on https://forum.effectivealtruism.org/posts/eomJTLnuhHAJ2KcjW/comparison-between-the-hedonic-utility-of-human-life-and#Poultry_living_time_per_capita). I'm also willing to assume cultured meat is not too long away. Then the childhood delay until contribution only makes a fractional difference, and I tip very slightly back into the pro-natalist camp, while still accepting that the meat-eater problem is relevant.
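Roughly the arithmetic I have in mind, with every number a placeholder rather than a figure taken from the linked post:

```python
# Back-of-envelope for the comment above. Every number is a placeholder.
lifespan = 80                    # years the new person lives
years_until_cultured_meat = 30   # assuming cultured meat is not too long away

# Per the linked comparison, treat the hedonic value of a full human life as roughly
# even with the nonhuman suffering a full lifetime of meat eating would cause.
value_of_life = 1.0
harm_of_full_lifetime_of_meat_eating = 1.0

# The harm only accrues until cultured meat arrives, while the intrinsic value of the
# life isn't scaled down the same way (the point conceded earlier in the thread).
harm = harm_of_full_lifetime_of_meat_eating * min(years_until_cultured_meat, lifespan) / lifespan

# The childhood delay until contribution adds at most a couple more decades' worth of
# uncompensated meat eating, a further fraction that doesn't flip the sign here.
print("net value of one extra life:", value_of_life - harm)
```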