I weigh moral worth by degree of sentience, using neuron count as a rough proxy, which naturally weights helping a given number of humans more than helping the same number of any other currently known species.
But the evidence I’ve seen suggests you could help far more of almost any kind of animals (e.g., chickens) avoid suffering for the same amount of money.
Thanks for your justification! Hamish McDoodles also believed that neuron count weighting would make the best human welfare charities better than the best animal welfare charities. However, after doing a BOTEC (back-of-the-envelope calculation) of cage-free campaign cost-effectiveness using neuron counts as a proxy, he eventually changed his mind:
ok, animal charities still come out an order of magnitude ahead of human charities given the cage-free campaigns analysis and neuron counts
So unless you have further disagreements with his analysis, using neuron count weighting would probably mean you should support allocating the 100M to animal welfare rather than global health.
Thank you for justifying your vote for global health!
One counterargument to your position is that, with the same amount of money, one can help significantly more non-human animals than humans. Check out this post. An estimated 1.1 billion chickens are helped by broiler and cage-free campaigns in a given year. Each dollar can help an estimated 64 chickens, for a total of roughly 41 chicken-years of life.

This contrasts with needing $5,000 to save a human life through top-ranked GiveWell charities.
So the $5,000 to save a human life actually saves more than one human life. The world fertility rate is currently 2.27 births per woman, but is expected to decline to 1.8 by 2050 and 1.6 by 2100. Let’s assume this trend continues at a rate of −0.2 per 50 years until it eventually reaches zero in 2500. Since it takes two people to have children, we halve these numbers to get an estimate of how many human descendants to expect from a given saved human life each generation.

If each generation is ~25 years, the numbers follow a series like 1.135 + 0.9 + 0.85 + 0.8 …, stepping down by 0.05 per generation until it reaches zero in 2500, which works out to 9.685 human lives per $5,000, or $516.26 per human life. Human life expectancy is increasing, but for simplicity let’s assume 70 years per human life.

70 / $516.26 = 0.13559 human life-years per dollar.
So if we weigh chickens equally with humans, this still favours the chickens.

However, we can add the neuron count proxy to weight these. Humans have approximately 86 billion neurons, while chickens have about 220 million. That’s a ratio of roughly 390.

0.13559 × 390 = 52.88 human neuron-weighted life-years per dollar.

This is slightly more than the 41 chicken life-years per dollar, which, given my many, many simplifying assumptions, would mean that global health is still (slightly) more cost-effective.
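For anyone who wants to check the arithmetic, here is a minimal Python sketch of the BOTEC above. It simply restates the assumptions already given in this comment (the fertility series, the $5,000 per life figure, the 70-year lifespan, and the roughly 390:1 human-to-chicken neuron ratio); nothing in it should be read as more precise than the back-of-the-envelope reasoning it reproduces.

```python
# Rough reproduction of the back-of-the-envelope calculation above, using only
# the assumptions stated in this comment: 2.27 births per woman today (halved,
# since it takes two parents), fertility falling to 1.8 by 2050 and then by 0.2
# per 50 years until it hits zero in 2500, 25-year generations, $5,000 per life
# saved, 70-year lives, ~86 billion human neurons vs ~220 million chicken
# neurons, and ~41 chicken-years helped per dollar.

# The series from the comment: 1.135 (current halved fertility), then halved
# fertility from 2050 on, stepping down by 0.05 per 25-year generation until it
# reaches zero in 2500, i.e. (2500 - 2050) / 25 = 18 terms.
lives_per_life_saved = 1.135 + sum(0.9 - 0.05 * k for k in range(18))  # ~9.685

cost_per_life = 5000 / lives_per_life_saved              # ~$516 per human life
human_life_years_per_dollar = 70 / cost_per_life         # ~0.136
neuron_ratio = 86e9 / 220e6                              # ~390 human : chicken
weighted = human_life_years_per_dollar * neuron_ratio    # ~53 (52.88 with the ratio rounded to 390)

print(f"{lives_per_life_saved:.3f} lives per $5,000 (${cost_per_life:.2f} per life)")
print(f"{weighted:.1f} neuron-weighted human life-years per dollar vs ~41 chicken life-years per dollar")
```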
You haven’t factored in the impact of saving a life on fertility. Check out this literature review which concludes the following (bold emphasis mine):
I think the best interpretation of the available evidence is that the impact of life-saving interventions on fertility and population growth varies by context, above all with total fertility, and is rarely greater than 1:1 [meaning that averting a death rarely causes a net drop in population]. In places where lifetime births/woman has been converging to 2 or lower, family size is largely a conscious choice, made with an ideal family size in mind, and achieved in part by access to modern contraception. In those contexts, saving one child’s life should lead parents to avert a birth they would otherwise have. The impact of mortality drops on fertility will be nearly 1:1, so population growth will hardly change.
Also, you’re assuming neuron counts should be used as proxies for moral weight, but I’m highly skeptical that that’s fair (see this).
To respond to the comments so far in general, I’d say that my priors are that almost all lives, even highly unpleasant ones, are worth living, and that I tend to weigh moments of happiness much more than equivalent moments of suffering, as this avoids what I see as philosophically problematic implications such as suicide for chronically depressed people, or nuking the rainforest as a net positive intervention.
Given these biases, I tend to weigh much more heavily interventions like bednets that save lives that would otherwise not be lived, over things that only improve lives like most animal welfare interventions. Furthermore, at least some of the lives that are saved will have offspring, so the net impact of saving a life is actually much higher than just one life; it includes all potential descendants.
I do think animal welfare is important and that, all other things being equal, happier chickens are better than chickens whose lives are just barely worth living, but I consider the magnitude of this impact to be smaller than that of saving countless lives.
[i have the value that] almost all lives, even highly unpleasant ones, are worth living, and that I tend to weigh moments of happiness much more than equivalent moments of suffering, as this avoids what I see as philosophically problematic implications such as suicide for chronically depressed people, or nuking the rainforest as a net positive intervention.
do you mean that you chose this position because it avoids those conclusions? if so:
then the process you used was to select some (of many possible) moral axioms which lead to the conclusion you like.
i don’t think that would mean the axiom is your true value.
but if choosing axioms, you could instead just follow the conclusions you like, using an axiom such as “my morality is just complex [because it’s godshatter]”.
separately, the axiom you chose introduced a new ‘problematic’ conclusion: that someone in a mechanized torture chamber, who will be there for two more years (during which their emotional state will mostly only change between depression and physical-harm-induced agony—maybe there will also be occasional happiness, like if another animal tries to comfort them), and who will then die without experiencing anything else, should be kept alive (or be created) in that situation instead of ceasing to exist (or not being created), when these are the only choices.
that’s definitely something the universe allows one to prefer, as all moral preferences are. i’m just pointing it out because i think maybe it will feel immoral to you too, and you said you chose axioms to avoid problematic or immoral-feeling things.
in case it doesn’t feel wrong/‘philosophically problematic’ now, would it have felt that way before, before you started using this axiom and so before your moral intuitions crystallized around it?
almost all lives, even highly unpleasant ones, are worth living
as i am a moral anti-realist, i cannot argue against a statement of what one values. but on priors about humans, i am not sure if you would actually want the world to be arranged in a way which follows this value, if you fully understood what it entails. have you spent time imagining, or experiencing, what it is like to live a life of extreme suffering? what it is like for it to be so bad that you desperately prefer nonexistence to it?
now, such lives could still be considered ‘worth it’ overall if they eventually get better or otherwise are considered meaningful somehow. but a life of just or almost just that? are you sure about that? and does this imply you would prefer to create a billion people whose lives last forever and almost only consist of depression/physical agony, if the only alternative was for them not to exist and no one happier to exist in their place—and if it does imply that, are you sure about that also? (maybe these fall under what ‘almost all’ doesn’t include for you, but then you’d also consider the lives of animals in mechanized torture facilities negatively worth living.)
(sometimes when humans ask, “are you sure about endorsing that”, there’s a subtext of social pressure, or of more subtly invoking someone’s social conformity bias so they will conform ~on their own. i do not mean it that way, i really mean it only as prompting you to consider.)
As someone who has experienced severe depression and suicidal ideation, I do have at least some understanding of what it entails. It’s my own experience that biases me in the way I described. Admittedly, my life has gotten better since then, so it’s not the same thing as a life of just extreme suffering though.
What do you think about people who do go through with suicide? These people clearly thought their suffering outweighed any happiness they experienced.

I feel for them. I understand they made a decision in terrible pain, and can sympathize. To me it’s a tragedy.
But on an intellectual level, I think they made a very unfortunate mistake, made in reasonable ignorance of complex truths that most people can’t be expected to know. And I admit I’m not certain I’m right about this either.
It’s my own experience that biases me in the way I described.
can you explain how?
i believe extreme suffering had the opposite effect on me, making me become a suffering-focused altruist. i don’t actually understand how it could make someone ~not disvalue suffering. (related: ‘small and vulnerable’).
(i mean, i have guesses about how that could happen: like, maybe ~not disvaluing it was the only way to mentally cope with the vast scale of it. living in a world one believes to be evil is hard; easier to not believe it’s evil, somehow; have heard this is a reason many new animal-suffering-boycotters find it hard to continue having an animal-caring worldview.
or, maybe experiencing that level of suffering caused a buddhist-enlightenment-like thing where you realized suffering isn’t real, or something. though, happiness wouldn’t be real either in that case. i’m actually adjacent to this view, but it sure feels real for the animals, and i would still like to make the world be good for those who believe in it.)

from your other comment:

it still feels mysterious / that comment seems more like ‘what you prefer and uncertainty’ than ‘why / what caused you to have those preferences’
I guess, going through extensive suffering made me cherish the moments of relative happiness all the more, and my struggle to justify my continued existence led me to place value in existence itself, a kind of “life-affirming” view as a way to keep on going.
There were times during my suicidal ideation that I thought the world might be better off without me; for instance, that if I died, my organs could be transplanted to save more lives than I could save by living, that I was a burden and that the resources expended keeping me alive were better used on someone who actually wanted to live.
To counter these ideas, I developed a nexus of other ideas about the meaning of life being about more than just happiness or lack thereof, that truth was also intrinsically important, that existence itself had some apparent value over non-existence.
i see, thanks for explaining!

i’m modelling this as: basic drive to not die → selects values that are compatible with basic drive’s fulfillment.
i’ve been wondering if humans generally do something like this. (in particular to continue having values/cares after ontological crises like: losing belief in a god, or losing a close other who one was dedicated to protecting.)
that I was a burden and that the resources expended keeping me alive were better used on someone who actually wanted to live
in case anyone has similar thoughts: to have the level of altruism to even consider the question is extremely rare. there are probably far better things you can do, than just dying and donating; like earning to give, or direct research, or maybe some third thing you’ll come up with. (most generally, the two traits i think are needed for research are intelligence and creativity. this is a creative, unintuitive moral question to ask. and my perception is that altruism and intelligence correlate, but i could be wrong about that, or biased from mostly seeing EAs.)
Sorry for the delayed response.

i’m modelling this as: basic drive to not die → selects values that are compatible with basic drive’s fulfillment.
i’ve been wondering if humans generally do something like this. (in particular to continue having values/cares after ontological crises like: losing belief in a god, or losing a close other who one was dedicated to protecting.)
This does seem like a good explanation of what happened. It does imply that I had motivated reasoning though, which probably casts some doubt on those values/beliefs being epistemically well grounded.
in case anyone has similar thoughts: to have the level of altruism to even consider the question is extremely rare. there are probably far better things you can do, than just dying and donating; like earning to give, or direct research, or maybe some third thing you’ll come up with. (most generally, the two traits i think are needed for research are intelligence and creativity. this is a creative, unintuitive moral question to ask. and my perception is that altruism and intelligence correlate, but i could be wrong about that, or biased from mostly seeing EAs.)
These words are very kind. Thank you.

I should also add that part of the reason I consider it important for a moral theory’s conclusions to align with my moral intuitions is that there are studies in psychology showing that, for complex problems, intuition outperforms logical reasoning at getting the correct answer, so ensuring that a theory’s results are intuitive is, in a sense, a check on validity.
If that’s not satisfactory, I can also offer two first-principles-based variants of Utilitarianism and hedonism that draw conclusions more similar to mine, namely Positive Utilitarianism and Creativism. Admittedly, these are just some ideas I had one day, and not something anyone else has, to my knowledge, advocated, but I offer them because they suggest to me that, in the space of possible moralities, not all of them are so suffering-focused.
I’m admittedly uncertain about how much to endorse such ideas, so I don’t try to spread them. Speaking of uncertainty, another possible justification for my position may well be uncertainty about the correct moral theory, and putting some credence on things like Deontology and Virtue Ethics: the former, in its Kantian form, tends to care primarily about humans capable of reason, while the latter contains the virtue of loyalty, which may imply a kind of speciesism in favour of humans first, or a hierarchy of moral circles.
There’s the concept of a moral parliament that’s been discussed before. To simplify the decision procedure, I’d consider applying the principle of maximum entropy, aka the principle of indifference, which places an equal, uniform weight on each moral theory. If we have three votes, one for Utilitarianism, one for Deontology, and one for Virtue Ethics, two out of the three (a majority) seem to advocate a degree of human-centrism.
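As a purely illustrative sketch of the uniform-weight parliament described above: the equal credences come from the principle of indifference, and the human-centrism stances are simply the ones asserted in this paragraph, not claims about what these theories actually entail.

```python
# Toy sketch of a uniform-weight "moral parliament": equal credence per theory,
# then tally the credence-weighted support for a human-centric allocation.
credences = {"Utilitarianism": 1 / 3, "Deontology": 1 / 3, "Virtue Ethics": 1 / 3}
favours_human_centrism = {"Utilitarianism": False, "Deontology": True, "Virtue Ethics": True}

support = sum(c for theory, c in credences.items() if favours_human_centrism[theory])
print(f"Weighted support for a degree of human-centrism: {support:.2f}")  # 0.67
```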
I’ve also considered the thought experiment of whether I would be loyal to humanity, or betray humanity to a supposedly benevolent alien civilization. Even if I assume the aliens were perfect Utilitarians, I would be hesitant to side with them.

I don’t expect any of these things to change anyone else’s mind, but hopefully you can understand why I have my rather eccentric and unorthodox views.
Given these biases, I tend to weigh much more heavily interventions like bednets that save lives that would otherwise not be lived, over things that only improve lives like most animal welfare interventions.
Huh? Even if you weigh moments of happiness much more, that doesn’t always support maximising the number of lives. To use a somewhat farcical model that I hope is nevertheless illustrative, wouldn’t you prefer to add two moments of happiness to someone’s life than to create a new life that only experienced one moment of happiness? If so, I don’t see why you’d conclude that bednets are better than welfare reforms under these assumptions.
I guess my unstated assumption is that if the lives of the chickens are already worth living, then increasing their welfare further will quickly run into diminishing returns, due to the law of diminishing marginal utility. Conversely, adding more lives increases happiness linearly, again assuming that each life has at least a baseline level of happiness that makes it worth living.
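A toy model, with an arbitrary concave welfare function, purely to illustrate the diminishing-returns intuition described above; none of the numbers or the functional form come from the discussion itself.

```python
import math

# Toy model only: welfare is an arbitrary concave function of "resources", so
# improving an already-decent life yields a small marginal gain, while adding a
# new life at the same baseline adds its whole welfare.
def welfare(resources: float) -> float:
    return math.sqrt(resources)  # any concave (diminishing-returns) function works

gain_from_improving = welfare(5.0) - welfare(4.0)  # ~0.24: small marginal improvement
gain_from_new_life = welfare(4.0)                  # 2.0: a whole additional worthwhile life

print(f"Improve an existing life: +{gain_from_improving:.2f}")
print(f"Add a new life at baseline: +{gain_from_new_life:.2f}")
```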
What do you think of RP’s work (mostly) against using neuron counts? From the summary:

in regards to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight;

many ways of arguing that more neurons results in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; and

there is no straightforward empirical evidence or compelling conceptual arguments indicating that relative differences in neuron counts within or between species reliably predicts welfare relevant functional capacities.

(Also this more specific hypothesis.)
I use neuron counts as a very rough proxy for the information-processing complexity of a given organism. I do make some assumptions, like that more sophisticated information processing enables more complex emotional states, and things like memory, which compounds suffering across time, and so on.

It makes sense to me that sentience is probably on some kind of continuum, rather than an arbitrary threshold. I place things like photodiodes at the bottom of this continuum and highly sophisticated minds like humans near the top, but I’ll admit I don’t have accurate numbers for a “sentience rating”.

I hold my views on neuron counts being an acceptable proxy mostly because of what I learned from studying Cognitive Science in undergrad and then doing a Master’s Thesis on Neural Networks. This doesn’t make me an expert, but it means I formed my own opinions and disagree with the RP post somewhat. I have not had the time to formulate substantive objections in a rebuttal, however. Most of my posts on these forums are relatively low-effort.