An eccentric dreamer in search of truth and happiness for all. Formerly posted on Felicifia back in the day under the name Darklight. Been a member of Less Wrong and involved in Effective Altruism since roughly 2013.
Joseph_Chu
I tried asking ChatGPT, Gemini, and Claude to come up with a formula that converts from correlation space to probability space while preserving the relationship 0 = 1/n. I came up with such a formula a while back, so I figured it shouldn’t be hard. They all offered formulas, all of which turned out to be very much wrong when I actually graphed them to check.
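For what it’s worth, here is one candidate mapping that satisfies the stated constraint: a piecewise-linear map from a correlation r in [-1, 1] to a probability that sends r = 0 to 1/n. This is only an illustrative sketch, not necessarily the formula referred to above.

```python
# One *candidate* mapping from correlation space to probability space that preserves
# r = 0 -> p = 1/n (chance level among n options). Illustrative guess only,
# not necessarily the formula mentioned in the comment above.

def corr_to_prob(r: float, n: int) -> float:
    """Piecewise-linear map from correlation r in [-1, 1] to a probability in [0, 1]."""
    chance = 1.0 / n
    if r >= 0:
        return chance + r * (1.0 - chance)  # interpolate from 1/n up to 1
    return chance * (1.0 + r)               # interpolate from 1/n down to 0

for n in (2, 4, 10):
    print(n, [round(corr_to_prob(r, n), 3) for r in (-1.0, -0.5, 0.0, 0.5, 1.0)])
# For n = 2 this reduces to the familiar p = (1 + r) / 2.
```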
My wife and I really, really liked The Good Place. I also got us a copy of How To Be Perfect and thought it was a decent read. Not particularly EA, but it gives a balanced treatment of all the major Western schools of moral philosophy and gives each a fair hearing. I do think it was a bit lacking in covering Eastern schools of thought, like the role-based ethics of Confucius, but I understand it was targeted towards an English-speaking audience.
As a primer on ethics, it’s very approachable, though I do think it simplifies some things and feels ever so slightly biased against consequentialism and towards something like virtue ethics. Then again, I’ll admit I’m pro-Utilitarianism and might myself be biased in the other direction.
From an EA perspective, it may not be the best introduction to us. EA does get a mention, but mostly in the form of the view that Peter Singer and his arguments are very demanding, perhaps unreasonably so, albeit a logical and important nudge towards caring and doing more (the author hedges a lot in the book).
At the end of the day, the book shies away from deciding which moral theory is more correct, and as such it’s a kinda wishy-washy, choose-your-own-morality-from-a-menu-of-possibilities affair, which somewhat disappointed me (though I also understand that picking sides would be controversial). I’d still recommend the book to someone relatively unfamiliar with morality and ethics, because it’s a much friendlier introduction than, say, a moral philosophy textbook would be.
So, the $5,000 to save a human life actually saves more than one human life. The world fertility rate is currently 2.27 births per woman, but it’s expected to decline to 1.8 by 2050 and 1.6 by 2100. Let’s assume this trend continues at a rate of −0.2 per 50 years until it eventually reaches zero in 2500. Since it takes two people to have children, we halve these numbers to get an estimate of how many human descendants to expect from a given saved human life each generation.
If each generation is ~25 years, the numbers follow a series like 1.135 + 0.9 + 0.85 + 0.8 … (after the 2050 value of 0.9, declining by 0.05 per 25-year generation until it hits zero), which works out to 9.685 human lives per $5,000, or $516.26 per human life. Human life expectancy is increasing, but for simplicity let’s assume 70 years per human life.
70 / $516.26 = 0.13559 human life years per dollar.
So, if we weigh chickens equally with humans, this favours the chickens still.
However, we can add the neuron count proxy to weigh these. Humans have approximately 86 billion neurons, while chickens have about 220 million. That’s a ratio of roughly 390.
0.13559 × 390 = 52.88 human neuron-weighted life years per dollar.
This is slightly more than the 41 chicken life years per dollar, which, given my many, many simplifying assumptions, would mean that global health is still (slightly) more cost effective.
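For anyone who wants to check the arithmetic, here is a quick sketch that just re-runs the stated assumptions (the 41 chicken life years per dollar figure is taken as given from the debate):

```python
# Reproducing the back-of-envelope numbers above, under the stated assumptions:
# per-parent fertility starts at 1.135, is 0.9 by 2050, then declines by 0.05
# per 25-year generation until it hits zero; 70-year lives; 86 billion vs. 220 million neurons.

descendants = [1.135] + [0.9 - 0.05 * i for i in range(18)]  # 0.9, 0.85, ..., 0.05
lives_per_5000 = sum(descendants)                # ~9.685 lives saved per $5,000
cost_per_life = 5000 / lives_per_5000            # ~$516.26 per life
life_years_per_dollar = 70 / cost_per_life       # ~0.1356 human life years per dollar

neuron_ratio = 86e9 / 220e6                      # ~391 (rounded to 390 in the text)
weighted = life_years_per_dollar * neuron_ratio  # ~53 neuron-weighted life years per dollar

print(round(lives_per_5000, 3), round(cost_per_life, 2),
      round(life_years_per_dollar, 5), round(weighted, 2))
```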
In the interests of furthering the debate, I’ll quickly offer several additional arguments that I think can favour global health over animal welfare.
Simulation Argument
The Simulation Argument says that it is very likely we are living in an ancestor simulation rather than base reality. Given that it is likely human ancestors that the simulators are interested in fully simulating, other non-human animals are likely to not be simulated to the same degree of granularity and may not be sentient.
Pinpricks vs. Torture
This is a trolley-problem-style scenario. It’s also been discussed by Eliezer Yudkowsky as the Speck of Dust in 3^^^3 People’s Eyes vs. One Human Being Tortured For 50 Years case, and an analogous point is made in the famous short story The Ones Who Walk Away From Omelas by Ursula K. Le Guin. The basic idea is to question whether scope sensitivity is justified.
I’ll note that a way to avoid this is to adopt Maximin rather than Expected Value as the decision function, as was suggested by John Rawls in A Theory of Justice.
Incommensurability
In moral philosophy there’s a concept called incommensurability: the idea that some things are simply not comparable. Some might argue that human and animal experiences are incommensurable, that we cannot know what it is like to be a bat, for instance.
Balance of Categorical Responsibilities
In philosophies like Confucianism, there are notions like filial piety that support a kind of hierarchy of moral circles, such that family strictly dominates the state and so on. In the extreme, this leads to a kind of ethical egoism that I don’t think any altruist would subscribe to, but which seems a common way of thinking among laypeople, and conservatives in particular. I don’t suggest this option, but I mention it as an extreme case.
Utilitarianism in contrast tends to take the opposite extreme of equalizing moral circles to the point of complete impartiality towards every individual, the greatest good for the greatest number. This creates a kind of demandingness that would require us to sacrifice pretty much everything in service of this, our lives devoted entirely to something like shrimp welfare.
Rather than taking either extreme, it’s possible to balance things according to the idea that we have separate, categorical responsibilities to ourselves, to our family, to our nation, to our species, and to everyone else, and to put resources into each category so that none of our responsibilities are neglected in favour of others, a kind of meta or group impartiality rather than individual impartiality.
Yeah, I should probably retract the “we need popular support to get things done” line of reasoning.
I think lying to myself is probably, on reflection, something I do to avoid actually lying to others, as described in that link in the footnote. I kind of decide that a belief is “plausible” and then give it some conditional weight, a kind of “humour the idea and give it the benefit of the doubt”. It’s kind of a technicality thing that I do because I’m personally very against outright lying, so I’ve developed a kind of alternative way of fudging to avoid hurt feelings and such.
This is likely related to the “spin” concept that I adopted from political debates. The idea of “spin” to me is to tell the truth from an angle that encourages a perception that is favourable to the argument I am trying to make. It’s something of a habit, and most probably epistemically highly questionable and something I should stop doing.
I think I also use these things to try to take an intentionally more optimistic outlook and be more positive in order to ensure best performance at tasks at hand. If you think you can succeed, you will try harder and often succeed where if you’d been pessimistic you’d have failed due to lack of resolve. This is an adaptive response, but it admittedly sacrifices some accuracy about the actual situation.
For one’s beliefs about what is true to be influenced by anything other than evidence it might be or not be true, is an influence which will tend to diverge from what is true, by definition.
Though, what if I consider the fact that many people have independently reached a certain belief to itself be evidence that that belief might be true?
I would almost certainly add an animal welfare charity to my charitable giving portfolio.
I previously had the Good Food Institute in the portfolio before financial challenges led me to trim it, so I might bring that back, or do some more research into the most effective animal welfare charity and add it alongside AMF and GiveDirectly as my primary contributions.
Given that a solid majority of EAs on the forum seem to strongly favour animal welfare, with very rigorous arguments for it, and given my propensity to weigh “wisdom of crowds” majority opinion as evidence towards a given view, I’m leaning towards actually doing this.
Sorry for the delayed response.
i’m modelling this as: basic drive to not die → selects values that are compatible with basic drive’s fulfillment.
i’ve been wondering if humans generally do something like this. (in particular to continue having values/cares after ontological crises like: losing belief in a god, or losing a close other who one was dedicated to protecting.)
This does seem like a good explanation of what happened. It does imply that I had motivated reasoning though, which probably casts some doubt on those values/beliefs being epistemically well grounded.
in case anyone has similar thoughts: to have the level of altruism to even consider the question is extremely rare. there are probably far better things you can do, than just dying and donating; like earning to give, or direct research, or maybe some third thing you’ll come up with. (most generally, the two traits i think are needed for research are intelligence and creativity. this is a creative, unintuitive moral question to ask. and my perception is that altruism and intelligence correlate, but i could be wrong about that, or biased from mostly seeing EAs.)
These words are very kind. Thank you.
I’m starting to think it was a mistake for me to engage in this debate week thing. I just spent a good chunk of my baby’s first birthday arguing with strangers on the Internet about what amounts to animals vs. humans. This does not seem like a good use of my time, but I’m too pedantic to resist replying to comments I feel the need to reply to. -_-
In general, I feel like this debate week thing seems somewhat divisive as well. At least, it doesn’t feel nice to have so many disagrees on my posts, even if they still somehow got a positive amount of karma.
I really don’t have time to make high-effort posts, and it seems like low-effort posts do a disservice to people who are making high-effort posts, so I might just stop.
Oh, you edited your comment while I was writing my initial response to it.
There’s not actually any impractical ‘ideal-ness’ to it. We already can factor in animal preferences, because we already know them, because they reactively express their preference to not be in factory farms.
(Restating your position as this also seems dishonest to me; you’ve displayed awareness of animals’ preferences from the start, so you can’t believe that it’s intractable to consider them.)
We can infer their preferences not to suffer, but we can’t know what their “morality” is. I suspect chickens and most animals in general are very speciesist and probably selfish egoists who are partial to next-of-kin, but I don’t pretend to know this.
It’s getting late in my time zone, and I’m getting sleepy, so I may not reply right away to future comments.
I do think we should establish our priors based on what other people think and teach us. This is how all humans normally learn anything that is outside their direct experience. One way to do this is to democratically canvass everyone to get their knowledge. That establishes our initial priors about things: people can be wrong, but many people are less likely to all be wrong about the same thing. False beliefs tend to be uncorrelated, while true beliefs align with some underlying reality and correlate more strongly. We can then modify our priors based on further evidence from things like direct experience, scientific experiments and analysis, or whatever other sources you find informative.
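As a toy illustration of the “many people are less likely to all be wrong about the same thing” point, here is a minimal Condorcet-jury-theorem-style simulation. It assumes each person is independently correct with probability 0.6, which is a strong simplification, but it shows the aggregation effect:

```python
# Toy simulation: independent voters, each correct with probability 0.6.
# The majority of a larger group is correct far more often than any single individual.
# (Real human beliefs are often correlated, which weakens this effect.)
import random

def majority_correct_rate(n_voters: int, p_correct: float, trials: int = 10_000) -> float:
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

for n in (1, 11, 101):
    print(n, majority_correct_rate(n, 0.6))
```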
I should clarify, I am not saying we should pretend to have beliefs closer to theirs. I am saying that having such divergent views will make it harder to recruit them as EAs. It would therefore be better for EA as a movement if our views didn’t diverge as much. I’m not saying to lie about what we believe to recruit them. That would obviously fail as soon as they figured out what we actually believe, and is also dishonest and lacks integrity.
And I think there can be epistemic compromise. You give the benefit of the doubt to other views by admitting your uncertainty and allowing the possibility that you’re wrong, or they’re wrong, and we could all be wrong and the truth is some secret third thing. It’s basic epistemic humility to agree that we all have working but probably wrong models of the world.
And I apologize for the confusion. I am, as you suggested, still trying to figure out my real position, and coming up with arguments on the spot that mix my internal sentiments with external pressures in ways that may seem incoherent. I shouldn’t have made it sound like I was suggesting compromising by deception. Calling things less than ideal and a compromise with reality was a mistake on my part.
I think the most probable reason I worded it that way was that I felt it wasn’t ideal to only give weight to the popular morality of the dominant coalition, which you pointed out the injustice of. Ideally, we should canvass everyone, but because we can’t canvass the chickens, it is a compromise in that sense.
the average animal in a factory farm is likely to view the idea that you could ever elevate the suffering of one human over that of an unbounded amount of animal children to be abhorrent, too.
Yes, of course. My point isn’t that they are right though. Chickens can’t become EAs. Only humans can. My point was that from the perspective of convincing humans to become EAs, choosing to emphasize animal welfare is going to make the job more difficult, because currently many non-EA humans are less sympathetic to animal suffering than human suffering.
if giving epistemic weight to popular morality (as you wrote you favor)[1], you’d still need to justify excluding from that the moralities of members of non-dominant species
Giving more epistemic weight to popular morality is in the light that we need popular support to get things done, and is a compromise with reality rather than an ideal, abstract goal. To the extent that I think it should inform our priors, we cannot actually canvass the opinions of chickens or other species to get their moralities. We could infer them, but that would be us imagining what they would think, and speculative. I agree that ideally, if we could, we should also get those other preferences taken into consideration. I’m just using the idea of human democracy as a starting point for establishing basic priors in a way that is tractable.
but (1) the EA case for either would involve math/logic, and (2) many feel empathy for animals too.
Yes, many feel empathy for animals, myself included. I should point out that I am not advocating for ignoring animal suffering. If it were up to me, I’d probably allocate the funds by splitting them evenly between global health and animal welfare, as a kind of diversified portfolio strategy of cause areas. To me, that seems like the more principled way of handling the grave uncertainty that suffering estimates without clear confidence intervals entail. Note that even this would be a significant increase in the relative allocation to animal welfare compared to the current situation.
It’s fair to point out that the majority has been wrong historically many times. I’m not saying this should be our final decision procedure and to lock in those values. But we need some kind of decision procedure for things, and I find when I’m uncertain, that “asking the audience” or democracy seem like a good way to use the “wisdom of crowds” effect to get a relatively good prior.
I’m actually quite surprised by how quickly and how much that post has been upvoted. This definitely makes me update my priors positively about how receptive the forums are to contrarian viewpoints and civil debate. At least, I’m feeling less negativity than when I wrote that post.
I use neuron counts as a very rough proxy for the information processing complexity of a given organism. I do make some assumptions, like that more sophisticated information processing enables more complex emotional states, things like memory, which compounds suffering across time, and so on.
It makes sense to me that sentience is probably on some kind of continuum, rather than an arbitrary threshold. I place things like photo-diodes on the bottom of this continuum and highly sophisticated minds like humans near the top, but I’ll admit I don’t have accurate numbers for a “sentience rating”.
I hold my views on neuron counts being an acceptable proxy mostly because of what I learned from studying Cognitive Science in undergrad and then doing a Master’s Thesis on Neural Networks. This doesn’t make me an expert, but it means I formed my own opinions and disagree with the RP post somewhat. I have not had the time to formulate substantive objections in a rebuttal however. Most of my posts on these forums are relatively low-effort.
I don’t know. Certainly just parroting them is wrong. I just think we should give some weight to majority opinion, as it represents an aggregate of many different human experiences that seem to have aligned together and found common ground.
Also, a lot of my worry is not so much that EAs might be wrong, so much as that if our views diverge too strongly from popular opinion, we run the risk of things like negative media coverage (“oh look, those EA cultists are misanthropic too”), and we also are less likely to have successful outreach to people outside of the EA filter bubble.
In particular, we already have a hard time with outreach in China, and this animal welfare emphasis is just going to further alienate them due to cultural differences, as you can probably tell from my Confucius quote. The Analects are taught in school in both China and Taiwan and are a significant influence in Asian societies.
It’s also partly a concern that groupthink dynamics might be at play within EA. I noticed that there are many more comments from the animal welfare crowd, and I fear that many of the global health people might be too intellectually intimidated to voice their views at this point, which would be bad for the debate.
This is probably going to be downvoted to oblivion, but I feel it’s worth stating anyway, if nothing else to express my frustration with and alienation from EA.
On a meta level, I somewhat worry that the degree to which the animal welfare choice is dominating the global health one kinda shows how out of touch many EAs have become with mainstream, common-sense morality.
In particular, I’m reminded of that quote from the Analects of Confucius:
When the stables were burnt down, on returning from court Confucius said, “Was anyone hurt?” He did not ask about the horses.
You can counter with a lot of math that checks out and arguments that make logical sense, but the average person on the street is likely to view the idea that you could ever elevate the suffering of any number of chickens above that of even one human child to be abhorrent.
Maybe the EAs are still technically right and other people are just speciesist, but to me this does not bode well for the movement gaining traction or popular support.
Just wanted to get that out of my system.
I guess my unstated assumption is that if the lives of the chickens are already worth living, then increasing their welfare further will quickly run into diminishing returns, due to the law of diminishing marginal utility. Conversely, adding more lives linearly increases happiness, again assuming that each life has at least a baseline level of happiness that makes the life worth living.
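A minimal numeric sketch of that intuition, assuming (purely for illustration) a concave, logarithmic utility function and a hypothetical baseline welfare of 3 for a life worth living:

```python
# Illustration of diminishing marginal utility: with concave (log) utility, doubling an
# existing life's welfare adds less total utility than adding a new life at baseline.
import math

baseline = 3.0  # hypothetical welfare level of a life already worth living
u = math.log    # assumed concave utility function (illustrative choice only)

gain_from_improving = u(2 * baseline) - u(baseline)  # doubling one life's welfare: ~0.69
gain_from_adding = u(baseline)                       # adding one new life at baseline: ~1.10

print(round(gain_from_improving, 2), round(gain_from_adding, 2))
```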
I guess, going through extensive suffering made me cherish the moments of relative happiness all the more, and my struggle to justify my continued existence led me to place value in existence itself, a kind of “life-affirming” view as a way to keep on going.
There were times during my suicidal ideation that I thought that the world might be better off without me, for instance that if I died, they could use my organs to be transplanted and save more lives than I could save by living, that I was a burden and that the resources expended keeping me alive were better used on someone who actually wanted to live.
To counter these ideas, I developed a nexus of other ideas about the meaning of life being about more than just happiness or lack thereof, that truth was also intrinsically important, that existence itself had some apparent value over non-existence.
I feel for them. I understand they made a decision in terrible pain, and can sympathize. To me it’s a tragedy.
But on an intellectual level, I think they made a very unfortunate mistake, made in reasonable ignorance of complex truths that most people can’t be expected to know. And I admit I’m not certain I’m right about this either.
I should also add that part of why I consider it important for a moral theory’s conclusions to align with my moral intuitions is that there are studies in psychology showing that, for complex problems, intuition can outperform logical reasoning at getting the correct answer, so ensuring that the theory’s results are intuitive is, in a sense, a check on validity.
If that’s not satisfactory, I can also offer two first principles based variants of Utilitarianism and hedonism that draw conclusions more similar to mine, namely Positive Utilitarianism and Creativism. Admittedly, these are just some ideas I had one day, and not something anyone else to my knowledge has advocated, but I offer them because they suggest to me that in the space of possible moralities, not all of them are so suffering focused.
I’m admittedly uncertain about how much to endorse such ideas, so I don’t try to spread them. Speaking of uncertainty, another possible justification for my position may well be uncertainty about the correct moral theory, and putting some credence on things like Deontology and Virtue Ethics, the former of which in Kantian form tends to care primarily about humans capable of reason, and the latter contains the virtue of loyalty, which may imply a kind of speciesism in favour of humans first, or a hierarchy of moral circles.
There’s the concept of a moral parliament that’s been discussed before. To simplify the decision procedure, I’d consider applying the principle of maximum entropy, aka the principle of indifference, which places an equal, uniform weight on each moral theory. If we have three votes, one for Utilitarianism, one for Deontology, and one for Virtue Ethics, two out of the three (a majority) seem to advocate a degree of human-centrism.
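A minimal sketch of that uniform-weight parliament, with the human-centrism stances encoded as read above (illustrative, not a settled claim about what each theory actually implies):

```python
# Moral parliament with equal (maximum-entropy) weights across three theories.
# The human_centric flags follow the reading in the comment above, not a settled claim.
parliament = {
    "Utilitarianism": {"weight": 1 / 3, "human_centric": False},
    "Deontology":     {"weight": 1 / 3, "human_centric": True},
    "Virtue Ethics":  {"weight": 1 / 3, "human_centric": True},
}

support = sum(t["weight"] for t in parliament.values() if t["human_centric"])
print(f"Weighted support for a degree of human-centrism: {support:.2f}")  # ~0.67
```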
I’ve also considered the thought experiment of whether I would be loyal to humanity, or betray humanity to a supposedly benevolent alien civilization. Even if I assume the aliens were perfect Utilitarians, I would be hesitant to side with them.
I don’t expect any of these things to sway anyone else to change their mind, but hopefully you can understand why I have my rather eccentric and unorthodox views.
I’m not going to say who I voted for because I think a secret ballot is important. I will say I strongly agree with the idea of using more democracy in EA for making decisions so I applaud the forum and the organizers for having this event week and letting us vote.