I know this is a debate, but one thing I want to touch on is that animal welfare and human welfare are not necessarily in conflict. I think initiatives like preventing the rise of factory farming in the developing world could be really great for both animals and humans. Animals wouldn’t have to exist in horrible conditions, and humans could (as far as I know; don’t have sources with me right now) have greater food, water, and resource security, reduced ecological/climate devastation, and reduced risk of disease, to name a few things. I think it’s important to think about ways in which we can jointly improve animal welfare and global health, because we all ultimately want to create a better world.
A few reasons immediately come to mind for me:
There are many more animals in factory farms than humans (scale)
The average suffering of these animals is likely worse than the average suffering of humans (because animals are almost uniformly kept in horrendous conditions, while humans are not) (scale)
My intuition is that the “moral multiplier” of human ability to suffer is not much higher than 1, if at all, for many animals. Animals have central nervous systems and nociceptors just like we do. Mammal suffering in particular might be close to par with humans, but I see no obvious reason that birds or fish are somehow less able to suffer. I also think that there’s probably some bias due to our culture’s philosophical heritage of “rational capability = moral consideration”
Not an expert at this, though, so it’s just me freewheeling
I don’t have exact numbers with me, but I would bet that animal welfare/rights receives much less funding and attention than global health and development (neglectedness)
I’ve also heard that a dollar could prevent more years of, say, chicken suffering than years of human suffering (tractability)
For me, I think the biggest crux is whether you believe animal suffering is comparable to human suffering. Animal is a broad category, but I think at least for some animals, there is all the reason to think that their suffering is comparable and little reason to think it is not. The only reason I put one notch below the maximum is to signal that I am willing to concede some slight uncertainty about this, but nowhere near enough to persuade me that animal welfare/rights is not a pressing cause.
Thanks Gabe! Yes, I agree that aligning to the right values, in the right way, and in a manner that will be widely accepted as legitimate is a pretty deep and broad problem.
Not Just For Therapy Chatbots: The Case For Compassion In AI Moral Alignment Research
Hi Leopold,
Thank you for the thoughtful comment! I appreciate that my experience has informed your decision-making, but in the end it’s just my experience, so take it with a grain of salt. I also appreciate your caution; I would say that I’m also a pretty cautious person (especially for an EA; I personally think we sometimes need a little more of that).
I will say that big and risky projects aren’t necessarily a bad thing; they’re just big and risky. So if you’ve carefully considered the risks, acknowledged that you’re committing to a big project that might not pay off, and have some contingency plans, then I think it’s fine to proceed. I just think that sometimes we get caught up in the vision and end up Goodharting toward bigger and more visionary projects rather than genuinely more effective ones (my failure mode in Spring 2023).
Best, Kenneth
This kind of reminds me of a psychological construct called the Militant Extremist Mindset. Roughly, the mindset is composed of three loosely related factors: proviolence, vile world, and Utopianism. The idea is that elevated levels on all three factors are most predictive of fanaticism. I think (total) utilitarianism/strong moral realism/lack of uncertainty/visions of hedonium-filled futures fall into the utopian category. I think EA is pretty pervaded by vile-world thinking, including reminders about how bad the world is/could be and cynicism about human nature. Perhaps what holds most EAs back at this point is a lack of proviolence, that is, a lack of willingness to use violent means or cause great harm to others; I think this can be roughly summed up as “not being highly callous/malevolent”.
I think it’s important to reduce extremes of Utopianism and vile-world thinking in EA, which I feel are concerningly abundant here. Perhaps it is impossible/undesirable to completely eliminate them. But what might be most important is something that seems fairly obvious: try to screen out people who are capable of willfully causing massive harm (i.e., callous/malevolent individuals).
Based on some research I’ve done, the distribution of malevolence is highly right-skewed, so screening for malevolence would probably affect the fewest individuals while still being highly effective. It also seems that callousness and a willingness to harm others for instrumental gain are associated with abnormalities in more primal regions of the brain (like the amygdala) and are highly resistant to intervention. Therefore, changing the culture is very unlikely to robustly “align” such people. And intuitively, a willingness to cause harm seems to be the most crucial component, while the other components seem more to channel malevolence toward a fanatical bent.
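As a toy illustration of that screening intuition (my own sketch, not from any study; the lognormal shape, the 1% cutoff, and the “trait mass” framing are arbitrary assumptions for illustration only):

```python
# Toy sketch: with a right-skewed trait distribution, screening out the
# extreme tail touches very few people while capturing a large share of
# the total trait "mass". Distribution and cutoff are assumptions.
import numpy as np

rng = np.random.default_rng(0)
trait = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # hypothetical trait scores

cutoff = np.quantile(trait, 0.99)   # screen out the top 1% of scorers
flagged = trait >= cutoff

share_flagged = flagged.mean()                               # fraction of people affected
share_of_mass = trait[flagged].sum() / trait.sum()           # share of total trait "mass" they carry

print(f"Screened out: {share_flagged:.1%} of people")
print(f"They account for: {share_of_mass:.1%} of total trait mass")
```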
Sorry I’m kind of just rambling and hoping something useful comes out of this.
Things EA Group Organisers Need To Hear
Great summary; thanks Hauke!
Reimagining Malevolence: A Primer on Malevolence and Implications for EA
On Direct Animal Advocacy
TLDR: Recent graduate with a B. S. in Psychology and certificate in Computer Science. Looking for opportunities which involve (academic) research, writing, and/or administrative/ops.
Skills & background: ~6 months doing academic research with a grant from Polaris Ventures regarding malevolence (dark personality traits). Before this I was a leader of the UW–Madison chapter of EA and President of its chapter of Effective Animal Advocacy. I also have a substack where I write mostly about EA stuff and philosophy. I have experience writing articles for both academic and lay audiences, writing newsletters, and coordinating events.Here’s my substack: https://kennethdiao.substack.com/
I should also have a forum post out soon which will showcase more of my research aptitudeLocation/remote: Would prefer remote but willing to relocate. I’m currently based in the Twin Cities, MN.
Availability & type of work: Currently, I am quite available and can start immediately. I am interested primarily in paid part-time (or full-time) opportunities, though I’m also open to volunteering.
Resume/CV/LinkedIn: https://www.linkedin.com/in/kenneth-diao-292b02168/
Email/contact: kenneth.diao@outlook.com
Other notes: My principal cause areas are animal advocacy and suffering reduction, though I’m also interested in learning more about AI governance. My fuzzy vision for my ultimate role is that it involves doing writing and research which is close enough to the public and policy world to be grounded and have a concrete impact. I’m hoping the next couple of roles are able to help me test my fit and develop aptitudes/capital for reaching that eventual stage.
Questions:
Is it generally effective to go into academia?
Am I a good fit for an academic environment?
How should timelines (not) impact my decisions?
How effective is writing for public audiences?
How can I become a researcher who impacts policy (e.g., working at a think tank)? Do I need a policy or law degree?
Thanks everyone!
Hi Rob,
Thank you for writing this post. I am also highly disappointed that no institutional post-mortem has been conducted, so I’m glad that you’re speaking out about it. Now that the verdict has been officially handed down to SBF, there’s no longer any excuse for not conducting an investigation.
Maybe somehow there are good excuses (and yes, they are excuses) for why a formal investigation has not taken place. But no matter how florid or sophisticated they are, they won’t change my mind that a public investigation should take place. Pretty much no matter what, the reputation of the core EA leadership is going to take a hit if no public and formal investigation is carried out, at least in my eyes.
Regarding comments about psychopathy/sociopathy: I recently did a bunch of research on malevolence, so I feel confident in speaking on the subject. The term “sociopathy” seems to be the less well-defined term, so I would somewhat advise against using it, at least until greater clarity arises. However, psychopathy is a fairly established construct in the literature with a few widely-used instruments from the academy, so if you’re choosing between using psychopathy or sociopathy, I would say use psychopathy. But even psychopathy is a pretty confused term because it captures so many different characteristics (including callousness, grandiosity, impulsivity, and criminality) which don’t necessarily coincide. My opinion is that the cleanest way of talking about all this is to list out more specific and well-defined traits, such as callousness.
But, and I stress this, just because he wasn’t a violent criminal doesn’t prove he was a good, compassionate person. Neuroscientific evidence suggests that deficiencies in empathy/caring for others have distinct origins from violent or socially unacceptable behavioral expressions. Indeed, the main distinguishing point between psychopathy and Antisocial Personality Disorder (ASPD) is that psychopathy has a component that does not theoretically relate to violent or socially unacceptable behavioral expressions (according to an authority on psychopathy). It would be most adaptive for a person to be able to abide by the most explicit and universal social norms (e.g., don’t kill people) but still do harm in covert, neutral, or even socially desirable ways (e.g., being the CEO of a giant meat company). This is the type of malevolent person I expect SBF is, if he indeed is malevolent.
I also intend to publish a post on this topic, but I thought I’d clarify here since I saw a discussion regarding sociopathy in the comments.
Hi Brian,
I’m honored that you read my article and thought it was valuable!
For the record, I also think that it’s good to know the truth. Maybe I wish it wasn’t necessary for us to know about these things, but I think it is necessary, and I very much prefer knowing about something and thus being able to act in accordance to that knowledge than not knowing about it. So yeah, don’t let my adverse reaction fool you; I love your work and admire you as a person.
Regarding love and hatred, the points you brought up do make me think. I try to always keep an evolutionary perspective in mind; that is, I tend to assume something is adaptive, especially if it has survived across long stretches of time. So I think that, at least in certain environments, things like the dark tetrad traits (narcissism, Machiavellianism, psychopathy, sadism) are adaptive even at the group level; maybe they reach some kind of local maximum of adaptiveness. My hope is that there is a better way to retain the adaptive behavioral manifestations of these traits while avoiding their volatile and maladaptive aspects, and my belief is that we can approach this by having more correct motivations. Like I really idealise the approaches of people like Gandhi and MLK, who recognised the wrongness of the status quo while also trying to create positive change with love and peace; I believe we need more of that. That being said, I take your point that darkness and hate can lead to love/reduction in hatred, and that this may always be true, especially in our non-ideal world.
Hi Alfredo, thanks for reading and suggesting those articles! I’ve skimmed the logarithmic scales article and for sure find that terrifying and depressing. All the more reason to lighten that heavy tail!
Great TL;DR
My Thoughts On Suffering
Thank you for writing about this. I am definitely a person whose concerns about AI are primarily about the massive suffering they might cause, especially when it comes to already-marginal entities or potential entities like non-human animals or digital minds.
I’ll note beforehand that I’m suffering-focused, but I’ll also note that I think even a regular utilitarian using EV reasoning could come to the same conclusions as I do.
I’m curious as to why this isn’t a greater focus in the AI Safety community. At least from my vantage point and recollection, over 90% of the people who talk about AI Safety focus exclusively on the threat AI poses to the continued existence of humanity. If they elaborate at all on what’s at stake in the far future, they emphasize the potential good that could come from having massive populations that are in immense states of bliss, which could be destroyed if we are destroyed (again this is my experience).
I think this rests on the assumption that there is a high likelihood (let’s say >90% confidence) that humanity will become a force of net good in the long-term future should it survive that long. I think that, at the very least, this crux should be tested more than it currently is. I would argue that present-day humanity is almost certainly (>99% confidence) net harmful (factory farming alone is an immense harm that is hard to argue is outweighed by any good humans do). I would also argue with similar confidence that humanity’s net impact was consistently negative at least from the agricultural revolution onward (mistreatment/exploitation of non-human animals, slavery, and war, to name a few major things). Suffice it to say that I would be very worried if an AGI were locked in with the values of a randomly selected person today (I know some AGI timelines are quite short), or even a randomly selected person 100 years from now (assuming we survive that long), especially if they decide to keep us alive. I can’t give an estimate for how confident I am that humanity’s continued existence with AGI would be a good/bad thing. However, I agree that the suffering risk from AGI is not emphasized in proportion to its potential expected consequences, and I’m curious to hear EA/AI Safety perspectives on this topic.
I’ll also quickly throw in the idea of humans deliberately creating malicious AGI with the intention of serving their own ends, which is an idea I’ve heard around a few times but know practically nothing about. Though I will say that I think the potential for such a scenario to arise and then become an S-risk is non-negligible (though I can’t really give a good estimate or back it with anything more than intuition).
I think this is an interesting dilemma, and I am sympathetic to some extent (even as an animal rights activist). At the heart of your concern are three things:
Being too radical risks losing popular support
Being too radical risks being wrong and causing more harm than good
How do we decide what ethical system is right or preferable without resorting to power or arbitrariness?
I think in this case, 2) is of lesser concern. It does seem like adults tend to give far more weight to humans than animals (a majority of a sample would save 1 human over 100 dogs), though interestingly children seem to be much less speciesist (Wilks et al., 2020). But I think we have good reasons to give substantial moral weight to animals. Given that animals have central nervous systems and nociceptors like we do, and given that we evolved from a long lineage of animals, we should assume that we inherited our ability to suffer from our evolutionary ancestors rather than uniquely developing it ourselves. Then there’s evidence such as (if I remember correctly) animals trading off material benefits for analgesics. And I believe the scientific consensus has consistently and overwhelmingly been that animals feel pain. Animals are also in the present and the harms are concrete, so animal rights is not beset by some of the concerns that, say, longtermist causes are. So I think the probability that we will be wrong about animal rights is negligible.
I sympathize with the idea that being too radical risks losing support. I’ve definitely had that feeling myself in the past when I saw animal rights activists who preferred harder tactics, and I still have my disagreements with some of their tactics and ideas. But I’ve come to see the value in taking a bolder stance as well. From my experience (yes, on a college campus, but still), many people are surprisingly willing to engage with discussions about animal rights and about personally going vegan. Some are even thankful or later go on to join us in our efforts to advocate for animals. I think for many, it’s a matter of educating them about factory farming, confronting them with the urgency of the problem, and giving them space to reflect on their values. And even if you don’t believe in the most extreme tactics, I think it’s hard to defend not advocating for animal rights at all. Just a few centuries ago, slavery was still widely accepted and practiced, and abolitionism was a minority opinion which often received derision and even threats of harm. The work of abolitionists was nevertheless instrumental in getting society to change its attitudes and its ways such that the average person today (at least in the West) would find slavery abhorrent. Indeed, people would roundly agree that slavery is wrong even if they were told to imagine that the enslaved person’s welfare increased due to their slavery (based on a philosophy class I took years ago). To make progress toward the good, society needs people who will go against the current majority.
And this may lead to the final question of how we decide what is right and what is wrong. This I have no rigorous answer to. We are trapped between the Scylla of dogmatism and the Charybdis of relativism. Here I can only echo the point I made above. I agree that we must give some weight to the majority morality, and that to immediately jump ten steps ahead of where we are is impractical and perhaps dangerous. But to veer too far into ossification and blind traditionalism is perhaps equally dangerous. I believe we must continue the movement and the process towards greater morality as best we can, because we see how atrocious the morality of the past has been and the evidence that the morality of the present is still far from acceptable.