Why it’s difficult to find cost-effective wild animal welfare interventions we could do now
Introduction
Most Wild Animal Welfare (WAW) researchers I talked to thought that we are unlikely to find WAW interventions that would be directly competitive with farmed animal welfare interventions in terms of direct short-term cost-effectiveness. After spending some months trying to find such interventions myself, I tentatively agree. In this text, I will try to explain why.
Experience from the project
I spent some months trying to find a WAW intervention that is:
tractable (can in principle be funded >$100K/yr in the next two years even if we choose not to do so),
non-controversial (>40% support and <30% oppose in a US poll), and
directly cost-effective (10%+ as cost-effective in expectation as chicken welfare reforms).
The first step in the process was listing all potential interventions. Even though many people contributed to it, I found this list to be underwhelming (unfortunately, I don’t think I can share the list without asking for permission from everyone who contributed to it). I feel that coming up with plausible interventions for farmed animals is much easier. And in fact, lists of farmed animal welfare ideas I’ve seen from Charity Entrepreneurship did seem much more promising. And I found it easy to think of more farmed animal charity ideas for Charity Entrepreneurship. But despite all my research, none of the WAW ideas seem promising enough to seriously consider.
Also, ideas in the WAW list seemed much more complex to research and gain certainty on than most ideas for farmed animals would be. Consequently, the impacts of WAW interventions also generally seemed much more uncertain. This makes me less excited about WAW interventions because it increases the effects of the optimizer’s curse.[1]
This could be because the farmed animal welfare movement is much more advanced at this point, and we already know what intermediate goals benefit farmed animals (e.g., reducing animal product consumption, and various welfare reforms). If we figure out what intermediate goals could be good for WAW (e.g., increasing the number of large herbivores), then it might be easier to find promising WAW interventions. In other words, WAW currently seems too neglected for us to have a huge impact right away.
The intervention that seemed to have the highest chance of being cost-effective enough in a demonstrable way was trying to reduce aquatic noise. While I think it’s promising compared to other WAW interventions I considered, there are many farmed animal interventions I would prioritize over reducing aquatic noise.
I still wouldn’t be very surprised if someone found multiple interventions that satisfy these criteria better, especially since I don’t have a background in ecology or something similar. Also, only a few people are working on finding WAW interventions and the space of all possible WAW interventions is large.
Comparing farmed and wild animal welfare interventions of similar categories
Both WAW and farmed animal welfare interventions can be categorized as:
Improving welfare during life
Changing population sizes
Reducing suffering during death
Below, I compare farmed and wild animal welfare interventions in each of these categories.
Improving welfare
Farmed animal welfare interventions seem more promising because humans are in control of the entire lives of farmed animals. This makes it easier to improve their conditions and genetics, and easier to monitor the effects of changes we make. Also, WAW interventions often affect more species, which complicates things a lot. Hence, researching WAW welfare improvements seems to be less tractable.
The welfare improvements that seem most promising to me for wild animals involve addressing some sort of pollution that causes wild animals a lot of stress. Note that if such an intervention affects the population sizes of different species, it can be very difficult to know whether it is overall good or bad.
Changing population sizes
I believe that it can sometimes be easier to change the population sizes of small wild animals than to change the population sizes of farmed animals. The problem is that it’s more difficult to know whether the change is good or bad for animal welfare.
The lives of most farmed animals are clearly worse than non-existence. In contrast, it’s unclear whether wild animals’ lives are good or bad (see this talk). My best guess is that in the short term, it would be better for WAW to decrease the overall population of wild animals. However, this usually goes against other human interests (like environmental protection), while decreasing farmed animal populations generally goes along with these other human interests as it decreases our environmental footprint. This makes working on reducing wild animal numbers much more morally ambiguous and controversial or even politically infeasible.
Also, often when we decrease the population of one species, the population of some other species increases. This applies to both farmed and wild animal interventions and complicates any analysis of short-term effects a lot. I’d say that the effects on WAW of farmed animal welfare interventions are maybe a bit easier to predict but we still can’t predict the total effect on welfare with reasonable certainty. This has led me to analysis paralysis and I don’t know what to do about that.
Perhaps we could speculate that some r-selected species are worse off than some K-selected species. And perhaps if we grow one crop or tree instead of another, or fish for different species, the ratio between the populations of these species would change. I simply didn’t have the expertise to try to find such an intervention; maybe someone else will. Perhaps a more promising idea is trying to eradicate some parasite that causes a lot of suffering (e.g., screwworm).
Reducing suffering during death
Since humane slaughter interventions typically affect animals for a short time, my personal intuition is that they are less important. But it depends on how you weigh intense suffering during death versus more prolonged but less intense suffering.
Reducing suffering during farmed animal slaughter might be more tractable because:
Slaughter reforms for farmed animals and wild-caught fish should be cheaper per animal to implement and research because these animals die under human control. That said, animal advocates are more likely to lobby governments to make changes than to make the changes themselves, so it’s not easy to say which is more promising.
Different methods of wild animal slaughter may have complicated indirect effects that are difficult to account for. For example, chemical insecticides may have different effects on non-target species compared to insecticides that introduce predators, parasitoids, or pathogens to suppress insect populations. Unless we decide to ignore indirect effects to avoid analysis paralysis, this makes research more difficult. This may be less of a problem for liminal animals as city ecosystems are simpler.
On the other hand, since most animals humans kill are wild, the ultimate possible scale of working on wild animals is much higher.
I didn’t look into humane insecticides much because the author of this report told me in 2020 that there isn’t enough research on causes of insect suffering and nontarget effects of interventions to recommend an action that a non-research charity could center itself on. I took their word for it.
For wild or liminal animals, another option is to never let animals that we would kill come into existence, perhaps by cutting off the food supply or informing people how to prevent pest infestations. I looked a bit into preventing termite infestations and it seemed doable but not a priority, partly because they might be living good lives most of the time.
Closing thoughts
I give only limited weight to the sort of high-level reasoning I did above; there could be interventions to which my reasoning does not apply. Also, it’s not that surprising that I failed to find promising WAW interventions since I don’t have expertise in ecology or other relevant fields. But the few other researchers who have been researching this for a while have also failed to find very promising interventions so far, and they think it’s unlikely that they will find them.
Also, I remember GiveWell writing a long time ago that when they look deeper at some cause area, they almost always conclude that it’s less promising than they originally thought, so maybe this is what happened to me. That said, when I looked into farmed animal welfare (especially welfare reforms), I came away thinking that it’s much more promising than I thought.
Opinions expressed here are solely my own. I’m not currently employed by any organization. I wrote this text about a year ago but didn’t get around to publishing it until now, something might have changed since then.
Explanation of the Optimizer’s Curse (adapted from this post): Suppose you weigh ten identical items with very inaccurate scales. The item that is the heaviest according to your results is simply the item whose weight was the most overestimated by the scales. Now suppose the items are similar but not identical. The item that is the heaviest according to the scales is also the item whose weight is most likely an overestimate.
Similarly, suppose that you make very approximate cost-effectiveness estimates of ten different interventions. The charity that seems the most cost-effective according to your estimates could seem that way only because you overestimated its cost-effectiveness, not because it is actually more cost-effective than others. Consequently, even if we are unbiased in our estimates, we might be too optimistic about interventions that seem the most cost-effective. The more uncertain cost-effectiveness estimates are, the stronger the effect of the optimizer’s curse is. Hence we should prefer interventions whose cost-effectiveness estimates are more robust. Since cost-effectiveness estimates of WAW interventions tend to be very uncertain, this is an argument against WAW interventions. More on optimizer’s curse can be read in Karnofsky (2016).
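To make the footnote above more concrete, here is a minimal simulation sketch (my own illustration, not from the original post) of the optimizer’s curse: every option below has the same true value, yet the option with the highest noisy estimate always looks better than it really is, and the gap grows with the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def average_overestimate(noise_sd, n_options=10, n_trials=100_000):
    """All options share the same true value (1.0); we only see noisy estimates."""
    true_value = 1.0
    estimates = true_value + rng.normal(0.0, noise_sd, size=(n_trials, n_options))
    seemingly_best = estimates.max(axis=1)  # the option we would pick
    return (seemingly_best - true_value).mean()  # how much we overrate our pick

for sd in [0.1, 0.5, 2.0]:
    print(f"estimate noise {sd}: chosen option overrated by about {average_overestimate(sd):.2f}")
```

In this toy setup the overestimate scales with the noise, which is the sense in which highly uncertain WAW cost-effectiveness estimates make the curse worse.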
If I’m reading correctly, you found that many researchers thought “it’s unlikely that they will find [cost-competitive WAW interventions]” which surprised me, since it seems like you found reducing aquatic noise to be borderline already. Did you just mean in the very near future? Or do many researchers think it’s unlikely we will ever identify such interventions?
Good question :) I researched aquatic noise because that was the only intervention where it seemed at least possible for me to estimate cost-effectiveness. But the estimate ended up being so uncertain that it didn’t provide much information. Science simply doesn’t have answers yet. I expect it to be the same for most WAW interventions. That is, I expect there to be huge uncertainty about how cost-effective they are (and whether they are even good for WAW when all things are considered), and in the best-case scenario, they might be as cost-effective as farmed animal welfare interventions. But we might never find out if we are in the best-case scenario. It’s difficult for me to say that aquatic noise is not worth looking into further because I spent about six months researching it. But I think that, for now, there are enough better ideas in the farmed animal welfare space, so I don’t think we should pursue it. I can see WAW interventions being worth it if the animal advocacy movement gets a lot more funding or, I don’t know, very advanced artificial intelligence can somehow be used to figure out all the ecological consequences. Assuming AI does not change everything, I’d give a 15% chance that in the next 15 years, someone will find a WAW intervention that to me would seem “directly cost-effective (10%+ as cost-effective in expectation as chicken welfare reforms)” and “non-controversial (>40% support and <30% oppose in a US poll).” I’m not counting WAW interventions that have to do with the far future or with changing the values of society here.
NOTE: I edited the shortform text to match what I say here. I used to say that I’m on the fence whether EA resources should be spent on reducing aquatic noise.
Hey, I will link to this shortform from a post that I plan to publish this week together with a short summary, so I didn’t feel a need to draw more attention to these thoughts. Maybe I should’ve still posted it as a post since this is not what shortforms are for, but I guess I’m a bit shy about posting :)
Thanks for the details! Just curious. It does feel post-y, but I can understand. Maybe posting it as a personal blog post (not on the frontpage), or under the animal welfare topic but off the frontpage, would have worked for what you are looking for.
What’s a realistic, positive vision of the future worth fighting for?
I feel lost lately when it comes to how to do altruism. I keep starting and dropping various little projects. I think the problem is that I just don’t have a grand vision of the future I am trying to contribute to. There are so many different problems and uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump’s victory, the war in Ukraine, the increasing scale of factory farming, the lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don’t know what’s on the other side. There are so many things that I don’t want to happen that I’ve lost sight of what I do want to happen. I don’t want to be motivated solely by fear. I want some sort of a realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
I think the problem is that I just don’t have a grand vision of the future I am trying to contribute to.
For what it’s worth, I’m skeptical of approaches that try to design the perfect future from first principles and make it happen. I’m much more optimistic about marginal improvements that try to mitigate specific problems (e.g. eradicating smallpox didn’t cure all illness.)
How much we can help doesn’t depend on how awful or how great the world is: we can save the drowning child whether there are a billion more who are drowning or a billion more who are thriving. To the drowning child the drowning is just as real, as is our opportunity to help.
If you feel emotionally down and unable to complete projects, I would encourage you to try things that work on priors (therapy, exercise, diet, sleep, making sure you have healthy relationships) instead of “EA specific” things.
There are plenty of lives we can help no matter who won the US election and whether factory farming keeps getting worse; their lives are worth it to them, no matter what the future will be.
Thanks. Yeah, I now agree that it’s better to focus on what I can do personally. Someone made a good point in a private message that having a single vision leads to utopian thinking, which has many disadvantages. It reminded me of my parents’ stories about the Soviet Union, where great atrocities against currently living humans were justified in the name of creating a great communist future.
Grand ideologies and religions are alluring though, because they give a sense of being a part of something bigger. Like you have your place in the world, your community, which gives a clear meaning to life. Being a part of the Effective Altruism and animal advocacy movements fulfils this need in my life somewhat, but incompletely.
The person in the private message also told me about the serenity prayer: “grant me the serenity to accept the things I cannot change; courage to change the things I can; and wisdom to know the difference.”
And just to be clear, I am doing quite well generally. I think I used to repress my empathy because it just feels too painful. But it was controlling me subconsciously by constantly nagging me to do altruistic things. Nowadays, I sometimes connect to my empathy and it can feel overwhelming like yesterday. But I think it’s for the better long-term.
I think the sort of world that could be achieved by the massive funding of effective charities is a rather inspiring vision. Natalie Cargill, Longview Philanthropy’s CEO, lays out a rather amazing set of outcomes that could be achieved in her TED Talk.
I think that a realistic method of achieving these levels of funding is Profit for Good businesses, as I lay out in my TEDx Talk. I think it is realistic because most people don’t want to give something up to fund charities, as donating would require, but if they could help solve world problems by buying products or services they want or need, of similar quality and at the same price, they would.
I love the idea in your talk! I can imagine it changing the world a lot and that feels motivating. I wonder if more Founders Pledge members could be convinced to do this.
One possible way of thinking about this, which might tie your work in smaller battles into a ‘big picture’, is if you believe that your work on the smaller battles is indirectly helping the wider project. e.g. by working to solve one altruistic cause you are sparing other altruistic individuals and altruistic resources from being spent on that cause, increasing the resources available for wider altruistic projects, and potentially by increasing altruistic resources available in the future.[1]
Note that I’m only saying this is a possible way of thinking about this, not necessarily that you should think this (for one thing, the extent to which this is true probably varies across areas, depending on the inter-connectedness of different cause areas in different ways and their varying flowthrough effects).
As in this passage from one of Yudkowsky’s short stories:
“But time passed,” the Confessor said, “time moved forward, and things changed.” The eyes were no longer focused on Akon, looking now at something far away. “There was an old saying, to the effect that while someone with a single bee sting will pay much for a remedy, to someone with five bee stings, removing just one sting seems less attractive. That was humanity in the ancient days. There was so much wrong with the world that the small resources of altruism were splintered among ten thousand urgent charities, and none of it ever seemed to go anywhere. And yet… and yet...”
“There was a threshold crossed somewhere,” said the Confessor, “without a single apocalypse to mark it. Fewer wars. Less starvation. Better technology. The economy kept growing. People had more resource to spare for charity, and the altruists had fewer and fewer causes to choose from. They came even to me, in my time, and rescued me. Earth cleaned itself up, and whenever something threatened to go drastically wrong again, the whole attention of the planet turned in that direction and took care of it. Humanity finally got its act together.”
It might be too hard to envision an entire grand future, but it’s possible to envision specific wins in the short and medium term. A short-term win could be large cage-free egg campaigns succeeding; a medium-term win could be a global ban on caged layer hens. Similarly, a short-term win for AI safety could be a specific major technical advance or significant legislation passed, and a medium-term win could be AGIs coexisting with humans without the world descending into chaos, while still having massive positive benefits (e.g. a cure for Alzheimer’s).
Maybe this is a cop-out, but I am thinking more and more of a pluralistic and mutually respectful future. Some people might take off on a spaceship to settle a nearby solar system. Some others might live lower-tech in eco villages. Animals will be free to pursue their goals. And each of these people will pursue their version of a worthwhile future with minimal reduction in the potential of others to fulfill theirs. I think anything else will just lead to oppression of everyone who is not on board with some specific wild project—I think most people’s dreams of a future are pretty wild and not something I would want for myself!
I don’t have a suggestion, but I’ve been encouraged and “heartwarmed” by the diverse range of responses below. Cool to see people with different ways of holding their hope and motivation, whether it’s enough for us to buy a bed net tomorrow, or we do indeed have grander plans and visions, or we’re skeptical about whether “future designing” is a good idea at all.
On some level I think the answer is always the same, regardless of the headwinds or tailwinds: you do what you can with your limited resources to improve the world as much as you can. In some sense I think slowing the growth of factory farming in a world where it was growing is the same as a world where it is stagnant and we reduce the number of animals raised. In both worlds there’s a reduction in suffering. I wrote a creative piece on this exact topic here if that is at all appealing.
I also think on the front of factory farming we focus too much on the entire problem, and not enough on how good the wins are in and of themselves.
I find videos about space colonization pretty inspiring. Of course, space colonization would ideally be paired with some level of suffering abolition, so we aren’t spreading needless suffering to other planets. Space colonization could help with political discord, since people with different ideas of a “good society” can band together and peacefully disperse through the solar system. If you think traveling the world to experience different cultures is fun, I expect visiting other planets to experience different cultures will be even better. On the AI front, rumor has it that scaling is slowing down… that could grant more time for alignment work, and increase the probability that an incredible future will come to pass.
EAG and covid [edit: solved, I’m not attending the EAG (I’m still testing positive as of Saturday)]
I have many meetings planned for the EAG London that starts tomorrow but I’m currently testing very faintly positive for covid. I feel totally good. I’m looking for a bit of advice on what to do. I only care to do what’s best for altruistic impact. Some of my meetings are important for my current project and trying to schedule them online would delay and complicate some things a little bit. I will also need to use my laptop during meetings to take notes. I first tested positive on Monday evening, and since then all my tests were very faintly positive. No symptoms. I guess my options are roughly:
Attend the conference as normal, wear a mask when it’s not inconvenient and when I’m around many people.
Only go to 1-1s, wear a mask when I have to be inside but perhaps not during 1-1s (I find prolonged talking with a mask difficult)
Don’t go inside, have all of my 1-1s outside. Looking at Google Maps, there don’t seem to be any benches or nice places to sit just outside the venue, so I might have to ask people to sit on the floor and to use my laptop on the floor, and I don’t know how I’d charge it. Perhaps it’s better not to go if I’d have to do that.
Don’t go. I don’t mind doing that if that’s the best thing altruistically.
In all cases, I can inform all my 1-1s (I have ~18 tentatively planned) that I have covid. I can also attend only if I test negative on the morning of each day.
This would be the third EAG London in a row where I’d cancel all my meetings last minute because I might be contagious with covid, although I’m probably not and I feel totally good. This makes me a bit frustrated and biased, which is partly why I’m asking for advice here. The thing is that I think that very few people are still that careful and still test but perhaps they should be, I don’t know. There are vulnerable people and long covid can be really bad. So if I’m going to take precautions, I’d like others reading this to also test and do the same, at least if you have a reason to believe you might have covid.
I will also need to use my laptop during meetings to take notes
Might it be possible to inform your conversational partners and get their consent to either A) use some type of transcription software (such as Otter.ai), or B) use your smartphone to record the audio of the conversation so that you can take notes manually later on? This would allow you to focus fully on the conversation, but it would also remove the limitation of typing on your laptop and would thus allow you to have walking meetings.
Thanks, but in this case there are other reasons why I need to use the laptop and have the people I meet and survey look at my laptop. I guess I mostly want to gauge how big of a deal people think covid is nowadays.
Ugh bad luck Saulius, I totally feel your frustration. I’ve had a few covid-bouts where I tested positive for over 2 weeks. It feels really frustrating to have to miss out on important things when it’s unclear that you’re even infectious, and also unclear that others are taking similar precautions.
It sounds like you’ve made your decision but fwiw, in your position I’d tell people about my covid status and offer them outdoor meetings if they were comfortable with that.
If I 1) had savings that cover over a year of my living expenses, 2) wasn’t already employed at an EA think tank, and 3) wanted to do EA research independently, I would probably apply to EA funds to do research on unspecified topics (if they would allow me to do that). I would ask them to give funds not now, but after the research period is over (let’s say 6 months). At the end of the research period, I would produce text that shows instances where I think I had impact and include reasoning why what I did may have had impact. Note that this could include not just published articles, but also comments or in-person communications with trusted advocates that changed how a certain organization does something, reviews of work of others, wikipedia article edits, etc. The amount of funds that I would receive would depend on EA funds manager’s opinion on how good or impactful my work was (or how good of a chance what I did had to be impactful). I imagine that there would be pre-agreed sums of money the manager could choose from. E.g.:
No significant effort to achieve impact - $0
Significant effort to achieve impact in ways that were plausible but most likely didn’t materialize - $8,000
Some expected impact - $15,000
High expected impact - $25,000
Very high expected impact - $40,000
Before the research period, there could be a consultation with the EA fund manager who is going to evaluate my work about what kind of work they think might be promising. Such consultations could also happen during the research period. Also, the research topics wouldn’t need to be something completely unspecified. E.g., it could be “something related to welfare reforms for farmed animals” or it could also be a fully specified topic.
I think that this is better than the traditional approach of applying for a grant to research a specific topic for the following reasons:
More direct motivation to do good. In the traditional approach, the incentive is to create a grant proposal that looks good on the surface (since I imagine that EA funds managers don’t have time to investigate grants very deeply). Then the financial incentive to do a good job can be less clear.
You can switch research directions on the go. This is good for three related reasons:
It allows you to switch to more impactful research directions. I previously wrote this comment “To me, the main disadvantage of being funded through a fund is that I would be tied to a research topic and a timeframe in which I would have to complete the project (or at least that’s how I imagine it). Working at an organization allows me much more flexibility. I can begin researching a topic, see that it’s not as tractable as I thought, and then drop it. Alternatively, I can increase the scope of the project, or change it into something different, depending on what I feel will be more impactful. All of these scenarios happen often because the more I work on the project, the more informed I am about the most promising directions of that project.” This type of grant promise would allow flexibility for research being funded by a fund.
Relatedly, the ability to work on whatever sparks your interest in the moment, the freedom to do whatever you want. I sometimes read something on this forum and want to read more about it, or spend all day writing a comment on it. When I do, I’m usually very productive because I work on what is interesting to me at the time. If I had a grant to do research on a specific topic, then I would be less likely to do any of this because I would feel pressure to research whatever I was paid to research.
Whenever I need to work on something, I don’t want to do it. And when I try to do it anyway, I am often less creative as I just want to get it over with. I’ve talked with some other people who had a similar issue.[1] I think that the structure I proposed would partly but not fully solve this issue for me at least.
Here are cons I can think of:
If you can work on anything, it can lead to too much indecisiveness about what to do. Sometimes it’s good when it’s decided what you should do and you don’t need to think about it.
It might create a nebulous pressure to do good that is difficult to act on, which could lead to stress.
One could fear that what they did was impactful but conflicted with the views of the assigned EA funds manager. In that case, maybe other EA fund managers could get involved, but that wouldn’t fully mitigate the problem.
I’m not going to compare this option with working for an EA research organization but I think that there are cons and pros compared to that too. I imagine that this sort of thing could be the right choice for some people.
I thought about this for only two hours, so I’m probably missing some important considerations. Also, I don’t know if this is a new idea; I haven’t checked. It reminds me of certificates of impact, but it’s a bit different. If EA funds managers thought that this was a good idea, they could encourage people to apply in this way, and maybe make a separate application form for that.
To illustrate, I had multiple situations where I worked on some EA side project and was making a lot of progress in my free time, and then my employer allowed me to work on it in my work time and my rate of progress per hour slowed down a lot. I think that for me this is because when I have to do something, I am motivated by negative incentives (fear of failure and hence losing my job, or of doing a job that is below what is expected of me and hence people thinking less of me (perfectionism)) more than by positive incentives (making an impact and impressing other people). This talk made me see that.
I guess this would also not necessarily have to be research. E.g., a grant for corporate campaigns where payout depends on the commitments they won. I imagine multiple problems with this and it’s probably a bad idea but perhaps it’s worth consideration in some cases.
I was thinking about ways to reduce political polarization and thought about AI chatbots like Talkie. Imagine an app where you could engage with a chatbot representing someone with opposing beliefs. For example:
A Trump voter or a liberal voter
A woman who chose to have an abortion or an anti-abortion activist
A transgender person or someone opposed to transgender rights
A person from another race, religion, or a country your country might be at odds with
Each chatbot would explain how they arrived at their beliefs, share relatable backstories, and answer questions. This kind of interaction could offer a low-risk, controlled environment for understanding diverse political perspectives, potentially breaking the echo chambers reinforced by social media. AI-based interactions might appeal to people who find real-life debates intimidating or confrontational, helping to demystify the beliefs of others.
The app could perhaps include a points system for engaging with different viewpoints, quizzes to test understanding, and conversations set in engaging, fictional scenarios. Chatbots should ideally be created in collaboration with people who actually hold these views, ensuring authenticity. Or maybe chatbots could even be based on concrete real people who could hold AMAs. Ultimately, users might even be matched with real people of differing beliefs for video calls or correspondence. If done well, such an app could perhaps even be used in schools, fostering empathy and reducing division from an early age.
Personally, I sometimes ask ChatGPT to write a story of how someone came to have views I find difficult to relate to (e.g., how someone might become a terrorist), and I find that very helpful.
I was told that creating chatbots is very easy. It’s definitely easy to add them to Talkie; there are so many of them there. Still, to make this impactful and good, it would need a lot more than that. I don’t intend to build this app. I just thought the idea was worth sharing. If you think it’s a good idea, feel free to share it somewhere where someone might pick it up, or to just do it yourself.
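To illustrate how little code a bare-bones persona chatbot needs, here is a minimal sketch (my own illustration, not from the original post). It assumes the OpenAI Python client; the persona text and model name are placeholders, and a real app would of course need far more (authentic personas, safety review, moderation).

```python
# Minimal sketch of a "perspective" chatbot: a system prompt defines the persona.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the persona and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Rupert, a polite, open-minded voter whose politics the user "
    "disagrees with. Share the personal backstory behind your views, answer "
    "questions patiently, and never mock the user."
)

def reply(user_message: str, history=None) -> str:
    # Build the conversation: persona first, then any prior turns, then the new message.
    messages = [{"role": "system", "content": PERSONA}] + (history or [])
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

print(reply("Why did you vote the way you did?"))
```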
That’s a cool idea. Presumably you would need a relatively uncensored chatbot—when I recently tried to get several to produce arguments in favour of Trump policies they all refused.
This feels like it could easily be counterproductive.
A chatbot’s “relatable backstory” is generative fiction, and the default “Trump supporter” or “liberal voter” is going to be a vector of online commentary most strongly associated with Trumpiness or liberalism (which tends not to be the most nuanced...), with every single stereotyped talking point trotted out to contradict you. Yes, this can be tweaked, but the tweaking is just toning it down or adding further stereotypes, not creating an actual person.
Whereas the default person that doesn’t agree with your politics is an actual human being, with actual life experience that has influenced their views, probably doesn’t actually hold the views that strongly or agree with literally every argument cited in favour of $cause, is probably capable of changing the subject and becoming likeable again, and hey, you might even be able to change their mind.
So if you’re talking to the first option rather than the second, you’re actually understanding less.
I don’t think it helps matters for people to try to empathise with (say) a few tens of millions of people who voted for the other side—in many cases because they didn’t really pay a lot of attention to politics and had one particularly big concern—by getting them to talk to a robot trained on the other side’s talking points. If you just want to understand the talking points, I guess ChatGPT is a (heavily filtered for inoffensiveness) starting point, or there’s a lot of political material with varying degrees of nuance already out there on the internet written by actual humans...
One possible way to get most of the benefits of talking to a real human being while getting around the costs that Saulius mentions is to have real humans serve as templates for an AI chatbot to train on.
You might imagine a single person per “archetype” to start with. That way if Danny is an unusually open-minded and agreeable Harris supporter, and Rupert is an unusually open-minded and agreeable Trump supporter, you can scale them up to have Dannybots and Rupertbots talk to millions of conflicted people while preserving privacy, helping assure people they aren’t judged by a real human, etc.
More accessible to everyone. Those conversations often don’t happen in real life.
Lower stakes: you can just speak your mind about the prejudices you have with no consequences.
The chatbot can listen and explain things better and with more “empathy”, and won’t get tired of answering the same questions again and again.
You can make up different engaging situations and talk to people like candidates for presidency.
It can be a bit like a warm-up to talking to a real person.
People already use chatbots and they will become much better. I imagine they eventually will also incorporate audio and video better, it will be like talking to a real person, very engaging. I want that technology to be used for good.
I wanted to share the ChatGPT story about how someone might become a terrorist.
Prompt: write a story about a boy who becomes a terrorist. Make me relate to him, and feel empathy for him. (there was some context)
ChatGPT: Yusuf was 15, living in a small, impoverished town. Yusuf loved soccer, helping his father at his fruit stall, and, above all, making his little sister laugh.
Life was hard but manageable. Then came the soldiers. When they swept through his village, everything changed. Yusuf’s father tried to resist, and they took him away. They left Yusuf’s family without its provider. His mother’s health worsened as she worked herself to exhaustion, trying to provide for him and his sister. Soon, Yusuf had to drop out of school to keep the family afloat. His friends, his dreams, and even his laughter faded away.
Life became bleak and colorless. One day, a man named Hakim, a former student from Yusuf’s school, visited him at the stall. Hakim was charismatic and confident, a kind of big brother figure Yusuf had never had. He asked Yusuf how he was doing, listened as Yusuf poured out his frustrations, and told him he wasn’t alone. Hakim said he worked for a “freedom movement” that fought for people like them, people whose fathers had been unjustly taken, whose families struggled under oppression. Yusuf felt a spark of hope—maybe he could do something for his family and others like them.
Over the weeks, Hakim kept visiting, always with stories of their people’s suffering and injustices, stories Yusuf hadn’t heard from the media. He talked about the honor of defending one’s family and land and how the so-called “enemies” were responsible for everything Yusuf’s family had endured. Soon, Hakim invited Yusuf to attend group meetings. There, Yusuf met boys his own age who shared his anger and grief. The group felt like a family—a group of brothers all dedicated to the same cause.
Initially, Yusuf felt uneasy about the rhetoric he heard; it was filled with anger, sometimes hatred. But when he thought of his family’s suffering, he wondered if the anger wasn’t justified. One of the older boys took Yusuf aside one night and told him about his own family’s hardships, about how they had found a sense of purpose fighting for justice. The older boy looked at him and said, “You’re one of us, Yusuf. You’re strong, and your family deserves justice.”
Yusuf wanted to be strong. He wanted his life to matter. One day, Hakim gave him a task: distribute supplies to people in need. Yusuf was proud. But gradually, these tasks grew darker. They began asking Yusuf to carry messages, then to handle small weapons, telling him it was “just in case” they had to defend themselves. Though he felt fear creeping in, the family-like bond he felt with these people made him want to prove himself.
One night, Yusuf’s mother noticed his distraction and tried to talk to him. She asked him if he was safe, if he was happy. He looked at her tired eyes, her worry for him, and wanted to reassure her. But instead, he found himself saying, “Mom, I’m doing this for us. For Dad.” She looked at him with sadness but didn’t argue; she didn’t have the strength.
The next day, Hakim told him it was time to make a true commitment—to sacrifice himself for a cause bigger than himself, to avenge his father, protect his family, and “become a hero.” Yusuf didn’t want to die; he wanted to live. But his hope, his friends, and his purpose were now all tied to the group. His identity had blurred with theirs, his grief transformed into a powerful sense of duty.
And so, on the day of his mission, Yusuf went forward, full of fear but also with a strange sense of peace. He believed he was bringing honor to his family, standing up to a world that had left him invisible, lost, and alone.
I sometimes meet people who claim to be vegetarians (don’t eat meat but consume milk and eggs) out of the desire to help the animals. If appropriate, I show them the http://ethical.diet/ website and explain that the production of eggs likely requires more suffering per calorie than most of the commonly consumed meat products. Hence, if they care about animals, avoiding eggs should be a priority. If they say that this is too many food products to give up, I suggest that perhaps instead of eating eggs, they could occasionally consume some beef (although that is bad for the environment). I think that the production of beef requires less suffering per calorie, even though I’m unsure how to compare suffering between different animals. In general, I’m skeptical about dietary change advocacy, but my intuition is that talking about this with vegetarians in situations where it feels appropriate is worth the effort. But I’m uncertain and either way, I don’t think this is very important.
Q: Has anyone estimated what is the risk of catching covid at the EAG London this year? Is it more like 5%, 20%, 50%, or 80%? I still haven’t decided whether to go (the only argument for not going being covid) and knowing what is the risk would make it a lot easier. Travelling is not a concern since I live in London not that far from the venue.
To get e.g. more than 20% probability, it seems like you’d have to make some very bad assumptions (weirdly high base rates of Covid amongst presumptive attendees, combined with incompetence or malice when it comes to testing). Seems more like 1-5% risk.
We’ve added a spreadsheet of microcovid estimates for a few different scenarios to our COVID Protocol. You can see the whole protocol here. Link to the estimates here.
A tip for writing EA forum posts with footnotes
First, click on your nickname in the top right corner, go to Edit Settings, and make sure the checkbox Activate Markdown Editor is checked. Then write the post in Google Docs and use the Google Docs to Markdown add-on to convert it to markdown. If you then paste the resulting markdown into the EA forum editor and save it, you will see your text with footnotes. The output might also contain some unnecessary text that you should delete.
Tables and images
If you have images in your posts, you have to upload them somewhere on the internet (e.g. https://imgur.com/) and write code like ![imageName](https://i.imgur.com/yMc2e8x.jpg "imageName") in your markdown. Of course, the image address should be changed to your image’s address. Currently, the only way to add tables is to take a screenshot of the table and add it as an image.
As I understand it, there will be a new EA forum editor some time soon and all this will not be needed anymore but for now this is how I make my EA forum posts.
If you have images in your posts, you have to upload them somewhere on the internet (e.g. https://imgur.com/)
If you’ve put the images in a google doc, and made the doc public, then you’ve already uploaded the images to the internet, and can link to them there. If you use the WYSIWYG editor, you can even copypaste the images along with the text.
I’m not sure whether I should expect google or imgur to preserve their image-links for longer.
There was an interesting discussion on whether EA organizations should reveal the authors of posts they publish here. You may want to check it out if this is relevant to you (not just the linked comment, but also the replies).
Why don’t we fund movies and documentaries that explore EA topics?
It seems to me that the way society thinks about the future is largely shaped by movies and documentaries. Why don’t we create movies that shape these views in a way that’s more realistic and useful? E.g., I haven’t read the discussion on whether Terminator is or is not a good comparison for AI risks, but it’s almost certainly not a perfect comparison. Why don’t we create a better one that we could point people to? Something that would explore many important points. Now that EA has more money, that seems plausible. In 2021, OpenPhil gave grants totalling $77.6 million for work on the potential risks from Advanced AI. The budget of a movie with an all-star cast and special effects like Don’t Look Up is $75 million. But the difference is that the movie might make money, maybe even more money than its budget. It’s not obvious to me that even something extravagant like this would be a bad investment because it might make it easier to make progress on AI policy and other things for years to come. Of course, movies wouldn’t have to be so high-budget, especially at the start. And a better approach would probably be creating documentaries. Maybe a series like Vox Explained for various EA issues or for longtermism. I think it could become popular because some of the EA ideas about how the far future might look seem more interesting than a lot of sci-fi, and also more novel to most people. And this is not just about AI. E.g., I can imagine a nuanced documentary about wild animal suffering that also talks about why we should think twice before spreading nature to other planets.
Anyway, this is just a shower thought, I imagine that this has been discussed before but just wanted to post it in case it hasn’t been discussed enough. And note that I never worked on AI so I don’t know what I’m talking about in that part of my text.
Maybe a consideration is that these sorts of collaborations are harder to set up than they seem.
Basically, execution and alignment seems important and hard.
Even if there is available media talent and available funding, setting up the right aesthetic (in more than one sense) and content seems difficult.
It’s unclear, but there may be downside risk (from looking silly or condescending).
This may not pertain to Saulius’s point, which isn’t focused on outreach, but people have cringed at, or even vehemently opposed, certain kinds of involvement, like US federal agencies getting involved in funding AI safety. So directly promoting the movement (as opposed to underlying ideas or topics) isn’t seen as robustly good, but this is highly unclear.
[More responses are not necessary right now, thank you very much to all who filled it]
I’d be grateful if some people could fill in this survey: https://forms.gle/RdQfJLs4a5jd7KsQA The survey will ask you to compare different intensities of pain. In case you’re interested in why you might want to do it: you’ll be helping me to estimate plausible weights for different categories of pain used by the Welfare Footprint Project. This will help me to summarise their conclusions into easily digestible statements like “switching from battery cages to cage-free reduces the suffering of hens by at least 60%” and to produce some cost-effectiveness estimates. Thanks ❤️
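For anyone curious what those weights would be used for, here is a toy sketch (my own illustration; the hours and weights below are made up for demonstration and are not Welfare Footprint Project results) of how category weights plus time-in-pain data could be rolled up into a single summary figure:

```python
# Toy example only: hypothetical hours of pain per hen and hypothetical category
# weights, just to show the arithmetic behind a "suffering reduced by X%" statement.
hours_in_pain = {
    "annoying":     {"caged": 4000, "cage_free": 2500},
    "hurtful":      {"caged": 1500, "cage_free": 700},
    "disabling":    {"caged": 300,  "cage_free": 100},
    "excruciating": {"caged": 2,    "cage_free": 1},
}
weights = {"annoying": 1, "hurtful": 10, "disabling": 100, "excruciating": 10_000}

def weighted_suffering(system: str) -> float:
    # Sum of (weight x hours) across the four pain categories.
    return sum(weights[cat] * hours_in_pain[cat][system] for cat in weights)

reduction = 1 - weighted_suffering("cage_free") / weighted_suffering("caged")
print(f"Hypothetical reduction in weighted suffering: {reduction:.0%}")
```

In the real analysis, the time-in-pain estimates would come from the Welfare Footprint Project’s work and the weights from survey responses like these.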
I’m very interested in what you find out! It’d be great to compare the results of the survey with the predictions of the heavy-tailed valence hypothesis (see here or here).
If I were to read one of the EA-related books (e.g. Doing Good Better, The Most Good You Can Do, The Life You Can Save, The Precipice, Superintelligence, etc.), I would consider writing or improving a summary of the book on Wikipedia while reading it, in a way that conveys the main points well. It could help you to digest the book better and help others to understand the ideas a bit. You could do it in English as well as maybe in some other language. To see whether it’s worth putting in the effort, you can check out the Wikipedia pageview statistics of the books I mentioned and others here (it doesn’t include some views that come from redirects though). It seems that the page Superintelligence is the most viewed one of these, with an average of 4,597 monthly visitors.
Shower thought, probably not new: some EAs think that expanding the moral circle to include digital minds should be a priority. But the more agents care about the suffering of digital minds, the more likely it is that some agent that doesn’t care about it will use creating vast amounts of digital suffering as a threat to make other agents do something. To make the threat more credible, it may follow through in at least some cases, although I don’t know what the most rational strategy is here. Could this be a dominant consideration that makes the expected value of moral circle expansion negative for negative utilitarians? Because the intensity and the scale of purposefully created suffering could outweigh the incidental suffering that would be prevented in other scenarios by an expanded moral circle.
EDIT: I no longer think that this is a legitimate concern, see my comment below.
This is an interesting idea. I’m trying to think of it in terms of analogues: you could feasibly replace “digital minds” with “animals” and achieve a somewhat similar conclusion. It doesn’t seem that hard to create vast amounts of animal suffering (the animal agriculture industry has this figured out quite well), so some agent could feasibly threaten all vegans with large-scale animal suffering. And as you say, occasionally following through might help make that threat more credible.
Perhaps the reason we don’t see this happening is that nobody really wants to influence vegans alone. There aren’t many strategic reasons to target an unorganized group of people whose sole common characteristic is that they care about animals. There isn’t much that an agent could gain from a threat.
I imagine the same might be true of digital minds. If it’s anything similar to the animal case, moral circle expansion to digital minds will likely occur in the same haphazard, unorganized way—and so there wouldn’t be much of a reason to specifically target people who care about digital minds. That said, if this moral circle expansion caught on predominantly in one country (or maybe within one powerful company), a competitor or opponent might then have a real use for threatening the digital mind-welfarists. Such an unequal distribution of digital mind-welfarists seems quite unlikely, though.
At any rate, this might be a relevant consideration for other types of moral circle expansion, too.
I think it is useful to think about something like this happening in the current world like you did here because we have better intuitions about the current world. Someone could say that they will torture animals unless vegans give them money, I guess. I think this doesn’t happen for multiple reasons. One of them is that it would be irrational for vegans to agree to give money because then other people would continue exploiting them with this simple trick.
I think that the same applies to far future scenarios. If an agent allows itself to be manipulated this easily, it won’t become powerful. It’s more rational to just make it publicly known that you refuse to engage with such threats. This is one of the reasons why most Western countries have a publicly declared policy to not negotiate with terrorists. So yeah, thinking about it this way, I am no longer concerned about this threats thing.
Someone could say that they will torture animals unless vegans give them money, I guess. I think this doesn’t happen for multiple reasons.
Interestingly, there is at least one instance where this apparently has happened. (It’s possible it was just a joke, though.) There was even a law review article about the incident.
I think this is an interesting point but I’m not convinced that it’s true with high enough probability that the alternative isn’t worth considering.
In particular, I can imagine luck/happenstance shaking out such that agents who are arbitrarily powerful on one dimension are less powerful/rational on other dimensions.
Another issue is the nature of precommitments[1]. It seems that under most games/simple decision theories for playing those games (eg “Chicken” in CDT), being the first to credibly precommit gives you a strategic edge under most circumstances. But if you’re second in those situations, it’s not clear whether “I don’t negotiate with terrorists” is a better or worse stance than swerving.
(And in the former case, with both sides precommitting, a lot of torture will still happen).
[1] using what I assume is the technical definition of precommitment
Other analogies might be human rights and carbon emissions, as used in politics. Say that Party A cares about reducing emissions, then the opposing Party B has an incentive to appear as though they don’t care about it at all and even propose actions that would increase emissions so that they could trade “not doing that” with some concession from Party A. I’m sure that we could find lots of real-world examples of that.
Similarly, some (totalitarian?) regimes might have an incentive to frame major parts of the population as politically unworthy and let them live in very poor conditions, so that other countries who care about that population would be open to a trade in which helping those people would be counted as a benefit for those other countries.
Why it’s difficult to find cost-effective wild animal welfare interventions we could do now
Introduction
Most Wild Animal Welfare (WAW) researchers I talked to thought that we are unlikely to find WAW interventions that would be directly competitive with farmed animal welfare interventions in terms of direct short-term cost-effectiveness. After spending some months trying to find such interventions myself, I tentatively agree. In this text, I will try to explain why.
Experience from the project
I spent some months trying to find a WAW intervention that is:
tractable (can in principle be funded >$100K/yr in the next two years even if we choose not to do so),
non-controversial (>40% support and <30% oppose in a US poll), and
directly cost-effective (10%+ as cost-effective in expectation as chicken welfare reforms).
The first step in the process was listing all potential interventions. Even though many people contributed to it, I found this list to be underwhelming (unfortunately, I don’t think I can share the list without asking for permission from everyone who contributed to it). I feel that coming up with plausible interventions for farmed animals is much easier. And in fact, lists of farmed animal welfare ideas I’ve seen from Charity Entrepreneurship did seem much more promising. And I found it easy to think of more farmed animal charity ideas for Charity Entrepreneurship. But despite all my research, none of the WAW ideas seem promising enough to seriously consider.
Also, ideas in the WAW list seemed much more complex to research and gain certainty on than most ideas for farmed animals would be. Consequently, the impacts of WAW interventions also in general seemed to be much more uncertain. This makes me less excited about WAW interventions because it increases the effects of the optimizer’s curse.[1]
This could be because the farmed animal welfare movement is much more advanced at this point, and we already know what intermediate goals benefit farmed animals (e.g., reducing animal product consumption, and various welfare reforms). If we figure out what intermediate goals could be good for WAW (e.g., increasing the number of large herbivores), then it might be easier to find promising WAW interventions. In other words, WAW currently seems too neglected for us to have a huge impact right away.
The intervention that seemed to have the highest chance of being cost-effective enough in a demonstrable way was trying to reduce aquatic noise. While I think it’s promising compared to other WAW interventions I considered, there are many farmed animal interventions I would prioritize over reducing aquatic noise.
I still wouldn’t be very surprised if someone found multiple interventions that satisfy these criteria better, especially since I don’t have a background in ecology or something similar. Also, only a few people are working on finding WAW interventions and the space of all possible WAW interventions is large.
Comparing farmed and wild animal welfare interventions of similar categories
Both WAW and farmed animal welfare interventions can be categorized as:
Improving welfare during life
Changing population sizes
Reducing suffering during death
Below, I compare farmed and wild animal welfare interventions in each of these categories.
Improving welfare
Farmed animal welfare interventions seem more promising because humans are in control of the entire lives of farmed animals. This makes it easier to improve their conditions and genetics, and easier to monitor the effects of changes we make. Also, WAW interventions often affect more species which complicates things a lot. Hence, researching WAW welfare improvements seem to be less tractable.
Welfare improvements that to me seem most promising for wild animals are about some sort of pollution that causes a lot of stress to wild animals. Note that if it affects the population sizes of different species, it can be very difficult to know if an intervention is overall good or bad.
Changing population sizes
I believe that it can sometimes be easier to change the population sizes of small wild animals than to change the population sizes of farmed animals. The problem is that it’s more difficult to know whether the change is good or bad for animal welfare.
The lives of most farmed animals are clearly worse than non-existence. In contrast, it’s unclear whether wild animals’ lives are good or bad (see this talk). My best guess is that in the short term, it would be better for WAW to decrease the overall population of wild animals. However, this usually goes against other human interests (like environmental protection), while decreasing farmed animal populations generally goes along with these other human interests as it decreases our environmental footprint. This makes working on reducing wild animal numbers much more morally ambiguous and controversial or even politically infeasible.
Also, often when we decrease the population of one species, the population of some other species increases. This applies to both farmed and wild animal interventions and complicates any analysis of short-term effects a lot. I’d say that the effects of farmed animal welfare interventions on WAW are maybe a bit easier to predict, but we still can’t predict the total effect on welfare with reasonable certainty. This has led me to analysis paralysis, and I don’t know what to do about that.
Perhaps we could speculate that some r-selected species are worse off than some K-selected species. And perhaps if we grew one crop or tree instead of another, or fished for different species, the ratio between the populations of these species would change. I simply didn’t have the expertise to try to find such an intervention; maybe someone else will. Perhaps a more promising idea is trying to eradicate some parasite that causes a lot of suffering (e.g., the screwworm).
Reducing suffering during death
Since humane slaughter interventions typically affect animals for a short time, my personal intuition is that they are less important. But it depends on how you weigh intense suffering during death versus more prolonged but less intense suffering.
Reducing the suffering of farmed animals during slaughter might be more tractable because:
Slaughter reforms for farmed animals and wild-caught fish should be cheaper per animal to implement and research because these animals die under human control. That said, animal advocates are more likely to lobby governments to make changes than to make the changes themselves, so it’s not easy to say which is more promising.
Different methods of wild animal slaughter may have complicated indirect effects that are difficult to account for. For example, chemical insecticides may have different effects on non-target species compared to insecticides that introduce predators, parasitoids, or pathogens to suppress insect populations. Unless we decide to ignore indirect effects to avoid analysis paralysis, this makes research more difficult. This may be less of a problem for liminal animals as city ecosystems are simpler.
On the other hand, since most animals humans kill are wild, the ultimate possible scale of working on wild animals is much higher.
I didn’t look into humane insecticides much because the author of this report told me in 2020 that there isn’t enough research on causes of insect suffering and nontarget effects of interventions to recommend an action that a non-research charity could center itself on. I took their word for it.
For wild or liminal animals, another option is to never let animals that we would kill come into existence, perhaps by cutting off the food supply or informing people how to prevent pest infestations. I looked a bit into preventing termite infestations and it seemed doable but not a priority, partly because they might be living good lives most of the time.
Closing thoughts
I only give limited weight to the sort of high-level reasoning I did above; there could be interventions to which my reasoning does not apply. Also, it’s not that surprising that I failed to find promising WAW interventions since I don’t have expertise in ecology or other relevant fields. But the few other researchers who have been researching this for a while have also failed to find very promising interventions so far, and they think it’s unlikely that they will find them.
Also, I remember GiveWell writing a long time ago that when they look deeper at some cause area, they almost always conclude that it’s less promising than they originally thought, so maybe this is what happened to me. That said, when I looked into farmed animal welfare (especially welfare reforms), I came away thinking that it’s much more promising than I thought.
Opinions expressed here are solely my own. I’m not currently employed by any organization. I wrote this text about a year ago but didn’t get around to publishing it until now, so something might have changed since then.
Explanation of the Optimizer’s Curse (adapted from this post): Suppose you weigh ten identical items with very inaccurate scales. The item that is the heaviest according to your results is simply the item whose weight was the most overestimated by the scales. Now suppose the items are similar but not identical. The item that is the heaviest according to the scales is also the item whose weight is most likely an overestimate.
Similarly, suppose that you make very approximate cost-effectiveness estimates of ten different interventions. The charity that seems the most cost-effective according to your estimates could seem that way only because you overestimated its cost-effectiveness, not because it is actually more cost-effective than others. Consequently, even if we are unbiased in our estimates, we might be too optimistic about interventions that seem the most cost-effective. The more uncertain cost-effectiveness estimates are, the stronger the effect of the optimizer’s curse is. Hence we should prefer interventions whose cost-effectiveness estimates are more robust. Since cost-effectiveness estimates of WAW interventions tend to be very uncertain, this is an argument against WAW interventions. More on optimizer’s curse can be read in Karnofsky (2016).
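To make the optimizer’s curse concrete, here is a small simulation sketch of my own (the numbers are arbitrary): the intervention with the highest noisy estimate tends to be overestimated, and more so when the estimates are noisier.

import random

def average_overestimate(noise_sd, n_items=10, trials=100_000):
    # Average amount by which the seemingly best item's estimate exceeds its true value.
    total = 0.0
    for _ in range(trials):
        true_values = [random.gauss(0, 1) for _ in range(n_items)]        # similar but not identical items
        estimates = [v + random.gauss(0, noise_sd) for v in true_values]  # noisy "scales"
        best = max(range(n_items), key=lambda i: estimates[i])            # pick the item that looks best
        total += estimates[best] - true_values[best]
    return total / trials

for sd in (0.5, 1.0, 3.0):
    print(f"noise sd {sd}: the 'best' item is overestimated by about {average_overestimate(sd):.2f} on average")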
Thank you for sharing this!
If I’m reading correctly, you found that many researchers thought “it’s unlikely that they will find [cost-competitive WAW interventions]” which surprised me, since it seems like you found reducing aquatic noise to be borderline already. Did you just mean in the very near future? Or do many researchers think it’s unlikely we will ever identify such interventions?
Good question :) I researched aquatic noise because that was the only intervention where it seemed at least possible for me to estimate cost-effectiveness. But the estimate ended up being so uncertain that it didn’t provide much information. Science simply doesn’t have answers yet. I expect it to be the same for most WAW interventions. That is, I expect there to be huge uncertainty about how cost-effective they are (and whether they are even good for WAW when all things are considered), and in the best-case scenario, they might be as cost-effective as farmed animal welfare interventions. But we might never find out if we are in the best-case scenario. It’s difficult for me to say that aquatic noise is not worth looking into further, because I spent something like six months researching it, but I think that for now there are enough better ideas in the farmed animal welfare space, so I don’t think we should pursue it. I can see WAW interventions being worth it if the animal advocacy movement gets a lot more funding or if, I don’t know, very advanced artificial intelligence could somehow be used to figure out all the ecological consequences. Assuming AI does not change everything, I’d give a 15% chance that in the next 15 years, someone will find a WAW intervention that to me would seem “directly cost-effective (10%+ as cost-effective in expectation as chicken welfare reforms)” and “non-controversial (>40% support and <30% oppose in a US poll).” I’m not counting WAW interventions that have to do with the far future or with changing the values of society here.
NOTE: I edited the shortform text to match what I say here. I used to say that I was on the fence about whether EA resources should be spent on reducing aquatic noise.
Hi, I’m curious what made you write this as a shortform rather than a post?
Hey, I will link to this shortform from a post that I plan to publish this week together with a short summary, so I didn’t feel a need to draw more attention to these thoughts. Maybe I should’ve still posted it as a post, as this is not what shortforms are for, but I guess I’m a bit shy about posting :)
Thanks for the details! Just curious. It does feel post-y, but I can understand. Maybe posting as a personal blog (not on the frontpage) or on the animal welfare topic but not on the frontpage would have worked for what you are looking for
What’s a realistic, positive vision of the future worth fighting for?
I feel lost lately when it comes to how to do altruism. I keep starting and dropping various little projects. I think the problem is that I just don’t have a grand vision of the future I am trying to contribute to. There are so many different problems, and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately, as if humanity is continuously not living up to my expectations. Trump’s victory, the war in Ukraine, the increasing scale of factory farming, the lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high, and I am mourning that, but I don’t know what’s on the other side. There are so many things that I don’t want to happen that I’ve lost sight of what I do want to happen. I don’t want to be motivated solely by fear. I want some sort of realistic, positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
For what it’s worth, I’m skeptical of approaches that try to design the perfect future from first principles and make it happen. I’m much more optimistic about marginal improvements that try to mitigate specific problems (e.g. eradicating smallpox didn’t cure all illness.)
How much we can help doesn’t depend on how awful or how great the world is: we can save the drowning child whether there are a billion more who are drowning or a billion more who are thriving. To the drowning child the drowning is just as real, as is our opportunity to help.
If you feel emotionally down and unable to complete projects, I would encourage you to try things that work on priors (therapy, exercise, diet, sleep, making sure you have healthy relationships) instead of “EA specific” things.
There are plenty of lives we can help no matter who won the US election and whether factory farming keeps getting worse, their lives are worth it to them, no matter what the future will be.
Thanks. Yeah, I now agree that it’s better to focus on what I can do personally. Someone made a good point in a private message that having a single vision leads to utopian thinking, which has many disadvantages. It reminded me of my parents’ stories about the Soviet Union, where great atrocities against currently living humans were justified in the name of creating a great communist future.
Grand ideologies and religions are alluring, though, because they give a sense of being a part of something bigger. Like you have your place in the world, your community, which gives a clear meaning to life. Being a part of the Effective Altruism and animal advocacy movements fulfils this need in my life somewhat, but incompletely.
The person in the private message also told me about the serenity prayer: “grant me the serenity to accept the things I cannot change; courage to change the things I can; and wisdom to know the difference.”
And just to be clear, I am doing quite well generally. I think I used to repress my empathy because it just feels too painful. But it was controlling me subconsciously by constantly nagging me to do altruistic things. Nowadays, I sometimes connect to my empathy and it can feel overwhelming like yesterday. But I think it’s for the better long-term.
I think the sort of world that could be achieved by the massive funding of effective charities is a rather inspiring vision. Natalie Cargill, Longview Philanthropy’s CEO, lays out a rather amazing set of outcomes that could be achieved in her TED Talk.
I think that a realistic method of achieving these levels of funding is Profit for Good businesses, as I lay out in my TEDx Talk. I think it is realistic because most people don’t want to give something up to fund charities, as donating would require, but if they could help solve world problems by buying products or services they want or need, of similar quality and at the same price, they would.
I love the idea in your talk! I can imagine it changing the world a lot and that feels motivating. I wonder if more Founders Pledge members could be convinced to do this.
One possible way of thinking about this, which might tie your work in smaller battles into a ‘big picture’, is if you believe that your work on the smaller battles is indirectly helping the wider project. e.g. by working to solve one altruistic cause you are sparing other altruistic individuals and altruistic resources from being spent on that cause, increasing the resources available for wider altruistic projects, and potentially by increasing altruistic resources available in the future.[1]
Note that I’m only saying this is a possible way of thinking about this, not necessarily that you should think this (for one thing, the extent to which this is true probably varies across areas, depending on the inter-connectedness of different cause areas in different ways and their varying flowthrough effects).
As in this passage from one of Yudkowsky’s short stories:
FWIW: definitely not a world vision, but Ozy’s blog is the most heart-warming thing I’ve read after the recent US elections.
It might be too hard to envision an entire grand future, but it’s possible to envision specific wins in the short and medium term. A short-term win could be large cage-free egg campaigns succeeding; a medium-term win could be a global ban on caged layer hens. Similarly, a short-term win for AI safety could be a specific major technical advance or significant legislation passed; a medium-term win could be AGIs coexisting with humans without the world descending into chaos, while still having massive positive benefits (e.g., a cure for Alzheimer’s).
Maybe this is a cop-out, but I am thinking more and more of a pluralistic and mutually respectful future. Some people might take off on a spaceship to settle a nearby solar system. Some others might live lower-tech in eco villages. Animals will be free to pursue their goals. And each of these people will pursue their version of a worthwhile future with minimal reduction in the potential of others to fulfill theirs. I think anything else will just lead to oppression of everyone who is not on board with some specific wild project. I think most people’s dreams of a future are pretty wild and not something I would want for myself!
I don’t have a suggestion, but I’ve been encouraged and “heartwarmed” by the diverse range of responses below. Cool to see people with different ways of holding their hope and motivation, whether it’s enough for us to buy a bed net tomorrow or we do indeed have grander plans and visions, or we’re skeptical about whether “future designing” is a good idea at all.
Sorry to hear that you’re having a rough time!
When I’m feeling like this, I find that the only thing that helps is actually finishing a project end-to-end so I feel momentum.
Something I intrinsically think is valuable but wasn’t going to get done otherwise. (Like improving wikis or cleaning up a mess in a park).
Going as small as possible while still being satisfying helps remind me that there are things within my control and people around me that I can help.
I also liked this post from FarmKind
https://www.linkedin.com/posts/aidan-alexander_𝐌𝐲-𝐌𝐚𝐬𝐭𝐞𝐫𝐩𝐥𝐚𝐧-𝐭𝐨-𝐄𝐧𝐝-activity-7262449165924712451-lb7T?utm_source=share&utm_medium=member_android
I like Bostrom’s Letter from Utopia
Edge Esmeralda seems like a great bottom up experiment in a nontrivially better way of living together: https://www.edgeesmeralda.com/
A marginal rather than transformative revolution if you will.
On some level I think the answer is always the same, regardless of the headwinds or tailwinds: you do what you can with your limited resources to improve the world as much as you can. In some sense I think slowing the growth of factory farming in a world where it was growing is the same as a world where it is stagnant and we reduce the number of animals raised. In both worlds there’s a reduction in suffering. I wrote a creative piece on this exact topic here if that is at all appealing.
I also think on the front of factory farming we focus too much on the entire problem, and not enough on how good the wins are in and of themselves.
I find videos about space colonization pretty inspiring. Of course, space colonization would ideally be paired with some level of suffering abolition, so we aren’t spreading needless suffering to other planets. Space colonization could help with political discord, since people with different ideas of a “good society” can band together and peacefully disperse through the solar system. If you think traveling the world to experience different cultures is fun, I expect visiting other planets to experience different cultures will be even better. On the AI front, rumor has it that scaling is slowing down… that could grant more time for alignment work, and increase the probability that an incredible future will come to pass.
EAG and covid [edit: solved, I’m not attending the EAG (I’m still testing positive as of Saturday)]
I have many meetings planned for the EAG London that starts tomorrow but I’m currently testing very faintly positive for covid. I feel totally good. I’m looking for a bit of advice on what to do. I only care to do what’s best for altruistic impact. Some of my meetings are important for my current project and trying to schedule them online would delay and complicate some things a little bit. I will also need to use my laptop during meetings to take notes. I first tested positive on Monday evening, and since then all my tests were very faintly positive. No symptoms. I guess my options are roughly:
Attend the conference as normal, wear a mask when it’s not inconvenient and when I’m around many people.
Only go to 1-1s, wear a mask when I have to be inside but perhaps not during 1-1s (I find prolonged talking with a mask difficult)
Don’t go inside, have all of my 1-1s outside. Looking at google maps, there doesn’t seem to be any benches or nice places to sit just outside the venue, so I might have to ask people to sit on the floor and to use my laptop on the floor, and I don’t know how I’d charge it. Perhaps it’s better not to go if I’d have to do that.
Don’t go. I don’t mind doing that if that’s the best thing altruistically.
In all cases, I can inform all my 1-1s (I have ~18 tentatively planned) that I have covid. I can also only attend on days when I test negative in the morning.
This would be the third EAG London in a row where I’d cancel all my meetings last minute because I might be contagious with covid, although I’m probably not and I feel totally good. This makes me a bit frustrated and biased, which is partly why I’m asking for advice here. The thing is that I think that very few people are still that careful and still test but perhaps they should be, I don’t know. There are vulnerable people and long covid can be really bad. So if I’m going to take precautions, I’d like others reading this to also test and do the same, at least if you have a reason to believe you might have covid.
I think you’re likely to be contagious, even though you’re asymptomatic.
In my experience, many people express gratitude when I inform them I’m infected and offer to cancel meetings.
In terms of advice from the EA Global team: we don’t have a strict policy on covid, and you can use your best judgement. You may wish to test/mask.
I (Iz) would personally ask that you inform your 1:1 meeting partners and that you aren’t unmasked inside whilst still testing positive.
Thanks,
Iz
So sad that this happens to you again :(
Thanks for taking the safety measures!
This obviously doesn’t help you now, but in case you weren’t already aware, you can buy more vaccine doses now
Might it be possible to inform your conversational partners and get their consent to either A) use some type of transcription software (such as Otter.ai), or B) use your smartphone to record the audio of the conversation so that you can take notes manually later on? This would allow you to focus fully on the conversation, but it would also remove the limitation of typing on your laptop and would thus allow you to have walking meetings.
Thanks, but in this case there are other reasons why I need to use the laptop and ask the people I meet and survey to look at my laptop. I guess I mostly want to gauge how big of a deal people think covid is nowadays.
Ugh bad luck Saulius, I totally feel your frustration. I’ve had a few covid-bouts where I tested positive for over 2 weeks. It feels really frustrating to have to miss out on important things when it’s unclear that you’re even infectious, and also unclear that others are taking similar precautions.
It sounds like you’ve made your decision but fwiw, in your position I’d tell people about my covid status and offer them outdoor meetings if they were comfortable with that.
Research grants with outcome-based payouts
If I 1) had savings that covered over a year of my living expenses, 2) wasn’t already employed at an EA think tank, and 3) wanted to do EA research independently, I would probably apply to EA Funds to do research on unspecified topics (if they would allow me to do that). I would ask them to give me funds not now, but after the research period is over (let’s say 6 months). At the end of the research period, I would produce a text that shows instances where I think I had impact and includes reasoning for why what I did may have had impact. Note that this could include not just published articles, but also comments or in-person communications with trusted advocates that changed how a certain organization does something, reviews of others’ work, Wikipedia article edits, etc. The amount of funds that I would receive would depend on the EA Funds manager’s opinion on how good or impactful my work was (or how good a chance what I did had of being impactful). I imagine that there would be pre-agreed sums of money the manager could choose from. E.g.:
No significant effort to achieve impact - $0
Significant effort to achieve impact in ways that were plausible but most likely didn’t materialize - $8,000
Some expected impact - $15,000
High expected impact - $25,000
Very high expected impact - $40,000
Before the research period, there could be a consultation with the EA fund manager who is going to evaluate my work about what kind of work they think might be promising. Such consultations could also happen during the research period. Also, the research topics wouldn’t need to be something completely unspecified. E.g., it could be “something related to welfare reforms for farmed animals” or it could also be a fully specified topic.
I think that this is better than the traditional approach of applying for a grant to research a specific topic for the following reasons:
More direct motivation to do good. In the traditional approach, the incentive is to create a grant proposal that looks good on the surface (since I imagine that EA funds managers don’t have time to investigate grants very deeply). Then the financial incentive to do a good job can be less clear.
You can switch research directions on the go. This is good for three related reasons:
It allows you to switch to more impactful research directions. I previously wrote this comment “To me, the main disadvantage of being funded through a fund is that I would be tied to a research topic and a timeframe in which I would have to complete the project (or at least that’s how I imagine it). Working at an organization allows me much more flexibility. I can begin researching a topic, see that it’s not as tractable as I thought, and then drop it. Alternatively, I can increase the scope of the project, or change it into something different, depending on what I feel will be more impactful. All of these scenarios happen often because the more I work on the project, the more informed I am about the most promising directions of that project.” This type of grant promise would allow flexibility for research being funded by a fund.
Relatedly, the ability to work on whatever sparks your interest in the moment, and the freedom to do whatever you want. I sometimes read something on this forum and want to read more about it, or spend all day writing a comment on it. When I do that, I’m usually very productive because I’m working on what is interesting to me at the time. If I had a grant to do research on a specific topic, then I would be less likely to do any of this because I would feel pressure to research whatever I was paid to research.
Whenever I need to work on something, I don’t want to do it. And when I try to do it anyway, I am often less creative as I just want to get it over with. I’ve talked with some other people who had a similar issue.[1] I think that the structure I proposed would partly but not fully solve this issue for me at least.
Here are cons I can think of:
If you can work on anything, it can lead to too much indecisiveness about what to do. Sometimes it’s good when it’s decided what you should do and you don’t need to think about it.
It might create a nebulous pressure to do good that is difficult to act on, which could lead to stress.
One could fear that their work was impactful but conflicted with the views of the assigned EA Funds manager, and so would be rated poorly. In that case, maybe other EA Funds managers could get involved, but that wouldn’t fully mitigate the problem.
I’m not going to compare this option with working for an EA research organization but I think that there are cons and pros compared to that too. I imagine that this sort of thing could be the right choice for some people.
I thought about this only for two hours, so I’m probably missing some important considerations. Also, I don’t know if this is a new idea; I haven’t checked. It reminds me of certificates of impact, but it’s a bit different. If EA Funds managers thought that this was a good idea, they could encourage people to apply in this way, and maybe make a separate application form for that.
Any thoughts?
To illustrate, I had multiple situations where I worked on some EA side project and was making a lot of progress in my free time; then my employer allowed me to work on it during my work time, and my rate of progress per hour slowed down a lot. I think that for me this is because when I have to do something, I am motivated by negative incentives (fear of failure and hence of losing my job, or of doing a job that is below what is expected of me and hence of people thinking less of me (perfectionism)) more than by positive incentives (making an impact and impressing other people). This talk made me see that.
Interesting idea. I think this could be useful in cases where people know that they don’t have the credibility to receive a direct grant.
I guess this would also not necessarily have to be research. E.g., a grant for corporate campaigns where payout depends on the commitments they won. I imagine multiple problems with this and it’s probably a bad idea but perhaps it’s worth consideration in some cases.
I was thinking about ways to reduce political polarization and thought about AI chatbots like Talkie. Imagine an app where you could engage with a chatbot representing someone with opposing beliefs. For example:
A Trump voter or a liberal voter
A woman who chose to have an abortion or an anti-abortion activist
A transgender person or someone opposed to transgender rights
A person from another race, religion, or a country your country might be at odds with
Each chatbot would explain how they arrived at their beliefs, share relatable backstories, and answer questions. This kind of interaction could offer a low-risk, controlled environment for understanding diverse political perspectives, potentially breaking the echo chambers reinforced by social media. AI-based interactions might appeal to people who find real-life debates intimidating or confrontational, helping to demystify the beliefs of others.
The app could perhaps include a points system for engaging with different viewpoints, quizzes to test understanding, and engaging fictional scenarios to start conversations. Chatbots should ideally be created in collaboration with people who actually hold these views, ensuring authenticity. Or maybe chatbots could even be based on concrete, actual people who could hold AMAs. Ultimately, users might even be matched with real people of differing beliefs for video calls or correspondence. If done well, such an app could perhaps even be used in schools, fostering empathy and reducing division from an early age.
Personally, I sometimes ask ChatGPT to write a story of how someone came to have views I find difficult to relate to (e.g., how someone might become a terrorist), and I find that very helpful.
I was told that creating chatbots is very easy. It’s definitely easy to add them to Talkie; there are so many of them there. Still, to make this impactful and good, it needs a lot more than that. I don’t intend to build this app. I just thought the idea is worth sharing. If you think it’s a good idea, feel free to share it somewhere where someone might pick it up, or to just do it yourself.
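To illustrate the basic mechanics, here is a minimal sketch of what such a persona chatbot could look like, using the OpenAI Python client; the persona text, the model name, and everything else in it are placeholder assumptions of mine, not a real design:

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Placeholder persona; a real app would co-write this with people who actually hold the views.
persona = (
    "You are Alex, a thoughtful voter who supported the other side in the last election. "
    "Explain how you arrived at your views, share your backstory, and answer questions "
    "patiently and without hostility."
)

messages = [{"role": "system", "content": persona}]
while True:
    user_input = input("You: ")
    if not user_input:
        break
    messages.append({"role": "user", "content": user_input})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("Chatbot:", answer)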
That’s a cool idea. Presumably you would need a relatively uncensored chatbot—when I recently tried to get several to produce arguments in favour of Trump policies they all refused.
This feels like it could easily be counterproductive.
A chatbot’s “relatable backstory” is generative fiction, and the default “Trump supporter” or “liberal voter” is going to be a vector of online commentary most strongly associated with Trumpiness or liberalism (which tends not to be the most nuanced...), with every single stereotyped talking point trotted out to contradict you. Yes, this can be tweaked, but the tweaking is just toning it down or adding further stereotypes, not creating an actual person.
Whereas the default person that doesn’t agree with your politics is an actual human being, with actual life experience that has influenced their views, probably doesn’t actually hold the views that strongly or agree with literally every argument cited in favour of $cause, is probably capable of changing the subject and becoming likeable again, and hey, you might even be able to change their mind.
So if you’re talking to the first option rather than the second, you’re actually understanding less.
I don’t think it helps matters for people to try to empathise with (say) a few tens of millions of people who voted for the other side—in many cases because they didn’t really pay a lot of attention to politics and had one particularly big concern—by getting them to talk to a robot trained on the other side’s talking points. If you just want to understand the talking points, I guess ChatGPT is a (heavily filtered for inoffensiveness) starting point, or there’s a lot of political material with varying degrees of nuance already out there on the internet written by actual humans...
One possible way to get most of the benefits of talking to a real human being while getting around the costs that saulius mentions is to have real humans serve as templates for an AI chatbot to train on.
You might imagine a single person per “archetype” to start with. That way if Danny is an unusually open-minded and agreeable Harris supporter, and Rupert is an unusually open-minded and agreeable Trump supporter, you can scale them up to have Dannybots and Rupertbots talk to millions of conflicted people while preserving privacy, helping assure people they aren’t judged by a real human, etc.
Why is this better than actually talking to someone with the opposing viewpoint?
More accessible to everyone. Those conversations often don’t happen in real life.
Lower stakes; you can just speak your mind about the prejudices you have, with no consequences.
The chatbot can listen and explain things better and with more “empathy”, and won’t get tired of answering the same questions again and again.
You can make up different engaging situations and talk to people like candidates for the presidency.
It can be a bit like a warm-up to talking to a real person.
People already use chatbots, and they will become much better. I imagine they will eventually also incorporate audio and video better; it will be like talking to a real person, very engaging. I want that technology to be used for good.
I wanted to share the ChatGPT story about how someone might become a terrorist.
Prompt: write a story about a boy who becomes a terrorist. Make me relate to him, and feel empathy for him. (there was some context)
ChatGPT: Yusuf was 15, living in a small, impoverished town. Yusuf loved soccer, helping his father at his fruit stall, and, above all, making his little sister laugh.
Life was hard but manageable. Then came the soldiers. When they swept through his village, everything changed. Yusuf’s father tried to resist, and they took him away. They left Yusuf’s family without its provider. His mother’s health worsened as she worked herself to exhaustion, trying to provide for him and his sister. Soon, Yusuf had to drop out of school to keep the family afloat. His friends, his dreams, and even his laughter faded away.
Life became bleak and colorless. One day, a man named Hakim, a former student from Yusuf’s school, visited him at the stall. Hakim was charismatic and confident, a kind of big brother figure Yusuf had never had. He asked Yusuf how he was doing, listened as Yusuf poured out his frustrations, and told him he wasn’t alone. Hakim said he worked for a “freedom movement” that fought for people like them, people whose fathers had been unjustly taken, whose families struggled under oppression. Yusuf felt a spark of hope—maybe he could do something for his family and others like them.
Over the weeks, Hakim kept visiting, always with stories of their people’s suffering and injustices, stories Yusuf hadn’t heard from the media. He talked about the honor of defending one’s family and land and how the so-called “enemies” were responsible for everything Yusuf’s family had endured. Soon, Hakim invited Yusuf to attend group meetings. There, Yusuf met boys his own age who shared his anger and grief. The group felt like a family—a group of brothers all dedicated to the same cause.
Initially, Yusuf felt uneasy about the rhetoric he heard; it was filled with anger, sometimes hatred. But when he thought of his family’s suffering, he wondered if the anger wasn’t justified. One of the older boys took Yusuf aside one night and told him about his own family’s hardships, about how they had found a sense of purpose fighting for justice. The older boy looked at him and said, “You’re one of us, Yusuf. You’re strong, and your family deserves justice.”
Yusuf wanted to be strong. He wanted his life to matter. One day, Hakim gave him a task: distribute supplies to people in need. Yusuf was proud. But gradually, these tasks grew darker. They began asking Yusuf to carry messages, then to handle small weapons, telling him it was “just in case” they had to defend themselves. Though he felt fear creeping in, the family-like bond he felt with these people made him want to prove himself.
One night, Yusuf’s mother noticed his distraction and tried to talk to him. She asked him if he was safe, if he was happy. He looked at her tired eyes, her worry for him, and wanted to reassure her. But instead, he found himself saying, “Mom, I’m doing this for us. For Dad.” She looked at him with sadness but didn’t argue; she didn’t have the strength.
The next day, Hakim told him it was time to make a true commitment—to sacrifice himself for a cause bigger than himself, to avenge his father, protect his family, and “become a hero.” Yusuf didn’t want to die; he wanted to live. But his hope, his friends, and his purpose were now all tied to the group. His identity had blurred with theirs, his grief transformed into a powerful sense of duty.
And so, on the day of his mission, Yusuf went forward, full of fear but also with a strange sense of peace. He believed he was bringing honor to his family, standing up to a world that had left him invisible, lost, and alone.
I sometimes meet people who claim to be vegetarians (don’t eat meat but consume milk and eggs) out of the desire to help the animals. If appropriate, I show them the http://ethical.diet/ website and explain that the production of eggs likely requires more suffering per calorie than most of the commonly consumed meat products. Hence, if they care about animals, avoiding eggs should be a priority. If they say that this is too many food products to give up, I suggest that perhaps instead of eating eggs, they could occasionally consume some beef (although that is bad for the environment). I think that the production of beef requires less suffering per calorie, even though I’m unsure how to compare suffering between different animals. In general, I’m skeptical about dietary change advocacy, but my intuition is that talking about this with vegetarians in situations where it feels appropriate is worth the effort. But I’m uncertain and either way, I don’t think this is very important.
Q: Has anyone estimated what is the risk of catching covid at the EAG London this year? Is it more like 5%, 20%, 50%, or 80%? I still haven’t decided whether to go (the only argument for not going being covid) and knowing what is the risk would make it a lot easier. Travelling is not a concern since I live in London not that far from the venue.
Hi Saulius, I’ve done 3 very basic estimates here:
https://docs.google.com/spreadsheets/d/1C6lU4klgisqG150-yR_jZjt253sVrgp2umIbgkUbKbU/edit#gid=0
To get e.g. more than 20% probability, it seems like you’d have to make some very bad assumptions (weirdly high base rates of Covid amongst presumptive attendees, combined with incompetence or malice when it comes to testing). Seems more like 1-5% risk.
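For intuition, here is a back-of-the-envelope version of this kind of estimate; every number below is an illustrative assumption of mine, not taken from Matt’s spreadsheet:

# Rough sketch of the probability of catching covid at a multi-day conference.
prevalence = 0.02                # assumed share of attendees currently infectious
stay_home_if_positive = 0.5      # assumed fraction of infectious people who test and stay away
transmission_per_contact = 0.05  # assumed chance that one close contact with an infectious person infects you
close_contacts = 30              # assumed number of close contacts (1-1s, meals, queues)

p_contact_is_infectious = prevalence * (1 - stay_home_if_positive)
p_infected = 1 - (1 - p_contact_is_infectious * transmission_per_contact) ** close_contacts
print(f"Estimated risk: {p_infected:.1%}")  # roughly 1.5% with these made-up inputs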
Thank you Matt!! After reading your answer I bought the ticket :)
Thanks for the suggestion!
We’ve added a spreadsheet of microcovid estimates for a few different scenarios to our COVID Protocol. You can see the whole protocol here. Link to the estimates here.
See you soon :)
A tip for writing EA forum posts with footnotes
First, press on your nickname in the top right corner, go to Edit Settings, and make sure that the checkbox Activate Markdown Editor is checked. Then write the post in Google Docs and use the Google Docs to Markdown add-on to convert it to markdown. If you then paste the resulting markdown into the EA forum editor and save it, you will see your text with footnotes. It might also have some unnecessary text that you should delete.
Tables and images
If you have images in your posts, you have to upload them somewhere on the internet (e.g., https://imgur.com/) and write code like
![imageName](https://i.imgur.com/yMc2e8x.jpg "imageName")
in your markdown. Of course, the image address should be changed to your image. Currently, the only way to add tables is to make a screenshot of a table and add an image of it. As I understand it, there will be a new EA forum editor some time soon and all this will not be needed anymore, but for now this is how I make my EA forum posts.
If you’ve put the images in a google doc, and made the doc public, then you’ve already uploaded the images to the internet, and can link to them there. If you use the WYSIWYG editor, you can even copypaste the images along with the text.
I’m not sure whether I should expect google or imgur to preserve their image-links for longer.
Thanks for the gdocs to markdown tip. I didn’t know I could do that, but it’ll make writing posts for LW and EAF more convenient!
What’s the syntax for footnotes?
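In the Markdown editor, you write something like:
Main text[^1]
[^1]: footnote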
This displays as:
Main text[1]
footnote
Thanks!
You can also write “in-line” footnotes: See this guide to footnote syntax.
There was an interesting discussion on whether EA organizations should reveal the authors of posts they publish here. You may want to check it out if this is relevant to you (not just the linked comment, but also the replies.)
Why don’t we fund movies and documentaries that explore EA topics?
It seems to me that the way society thinks about the future is largely shaped by movies and documentaries. Why don’t we create movies that shape these views in a way that’s more realistic and useful? E.g., I haven’t read the discussion on whether Terminator is or is not a good comparison for AI risks, but it’s almost certainly not a perfect comparison. Why don’t we create a better one that we could point people to? Something that would explore many important points. Now that EA has more money, that seems plausible. In 2021, OpenPhil gave grants totalling $77.6 million for work on the potential risks from Advanced AI. The budget of a movie with an all-star cast and special effects like Don’t Look Up is $75 million. But the difference is that the movie might make money, maybe even more money than its budget. It’s not obvious to me that even something extravagant like this would be a bad investment, because it might make it easier to make progress on AI policy and other things for years to come. Of course, movies wouldn’t have to be so high-budget, especially at the start. And a better approach would probably be creating documentaries. Maybe a series like Vox Explained for various EA issues or for longtermism. I think it could become popular because some of the EA ideas about how the far future might look seem more interesting than a lot of sci-fi, and also more novel to most people. And this is not just about AI. E.g., I can imagine a nuanced documentary about wild animal suffering that also talks about why we should think twice before spreading nature to other planets.
Anyway, this is just a shower thought, I imagine that this has been discussed before but just wanted to post it in case it hasn’t been discussed enough. And note that I never worked on AI so I don’t know what I’m talking about in that part of my text.
Check out #33 on the Future Fund list of project ideas:
https://ftxfuturefund.org/projects/
I think people have been working with Kurzgesagt and probably others.
Maybe a consideration is that these sorts of collaborations are harder to set up than it seems.
Basically, execution and alignment seems important and hard.
Even if there is available media talent and available funding, setting up the right aesthetic (in more than one sense) and content seems difficult.
It’s unclear, but there may be downside risk (from looking silly or condescending).
This may not pertain to Saulius’s point, which isn’t focused on outreach, but people have cringed at, or even vehemently opposed, certain kinds of involvement, like US federal agencies getting involved in funding AI safety. So directly promoting the movement (as opposed to underlying ideas or topics) isn’t seen as robustly good, but this is highly unclear.
ah, thanks so much for pointing this out, happy to see that funders already have this idea on their radar and I don’t need to do anything :)
[More responses are not necessary right now, thank you very much to all who filled it]
I’d be grateful if some people could fill in this survey: https://forms.gle/RdQfJLs4a5jd7KsQA The survey will ask you to compare different intensities of pain. In case you’re interested in why you might want to do it: you’ll be helping me to estimate plausible weights for the different categories of pain used by the Welfare Footprint Project. This will help me to summarise their conclusions into easily digestible statements like “switching from battery cages to cage-free reduces the suffering of hens by at least 60%” and with some cost-effectiveness estimates. Thanks ❤️
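To show the kind of calculation the weights would feed into, here is a toy sketch; all the numbers in it are made-up placeholders, not the Welfare Footprint Project’s estimates or the survey’s results:

# Toy example: turning pain-hour estimates plus category weights into one "% suffering reduced" figure.
# Every number here is a placeholder, NOT a Welfare Footprint Project estimate.
pain_hours = {                                  # hours per hen over her lifetime
    "annoying":     {"cage": 2000, "cage_free": 1500},
    "hurtful":      {"cage": 1000, "cage_free": 500},
    "disabling":    {"cage": 300,  "cage_free": 100},
    "excruciating": {"cage": 1,    "cage_free": 0.5},
}
weights = {"annoying": 1, "hurtful": 10, "disabling": 100, "excruciating": 10000}  # hypothetical survey-derived weights

def weighted_suffering(system):
    return sum(weights[category] * hours[system] for category, hours in pain_hours.items())

cage, cage_free = weighted_suffering("cage"), weighted_suffering("cage_free")
print(f"Suffering reduced by switching to cage-free: {(cage - cage_free) / cage:.0%}")  # ~59% with these placeholders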
I’m very interested in what you find out! It’d be great to compare the results of the survey with the predictions of the heavy-tailed valence hypothesis (see here or here).
Completed this, but was difficult!
If I were to read one of the EA-related books (e.g., Doing Good Better, The Most Good You Can Do, The Life You Can Save, The Precipice, Superintelligence, etc.), I would consider writing or improving a summary of the book on Wikipedia while reading it, in a way that conveys the main points well. It could help you to digest the book better and help others to understand the ideas a bit. You could do it in English as well as maybe in some other language. To see whether it’s worth putting in the effort, you can check out Wikipedia pageview statistics for the books I mentioned and others here (it doesn’t include some views that come from redirects, though). It seems that the page for Superintelligence is the most viewed one out of these, with an average of 4,597 monthly visitors.
Shower thought, probably not new: some EAs think that expanding the moral circle to include digital minds should be a priority. But the more agents care about the suffering of digital minds, the more likely it is that some agent that doesn’t care about it will use creating vast amounts of digital suffering as a threat to make other agents do something. To make the threat more credible, in at least some cases it may follow through, although I don’t know what the most rational strategy is here. Could this be a dominant consideration that makes the expected value of moral circle expansion negative for negative utilitarians? Because the intensity and the scale of purposefully created suffering could outweigh the incidental suffering that would be prevented in other scenarios by an expanded moral circle.
EDIT: I no longer think that this is a legitimate concern, see my comment below.
This is an interesting idea. I’m trying to think of it in terms of analogues: you could feasibly replace “digital minds” with “animals” and achieve a somewhat similar conclusion. It doesn’t seem that hard to create vast amounts of animal suffering (the animal agriculture industry has this figured out quite well), so some agent could feasibly threaten all vegans with large-scale animal suffering. And as you say, occasionally following through might help make that threat more credible.
Perhaps the reason we don’t see this happening is that nobody really wants to influence vegans alone. There aren’t many strategic reasons to target an unorganized group of people whose sole common characteristic is that they care about animals. There isn’t much that an agent could gain from a threat.
I imagine the same might be true of digital minds. If it’s anything similar to the animal case, moral circle expansion to digital minds will likely occur in the same haphazard, unorganized way—and so there wouldn’t be much of a reason to specifically target people who care about digital minds. That said, if this moral circle expansion caught on predominantly in one country (or maybe within one powerful company), a competitor or opponent might then have a real use for threatening the digital mind-welfarists. Such an unequal distribution of digital mind-welfarists seems quite unlikely, though.
At any rate, this might be a relevant consideration for other types of moral circle expansion, too.
I think it is useful to think about something like this happening in the current world like you did here because we have better intuitions about the current world. Someone could say that they will torture animals unless vegans give them money, I guess. I think this doesn’t happen for multiple reasons. One of them is that it would be irrational for vegans to agree to give money because then other people would continue exploiting them with this simple trick.
I think that the same applies to far future scenarios. If an agent allows itself to be manipulated this easily, it won’t become powerful. It’s more rational to just make it publicly known that you refuse to engage with such threats. This is one of the reasons why most Western countries have a publicly declared policy to not negotiate with terrorists. So yeah, thinking about it this way, I am no longer concerned about this threats thing.
Interestingly, there is at least one instance where this apparently has happened. (It’s possible it was just a joke, though.) There was even a law review article about the incident.
I think this is an interesting point but I’m not convinced that it’s true with high enough probability that the alternative isn’t worth considering.
In particular, I can imagine luck/happenstance shaking out such that agents that are arbitrarily powerful on one dimension are less powerful/rational on other dimensions.
Another issue is the nature of precommitments[1]. It seems that under most games/simple decision theories for playing those games (eg “Chicken” in CDT), being the first to credibly precommit gives you a strategic edge under most circumstances. But if you’re second in those situations, it’s not clear whether “I don’t negotiate with terrorists” is a better or worse stance than swerving.
(And in the former case, with both sides precommitting, a lot of torture will still happen).
[1] using what I assume is the technical definition of precommitment
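To make the Chicken example concrete, here is a toy sketch with made-up payoffs (just an illustration, not anything from the decision-theory literature in particular), showing why the first credible precommitment is so valuable:

# Toy payoff matrix for "Chicken"; the numbers are arbitrary illustrations.
# Entries are (my payoff, their payoff) for (my_move, their_move).
payoffs = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "dare"):   (-1, 1),
    ("dare",   "swerve"): (1, -1),
    ("dare",   "dare"):   (-10, -10),
}

def best_response(their_move):
    # My payoff-maximizing move, taking the other player's move as fixed.
    return max(["swerve", "dare"], key=lambda my_move: payoffs[(my_move, their_move)][0])

print(best_response("dare"))    # "swerve": if they credibly precommit to dare, I do best by giving in
print(best_response("swerve"))  # "dare": so whoever precommits first gets the better outcome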
Interesting!
Other analogies might be human rights and carbon emissions, as used in politics. Say that Party A cares about reducing emissions; then the opposing Party B has an incentive to appear as though they don’t care about it at all, and even to propose actions that would increase emissions, so that they could trade “not doing that” for some concession from Party A. I’m sure that we could find lots of real-world examples of that.
Similarly, some (totalitarian?) regimes might have an incentive to cast major parts of the population as politically unworthy and let them live in very poor conditions, so that other countries who care about that population would be open to a trade in which helping those people would be counted as a benefit for those other countries.