My name is Saulius Šimčikas. I spent the last year on a career break and now I’m looking for new opportunities. Previously, I worked as an animal advocacy researcher at Rethink Priorities for four years. I also did some earning-to-give as a programmer, did some EA community building, and was a research intern at Animal Charity Evaluators. I love meditation and talking about emotions.
I was thinking about ways to reduce political polarization and thought about AI chatbots like Talkie. Imagine an app where you could engage with a chatbot representing someone with opposing beliefs. For example:
A Trump voter or a liberal voter
A woman who chose to have an abortion or an anti-abortion activist
A transgender person or someone opposed to transgender rights
A person from another race, religion, or a country your country might be at odds with
Each chatbot would explain how they arrived at their beliefs, share relatable backstories, and answer questions. This kind of interaction could offer a low-risk, controlled environment for understanding diverse political perspectives, potentially breaking the echo chambers reinforced by social media. AI-based interactions might appeal to people who find real-life debates intimidating or confrontational, helping to demystify the beliefs of others.
The app could perhaps include a points system for engaging with different viewpoints, quizzes to test understanding, and conversation starters set in engaging fictional scenarios. Chatbots should ideally be created in collaboration with people who actually hold these views, ensuring authenticity. Or maybe chatbots could even be based on concrete real people, who could hold AMAs. Ultimately, users might even be matched with real people of differing beliefs for video calls or correspondence. If done well, such an app could perhaps even be used in schools, fostering empathy and reducing division from an early age.
Personally, I sometimes ask ChatGPT to write a story of how someone came to have views I find difficult to relate to (e.g., how someone might become a terrorist), and I find that very helpful. I was told that creating chatbots is very easy. It’s definitely easy to add them to Talkie; there are so many of them there. Still, making this impactful and good would take a lot more than that. I don’t intend to build this app; I just thought the idea was worth sharing. If you think it’s a good idea, feel free to share it somewhere where someone might pick it up, or just do it yourself.
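If someone did want to prototype this, the conversational core is genuinely small. Here’s a minimal sketch assuming the OpenAI Python client; the persona prompt and model name are placeholders I made up, and, as said above, a real app would need much more than this (safety filtering, personas vetted by people who actually hold the views, etc.):

```python
# Minimal sketch of a persona chatbot, assuming the OpenAI Python client
# (openai>=1.0). The persona text and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are a fictional character with a detailed, relatable backstory. "
    "Explain in the first person, with empathy and without hostility, "
    "how you came to hold political views the user may disagree with. "
    "Answer follow-up questions patiently, however often they repeat."
)

# The running conversation, starting with the persona as a system message.
history = [{"role": "system", "content": PERSONA}]

while True:
    user_msg = input("> ")
    history.append({"role": "user", "content": user_msg})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```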
Hmm, yes, that is a scarier headline. But I think that as long as we do it in ways that are also good from a sustainability point of view, we would look really benign. We’d be doing a thing that many people agree is good, just for an unusual reason. There are definitely much more outrageous-sounding scandals going around all the time.
I’m not sure if I agree. The worst-case scenario seems like an article titled, ‘Organization Opposes Irrigation Subsidies Due to Insect Harm, Not Environmental Impact.’ Realistically, would that provoke much anger? It might just come off as quirky or amusing rather than headline material. Often, lobbying arguments don’t fully reveal the underlying motivations. I think it’s common for people and companies to lobby for policies that benefit them financially while framing them as sustainable or taxpayer-friendly.
I haven’t examined screwworm eradication in detail. Someone told me that gene drives are politically infeasible. People working on it told me that it’s totally feasible. ¯\_(ツ)_/¯. Political feasibility is not something I can evaluate. The cost-effectiveness estimate in the linked article seems a lot more conservative than my estimates.
If the screwworm eradication intervention is promising, then maybe there are other promising WAW interventions. Yes, so far the experience of researchers has been that it’s more difficult to find cost-effective WAW interventions than farmed animal interventions. This is partly because it’s so difficult to think about the indirect effects of WAW interventions. But someone told me that “unknown unknowns cancel each other out.” In other words, maybe we don’t need to think about third-order effects because they might be canceled out by fourth-order effects, and so on. I feel very confused about this; I’d like to think more about it at some point.
Also, if we do find WAW interventions, they might have a bigger scale than typical farmed animal welfare interventions. So maybe searching for FAW interventions is easier and more immediately rewarding, but it’s still just as worthwhile to search for WAW interventions.
I think that these interventions proposed by Brian Tomasik could be promising, though I haven’t examined them in detail. They’d reduce insect numbers by doing things like opposing irrigation subsidies using environmental and economic arguments. It’s unclear to me whether insects live net-negative lives, but this makes sense for negative utilitarians, or if you think there’s a >50% chance that they live net-negative lives and you’re OK with the uncertainty. We discussed this in these comments, where we worried about PR risks because our true motivations would differ from the stated ones. But I now know multiple other organizations that do similar things without any problems.
Spreading the idea/meme that we should care about wild animals seems potentially very important. We could have AGI that might be able to do magic-like stuff soon. Or at least unprecedented AI-fuelled economic growth. It seems possible that this would create a situation of abundance where problems like poverty and climate change are fully solved. If society’s values remain as they are, a lot of resources might be used for conservation, species preservation, and so on, with almost no care for the welfare of individual animals. Wildlife could also be spread to other planets with little or no thought given to the vast amount of suffering that would create. All of this seems a bit less likely to happen if we spread the idea of wild animal welfare more. I’d be excited to see things like documentaries about WAW for mainstream audiences. Humane Hancock mentioned a plan for a WAW documentary, and I’m excited about it.
There may or may not be even more cost-effective things to do for the far future, like reducing x-risks and thinking about how to help digital minds. But that doesn’t mean that spreading the idea of WAW is not worthwhile. I don’t think that x-risk and digital mind stuff would get significantly less funding or talent if someone also worked on spreading the idea of WAW. So perhaps there’s not much point in comparing the two :)
Sure (^-^) I’ll do it in the comments below. Note that these are little more than shower thoughts. I’d love some discussion and back-and-forth on these. Perhaps I’ll write a post with conclusions after these discussions.
My views on WAW have changed quite a lot since I wrote this. I think there are things within WAW that could be very promising. I hope to write more about that in the future.
Good points :) You might be interested in this sequence (see the links at the bottom of the summary)
Please don’t treat cost-effectiveness estimates as such an exact science. There are so many subjective choices you make in them. For example, you could say that cage-free campaigns speed up changes by 5 years, or by 50 years. Both choices are defensible, but the result will differ tenfold based on this choice alone.
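To make that concrete, here’s a toy calculation (all numbers made up; this naive model is just to show how one subjective assumption scales the whole result, not how any real estimate is built):

```python
# Toy cost-effectiveness model with made-up numbers, illustrating how the
# "years of change sped up" assumption alone scales the bottom line.

def hen_years_per_dollar(hens_affected, speedup_years, dollars_spent):
    """Hen-years of improved welfare per dollar, under the naive model that
    a campaign simply moves the transition earlier by `speedup_years`."""
    return hens_affected * speedup_years / dollars_spent

# Same campaign, same spending; only the speed-up assumption differs.
optimistic = hen_years_per_dollar(hens_affected=10_000_000,
                                  speedup_years=50, dollars_spent=1_000_000)
conservative = hen_years_per_dollar(hens_affected=10_000_000,
                                    speedup_years=5, dollars_spent=1_000_000)

print(optimistic / conservative)  # 10.0 -- a 10x swing from one defensible choice
```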
It’s impossible to tell without seeing THL’s estimate, but they were probably conservative when estimating their cost-effectiveness. It’s what I would do if I were doing such an estimate for THL. $2.63 per hen impacted is already compelling enough for most people to want to donate. Maybe it’s even better this way because it’s more believable. And if they made it less conservative, someone might criticize them. In any case, THL took down the $2.63 estimate, so that’s a strong reason not to treat it seriously.
Do you have to live in the U.S. (or even in a swing state) to do something useful?
Thanks, but in this case there are other reasons why I need to use the laptop and have the people I meet and survey look at my screen. I guess I mostly want to gauge how big of a deal people think covid is nowadays.
EAG and covid [edit: solved, I’m not attending the EAG (I’m still testing positive as of Saturday)]
I have many meetings planned for the EAG London that starts tomorrow but I’m currently testing very faintly positive for covid. I feel totally good. I’m looking for a bit of advice on what to do. I only care to do what’s best for altruistic impact. Some of my meetings are important for my current project and trying to schedule them online would delay and complicate some things a little bit. I will also need to use my laptop during meetings to take notes. I first tested positive on Monday evening, and since then all my tests were very faintly positive. No symptoms. I guess my options are roughly:
Attend the conference as normal, wear a mask when it’s not inconvenient and when I’m around many people.
Only go to 1-1s, wear a mask when I have to be inside but perhaps not during 1-1s (I find prolonged talking with a mask difficult)
Don’t go inside; have all of my 1-1s outside. Looking at Google Maps, there don’t seem to be any benches or nice places to sit just outside the venue, so I might have to ask people to sit on the floor and use my laptop on the floor, and I don’t know how I’d charge it. Perhaps it’s better not to go if I’d have to do that.
Don’t go. I don’t mind doing that if that’s the best thing altruistically.
In all cases, I can inform all my 1-1s (I have ~18 tentatively planned) that I have covid. I can also attend only on days when I test negative in the morning.
This would be the third EAG London in a row where I’d cancel all my meetings last minute because I might be contagious with covid, although I’m probably not and I feel totally good. This makes me a bit frustrated and biased, which is partly why I’m asking for advice here. The thing is, I think very few people are still that careful and still test, but perhaps they should be; I don’t know. There are vulnerable people, and long covid can be really bad. So if I’m going to take precautions, I’d like others reading this to also test and do the same, at least if you have a reason to believe you might have covid.
It’s also useful to ask yourself why you want to write in the first place. I personally think that there are too many people whose plan to help the world is to write on the EA forum, and that a lot of the effort spent writing for the EA forum would be better spent on more direct forms of altruism. I sometimes find that I fool myself into thinking I’m doing something effective just because I’m spending time on the EA forum. It can be useful for some niche careers, but it depends.
This is how I felt when I first tried to write for the EA forum. To know what kind of text is needed, and what would be new on the topic you’re writing about, you kind of need to know everything that has already been written and what sort of material would influence decision-makers. It’s impossible to know all that when you’re new to the space. This is why I think it’s useful for senior people to suggest very concrete topics to junior researchers and then guide them. Especially for the first few articles, the more specific the topic, the better. I think this article has more advice like that.
I like making a distinction between superficial beliefs and deeply held beliefs which are often entirely subconscious. You have a superficial belief that Starcraft is balanced but a deeply held belief that your faction is the weakest.
For another example, my dad lived all his life in a world where alcohol was socially acceptable, while everyone agreed that all other drugs were the worst thing ever, quickly leading to addiction, etc. He once even remarked that if alcohol were invented today, it would surely be illegal because it has so many negative consequences, even compared to some other drugs. But it’s just a funny thought to him. He offers me a drink whenever I come to visit, but he immediately got very concerned when I mentioned that I’ve tried cannabis. He can’t just suddenly rewire his brain to change the associations he has with something like cannabis. Even if I tell him about studies showing cannabis isn’t that harmful, especially when used rarely, in his subconscious there might barely be a difference between cannabis and drugs like heroin. Maybe he could rewire his subconscious reaction by going through all the memories where he was told something bad about drugs and reinterpreting them in light of the new evidence. But ain’t nobody got time for that.
Well, it’s worth trying to rewire yourself about deeply held beliefs that really harm you like “I am unlovable”, “I don’t deserve happiness”, “I can’t trust anyone”, etc. This is a big part of what therapy does, I think. But for most topics like Starcraft factions, we just have to accept that there will always be a mismatch between superficial beliefs and deeply held beliefs.
A lot of these arguments apply to wild animals but not so much to farmed ones.
Even if most humans who lose the ability to feel pain die young, that is not true for Jo Cameron. And the idea some people are thinking about is to simply replicate the mutated gene she has in others. I asked GPT-4, and it says that other animals have that gene too.
But it’s not such a big issue if farmed animals injure themselves, or die young from those injuries. I imagine that injuries are mostly bad because of the pain. Higher pre-slaughter mortality would make farming less profitable, but farmers might find ways to prevent animals from dying young, or meat prices could simply be higher.
Regarding “10 million species”: most of the impact would come from doing this for the few species that are farmed in very large numbers, like chickens and whiteleg shrimp.
There were many predictions about AI and AGI in the past (maybe mostly in the last century) that were very wrong. I think I read about this in Superintelligence. A quick Google search turns up this article, which probably discusses that.
more accessible to everyone; those conversations often don’t happen in real life
lower stakes: you can speak your mind about the prejudices you have without consequences
the chatbot can listen and explain things better and with more “empathy”, and won’t get tired of answering the same questions again and again
you can make up different engaging situations and talk to people like presidential candidates
it can be a bit like a warm-up for talking to a real person
People already use chatbots, and they will become much better. I imagine they will eventually incorporate audio and video better; it will be like talking to a real person, very engaging. I want that technology to be used for good.