My name is Saulius Šimčikas. I spent the last year on a career break and now I’m looking for new opportunities. Previously, I worked as an animal advocacy researcher at Rethink Priorities for four years. I also did some earning-to-give as a programmer, did some EA community building, and was a research intern at Animal Charity Evaluators. I love meditation and talking about emotions.
EAG and covid [edit: solved, I’m not attending the EAG (I’m still testing positive as of Saturday)]
I have many meetings planned for the EAG London that starts tomorrow but I’m currently testing very faintly positive for covid. I feel totally good. I’m looking for a bit of advice on what to do. I only care to do what’s best for altruistic impact. Some of my meetings are important for my current project and trying to schedule them online would delay and complicate some things a little bit. I will also need to use my laptop during meetings to take notes. I first tested positive on Monday evening, and since then all my tests were very faintly positive. No symptoms. I guess my options are roughly:
Attend the conference as normal, wear a mask when it’s not inconvenient and when I’m around many people.
Only go to 1-1s, wear a mask when I have to be inside but perhaps not during 1-1s (I find prolonged talking with a mask difficult)
Don’t go inside, have all of my 1-1s outside. Looking at Google Maps, there don’t seem to be any benches or nice places to sit just outside the venue, so I might have to ask people to sit on the floor and use my laptop on the floor, and I don’t know how I’d charge it. Perhaps it’s better not to go if I’d have to do that.
Don’t go. I don’t mind doing that if that’s the best thing altruistically.
In all cases, I can inform all my 1-1s (I have ~18 tentatively planned) that I have covid. I could also only attend on days when I test negative in the morning.
This would be the third EAG London in a row where I’d cancel all my meetings last minute because I might be contagious with covid, although I’m probably not and I feel totally good. This makes me a bit frustrated and biased, which is partly why I’m asking for advice here. The thing is that I think that very few people are still that careful and still test but perhaps they should be, I don’t know. There are vulnerable people and long covid can be really bad. So if I’m going to take precautions, I’d like others reading this to also test and do the same, at least if you have a reason to believe you might have covid.
It’s also useful to ask yourself why you want to write in the first place. I personally think that there are too many people whose plan to help the world is to write on the EA forum and that a lot of effort spent on writing for the EA forum would be better spent on doing more direct forms of altruism. I sometimes find that I fool myself that I’m doing something effective just because I’m spending time on the EA forum. It can be useful for some niche careers, but it depends.
This is how I felt when I first tried to write for the EA forum. In order to know what kind of text is needed, and what would be new in the topic you are writing about, you kind of need to know everything that was already written and what sort of stuff would influence decision-makers. It’s impossible to know all that for someone new to the space. This is why I think it’s useful for senior people to suggest very concrete topics to junior researchers and then to guide them. And especially for the first few articles, the more specific the topic, the better. I think this article has more advice like that.
I like making a distinction between superficial beliefs and deeply held beliefs which are often entirely subconscious. You have a superficial belief that Starcraft is balanced but a deeply held belief that your faction is the weakest.
For another example, my dad lived all his life in a world where alcohol was socially acceptable, while everyone agreed that all other drugs were the worst thing ever, quickly leading to addiction, etc. He once even remarked how if alcohol were invented today, it would surely be illegal because it has so many negative consequences, even compared to some other drugs. But it’s just a funny thought to him. He offers me a drink whenever I come to visit him, but he got immediately very concerned when I mentioned that I’ve tried cannabis. He can’t just suddenly rewire his brain to change the associations he has with something like cannabis. Even if I tell him about some studies about cannabis not being that harmful, especially when used rarely, in his subconscious there might be barely any difference between cannabis and drugs like heroin. Maybe he could rewire his subconscious reaction by going through all his memories where he was told something bad about drugs and reinterpreting them in the face of the new evidence. But ain’t nobody got time for that.
Well, it’s worth trying to rewire yourself about deeply held beliefs that really harm you like “I am unlovable”, “I don’t deserve happiness”, “I can’t trust anyone”, etc. This is a big part of what therapy does, I think. But for most topics like Starcraft factions, we just have to accept that there will always be a mismatch between superficial beliefs and deeply held beliefs.
A lot of these arguments apply to wild animals but not so much to farmed ones.
Even though most humans who lose the ability to feel pain die young, that is not true for Jo Cameron. And the idea of some people thinking about this is to just replicate the gene mutation she has in others. I asked GPT-4, and it says that other animals have that gene too.
But it’s not such a big issue if farmed animals injure themselves or die young because they injure themselves. I imagine that injuries are mostly bad because of pain. Higher pre-slaughter mortality would make it less profitable but farmers might find ways to prevent them from dying young or meat prices could be higher.
Regarding “10 million species”: most of the impact would come from doing this for the few species that are farmed in very large numbers, like chickens and whiteleg shrimp.
There were many predictions about AI and AGI in the past (maybe mostly last century) that were very wrong. I think I read about it in Superintelligence. A quick Google search shows this article which probably talks about that.
Cultured meat predictions were overly optimistic, although many of those predictions might have been companies hyping up their products to attract investors. There’s also probably a selection bias where the biggest cultured meat optimists are the ones who become cultured meat experts and make predictions.
Detach the grim-o-meter comes to mind. I think that post helped me a little bit.
I’m afraid that I don’t remember anymore. You can reach out to the Welfare Footprint Project directly about this if this is decision-relevant to you. If you do that, updating with their answer here would be useful as I would like to know this too.
I wonder if Open Philanthropy thinks about it because they fund both animal advocacy and global poverty/health. Animal advocacy funding probably easily offsets its negative global poverty effects on animals. It takes thousands of dollars to save a human life with global health interventions, and that human might consume thousands of animals in her lifetime. Chicken welfare reforms can halve the suffering of thousands of animals for tens of dollars. However, I don’t like this sort of reasoning that much because we may not always have interventions as cost-effective as chicken welfare reforms.
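The offsetting argument above can be sketched as a quick back-of-envelope calculation. All the numbers below are hypothetical placeholders I picked just to match the orders of magnitude in the text (“thousands of dollars”, “tens of dollars”), not real cost-effectiveness estimates:

```python
# Back-of-envelope sketch of the offsetting argument.
# Every number here is an assumed placeholder, not a real estimate.

cost_to_save_human_life = 5_000   # dollars per life via global health (assumed)
animals_eaten_per_life = 2_000    # animals consumed over that lifetime (assumed)

cost_of_welfare_reform = 50       # dollars (assumed)
animals_helped_by_reform = 5_000  # animals whose suffering is roughly halved (assumed)

# Animals affected per dollar spent on each intervention:
animals_harmed_per_dollar = animals_eaten_per_life / cost_to_save_human_life
animals_helped_per_dollar = animals_helped_by_reform / cost_of_welfare_reform

# How many times over the welfare reform offsets the harm, per dollar:
ratio = animals_helped_per_dollar / animals_harmed_per_dollar
print(round(ratio))  # 250 under these assumptions
```

Under these made-up inputs, a dollar of welfare reform helps a couple of hundred times as many animals as a dollar of global health funding harms, which is why the offset looks easy; the caveat in the text is that this ratio collapses if no intervention as cheap as chicken welfare reforms is available.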
Also, you can argue against the poor meat eater problem by pointing out that it’s very unclear whether increased animal production is good or bad for animals. In short, the argument would be that there are way more wild animals than farmed animals, and animal product consumption might substantially decrease wild animal populations. Decreasing wild animal populations could be good because wild animals suffer a lot, mostly due to natural causes. See https://forum.effectivealtruism.org/topics/logic-of-the-larder I think this issue is also very under-discussed.
I think the reason is that it doesn’t really have a target audience. Animal advocacy interventions are hundreds of times more cost-effective than global poverty interventions. It only makes sense to work on global poverty if you think that animal suffering doesn’t matter nearly as much as human suffering. But if you think that, then you won’t be convinced to stop working on global poverty because of its effects on animals. Maybe it’s relevant for some risk-averse people.
I love all of these decisions and reasoning and think that you should go in this direction even further. I think it might be both cheaper and more effective to mostly abandon EAGs and run smaller, more specialised events instead.
My hypothesis is that people from different cause areas don’t get much out of interacting with each other at EAGs. This hypothesis can be tested with questions on post-EAG surveys. I believe the hypothesis because I just stick to other animal advocates at EAGs, since those are the people with whom I have the most productive and work-relevant conversations. I see other animal advocates doing the same.
Currently, I see EAGs as three or four barely related conferences running in the same building at the same time. This has drawbacks. Attendees have to navigate a bigger venue to find talks and 1-1s. More importantly, attendees are less likely to start chats with random strangers or join a random group of people talking because there are fewer common things to talk about. You’re less likely to randomly bump into the people you’d have the most productive conversations with. Or you might bump into them later in the conference which would give you less time to spend with them.
I sometimes do talk with people from other cause areas but it often goes something like this:
“What do you work on?”
“Animals, you?”
“AI”
We might part ways after that, or we might talk about food, how we got into EA, or our jobs.[1] But in either case, I’m unlikely to change my mind on something work-relevant or to find a new collaborator in that conversation. While it’s nice to make friends with random people, it’s not a good use of time on a weekend that costs “around $1.5k–2.5k per person.” If all of EAG were like that, even providing a guide dog to a blind person for $50k would seem like a better use of charity money. So these are not the interactions you want to foster.
I do think that a productive cross-pollination of ideas from different EA cause areas is possible. But smaller events that are dedicated to a specific type of interplay between cause areas might be much better at this. The person working on AI mentioned above and I might find a productive conversation topic much more easily at an event like the AI, Animals, and Digital Minds conference, because we would be prompted by talks and the theme to think about relevant topics. Although we might also be prompted to have a similar conversation by a talk about it at an EAG.
For other types of cross-pollination, local EA events might be better and cheaper. It’s not like I need a specific AI safety specialist to fly in so I could ask them my beginner questions about AI safety (or debate prioritising animals vs AI). A local EA interested in AI would do. Making local EA friends might also be better because it can be more helpful for sustaining motivation. Also, when I was thinking about switching from animals to AI, I didn’t have time at an EAG to grab a random AI safety person and ask them beginner questions, because I was busy meeting animal advocates and they were probably busy too. And I didn’t know which AI safety person to grab.
I think it’s useful to ask “did people from all around the world need to fly for this?” when considering what conversations we want to encourage at global conferences. Examples that satisfy this criterion include (1) people working on similar things in different countries learning from each other, (2) meeting the few other people in the world who work on your niche topic, (3) meeting your colleagues in person. More specialised conferences would likely make all of such conversations easier.
[1] We could also try to find out why we chose to prioritize different cause areas. And the first few such conversations can be very productive, but in my experience, most people who have been in EA for a while have had too many such conversations already. I’m open to the possibility that I’m wrong about this, but in that case, I’d rather organise separate smaller local events for people who are interested in cause area battles.
Nice ^_^ One final thought. I mentioned that scale depends on multiple parameters:
1. Current human population
2. Expected growth in the human population
3. Current animal production per capita
4. Expected change in the production per capita
You account for 2, 3, and 4 with a separate variable, “expected growth in animal production”, which would be something like “projected number of farmed animals in 2050 divided by the current number of farmed animals”. And then also have a variable “current human population”. I think it makes sense to split because these two variables matter for different reasons, and someone may put weight on one but not the other.
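The decomposition above can be shown in a few lines. All the figures below are made-up placeholders (the per-capita production numbers especially are purely illustrative); the point is only that the combined “expected growth” variable factors into population growth times per-capita change:

```python
# Sketch of the variable split discussed above. All inputs are assumed
# placeholders, not real projections.

current_human_population = 8.1e9
projected_human_population_2050 = 9.7e9
current_production_per_capita = 10.0         # farmed animals per person per year (assumed)
projected_production_per_capita_2050 = 12.0  # (assumed)

current_animals = current_human_population * current_production_per_capita
projected_animals_2050 = (projected_human_population_2050
                          * projected_production_per_capita_2050)

# The single combined variable ("expected growth in animal production")
# folds parameters 2, 3, and 4 together:
expected_growth = projected_animals_2050 / current_animals

# It factors cleanly into population growth x per-capita change,
# which is why splitting the variables loses no information:
pop_growth = projected_human_population_2050 / current_human_population
per_capita_change = projected_production_per_capita_2050 / current_production_per_capita
assert abs(expected_growth - pop_growth * per_capita_change) < 1e-9
```

Splitting the variables this way lets a reader who, say, trusts UN population projections but not production-per-capita forecasts adjust one factor without touching the other.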
Also, yes, I very much had the same dilemma years ago. Mine went something like this:
Heart: I figured it out! All I care about is reducing suffering and increasing happiness!
Brain: Great! I’ve just read a lot of blogs and it turns out that we can maximise that by turning everything into a homogeneous substance of hedonium, including you, your mom, your girlfriend, the cast of Friends, all the great artworks and wonders of nature. When shall we start working on that?
Heart: Ummm, a small part of me thinks that’d be great but… I’m starting to think that maybe happiness and suffering are not ALL I care about, maybe it’s a bit more complex. Is it ok if we don’t turn my mom into hedonium?
My point is, in the end, you think that suffering is bad and happiness is good because your emotions say so (what other reason could there be?). Why not listen to other things your emotions tell you? Ugh, sorry if I’m repeating myself.
You don’t seem to apply your reasoning that our current values might be “extremely sub-optimal” to your values of hedonium/EA/utilitarianism. But I think there are good reasons to believe they might be very sub-optimal. Firstly, most people (right now and throughout history) would be terrified of everything they care about being destroyed and replaced with hedonium. Secondly, even you say that it “doesn’t make me feel good and it is in direct opposition to most of my values”, despite being one of the few proponents of a hedonium shockwave. I’m unsure why you are identifying with the utilitarian part of you so strongly and ignoring all the other parts of you.
Anyway, I won’t expand because this topic has been discussed a lot before and I’m unlikely to say anything new. The first place that comes to mind is Complexity of Value on LessWrong.
And if you are in Europe, the CARE conference is great. I think people can get up to speed very fast at such conferences. They can seem scary when you don’t know anyone there, but I think animal advocates are generally friendly and welcoming to newcomers :)
I see. My personal intuition is that it wouldn’t convince many people. I mean, cooked food includes cooked meat. So, unfortunately, their argument that we have evolved to have meat in our diets still stands.
Thanks, but in this case there are other reasons why I need to use the laptop and ask the people I meet and survey to look at it. I guess I mostly want to gauge how big of a deal people think covid is nowadays.