I’m a freelance writer and editor for the EA community. I can help you edit drafts and write up your unwritten ideas. If you’d like to work with me, book a short calendly meeting or email me at ambace@gmail.com. Website with more info: https://amber-dawn-ace.com/
Amber Dawn
This is a cluster of questions on the theme of ‘did you actually enjoy your job, and is this important?’
When you were earning to give, did you enjoy your day-to-day work and find it motivating and meaningful, even if you expected your largest impact to be from your donations? If not, was that difficult, and how did you deal with it? Is your impression that other EtG-ers had/have a similar experience? In general, is it important for EtG-ers to feel positive about their work, or can one compensate for a less good working life by focusing on the positive impact of one’s donations?
First of all, I don’t think suicide would be morally required even if you did cause lots of harm to animals. I think we have a right to live.
Second, I don’t think suicide is the best way for you to help animals. I’m not sure of your exact situation, but as you get older you’re likely to get more independence from your parents and community, and at that point you can stop eating animal products. At that point you’ll also have the whole of your life and career ahead of you. If you dedicate your career to animal welfare, that will easily outweigh the suffering caused by you not being vegan for a few years in your teens. I don’t think you should beat yourself up about not being vegan when you’re constrained by family and societal pressure.
You say you’re not sure how you can help animals with your career, but I think STEM majors can do a lot to help animals! You could become a welfare biologist for example, helping study the experiences and welfare of animals so that we have a better idea of how to prevent suffering. Or you could work on developing vegan meat alternatives or cultured meat, eventually making it cheaper and easier for more people to become vegan.
You can also donate money: because animal welfare improvements in agriculture affect a large number of animals, my understanding is that even with quite a small donation you can prevent a lot of suffering. So you don’t have to make a large income to make a difference here.
I hope this is helpful and your difficult situation improves soon!
Atlantic bluefin tuna are being domesticated: what are the welfare implications?
‘Use your connections, media, and social media to push your country’s leaders to call for de-escalation and ceasefire. This costs you nothing but time’ - what concretely do you suggest, for me and people like me? (I’m an ordinary person living in the UK). I think what usually stops me from taking particular action at times like this is a sense that nothing I can do will matter. I could post on social media that I want the conflict to stop, but I don’t think anyone influential will notice or care.
I don’t mean this as an excuse, I just get really frustrated by calls to action that are not concrete, because I really take on board the moral force but I don’t actually know how to do something about it, and my time and energy isn’t infinite.
I don’t really have an answer, but do you think this is a trend in mutual aid generally? (ie, that mutual aid networks are generally dominated by less wealthy and marginalised people) Anecdotally, I was in a UK-based mutual aid group and the admin made the same claim. It’s possible though that your group and my former group just arose in online ‘bubbles’ that were dominated by these poorer demographic groups, and maybe there are other mutual aid groups where more wealthy folks do join and contribute.
Yeah, I think you might be right: like, it would mostly be covered by Philosophy, right?
This seems relevant to this question: https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities. Disclaimer: I haven’t read it.
I do think this is an interesting question: how to deal with tiny probabilities of great utilities?
A couple of thoughts:
(1) On the object level: most religions are mutually exclusive. Also, I don’t know that much about comparative religion, but I do know that Christianity both has a Hell and doesn’t allow you to worship other gods. So like, you probably have to pick one religion, rather than hedge your bets.
And maybe you are saying, ‘we should figure out which religion has the best heaven/worst hell, therefore would be worst to be wrong about, and try to practice that one’. But I think this is going to be quite hard to do, for religions that do have heavens and hells: like, there’s no “evidence” about these things beyond what religious texts say. And religious texts probably frame heaven as infinite bliss and hell as infinite suffering.
Another argument atheists might make is: ‘yes, there is an infinitesimal probability that Christianity and Judaism and Hinduism and Sikhism and other religions are true. I think there is an equal(ly infinitesimal) chance that it’s the case that atheists go to heaven, and religious people go to hell, and the Atheist Heaven is better than the religious heavens, and the hell is better than the religious hells. So might as well keep doing what I’m doing, ie, being an atheist’.
Like sure, Atheist Heaven is just made up and there’s no social tradition of it; but the atheists usually don’t think the fact that religions exist in society and are traditional is good evidence of their being true—otherwise they would be religious or agnostic.
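The symmetry argument above can be made concrete with a toy expected-value calculation. (All the probabilities and utilities below are arbitrary illustrations, not claims about actual credences.)

```python
# Toy expected-value comparison for the atheist's symmetry argument.
# All numbers are made-up illustrations, not real probability estimates.

def expected_utility(outcomes):
    """Sum of probability * utility over mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes)

tiny = 1e-12  # some infinitesimal credence assigned to each afterlife story
stakes = 1e9  # stand-in for the (huge) utility of heaven / disutility of hell

# Expected utility if you practice a given religion:
practice = expected_utility([
    (tiny, stakes),       # that religion is true: its heaven
    (tiny, -stakes),      # the 'Atheist Heaven' story is true: you lose out
    (1 - 2 * tiny, 0),    # no afterlife: nothing gained or lost
])

# Expected utility if you stay an atheist (the tiny outcomes swap sign):
atheist = expected_utility([
    (tiny, -stakes),      # the religion is true: its hell
    (tiny, stakes),       # the 'Atheist Heaven' story is true: you benefit
    (1 - 2 * tiny, 0),
])

print(practice, atheist)  # the two tiny stakes cancel in both cases
```

If you really do treat the made-up ‘Atheist Heaven’ as exactly as likely as the traditional heavens, the vast stakes cancel and the wager gives you no reason to switch; the whole argument then turns on whether tradition counts as evidence that breaks the symmetry.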
(2) On the meta/community level, aka ‘why are you getting downvoted’: I’m actually not sure what proportion of EAs really think that you should guide your life by doing things that have a tiny probability of producing vast utility. Like, I think this idea is very much in the EA/longtermism intellectual DNA, but also, most actual EAs either work on or donate to (i) nearterm stuff with a decently high chance of being effective (animal advocacy, global health), or (ii) preventing existential risks that they believe are not vanishingly unlikely. Most EA longtermism work, as far as I can tell, is in category (ii), and doesn’t actually require you to believe, or seriously plan your life around, the argument that you should take Pascal’s wager seriously.
Yeah I don’t have a strong opinion about whether they would accelerate it—I was just saying, even if some workers would support acceleration, other workers could work to slow it down.
One reason that developers might oppose slowing down AI is that it would put them out of work, wouldn’t it? (Or threaten to). So if someone is not convinced that AI poses a big risk, or thinks that pausing isn’t the best way to address the risk, then lobbying to slow down AI development would be a big cost for no obvious benefit.
Thanks, I didn’t have this on my radar! I’ll try to get some iodised salt.
Interesting question to think about!
I’m not 100% sure, but I think I got more hard-working when I started university. I think this was basically because at school I found it easy to do well, and was also a teacher’s pet/people pleaser, so I didn’t really have the notion of ‘doing less well at schoolwork than was physically possible’ (ie ‘half-assing it with all you’ve got’). But at university stuff got harder, obviously. So basically the bar for quality was raised but I didn’t lower my expectations of myself accordingly: it didn’t occur to me that I could just submit a shitty essay, or submit it late. I also really enjoyed the work and found it stimulating and gratifying, which helped. (I’m sure this is the rose-coloured glasses, but I kind of miss sitting in the library at 3am feeling full of adrenaline and Insights).
Since then, my hard-workingness has fluctuated. I think things that affect it most are:
-interest in what I’m working on
-accountability to others, but it has to be real and not just a thing I’ve set up as a productivity hack (so deadlines set with academic supervisors or clients = motivating, self-imposed deadlines/beeminder/accountability buddies = not motivating)
I think stuff like productivity hacks hasn’t helped much, and inner work has generally taken me in the opposite direction (ie, it’s made me more aware of the costs of working too hard and neglecting other values).
Thanks for writing this; I’ve thought about this before, it seems like an under-explored (or under-exploited?) idea.
Another point: even if ML engineers, software devs etc either could not be persuaded to unionize, or would accelerate AI development if they could, maybe other labour unions could still exert pressure: e.g. workers in the compute or hardware supply chain, or the HR, cleaning, ops, and other non-technical staff at AI companies. Strong labour unions in sectors that are NOT obviously related to AI could also be powerful here, for example via consumer boycotts: what if education union members committed to not spending money on AI products unless and until the companies producing them complied with certain safety measures?
Some recent polls suggest that the idea of slowing down AI is already popular among US citizens (72% want to slow it down). My loose impression is also that (i) most union members and organizers are on the political left, and (ii) many on the left are already sceptical about AI, for reasons related to (un)employment, plagiarism (ie, criticism of art AI’s use of existing art), capitalism (tech being too controlled by powerful interests), and algorithmic bias. So this might not be an impossible sell, if AI safety advocates communicate about it in the right way.
Moyo—a new organization offering a UBI to people in poverty
Should Effective Altruists be Valuists instead of utilitarians?
[Question] What proportion of total EA funding comes from Open Philanthropy, GiveWell, and other big EA funders?
Some thoughts on the general discussion:
(1) some people are vouching for Kat’s character. This is useful information, but it’s important to note that behaving badly is very compatible with having many strengths, treating one’s friends well, etc. Many people who have done terrible things are extremely charismatic and charming, and even well-meaning or altruistic. It’s hard to think bad things about one’s friends, but unfortunately it’s something we all need to be open to. (I’ve definitely in the past not taken negative allegations against someone as seriously as I should have, because they were my friend).
(2) I think something odd about the comments claiming that this post is full of misinformation, is that they don’t correct any of the misinformation. Like, I get that assembling receipts, evidence etc can take a while, and writing a full rebuttal of this would take a while. But if there are false claims in the post, pick one and say why it’s false.
This makes these interventions seem less sincere to me, because I think if someone posted a bunch of lies about me, in my first comments/reactions I would be less concerned about the meta appropriateness of the post having been posted, and more concerned to be like “this post says Basic Thing X but that’s completely false, actually it was Y, and A, B and C can corroborate”. On the earlier post where an anonymous account accused Nonlinear of bad behaviour, Kat’s responses actually made me update against her, because she immediately attacked the validity of even raising the critique and talked about the negative effects of gossip (on the meta level), rather than expressing concern about possible misunderstandings at NL (for example). For me, this is reminiscent of the abuse tactic DARVO (Deny, Attack, Reverse Victim and Offender): these early comments meant that much of the conversation on this post has been about the appropriateness of Ben publishing it now, or the appropriateness of Emerson threatening to sue him, rather than the object-level ‘hey apparently there are these people in our community who treated their employees really badly’.
Massive thanks to Ben for writing this report and to Alice and Chloe for sharing their stories. Both took immense bravery.
There’s a lot of discussion on the meta-level on this post. I want to say that I believe Alice and Chloe. I currently want to keep my distance from Nonlinear, Kat and Emerson, and would caution others against funding or working with them. I don’t want to be part of a community that condones this sort of thing.

I’m not and never have been super-involved in this affair, but I reached out to the former employees following the earlier vague allegations against Nonlinear on the Forum, and after someone I know mentioned they’d heard bad things. It seemed important to know about this, because I had been a remote writing intern at Nonlinear, and Kat was still an occasional mentor to me (she’d message me with advice), and I didn’t want to support NL or promote them if it turned out that they had behaved badly.
Chloe and Alice’s stories had the ring of truth about them to me, and seemed consistent with my experiences with Emerson and Kat — albeit I didn’t know either of them that well and I didn’t have any strongly negative experiences with them.
It seems relevant to mention that Chloe and Alice were initially reluctant to talk to me about any of this. This is inconsistent with the claim that they are eager to spread vicious lies about NL at any chance they get.

I’m glad this is out in the open: it felt unhygienic to have this situation where there were whisperings and rumours but no-one felt empowered to be specific about anything.
Antonio Montani on team-building and ikigai
Thanks, this post is interesting. I’ve often experienced the frustration that EA seems to really emphasise the importance of cause prioritisation, but also that the resources for how to actually do it are pretty sparse. I’ve also fallen into the trap of ‘apply for any EA job, it doesn’t matter which’, and have recently been thinking that this was a mistake and that I should invest more time in personal cause prioritisation, including more strongly considering causes that EAs don’t tend to prioritise, but that I think are important.
I think the idea of ‘heavy-tailedness’ can be overused. I’d need to look more into the links to thoroughly discuss this, but a few points:
(1) By definition, not everyone can be in the heavy tail. Therefore, while it might be true that some job opportunities that exist are orders-of-magnitude more impactful than my current job, it’s less clear to me that those opportunities aren’t already taken.
Concretely, a job at AMF is orders-of-magnitude more impactful than most jobs, but they’re not hiring afaik, and even if they were, they might not hire me.
And you might say ‘ok, well, but that’s a failure of imagination if you only think that roles at famous EA orgs are super high impact: maybe you should be an entrepreneur, or...’
But my point is… it seems like by definition, not everyone can have exceptional impact/be in the heavy tail.
(2) As an EA, I shouldn’t care about whether I personally am in an unusually-high-impact role, but about whether the best-suited person fills it, by which I mean ‘the person who is most competent at this job, but who also wouldn’t do some other, more impactful job even more competently’. So maybe some EAs take a view like ‘well, I’m not sure exactly which EA jobs are the most impactful, but I’ll just contribute to the EA ecosystem, which supports whichever people end up with those super-impactful roles’.
This is a cool project! I’ve registered my interest.
This doesn’t directly address your questions in the post, but it addresses the titular question of ‘which products should we prioritize avoiding?’ Ozy Brennan suggests that ‘you can eliminate 95% of the suffering associated with your diet simply by giving up farmed fish, poultry, and eggs’. I don’t know if they took the associated insect suffering into account.
https://thingofthings.substack.com/p/on-ameliatarianism