Thanks for the question Ben! The main reason that this is a priority is to help EA Funds (which is now part of CEA) grow and diversify their donations, by making it easier to gather info from donors[1] and build relationships with them, and giving us more freedom to optimize the UX of the donation flow. AWF in particular has ambitious 2026 plans and a significant funding gap, and we'd be excited to help them reach their donation goal for this year! :)
GWWC, the primary platform EA Funds has used historically, defaults to donors not sharing their data with us. As far as I understand, this prevents us from being able to contact the majority of donors. We recently added the option of donating via every.org as well, which shares donor data by default (donors can opt out), so that's improved the situation.
Technical Alignment Research Accelerator (TARA) applications are closing in one week!
Apply as a participant or TA by January 23rd to join this 14-week program (based on the ARENA curriculum), taught remotely and run in person, designed to accelerate your path to meaningful technical alignment research!
Built for you to learn around full-time work or study by attending meetings in your home city on Saturdays and doing independent study throughout the week. Finish the program with a project to add to your portfolio, key technical AI safety skills, and connections across APAC.
See this post for more information and apply through our website here.
Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for (at least for students or early-career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week).
The role is remote, pays ~$100/hour, and expects ~5-10 hours/week. He's looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with high taste. Beyond scouting guests, the role also involves helping assemble curricula so he can rapidly get up to speed before interviews.
A super sceptical, probably highly intractable thought that I haven't done any research on: there seem to be a lot of reasons to think we might be living in a simulation besides just Nick Bostrom's simulation argument, like:
All the fundamental constants and properties of the universe are perfectly suited to the emergence of sentient life. This could be explained by the Anthropic principle, or it could be explained by us living in a simulation that has been designed for us.
The Fermi Paradox: there don't seem to be any other civilizations in the observable universe. There are many explanations for the Fermi Paradox, but one additional explanation might be that whoever is simulating the universe created it for us, or they don't care about other civilizations, so haven't simulated them.
We seem to be really early on in human history. Only on the order of 100 billion people have ever lived, but we expect many trillions to live in the future. This can be explained by the Doomsday argument: that in fact we are in the time in human history where most people will live, because we will soon go extinct. However, this phenomenon can also be explained by us living in a simulation (see next point).
Not only are we really early, but we seem to be living at a pivotal moment in human history that is super interesting. We are about to create intelligence greater than ourselves, expand into space, or probably all die. If any time in history were to be simulated, I think there's a high likelihood it would be now.
If I were pushed into a corner, I might say the probability we are living in a simulation is something like 60%, where most evidence seems to point towards us being in a simulation. However, the doubt comes from the high probability that I'm just thinking about this all wrong: of course I can come up with a motivation for a simulation to explain any feature of the universe… it would be hard to find a feature that doesn't line up with an explanation that the simulators are just interested in that particular thing. But in any case, that's still a really high probability that everyone I love is potentially not sentient or even real (fingers crossed we're all in the simulation together). Also, being in a simulation would change our fundamental assumptions about the universe and life, and it would be really weird if that had no impact on moral decision-making.
But everyone I talk to seems to have a relaxed approach to it, as if it's impossible to make any progress on this and it couldn't possibly be decision-relevant. But really, how many people have worked on figuring it out with a longtermist or EA mindset? Some reasons it might be decision-relevant:
We may be able to infer from the nature of the universe and the natural problems ahead of us what the simulators are looking to understand or gain from the simulation (or at least we might attach percentage likelihoods to different goals). Maybe there are good arguments to aim to please the simulators, or not. Maybe we risk ending the simulation if there are end conditions?
Being in a simulation bears on the probability that aliens exist (they probably have a lower probability of existing if we are in a simulation), which helps with long-term grand planning. For example, we might not need to worry about integrating defenses against alien attacks or engaging in acausal trade with aliens.
We can disregard arguments like the Doomsday Argument, lowering our p(doom).
Some questions I'd ask are:
How much effort have we put into figuring out if there is something decision-relevant to do about this from a moral impact perspective? How much effort should we put into this?
How much effort has gone into figuring out if we are, in fact, in a simulation, using empiricism? What might we expect to see in a simulated universe vs. a real one? How can we search for and detect that?
Overall, this does sound nuts to me and it probably shouldn't go further than this quick take, but I do feel like there could be something here, and it's probably worth a bit more attention than I think it has gotten (like at least one person doing a proper research project on it). Lots of other stuff sounded crazy but now has significant work and (arguably) great progress, like trying to help people billions of years in the future, working on problems associated with digital sentience, and addressing wild animal welfare. There could be something here and I'd be interested in hearing thoughts (especially a good counterargument to working on this so I don't have to think about it anymore) or learning about past efforts.
All the things you mentioned aren't uniquely evidence for the simulation hypothesis; they are equally evidence for a number of other hypotheses, such as the existence of a supernatural, personal God who designed and created the universe. (There are endless variations on this hypothesis, and we could come up with endlessly more.)
The fine-tuning argument is a common argument for the existence of a supernatural, personal God. The appearance of fine-tuning supports this conclusion equally as well as it supports the simulation hypothesis.
Some young Earth creationists believe that dinosaur fossils and other evidence of an old Earth were intentionally put there by God to test people's faith. You might also think that God tests our faith in other ways, or plays tricks, or gets easily bored, and creates the appearance of a long history or a distant future that isn't really there. (I also think it's just not true that this is the most interesting point in history.)
Similarly, the book of Genesis says that God created humans in his image. Maybe he didn't create aliens with high-tech civilizations because he's only interested in beings with high technology made in his image.
It might not be God who is doing this, but in fact an evil demon, as Descartes famously discussed in his Meditations around 400 years ago. Or it could be some kind of trickster deity like Loki who is neither fully good nor fully evil. There are endless ideas that would slot in equally well to replace the simulation hypothesis.
You might think the simulation hypothesis is preferable because it's a naturalistic hypothesis and these are supernatural hypotheses. But this is wrong: the simulation hypothesis is itself a supernatural hypothesis. If there are simulators, the reality they live in is stipulated to have different fundamental laws of nature, such as the laws of physics, than exist in what we perceive to be the universe. For example, in the simulators' reality, maybe the fundamental relationship between consciousness and physical phenomena such as matter, energy, space, time, and physical forces is such that consciousness can directly, automatically shape physical phenomena to its will. If we observed this happening in our universe, we would describe it as magic or a miracle.
Whether you call them "simulators" or "God" or an "evil demon" or "Loki", and whether you call it a "simulation" or an "illusion" or a "dream", these are just different surface-level labels for substantially the same idea. If you stipulate laws of nature radically other than the ones we believe we have, what you're talking about is supernatural.
If you try to assume that the physics and other laws of nature in the simulators' reality are the same as in our perceived reality, then the simulation argument runs into a logical self-contradiction, as pointed out by the physicist Sean Carroll. Endlessly nested levels of simulation mean that computation in the original simulators' reality will run out. Simulations at the bottom of the nested hierarchy, which don't have enough computation to run still more simulations inside them, will outnumber higher-level simulations. The simulation argument says, as one of its key premises, that in our perceived reality we will be able to create simulations of worlds or universes filled with many digital minds; but the simulation hypothesis implies this is actually impossible for most simulated worlds, so the argument's conclusion contradicts one of its premises.
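To see why the bottom level dominates, here is a minimal toy sketch in Python (all numbers are invented assumptions for illustration; nothing here comes from Carroll's or Bostrom's writing). Each reality that can afford it runs K child simulations on a fraction of its own compute, and nesting stops once a child's budget would fall below some minimum:

```python
# Toy model of nested simulations, illustrating the counting point.
# Assumptions (all invented): each reality with enough compute runs K child
# simulations, each receiving FRACTION of its parent's budget; a child whose
# budget would fall below MIN_BUDGET cannot be created at all.

K = 10             # child simulations per capable reality (assumed)
FRACTION = 0.05    # share of parent compute given to each child (assumed)
MIN_BUDGET = 1e-6  # cutoff below which no further nesting is possible (assumed)

def count(budget=1.0):
    """Return (bottom_level, higher_level) counts of simulations hosted below here."""
    child_budget = budget * FRACTION
    if child_budget < MIN_BUDGET:
        return 0, 0  # this reality cannot run any simulations
    bottom, higher = 0, 0
    for _ in range(K):
        b, h = count(child_budget)
        if b == 0 and h == 0:
            bottom += 1       # child hosts nothing: bottom of the hierarchy
        else:
            bottom += b
            higher += 1 + h   # the child itself plus its non-bottom descendants
    return bottom, higher

bottom, higher = count()
print(f"bottom-level simulations: {bottom:,}, higher-level: {higher:,}")
# -> bottom-level simulations: 10,000, higher-level: 1,110
```

With these made-up parameters, bottom-level simulations (which cannot host further simulations, contradicting the premise that simulated civilizations can run their own) outnumber all higher-level ones roughly 9 to 1.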
There are other strong reasons to reject the simulation argument. Remember that a key premise is that we ourselves or our descendants will want to make simulations. Really? They'll want to simulate the Holocaust, malaria, tsunamis, cancer, cluster headaches, car crashes, sudden infant death syndrome, and Guantanamo Bay? Why? On our ethical views today, we would not see this as permissible, but rather as the most grievous evil. Why would our descendants feel differently?
Less strongly: computation is abundant in the universe but still finite. Why spend computation on creating digital minds inside simulations when there is always a trade-off between doing that and creating digital minds in our universe, i.e. the real world? If we or our descendants think marginally and hold maximizing the number of future lives with a good quality of life as one of our highest goals, using huge amounts of computation on simulations might be seen as going against that goal. Plus, there are endlessly more things we could do with our finite resource of computation, most of which we can't imagine today. Where would creating simulations fall on the list?
You can argue that creating simulations would be a small fraction of overall resources. I'm not sure that's actually true; I haven't done the math. But just because something is a small fraction of overall resources doesn't mean it will likely be done. In an interstellar, transhumanist scenario, our descendants could create a diamond statue of Hatsune Miku the size of the solar system, and this would take a tiny percentage of overall resources, but that doesn't mean it will likely happen. The simulation argument specifically claims that making simulations of early 21st century Earth will interest our descendants more than alternative uses of resources. Why? Maybe they'll be more interested in a million other things.
Overall, the simulation hypothesis is undisprovable but no more credible than an unlimited number of other undisprovable hypotheses. If something seems nuts, it probably is. Initially, you might not be able to point out the specific logical reasons it's nuts. But that's to be expected: the sort of paradoxes and thought experiments that get a lot of attention (that "go viral", so to speak) are the ones that are hard to immediately counterargue.
Philosophy is replete with oddball ideas that are hard to convincingly refute at first blush. The Chinese Room is a prime example. Another random example is the argument that utilitarianism is compatible with slavery. With enough time and attention, refutations may come. I don't think one's inability to immediately articulate the logical counterargument is a sign that an oddball idea is correct. It's just that thinking takes time and, usually, by the time an oddball idea reaches your desk, it's proven resistant to immediate refutation. So, trust that intuition that something is nuts.
Strong upvoted, as that was possibly the most compelling rebuttal to the simulation argument I've seen in quite a while, which was refreshing for my peace of mind.
That being said, it mainly targets the idea of a large-scale simulation of our entire world. What about the possibility that the simulation is for a single entity and the rest of the world is simulated at a lower fidelity? I had the thought that a way to potentially maximize future lives of good quality would be to contain each conscious life in a separate simulation where they live reasonably good lives catered to their preferences, with the apparent rest of the world being virtual. Granted, I doubt this conjecture because, in my own opinion, my life doesn't seem that great, but it seems plausible at least?
Also, that line about the diamond statue of Hatsune Miku was very, very amusing to this former otaku.
I would not describe the fine-tuning argument and the Fermi paradox as strong evidence in favour of the simulation hypothesis. I would instead say that they are open questions for which a lot of different explanations have been proposed, with the simulation hypothesis offering only one of many possible resolutions.
As to the "importance" argument, we shouldn't count speculative future events as evidence of the importance of now. I would say the mid-20th century was more important than today, because that's the closest we ever got to nuclear annihilation (plus, like, WW2).
I've thought about this a lot too. My general response is that it is very hard to see what one could do differently at a moment-to-moment level even if we were in a simulation. While it's possible that you or I are alone in the simulation, we can't, realistically, know this. We can't know with much certainty that the apparently sentient beings who share our world aren't actually sentient. And so, even if they are part of the simulation, we still have a moral duty to treat them well, on the chance they are capable of subjective experiences and can suffer or feel happiness (assuming you're a utilitarian), or have rights/autonomy to be respected, etc.
We also have no idea who the simulators are and what purpose they have for the simulation. For all we know, we are a petri dish for some aliens, or a sitcom for our descendants, or a way for the minds of people on colony ships travelling to distant galaxies to spend their time while in physical stasis. Odds are, if the simulators are real, they'll just make us forget whatever we figure out, so they can continue the simulation for whatever reasons they have.
Given all this, I don't see the point in trying to defy them or doing anything differently from what you'd do if this were the ground-truth reality. Trying to do something like attempting to escape the simulation would most likely fail AND risk getting you needlessly hurt in this world in the process.
If we're alone in the sim, then it doesn't matter what we do anyway, so I focus on the possibility that we aren't alone, and everything we do does, in fact, matter. Give it the benefit of the doubt.
At least, that's the way I see things right now. Your mileage may vary.
As an 80k advisor, my theory of change (ToC) is "try to help someone do something more impactful than if they had not spoken to me."
Mainly, this is helping get people more familiar with/excited about/doing things related to AI safety. It's also about helping them with resources and sometimes warm introductions to people who can help them even more.
Are there any particular pipelines / recommended programs for control research?
Just the things you probably already know about: MATS and Astra are likely your best bets, but look through these papers to see if there is any low-hanging fruit you could pick up as future work.
What are the most neglected areas of work in the AIS space?
Hard question, with many opinions! I'm particularly concerned that "making illegible problems legible" is neglected. See Wei Dai's writing about this.
More groundedly, I'm concerned we're not doing enough work on Gradual Disempowerment and, more broadly, questions of {how to have a flourishing future / what is a flourishing future} even if we avoid catastrophic risks.
In general, AI safety work needs to contend with a collection of subproblems. See davidad's opinion: A list of core AI safety problems.
There are many other such opinions, and it's good to scan through them to work out how they're all connected, so that you can see the forest for the trees; and also to work out which problems you're drawn to/compelled by, and seek out what's neglected within those :)
Some questions about ops roles:
What metrics should I use to evaluate my performance in ops/fieldbuilding roles? I find ops to be really scattered and messy, and so it's hard to point to consistent metrics.
Hard to talk about this in concrete terms, because ops is so varied; every task can have its own set of metrics. Instead, think through this strategically:
Be clear on the theory(ies) of change, and your roles/activities/tasks in it (them). Once you can articulate those things, the metrics worth measuring become a lot clearer.
Sometimes we're not tracking impact because impact evaluation is notoriously difficult. Look for proxies. Red-team them with people you admire.
Fieldbuilding metrics can be easier to generate, but I don't claim to be an expert here; ask folks at BlueDot or the fellowships for better input. For example:
How many people completed the readings?
How many people did I get to sign up for the BlueDot course?
How many of those finished the BlueDot course?
How many people did I get into an Apart Hackathon?
Did any of my people win?
And so on…
Likewise, I have a hard time discerning what "ops" really means. What are the best tangible "ops" skills I should go out of my way to skill up on if I want to work in the fieldbuilding/programmes space? Are there "hard" ops skills I should become really good at (like familiarity with certain software programmes, etc.)?
Ops is usually a "get stuff done" bucket of work. Yes, it can help to have functional experience in an ops domain like "Finance" or "IT/office tech infra/website" (and especially "Legal"), but a LOT of ops can be learned on the job/on your own; AI safety is stacked full of folks who didn't let "I don't know anything about ops" stop them from figuring it out and getting it done.
Under what circumstances should a "technical person" consider switching their career to fieldbuilding?
First things first:
Fieldbuilding is not a consolation prize. Do fieldbuilding if you're really passionate about helping AI go well, and fieldbuilding is your comparative advantage.
And doubling down on that:
It really, really, really helps if fieldbuilders are very competent. A fieldbuilder who doesn't know their shit about AI risk and AI safety can propagate bad ideas among the people they're inducting into the field.
This can have incredibly high costs:
Pollutes the commons
Wastes time downstream where all this would need to be corrected
Bounces people who might be able to quickly get up to speed, because their initial contact with these fieldbuilders is of poor quality: poor argumentation, poor epistemics
Conversely, a great fieldbuilder is one who knows how to tend their flock: what they need to prosper and grow to become competent at thinking about AI safety properly, and at doing AI safety things.
How would you recommend going about doing independent project work for upskilling, in place of doing something like SPAR or MATS?
Why not both? In general, I want people to ask themselves this question when making decisions. You can do a lot more than you give yourself credit for.
At the current margins, SPAR, MATS, etc. are probably better than independent work:
Some of these fellowships have pretty high signal to employers (based on evidence that has been generated over time)
There is a lot that these fellowships offer that is sometimes hard to get without them:
Research support, mentorship, community engagement, well-scoped projects with deliverables and accountability
Also softer things like physical space and some money
But if you're great at doing stuff independently, go for it! Neel Nanda didn't need a fellowship.
A key idea is to keep your eye on the ball: be productive!
The point is to generate outputs:
That make you learn
That show that you have learned
That are related to AI safety
That get feedback
That show that you update based on (relevant/âgood/âhigh-quality) feedback
Mildly against the Longtermism --> GCR shift

Epistemic status: Pretty uncertain, somewhat rambly
TL;DR: Replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics.
Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, bio, nukes, etc. based on the logic of longtermism, and more based on Global Catastrophic Risks (GCRs) directly. Some data points on this:
Open Phil renaming its EA Community Growth (Longtermism) team to GCR Capacity Building
Anecdotal data from conversations with people working on GCR / x-risk / longtermist causes
My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no one wants a GCR, even if only thinking about people alive today.
Yet I can't help but feel something is off about this framing. Some concerns (in no particular order):
From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case but not the latter. Just looking at GCRs on their own mostly misses this nuance.
From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential"
Preventing an extinction-level GCR might move us from 0% to 1% of future potential, but there's 99x more value in focusing on going from the "okay (1%)" to the "great (100%)" future.
From a "current generations" perspective, reducing GCRs is probably not more cost-effective than directly improving the welfare of people / animals alive today.
I'm pretty uncertain about this, but my guess is that alleviating farmed animal suffering is more welfare-increasing than e.g. working to prevent an AI catastrophe, given the latter is pretty intractable (but I haven't done the numbers).
If GCRs actually are more cost-effective under a "current generations" worldview, then I question why EAs would donate to global health / animal charities (since this is no longer a question of "worldview diversification", just raw cost-effectiveness).
More meta points:
From a community-building perspective, pushing people straight into GCR-oriented careers might work short-term to get resources to GCRs, but could lose the long-run benefits of EA / longtermist ideas. I worry this might worsen community epistemics about the motivation behind working on GCRs:
If GCRs only go through on longtermist grounds, but longtermism is false, then impartial altruists should rationally switch towards current-generations opportunities. Without a grounding in cause impartiality, however, people won't actually make that switch.
From a general virtue ethics / integrity perspective, making this change for PR / marketing reasons alone, without an underlying change in longtermist motivation, feels somewhat deceptive.
As a general rule about integrity, I think it's probably bad to sell people on doing something for reason X when actually you want them to do it for Y, and you're not transparent about that.
There's something fairly disorienting about the community switching so quickly from [quite aggressive] "yay longtermism!" (e.g. much hype around the launch of WWOTF) to essentially disowning the word longtermism, with very little mention / admission that this happened or why.
From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case but not the latter. Just looking at GCRs on their own mostly misses this nuance.
I would be curious to know your thoughts on my post arguing that decreasing the risk of human extinction is not astronomically cost-effective.
From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential"
The same applies to preventing human extinction over a given period. Humans could go extinct just after the period, or go on to an astronomically valuable future, and I believe the former is much more likely.
From a longtermist (~suffering-focused) perspective, reducing GCRs might be net-negative if the future is (in expectation) net-negative
This also applies to reducing the risk of human extinction.
Thanks for sharing this, Tom! I think this is an important topic, and I agree with some of the downsides you mention, and think they're worth weighing highly; many of them are the kinds of things I was thinking of in this post of mine when I listed these anti-claims:
Anti-claims
(I.e. claims I am not trying to make and actively disagree with)
No one should be doing EA-qua-EA talent pipeline work
I think we should try to keep this onramp strong. Even if all the above is pretty correct, I think the EA-first onramp will continue to appeal to lots of great people. However, my guess is that a medium-sized reallocation away from it would be good to try for a few years.
The terms EA and longtermism aren't useful and we should stop using them
I think they are useful for the specific things they refer to and we should keep using them in situations where they are relevant and ~ the best terms to use (many such situations exist). I just think we are over-extending them to a moderate degree.
It's implausible that existential risk reduction will come apart from EA/LT goals
E.g. it might come to seem (I don't know if it will, but it is at least imaginable) that attending to the wellbeing of digital minds is more important from an EA perspective than reducing misalignment risk, and that those things are indeed in tension with one another.
This seems like a reason that people who aren't EA and just prioritize existential risk reduction are, all else equal, less helpful from an EA perspective than if they also shared EA values, and like something to watch out for, but I don't think it outweighs the arguments in favor of more existential-risk-centric outreach work.
This isn't mostly a PR thing for me. Like I mentioned in the post, I actually drafted and shared an earlier version of that post in summer 2022 (though I didn't decide to publish it for quite a while), which I think is evidence against it being mostly a PR thing. I think the post pretty accurately captures my reasoning at the time: that often the people doing this outreach work on the ground were actually focused on GCRs or AI risk and trying to get others to engage on that, and it felt like they were ending up using terms that pointed less well at what they were interested in, for path-dependent reasons. Further updates towards shorter AI timelines moved me substantially in terms of the amount I favor the term "GCR" over "longtermism", since I think it increases the degree to which a lot of people mostly want to engage people about GCRs or AI risk in particular.
I've upvoted this comment, but weakly disagree that there's such a shift happening (EVF orgs still seem to be selecting pretty heavily for longtermist projects, the global health and development fund has been discontinued while the LTFF is still around, etc.), and quite strongly disagree that it would be bad if it is:
From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case but not the latter.
That "if" clause is doing a huge amount of work here. In practice I think the EA community is far too sanguine about our prospects, post-civilisational collapse, of becoming interstellar (which, from a longtermist perspective, is what matters, not "recovery"). I've written a sequence on this here, and have a calculator which allows you to easily explore the simple model's implications for your beliefs described in post 3 here, with an implementation of the more complex model available on the repo. As Titotal wrote in another reply, it's easy to believe "lesser" catastrophes are many times more likely, so they could very well be where the main expected loss of value lies.
From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential"
I think I agree with this, but draw a different conclusion. Longtermist work has focused heavily on existential risk, and in practice on the risk of extinction, IMO seriously dropping the ball on trajectory changes with little more justification than that the latter are hard to think about. As a consequence it has ignored what seems to me the very real loss of expected unit-value from lesser catastrophes, and the to-me-plausible increase in it from interventions designed to make people's lives better (generally lumping those in as "shorttermist"). If people are now starting to take other catastrophic risks more seriously, that might be remedied. (This is also relevant to your 3rd and 4th points.)
From a "current generations" perspective, reducing GCRs is probably not more cost-effective than directly improving the welfare of people / animals alive today.
This seems to be treating "focus only on current generations" and "focus on Pascalian arguments for astronomical value in the distant future" as the only two reasonable views. David Thorstad has written a lot, I think very reasonably, about reasons why the expected value of longtermist scenarios might actually be quite low, but one might still have considerable concern for the next few generations.
From a general virtue ethics / integrity perspective, making this change for PR / marketing reasons alone, without an underlying change in longtermist motivation, feels somewhat deceptive.
Counterpoint: I think the discourse before the purported shift to GCRs was substantially more dishonest. Nanda and Alexander's posts argued that we should talk about x-risk rather than longtermism on the grounds that it might kill you and everyone you know, which is very misleading if you only seriously consider catastrophes that kill 100% of people, and ignore (or conceivably even promote) those that leave >0.01% behind (which, judging by Luisa Rodriguez's work, is around the point beyond which EAs would typically consider something an existential catastrophe).
I basically read Zabel's post as doing the same: not as desiring a shift to GCR focus, but as desiring to present the work that way, saying "I'd guess that if most of us woke up without our memories here in 2022 [now 2023], and the arguments about potentially imminent existential risks were called to our attention, it's unlikely that we'd re-derive EA and philosophical longtermism as the main and best onramp to getting other people to work on that problem" (emphasis mine).
Nanda, Alexander and Zabel's posts all left a very bad taste in my mouth for exactly that reason.
There's something fairly disorienting about the community switching so quickly from [quite aggressive] "yay longtermism!" (e.g. much hype around the launch of WWOTF) to essentially disowning the word longtermism, with very little mention / admission that this happened or why.
This is as much an argument that we made a mistake ever focusing on longtermism as that we shouldn't now shift away from it. Oliver Habryka (can't find the link offhand) and Kelsey Piper are two EAs who've publicly expressed discomfort with the level of artificial support WWOTF received, and, though I'm much less notable, I'm happy to add myself to the list of people uncomfortable with the business, especially since at the time he [MacAskill] was a trustee of the charity that was doing so much to promote his career.
One thought is that a GCR framing isn't the only alternative to longtermism. We could also talk about caring for future generations.
This has fewer of the problems you point out (e.g. it differentiates between recoverable global catastrophes and existential catastrophes). To me, it has warm, positive associations. And it's pluralistic, connected to indigenous worldviews and environmentalist rhetoric.
Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, bio, nukes, etc. based on the logic of longtermism, and more based on Global Catastrophic Risks (GCRs) directly… My guess is these changes are (almost entirely) driven by PR concerns about longtermism.
It seems worth flagging that whether these alternative approaches are better for PR (or outreach considered more broadly) seems very uncertain. I'm not aware of any empirical work directly assessing this, even though it seems a clearly empirically tractable question. Rethink Priorities has conducted some work in this vein (referenced by Will MacAskill here), but this work, and other private work we've completed, wasn't designed to address this question directly. I don't think the answer is very clear a priori. There are lots of competing considerations, and anecdotally, when we have tested things for different orgs, the results are often surprising. Things are even more complicated when you consider how different approaches might land with different groups, as you mention.
We are seeking funding to conduct work which would actually investigate this question (here), as well as to do broader work on EA/longtermist message testing, and broader work assessing public attitudes towards EA/longtermism (which I don't have linkable applications for).
I think this kind of research is also valuable even if one is very sceptical of optimising PR. Even if you don't want to maximise persuasiveness, it's still important to understand how different groups are understanding (or misunderstanding) your message.
One point that hasn't been mentioned: GCRs may be many, many orders of magnitude more likely than extinctions. For example, it's not hard to imagine a super deadly virus that kills 50% of the world's population, but a virus that manages to kill literally everyone, including people hiding out in bunkers, remote villages, and in Antarctica, doesn't make too much sense: if it were that lethal, it would probably burn out before reaching everyone.
The relevant comparison in this context is not with human extinction but with an existential catastrophe. A virus that killed everyone except humans in extremely remote locations might well destroy humanity's long-term potential. It is not plausible (at least not for the reasons provided) that "GCRs may be many, many orders of magnitude more likely than" existential catastrophes, on reasonable interpretations of "many, many".
(Separately, the catastrophe may involve a process that intelligently optimizes for human extinction, by either humans or non-human agents, so I also think that the claim as stated is false.)
A virus that killed everyone except humans in extremely remote locations might well destroy humanity's long-term potential
How?
I see it delaying things while the numbers recover, but it's not like humans will suddenly become unable to learn to read. Why would humanity not simply pick itself up and recover?
Two straightforward ways (more have been discussed in the relevant literature) are by making humanity more vulnerable to other threats and by pushing back humanity past the Great Filter (about whose location we should be pretty uncertain).
This is very vague. What other threats? It seems like a virus wiping out most of humanity would decrease the likelihood of other threats. It would put an end to climate change, reduce the motivation for nuclear attacks and ability to maintain a nuclear arsenal, reduce the likelihood of people developing AGI, etc.
Humanityâs chances of realizing its potential are substantially lower when there are only a few thousand humans around, because the species will remain vulnerable for a considerable time before it fully recovers. The relevant question is not whether the most severe current risks will be as serious in this scenario, because (1) other risks will then be much more pressing and (2) what matters is not the risk survivors of such a catastrophe face at any given time, but the cumulative risk to which the species is exposed until it bounces back.
The framing "PR concerns" makes it sound like all the people doing the actual work are (and will always be) longtermists, whereas the focus on GCR is just for the benefit of the broader public. This is not the case. For example, I work on technical AI safety, and I am not a longtermist. I expect there to be more people like me either already in the GCR community, or within the pool of potential contributors we want to attract. Hence, the reason to focus on GCR is building a broader coalition in a very tangible sense, not just some vague "PR".
Is your claim "Impartial altruists with ~no credence on longtermism would have more impact donating to AI/GCRs over animals / global health"?
To my mind, this is the crux, because:
If Yes, then I agree that it totally makes sense for non-longtermist EAs to donate to AI/GCRs
If No, then I'm confused why one wouldn't donate to animals / global health instead?
[I use "donate" rather than "work on" because donations aren't sensitive to individual circumstances, e.g. personal fit. I'm also assuming impartiality because this seems core to EA to me, but of course one could donate to / work on a topic for non-impartial / non-EA reasons]
Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone would, in particular, kill most of whatever group you're partial towards. (With the caveat that "no credence on longtermism" is underspecified, since we haven't said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)
FWIW, in the (rough) BOTECs we use for opportunity prioritization at Effective Institutions Project, this has been our conclusion as well. GCR prevention is tough to beat for cost-effectiveness even only considering impacts on a 10-year time horizon, provided you are comfortable making judgments based on expected value with wide uncertainty bands.
I think people have a cached intuition that "global health is most cost-effective on near-term timescales", but what's really happened is that "a well-respected charity evaluator that researches donation opportunities with highly developed evidence bases has selected global health as the most cost-effective cause with a highly developed evidence base." Remove the requirement for certainty about the floor of impact that your donation will have, and all of a sudden a lot of stuff looks competitive with bednets on expected-value terms.
(I should caveat that we haven't yet tried to incorporate animal welfare into our calculations and therefore have no comparison there.)
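To illustrate the kind of judgment involved, here is a minimal Monte Carlo BOTEC sketch. Every number in it is an invented placeholder (not a figure from Effective Institutions Project, GiveWell, or anyone else); the point is only the structure: a wide-uncertainty option can beat a narrow-uncertainty one on expected value while having a much worse floor.

```python
import random

random.seed(0)  # reproducibility

# Toy Monte Carlo BOTEC comparing two donation options on a ~10-year horizon.
# All parameter values below are invented for illustration only.

N = 100_000  # Monte Carlo samples

def bednets_lives_per_million():
    # Narrow uncertainty band: assume ~10-40 lives saved per $1M donated.
    return random.uniform(10, 40)

def gcr_prevention_lives_per_million():
    # Wide uncertainty band: assume a tiny chance that this $1M averts a
    # catastrophe killing a huge number of people, and ~zero effect otherwise.
    p_avert = random.uniform(0, 2e-7)          # assumed chance of tipping the scales
    lives_at_stake = random.uniform(1e8, 1e9)  # assumed deaths in the catastrophe
    return p_avert * lives_at_stake

ev_bednets = sum(bednets_lives_per_million() for _ in range(N)) / N
ev_gcr = sum(gcr_prevention_lives_per_million() for _ in range(N)) / N
print(f"bednets:        ~{ev_bednets:.0f} expected lives saved per $1M")
print(f"GCR prevention: ~{ev_gcr:.0f} expected lives saved per $1M")
# With these made-up inputs, GCR prevention wins on expectation (~55 vs ~25),
# but nearly all of its value sits in low-probability scenarios: its floor
# of guaranteed impact is ~0, which is exactly the trade-off described above.
```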
Speaking personally, I have also perceived a move away from longtermism, and as someone who finds longtermism very compelling, this has been disappointing to see. I agree it has substantive implications for what we prioritise.
Speaking more on behalf of GWWC, where I am a researcher: our motivation for changing our cause area from "creating a better future" to "reducing global catastrophic risks" really was not based on PR. As shared here:
We think of a "high-impact cause area" as a collection of causes that, for donors with a variety of values and starting assumptions ("worldviews"), provide the most promising philanthropic funding opportunities. Donors with different worldviews might choose to support the same cause area for different reasons. For example, some may donate to global catastrophic risk reduction because they believe this is the best way to reduce the risk of human extinction and thereby safeguard future generations, while others may do so because they believe the risk of catastrophes in the next few decades is sufficiently large and tractable that it is the best way to help people alive today.
Essentially, we're aiming to use the term "reducing global catastrophic risks" as a kind of superset that includes reducing existential risk, and that is inclusive of all the potential motivations. For example, when looking for recommendations in this area, we would be happy to include recommendations that only make sense from a longtermist perspective. A large part of the motivation for this was based on finding some of the arguments made in several of the posts you linked (including "EA and Longtermism: not a crux for saving the world") compelling.
Also, our decision to step down from managing the communications for the Longtermism Fund (now the "Emerging Challenges Fund") was based on wanting to be able to more independently evaluate Longview's grantmaking, rather than brand association.
From a "current generations" perspective, reducing GCRs is probably not more cost-effective than directly improving the welfare of people / animals alive today.
I think reducing GCRs seems pretty likely to wildly outcompete other traditional approaches[1] if we use a slightly broad notion of the current generation (e.g. currently existing people), due to the potential for a techno-utopian world which makes the lives of currently existing people >1,000x better (which heavily depends on diminishing returns and other considerations). E.g., immortality, making them wildly smarter, able to run many copies in parallel, experience insanely good experiences, etc. I don't think BOTECs will be a crux for this unless we start discounting things rather sharply.
If GCRs actually are more cost-effective under a "current generations" worldview, then I question why EAs would donate to global health / animal charities (since this is no longer a question of "worldview diversification", just raw cost-effectiveness).
IMO, the main axis of variation for EA-related cause prio is "how far down the crazy train do we go", not "person-affecting (current generations) vs otherwise" (though views like person-affecting ethics might be downstream of crazy train stops).
Mildly against the Longtermism --> GCR shift
Idk what I think about Longtermism --> GCR, but I do think that we shouldn't lose "the future might be totally insane" and "this might be the most important century in some longer view". And I could imagine a focus on GCR killing a broader view of history.
That said, if we literally just care about experiences which are somewhat continuous with current experiences, it's plausible that speeding up AI outcompetes reducing GCRs/AI risk. And it's plausible that there are more crazy-sounding interventions which look even better (e.g. extremely low-cost cryonics). Minimally, the overall situation gets dominated by "have people survive until techno utopia and ensure that techno utopia happens". And the relative tradeoffs between having people survive until techno utopia and ensuring that techno utopia happens seem unclear and will depend on some more complicated moral view. Minimally, animal suffering looks relatively worse to focus on.
This sounds like an accusation, when it could so easily have been a compliment. The net effect of comments like this is fewer posts and fewer quick takes.
I actually meant it as a compliment; thanks for pointing out that it can be received differently. I liked this "quick take" and believe it would have been a high-quality post.
I was not aware that my comment might reduce the number of quick takes and posts, but I feel deleting my comment now just because of the downvotes would also be weird. So, if anyone reads this and felt discouraged by the above, I hope you post your things somewhere rather than not at all.
I'd be keen for great people to apply to the Deputy Director role ($180-210k/y, remote) at the Mirror Biology Dialogues Fund. I spoke a bit about mirror bacteria on the 80k podcast; James Smith also had a recent episode on it. I generally think this is among the most important roles in the biosecurity space. I've been working with the MBDF team for a while now and am impressed by what they're getting done.
People might be surprised to hear that I put a ballpark 1% p(doom) on mirror bacteria alone at the start of 2024. That risk has been cut substantially by the scientific consensus that has formed against building them since then, but there is some remaining risk that the boundaries are not drawn far enough from the brink to keep bad actors from accessing them. Having a great person in this role would help ensure a wider safety margin.
What are the safest (i.e., most backfire-proof)[1] consensual EAA interventions? (overlaps with #3.c and may require #6.)
How should we compare their cost-effectiveness to that of interventions that require something like spotlighting or bracketing (or more thereof) to be considered positive?[2] (may require A.)
Robust ways to reduce wild animal suffering
New/underrated arguments regarding whether reducing some wild animal populations is good for wild animals (a brief overview of the academic debate so far here).
Consensual ways of affecting the size of some wild animal populations (contingent planning that might become relevant depending on results from the above kind of research).
How do these and the safest consensual EAA interventions (see 1) interact?
Evaluating the backfire risks of different welfare reforms for farmed insects, shrimp, fish, or chickens (see DiGiovanni 2025).
Other things related to deep uncertainty in animal welfare (see DiGiovanni 2025 and Graham 2025 for context).
Red-teaming the cost-effectiveness analyses made by key actors on different animal welfare interventions (especially those relevant to anything listed above).
More fundamental philosophical or psychological stuff relevant to cause prio:
A) Under cluelessness, what forms of bracketing[3] (or different solutions) make most sense to guide our actions?
B) New/underrated arguments for being particularly worried about the suffering of sentient beings (rather than about pleasure or other things).
C) What explains the fact that some EA animal advocates buy suffering-focused ethics and others don't? What are the cruxes? What persuaded them? Are there social backgrounds that determine someone's degree of (non-)sympathy for suffering-focused ethics?
D) How to avoid reducing the credibility of any of the (fairly niche) kinds of work in these two lists?
How do we anticipate very understandable reactions like this one when talking about nematodes and/or indirect effects on wild animals? (E.g., how do we make clear what this work implies and does not imply?)
Yup, something a variety of views can get behind. E.g., not "buying beef".
For "consensual EAA interventions" above, I think I was thinking more "not something EAs see as ineffective, like welfare reforms for circus animals". If this turned out to be the safest animal intervention, I suspect this wouldn't convince many EAs to consider it. But if, say, developing alternatives to rodents as snake food turned out to be very safe, this could weigh a lot in its favor for them.
I've donated about $150,000 over the past couple of years. Here are some of the many (what I believe to be) mistakes in my past giving:
Donating to multiple cause areas. When I first started getting into philosophy more seriously, I adopted a vegan lifestyle and started identifying as EA within only a few weeks of each other. Deciding on my donation allocations across cause areas was painful, as I assign positive moral weights to both humans and animals, and they might even be close in intrinsic value. I felt the urge to apologize to my vegan, non-AI-worrier friends for increasing my ratio of AI safety donations to animal welfare donations, and my non-vegan, non-EA friends and family thought that donating to animals over humans was crazy. Now my view is something like: donations to AI safety are probably orders of magnitude more effective than to animal welfare or global health + development, so I should (and do) allocate 100% to AI safety.
Donating to multiple opportunities within the same cause area. Back in my early EA global health + development days, I found and still find the narrative of "some organizations are 100x more effective than others" pretty compelling, but I internally categorized orgs into two buckets: high EV and low EV. I viewed GiveWell-recommended organizations as broadly "high EV," assuming that even if their point estimates differed, their credence intervals overlapped sufficiently to render the choice between them negligible. This might even be true! However, I do not believe this generalizes to animal welfare and AI safety. Now I've come full circle in a way, and believe that actually, some things are multiple times (or even orders of magnitude) higher EV than other things, and have chosen to shut up and multiply. If you are a smaller donor, it is unlikely that your donation will sufficiently saturate a donation opportunity such that your nth dollar should go elsewhere.
Donating to opportunities that major organizations recommend/fund publicly. Major organizations may face constraints that individual donors do not. Non-profits are limited in the political activity they can engage in. Large funders may face reputational constraints that make certain grantees a poor fit. For instance, CG has noted that right-of-center AI policy groups may not be a good match for their main funder despite potentially doing valuable work, and certain cause areas may be too weird for certain funders.
Donating at the end of the year. Major evaluators often post their public recommendations at the end of November because philanthropic activity spikes in December due to the holidays and the end of the tax year. The best donation opportunities do not only appear in December! If I'm donating to 501(c)(3)s and trying to optimize taxes, I use a DAF so that I can donate in a month other than December. But also, the tax status of an organization is not a proxy for impact. For example, in the US, donating to 501(c)(3)s and 501(c)(4)s may provide tax benefits. Assuming you would donate the funds saved on taxes, it still may be higher EV to donate to non-501(c)(3)/501(c)(4) opportunities and just take the tax hit. Additionally, time discounting may be steep enough that you should make sacrifices (tax or otherwise) to donate now rather than later.
I used to donate mid-year for the reasons you gave. The last couple of years I donated at the end of the year because the EA Forum was running a donation election in early December, and I wanted to publish my "where I'm donating" post shortly before the donation election, and I don't want to donate until after I've published the post. But perhaps syncing with the donation election is less important and I should publish and donate mid-year instead?
After hanging out with the local Moral Ambition group (sadly there's only one in Malmö), I've found a shorthand to express the difference in methodology compared to EA. Both movements aim to find people who already have the "A," and cultivate the other component in them.
Many effective altruism communities target people who already wish to help the world (Altruism), then guide and encourage them to reach further (be more Effective).
Moral Ambition, meanwhile, targets high-achieving professionals and Ivy Leaguers (Ambition), then reminds them that the world is burning and they should help put out the fire (be more Moral).
The idea seems to be to promote a message to not give up animal products, but rather donate to organisations that effectively campaign to improve farm animal welfare (including EA favourites like The Humane League, Fish Welfare Initiative and the Shrimp Welfare Project).
Promoting donating to such organisations seems all well and good, but it puts out very negative messages about being a vegan (which apparently means you will have "annoyed friends and family" and "got bloating from plant protein", etc.). This has got a lot of negative attention from vegan groups that I've seen. The website seems a bit ridiculous in places, e.g. its "expert" views are just those of some eating champions. [Edit: OK, that last bit was the authors being tongue-in-cheek.]
Interestingly the person who seems to be doing the PR, Toni Vernelli, used to do the PR for Veganuary, and wrote on the forum defending it less than a year ago: link. It's unclear if they actually changed their mind or have some other motivation to change their stance.
Anyway, it seems like quite a controversial initiative, unnecessarily negative about veganism and quite poorly put together [edit: OK, that last part was unfair; more effort had gone into it than I'd initially realised]. As a donor to the EA Animal Welfare Fund, it's not something I'd expect to be paying towards myself [edit: following discussion, I'll withhold judgement from here until we see how it all plays out].
As someone who worked with Thom to raise almost £2000 for FarmKind charities for my birthday fundraiser a couple of months ago, I want to say publicly that I am disappointed by FarmKind's communication with the animal movement over this campaign.
As far as I can tell, FarmKind misled the movement by initially saying (or at least heavily implying) that they cooperated with Veganuary on this campaign; and they haven't acknowledged the statement by the CEO of Veganuary which makes clear that FarmKind did not in fact cooperate with Veganuary on this campaign.
Misleading the movement, and then not acknowledging doing so, violates important movement norms relating to transparency and accountability.
I continue to be open to evidence that FarmKind did not mislead the movement, and if this evidence is presented I will retract this criticism and apologise on this comment.
The criticism in this comment is separate from public-facing questions about FarmKind's campaign, such as "is this campaign likely to harm societal perceptions of veganism, and even if so, how would that trade off against the opportunity to bring more money into the animal movement?".
Evidence
Thom and Aidan, co-founders of FarmKind, initially said this about FarmKind's cooperation with Veganuary:
Thom referred to "cooperation" and wrote "we [FarmKind and Veganuary] are all on good terms and there is absolutely no infighting" here
Aidan characterised the ostensible conflict between FarmKind and Veganuary as reported in the initial press articles as "part of the bit" here and "kayfabe" (fake wrestling) here
But that seems misleading:
Wendy Matthews, CEO of Veganuary, wrote that Veganuary "was not involved in developing the 'Forget Veganuary' campaign and had no role in shaping or approving its messaging or execution" here
Jane Land, co-founder of Veganuary, said that FarmKind's campaign led to "a morale dent" in Veganuary's team here, going on to say (whilst wiping away tears):
[the Veganuary team are] working way over time during this period; and then for another organisation to come in and undermine, try to discredit arguably, the work you're doing – that is hard to take I think, when you feel like you're doing so much...
...the risk in this case has very much not been symmetrical, in that Veganuary has the more established reputation and visibility. So I feel like we did have more to lose during this...
Thom himself said at 34:50 in this YouTube interview:
I hope that we haven't implied that Veganuary was sort of in on it or something like this because I wouldn't want to do that. That's not the case.
Thom and Aidan have also come close to deciding, on Veganuary's behalf, what is in Veganuary's interests. Given the absence of cooperation between FarmKind and Veganuary, I think this is inappropriate and disrespectful to Veganuary:
Our campaign provides them with another opportunity to put forward the benefits of diet change [Thom here]
Tapping into the pre-existing anti-Veganuary media narrative is a feature, not a bug, because this is why [the press are] running stories about effective giving for farmed animals (which they would never touch otherwise) and giving Veganuary free media coverage [Thom here]
[this campaign is] generating coverage for Veganuary who have a harder time getting in the media each year without a new hook [Aidan here]
Why this matters
I've seen several people (like Lewis Bollard here, and several others in this thread) question FarmKind or wonder about their cooperation with Veganuary, and I've seen some people (like Aidan Kankoyku here) say that there was cooperation, based on what Thom and Aidan initially said. The former deserve an answer, and the latter ought to be told what actually happened - by FarmKind.
I am increasingly worried that FarmKind's campaign, especially their handling of their relationship with Veganuary, is harming the vegan movement's perception of effective altruism and EA animal advocacy (forgive the crude labels). You can see this in this LinkedIn comment by Alistair Currie of the Vegan Society. If you think that this is happening, that it matters, and that some of the vegan movement's concerns are reasonable, more EAs ought to challenge FarmKind publicly.
Next steps
Chris Bryant of Bryant Research apologised to Jane (and by extension Veganuary) here because his initial response to FarmKind's campaign had focused on its public-facing impact (e.g. donations) in a positive way, and he had overlooked the campaign's impact on the people at Veganuary and, by extension, the animal movement. I commend Chris for this response, which I think demonstrates personal integrity and promotes good movement norms. I would strongly encourage FarmKind to do something similar, and would commend them for it.
I am keen to discuss questions about the roles that veganism and offsetting do and ought to play in our movement; I am struggling to do so, and I think many others are, until FarmKind acknowledge their misleading communications and apologise.
Would it not make more sense to do a campaign encouraging the vegan community to donate (and donate more effectively)? The vegan community, rather than meat eaters, seems well primed to want to use their money to help animals. So it seems like much lower-hanging fruit to run a campaign for this purpose than to hold an anti-vegan campaign to get meat eaters to donate to help animals. I also somehow feel anti-vegan meat eaters would simply resonate with the anti-vegan sentiment of the Forget Veganuary campaign rather than actually end up donating (though this is just a hunch). It might also give them "license" to eat more meat, as they can now simply "offset" their consumption, but that sounds a lot like a "start a fire and donate to the fire brigade" kind of situation.
I could imagine that at this point this is quite a rough place for FarmKind to be in and to navigate going forward. One potential way forward might be:
Apologise to the Veganuary founders, CEO, and team for the impact on their brand, decades of work, and current campaign, and for adding to their stresses on the dawn of Veganuary 2026 - acknowledging that the campaign may have hurt many within the team at a personal level, and that undermining another org in the movement and their campaign was, in hindsight, unethical;
Really own and extend that apology to any offence and upset caused within the wider movement;
Show real remorse by taking down the campaign asap;
Make amends by helping correct damage to the Veganuary brand and message by putting a good story to the press of how you called this wrong, and that there's value to the vegan diet as well as donating - good enough that the press covers it. Then do a future fundraiser specifically for Veganuary, or commit a proportion of your future fundraising to them.
None of that's easy, especially when under duress, but it could well be the right thing for all parties long term, and could regain some goodwill from large parts of the movement.
Given the pitfalls of mass communication, I am worried that the "forget Veganuary" piece of this will be a bigger takeaway for most people than "donate to help farmed animals".
Thank you to everyone on the EA Forum who has shared their thoughts and reflections so far.
We would like to clarify that Veganuary was not involved in developing the "Forget Veganuary" campaign and had no role in shaping or approving its messaging or execution. While we were given advance notice that FarmKind was planning a campaign promoting offsetting as an alternative to trying vegan in January and were kept informed about media timing, we did not have sight of the website content until after it was launched, nor of the final PR framing. We share some of the concerns raised in this discussion about the potential risks associated with this approach.
Our organization supports both dietary change and effective philanthropy. We see the value in open discussion about the most effective ways to reduce harm to animals. We recognize that everyone involved shares a desire to end factory farming, even where we disagree on strategy/tactics.
As this discussion has arisen during our busiest period of the year, our focus is now on executing an impactful Veganuary 2026 and delivering a norm-changing campaign that reduces demand for animal products at scale and incentivizes corporations to shift the food environment toward higher availability and visibility of cruelty-free options. We may not be able to engage deeply in real time now, but we're open to further discussion and evaluation post-January.
I'm not wild about this campaign either. I've shared this feedback privately with Aidan and Thom, but think there's value to doing so publicly, to make clear that EA / the animal movement's moderate wing / FarmKind's funders don't uniformly endorse this approach. (To be clear: I'm writing in my personal capacity and haven't discussed the following with anyone else at Coefficient Giving.)
I'm a huge fan of FarmKind's team. I've personally donated to them and directed funding to them via Coefficient Giving. I thought they did an incredible job during the Dwarkesh fundraiser earlier this year, and I admire their ingenuity and grit in pursuing the very hard challenge of bringing in counterfactually new funds to effective animal advocacy. I appreciate that they meant well with this campaign, which I think they saw as using a playful fake feud with Veganuary to generate media.
But I think this campaign was a mistake, for three reasons:
This feels like an incitement to infighting, which has long plagued the animal movement. In recent years, I've seen the abolitionist / more radical wing of the animal movement take major good-faith steps to reduce this infighting (see, e.g., my session with Wayne Hsiung at this year's AVA). Whether Veganuary was in on this or not, I'm seeing vegan activists reasonably interpreting this as an attack on their advocacy. I think we should have a very high bar for deliberately starting a fight in the movement, and I don't think this meets it.
This feels like an attack on vegans. I think we should also have a very high bar for attacking well-meaning people doing good in the world, whether vegans, EAs, organ donors, aid workers, or longtermists. I appreciate that attacking vegans wasn't the campaign's intent, but I think it was the predictable result, and certainly how the folks in the Daily Mail's comments sections have (gleefully) interpreted it.
This feels dishonest. To be clear: I don't think FarmKind intended it this way, and I think the people behind it are deeply ethical people. But I think our movement is at its best when we hold ourselves to high standards, and that includes not deliberately misleading people. And creating a fake "meat-eating campaign" feels like it crosses the line for me.
Again, this isn't to question the intent or abilities of FarmKind's team. Instead, I'm sharing how I personally feel about this campaign. I hope we can avoid campaigns like this in future, while continuing to pursue the innovation in tactics that the animal movement and EA need.
As @NickLaing has pointed out, I think how people perceive the campaign or interpret its message is a lot more important than what the intentions are behind it. We can try and spin it however we like, but this is a straightforwardly anti-vegan campaign, maybe not in intent but in actuality. It is absolutely horrible in its attitude towards vegans, even though vegans are probably more likely to donate money to animals than any other group. Here are just a few choice snippets from the site:
1. Someone trying to go vegan had to plan every meal, give up her favourite foods, annoy friends and family, and get bloated. For all that, she helped far fewer animals than someone with an overflowing platter of meat.
2. "Can you survive Veganuary?", implying that it's some terrible trial that someone needs to endure.
3. "Every day can be hard when you're vegan". Hardly selling veganism.
4. Listing celebrities who couldn't "make veganism stick," including references to them feeling weak and struggling with ill health.
Honestly, you'd have a hard time finding a carnivore influencer who more passionately bashes veganism. People should donate money. They should also go vegan. If they can't do both, they should at least do one. But if they can do both, they should do both. That isn't implied anywhere; on the contrary, veganism is portrayed as a waste of time and vegans as weak, misguided, joyless fools.
Thom from FarmKind here. We at FarmKind wanted to provide a bit of context and explanation for the choices we've made around this campaign.
Context
Cooperation: We let Veganuary know about our intention to launch this campaign at the very start of our planning process and have kept them informed throughout. Our campaign provides them with another opportunity to put forward the benefits of diet change. We are all on good terms and there is absolutely no infighting.
Origin: At this time of year, due to the annual Veganuary campaign, many people and the UK press debate the pros and cons of diet change, often with very entrenched views on both sides. This creates a unique opportunity to get people who are currently unwilling to change their diet to consider donating as an alternative entry-point into helping farmed animals - something that is extremely hard to get media attention for most of the time.
Goal: The goal of this campaign is to get the question of "should you do Veganuary" more media attention, and shift the focus from "is eating animals bad" to the question of which solution(s) to factory farming an individual will choose to participate in. In other words, we want the debate to be about whether to choose diet change or donating, rather than whether factory farming is a problem worth dealing with or not.
Our funders: FarmKind made the decision to launch this campaign. Organisations and individuals that have provided FarmKind with funding are not endorsing the campaign and it would be a mistake to equate past funding of FarmKind with support for our approach.
Campaign
The campaign encourages people to offset their meat this January by donating to help fix factory farming. As part of this, we hired three top competitive eaters to talk about donating to offset the animal welfare impact of their diet as they undertake one of their typical eating challenges.
By working with individuals who eat meat (but who would be undertaking these meat-eating challenges anyway), we can help reduce suspicion among entrenched meat eaters that our true motive is to make them vegan. It allows us to be authentic in our message that being unwilling to change your diet doesn't mean you can't start helping animals.
Our campaign aims to show that those who are unwilling to change their diet today can and should still begin their lifelong journey of helping animals by donating to charities working to change the food system.
Concerns
We know that some may have concerns about this approach and feel uncomfortable with the idea of paying competitive eaters who are eating meat, even in an effort to help farmed animals. However, to make change we have to start from where people are now. For most people, that starting point is eating and enjoying meat and being unwilling to change their diet.
Some media coverage has suggested that our campaign aims to encourage people to eat meat or that we are running a "meat-eating campaign". This is untrue, and we have corrected them. Tapping into the pre-existing anti-Veganuary media narrative is a feature, not a bug, because this is why they're running stories about effective giving for farmed animals (which they would never touch otherwise) and giving Veganuary free media coverage.
As part of our commitment to being as transparent and effective as we can, we're happy to answer specific questions anyone has about the campaign, but as this campaign is ongoing, we may have to answer some questions in the future or privately via email.
Thanks Thom for responding. I wasn't actually aware of who FarmKind were when I wrote my post above. It looks like a very good project overall; thanks for your work in the space.
Your response doesn't answer for me the question of why it was decided to create such an anti-vegan campaign (at least on its webpage). I can see there could be a lot of good done by persuading people who are unlikely to try a vegan diet to donate. But something along the lines of "If you don't want to be vegan but want to help animals, try this instead" or even "If you hate Veganuary, here's how to beat vegans at their own goals" would seem to suffice (but with better words...). Creating a webpage full of negative messages about being vegan doesn't seem necessary, and actually seems to me to be misinformation, given I'm not aware of anything showing that the typical Veganuary participant's experience is like what is presented.
Having read the article in the Telegraph, I didn't think it was actually that bad - it seemed to be mainly arguing for promoting donations rather than diet change, and didn't actually seem to put veganism down (except for bringing up "vegan dogma"). (Though I wouldn't agree that putting on a meat-eating challenge is ethically OK.) So being negative about veganism doesn't seem to have been necessary to get publicity, which makes it seem even stranger that the campaign web page takes this line.
It doesn't seem to have been picked up by any substantial media outlet other than the right-wing UK press - I'd have thought it would be desirable to get a broader reach, since I'd guess that people on the political left would be more likely to donate, and I wonder if being less adversarial might have worked better.
It would be good to see follow-up analysis of what impact the campaign actually has on donations.
Cooperation: We let Veganuary know about our intention to launch this campaign at the very start of our planning process and have kept them informed throughout.
Aidan says here that it is a "bit". That would seem to imply that Veganuary are collaborating with you on this. Can you say if that's accurate? If there's a follow-up, it would seem good to highlight it to people here.
Our funders: FarmKind made the decision to launch this campaign. Organisations and individuals that have provided FarmKind with funding are not endorsing the campaign and it would be a mistake to equate past funding of FarmKind with support for our approach.
One of the things that people are going to do with a campaign like this is try to see who is funding it. Currently, if you click the "Transparency" link at the bottom of the campaign page, it goes to a list of FarmKind's funders, including the EA Animal Welfare Fund. It's then going to at least raise the possibility in people's minds that these funders implicitly endorse the campaign. Unless you've switched to self-funding, it does seem like these funders' money is being used to finance it (including money from individual donors to the EA AWF). Would it not be normal to check with funders before launching a campaign that's expected to be controversial? Particularly if their own donors might feel attacked by the campaign? It seems like it creates a fair amount of potential for blowback against the EA animal welfare movement.
If there is some complex strategy involving coordination with Veganuary or others, I'd hope it was discussed with a diverse range of experienced people in the animal welfare space and got their endorsement.
I would also say that the campaign web page loses credibility by calling competitive eaters "experts" (I've seen this in comments in non-EA spaces) - why would anyone go to such people for expertise on how best to help farm animals through donating? To me, relevant "experts" would be people knowledgeable about welfare campaigns and ethics.
I think there should also be considerably more nuance around the idea of offsetting the impacts of meat-eating - calling it "like carbon offsetting" seems misleading, as the two seem different in a number of significant ways, which may affect what people decide to do.
Thank you so much for your response, Thom. Would you be able to clarify whether the meat-eating challenge "in which three competitive eaters will consume nothing but animal products for a whole day", as reported in the Telegraph and Daily Mail, was a misrepresentation by these outlets, or whether this was originally part of the campaign and FarmKind then changed course in response to the backlash? The articles still have the same headlines, and no corrections have been made with regard to the meat-eating competition in either article, as far as I can tell.
Cooperation: We let Veganuary know about our intention to launch this campaign at the very start of our planning process and have kept them informed throughout. Our campaign provides them with another opportunity to put forward the benefits of diet change. We are all on good terms and there is absolutely no infighting.
I think it would be more useful to clarify whether Veganuary supported you doing this campaign. If the answer is yes, that seems great! If the answer is no, this seems explicitly not cooperative, and in that case it would be misleading to frame this as a cooperative effort (independent of whether this was good or bad to do). I don't think folks were looking for whether or not Veganuary was informed, but for whether they endorsed the idea, did not endorse it, or anti-endorsed it.
I think this just seems like a clarification worth making here, given how negative the reaction has been to the campaign (from within the movement - hopefully it had a positive reaction externally!)
Thanks for this reply - I agree with most of what you have written here.
I think, though, you've missed some of the biggest problems with this campaign.
1. This seems to undermine vegans and vegetarians (see image above), and their efforts to help animals. It seems straightforwardly fair to interpret this as anti-Veganuary and anti-vegan, especially at a glance.
2. What matters in media is how you are portrayed, not what the truth is. Your initial campaign poster is ambiguous enough that it's easy to interpret as a pro-meat-eating and anti-vegan campaign. I could have interpreted it that way myself; I don't think the media were grossly wrong here to report that.
The Telegraph article is actually pretty good overall and makes points that could be good for animal welfare, although the "clickbaity" title and first paragraph are unfortunate (see above).
Media lasts for a day; correcting it is the right thing to do but doesn't have much of an impact.
I can see what you are trying to do here, and it's quite clever. I love most of your stuff, but this campaign seems like a mistake to me.
After having a quick look at this campaign, it pretty straightforwardly seems misguided and confusing. FarmKind's efforts to appeal to regular people to donate rather than go vegan seem good and make sense. This adversarial campaign looks and feels awful. Two reasons immediately jumped out as to why it feels off:
it undermines and even goads vegans and vegetarians doing their bit for animals
glorifying people who eat lots of meat feels bad in a guttural, almost "Kantian" kind of way, regardless of the utilitarian calculation.
In general I think complex utilitarian arguments struggle to be communicated well in pithy campaigns.
I'm surprised the FarmKind people have made what seems like a pretty straightforward mistake like this; I've been super impressed by all the other material they have put out.
I think I understand the worries and discomfort people feel about this approach. But I'm not sure how fruitful it is for all of us to have a vibes-based conversation about the possible merits of this campaign. It already exists. It might end up being good, it might end up being bad. We can make it better. If you think some of the risks taken and assumptions made by FarmKind are unaddressed, let's talk about how we can mitigate those. Let's also figure out how we can support FarmKind in doing what they intend to do for animals. And most importantly, let's make sure we learn from this campaign.
How can we learn from this experiment?
Trying new approaches in this complex and relatively new space is great if you thoughtfully measure if it works or not. Measurement and evaluation are especially important because there are backfire risks and because this is a deeply underfunded cause area, so we cannot afford to be careless.
It can be easy to falsely attribute successes and failures. So, what are some indicators that this might demand pivoting/repeating? I'd love to hear from FarmKind, The Mission Motor, behavioral scientists, and the ACE researchers who worked on the Better for Animals resource what they think would give us valuable insights.
What is the bar for money raised that would make this worth it? What is the cost of FarmKind's Veganuary campaign, what else could have been done with those funds, and how much money is raised through their platform specifically in response to this campaign?
Can we assess spillover effects?
Are there some PhD students out here who are willing to work with FarmKind to figure out some RCTs to learn some stuff? E.g. how long do people donate, do they change their diet, what do they think of factory farming, what were their priors, etc.
How can we mitigate possible harms?
Risk: discussion remains focused on individual diet change, not ending factory farming
Can FarmKind, now that they have the attention, redirect their messaging and no longer talk about diets but instead about the horrors of factory farming?
Can both vegans/Veganuary and FarmKind state that what they care about is a more hospitable world for all and that industrial agriculture is the enemy?
Risk: moral circle expansion is slowed
Can Toni, FK, and participants come out saying something like this: "Don't get us wrong, we are all actually bleeding hearts, we do care about animals, we don't think eating animals the way society does now is necessary, natural, or normal, but we are just being pragmatic. We think being vegan is good, but preaching veganism is not."?
Can they direct some of the funds they raise to high-impact interventions, such as education programs aimed at fostering compassion and empathy for animals, anti-speciesist policy work, actions promoting moral consideration of animals in public discourse, etc.?
Risk: time is wasted on infighting
Can both Veganuary and FarmKind state that what they agree on and care about is a more hospitable world for all and that industrial agriculture is the enemy?
Can Toni flip-flop some more, and in February say, "You know what, I was wrong. It's not either/or; it should be both, or can be a little bit of each."?
Can FarmKind share the metrics and results of their campaign and show up in vegan spaces like r/vegan to explain their approach and solicit feedback?
I think AVA is planning to host a discussion about this at their Summit in Canada in May.
Risk: fewer people reduce their animal consumption or do it later
Meat producers can use this in their propaganda; can we use AI to find the conversations about this that misrepresent the arguments and counteract them?
Can Toni and other former vegans come out and say something like, "Actually, after having hung out with all of these meat eaters and learning more about where their food comes from and having seen what it does to their bodies, I think it's actually kinda gross/disgusting/unsympathetic. I'm happy they donate, but for their sake, I hope they eventually put their mouths where their money is."
What would happen if Veganuary went on offense with aggressive angles like:
"We applaud that FarmKind offers all the weak-willed meat-addicts out there a compassion cheat code against animal cruelty. We do hope that the people who listen to Toni and FarmKind's advice 1) also talk with their doctors and nutritionists and 2) learn about the hidden truths about factory farming."
"We agree that there are multiple roads that lead to Rome, and the super-highway is one where we both do no harm and reduce harm as much as possible. So, we actually already recommend that people who participate in Veganuary also donate to high-impact pro-animal charities. Yes, we are even more holier-than-thou than you thought. We hope vegans put their money where their mouths are. And we hope that offsetters eventually put their mouths where their money is, for animals' sake and their own."
"How do you know someone is a meat eater? They will tell you. (And they're more likely to need GLP-1.)"
"If you're not one of these privileged people who can buy humanely raised meat and donate money, remember that beans are healthy, cheap, and cruelty-free."
(I don't particularly endorse any of these messages, but I could see people pulling up a chair and a popcorn bucket to watch this while being exposed to different arguments based on the same premise: that farming cruelty is bad.)
How can we increase the likelihood of success?
Opportunities to increase donation conversion
Is there a possibility for a follow-up press release by FarmKind or a pitch with testimonials of carnists who have made donations?
What would happen if FarmKind dared vegans and Veganuary supporters to donate? Can they do a donation contest with Veganuary? ACE can probably set up a fundraising page for vegans if Veganuary doesn't want to do it on the FarmKind site. (Happy to credit FarmKind for those donations, but I'd like them to go where they are likely to do the most good.)
Can Toni share where she donates to?
Can we leverage the comment sections to encourage people to share where they donate and include donate links?
Opportunities to increase awareness
Can Toni talk about how Veganuary doesn't talk about animals enough and talks too much about health and climate, and how the big problem is factory farming?
Can FarmKind include and promote people in their pitches who also started reducing their meat intake after learning more about factory farming?
Can FarmKind or Toni talk about small-bodied animals and their Shrimpact work? What if Toni says, "Sure, maybe it's okay if some of these people want to eat some red meat and offset it with donations, as long as they don't start eating chicken or salmon, or eggs."?
There are probably more, and more productive, ways to help FarmKind and Veganuary and the whole EAA movement in this endeavor. Please share your ideas. Also, what will you do this January: donate, go vegan, or both?
Three final thoughts that I didn't really know where to put:
If we think AI can soonish solve some of the big alt-protein questions (taste, scaling, price, etc.), then we will still need people to stop thinking they need animal products. If we think public discussions will affect alignment, then we need pro-animal messaging to be out there. I'm wondering if this means that hard-to-measure interventions toward increased prevalence of anti-speciesist values might have become more important than I thought they were. On the other hand, if we think AI will solve factory farming, maybe in the meantime we need to focus as much of our time as possible on increasing the welfare of animals who are farmed until then, and that's more likely done through welfare campaigns than promoting veganism. Either way, we should probably be careful in how we talk about vegans and bring animals up more often, even in meat reduction work. However, I'm very uncertain about all of this and curious what you think.
What could this offsetting approach to donating mean for effective giving? Is there a way to leverage this work to get people to make GWWC pledges, or to get offsetters to think about how they use their donations in general? FarmKind wasn't successful in becoming the Giving Multiplier for animals and pivoted to offsetting, but maybe they can still direct offsetters to the Giving Multiplier?
I work full-time in animal advocacy. I don't think that gives me an excuse to eat animals. I am vegan. I don't think that absolves me from donating to effective charities to reduce as much harm as possible. It's a privilege that I can do all three of these things. In this world, few people can. It seems good to encourage people to do everything they can, while also understanding that that might be limited. So, let's help people help more animals as best they can. We need to understand better what works, and work together to make that happen.
Edit: This is my personal take and not Animal Charity Evaluators' opinion.
Just a quick word from me, Nicoll from The Mission Motor (TMM supports Monitoring, Evaluation, and Learning in the animal movement). Thanks Stien, for your balanced and clear thoughts and for asking for our take.
Based on what I read, I would consider this to be a novel and higher-risk intervention. Many of the more common interventions in the animal space could do with more robust data gathering, but a higher-risk/novel intervention would warrant an even stronger focus on MEL.
Common data-gathering instruments, such as surveys, interviews, focus groups, etc. (when asking the right questions), can work well here to gather relevant data. And, saying this with a bit of caution, I don't think more elaborate MEL tools are needed.
Some of the challenges we foresee are reaching particular groups you might want data on (e.g. people who read the campaign materials and don't actively engage, but could change their attitude or behaviour) and saying something sensible about the overall effect of the campaign - particularly as it likely impacts another campaign (Veganuary), and because of the difficulty of comparing increased animal welfare through donations against fewer animals living net-negative lives in factory farms as a result of reduced animal consumption.
I think it is possible to overcome these and other challenges, but this might come at too high a cost to still be a responsible use of resources.
To be able to properly comment on credible indicators, I'd love to know the specific Theory of Change, so I won't go into that now.
I totally assume FarmKind has done some MEL work already, but if we can be of assistance, we'd be happy to help!
Really well written, and an incredibly good breakdown of some of the strategic factors here that I wouldn't have come up with myself reading the above.
But I also think you may have partially missed the mark here. Statements like:
Trying new approaches in this complex and relatively new space is great if you thoughtfully measure if it works or not.
are utilitarian in flavor and really the whole of the comment is. What if you think this sort of thing is just promoting bad norms that just sort of feel deontologically wrong?
One way I can see that is as violating a norm of kindness to others. Vegans sacrifice a lot, and to have someone from within the movement highlighting the negatives isn't great vibes. "But they're not talking about current vegans, just those potentially thinking about change." Okay, great: try telling a Christian that they should stop recruiting because Christians "annoy friends and family" leading a lifestyle that's a significant burden to everyone, themselves included. I doubt they'll be enthused. To state what I mean here more clearly rather than leaving it to be inferred: casting something that's a big part of someone's life in a negative light generally doesn't make their day better.
But they protest: "No no, you've got us wrong. We really are pro-vegan, we just think this is a more effective way to get eyes on the issue and increase exposure to AW topics." Now I think this is potentially violating some norm of trust or honesty. Maybe if the person comes to care about AW they wouldn't really care in the end, but I know that if I decided to start donating rather than trying for diet change again, only to discover that this was all some ploy to drum up further controversy and reach, I'd feel played and more than a bit disillusioned.
If I put on my utilitarian cap, everything you say above seems right. If I put on my deontologist cap, this campaign just doesn't seem quite right. The utilitarian in me feels compelled to say "but I also don't know what it's like to work in comms around AW, and maybe attention really is just some significant bottleneck standing between us and further animal lives saved". The deontologist then responds "yeah, maybe. But is this the type of thing you'd see in a healthy community of animal advocates?" [1]
I realize that you're not endorsing the strategy and are just analyzing it; part of this speaks to the analysis, but part of it is also aimed at those executing it as well.
Love this comment so, so much! My only minor disagreement is that I think the forum here isn't a bad place to have a bit of a "vibes-based" conversation about a campaign like this. Then we can move into great analysis like yours right here.
But I'm not sure how fruitful it is for all of us to have a vibes-based conversation about the possible merits of this campaign.
I think promoting good norms and making them more "common knowledge" is one of the few ways that EA Forum conversations can maybe be useful.
As in, I think it's good that "everyone knows that everyone knows" that we should have a strong bias toward being collaborative with other projects with similar goals, and these threads can help a bit with that.
(To be clear, my sense is that FarmKind is already well aware of this and that this is a collaborative campaign, especially after reading their comment. I mean this for the EA Forum reader community as a whole.)
Contrarian marketing like this seems like it would only work well if the thing being opposed was extremely well known, which I don't think Veganuary is.
This might be a bit pedantic, but I would note that Veganuary is more popular in the UK. If we adjust the Google trends search to be UK-only, it looks more comparable.
Of course, I suspect Movember is more US-based, so this is now maybe too biased towards Veganuary, and even so, Movember still outpaces Veganuary, but it does look more competitive.
(I don't know if Black History Month is a fair comparable, especially considering it's part of the US education system in a way the other two aren't.)
Again, I don't think this changes your larger point all that much, but figured additional context helps.
This feels like a very negative take on a lighthearted campaign that is trying to get across an important point. It's important to do outreach to people who disagree with you - even people who think vegans are annoying.
It doesn't seem "lighthearted" to me - it seems quite serious. OK, the browser "game" is quite silly. But if it's meant to be lighthearted, then that seems to have not come across to quite a lot of people... Trying to appeal to people who don't want to adopt a vegan diet is fine, but I don't think attacking another group's effort, and the idea of veganism in general, is.
You're right that we aren't the target audience. I take this as probably evidence in the other direction: I think if EAs on the forum feel uncomfortable about this, the general public is likely to take it even worse than us.
I agree that it's a light-hearted campaign that is clever, with good intentions. I just think it's a mistake and might well do more harm than good. That's OK; this is just one campaign among many great ones from FarmKind.
"I think if EAs on the forum feel uncomfortable about this, the general public is likely to take it even worse than us" - I really disagree with this. EAs' values and sensibilities are very different from the average person's. Things that EAs consider horrifically callous are normal to the average person, and vice versa.
Examples of the former: eating meat, keeping all your wealth for yourself, "charity begins at home"
Examples of the latter: measuring impact and saying we shouldn't give resources to organizations that don't perform well against these measurements, donating to help shrimp rather than people, donating to help strangers overseas rather than your local community, expressing support for billionaires who give away some of their wealth
There hasn't been backlash to this campaign from average people, only EAs and animal advocates.
measuring impact and saying we shouldn't give resources to organizations that don't perform well against these measurements
Are FarmKind claiming that Veganuary is one of those organisations?
There hasn't been backlash to this campaign from average people, only EAs and animal advocates.
Depends what you mean by "backlash" - it's kind of unclear to me what backlash from average (non-vegan) people would look like, especially given I suspect most of them who have read a headline about it think this is just an anti-vegan campaign.
The comments on the Daily Mail piece (which should be taken with a huge pinch of salt, given it's the Daily Mail + online comments in 2025) look quite a lot like backlash to me, though.
There hasn't been backlash to this campaign from average people, only EAs and animal advocates.
I think non-EA animal advocates count as being part of the general public in Nick's usage? From what I've seen, it's been going down badly with them so far...
Why not go even further with outreach and diss the unpopular issue of animal welfare altogether? Then you can reach a huge crowd of people with your new modified message for good: "animal welfare is irrelevant".
Yes, I'm joking, but keeping a payload, any payload, at the cost of the actual principles of your supposed cause, is pointless. Like, they could adjust their message to appeal to people who are alienated by appeals to animal welfare at all, and just advocate for meatless Mondays in the name of reducing methane emissions. But that would be pretty ineffective, just like sending this bizarre, conflicted message and discouraging pro-animal advocacy is ineffective.
Oh. I find this negative and personally upsetting.
Effective altruism brought to animal advocacy a strong norm of collaboration and this feels like undermining years of work. I wrote about it some time ago:
Back in the days, the movement was constantly infighting and spending significant time attacking and criticizing each other. There were a lot of personal attacks, hostile takeovers, and constant attempts to bring individuals down.
In this post I won't get into details, but many ambitious projects stopped due to this culture, and I suspect many people have drifted away from the movement because of it.
This campaign seems like a well-made one, but I think it contributes to polarization, and I worry about alienating potential talent that is motivated by helping animals. It feels off to use a campaign name that invokes another charity's name in a negative sense - it feels like an attack. Finally, the very adversarial tone toward plant-based choices undermines the work of some of the charities recommended by FarmKind, like Dansk Vegetarisk Forening.
So, overall it feels like optimizing for bringing in money at the expense of collaborativeness and of other factors that contribute to the impact of the movement, such as by alienating talent.
I hope I'm wrong and that I'm missing some considerations, but I think effective altruists should have moral guardrails that make them unlikely to engage in certain behaviors, and, to me, collaborativeness is one of the virtues that should not be discarded easily.
If anything, it feels a bit like a missed opportunity for some collab with Veganuary, but maybe FarmKind had reached out to Veganuary.
This seems right to me. The Telegraph article had a quote from Veganuary that was critical of the campaign. My understanding is that FK has been keeping Veganuary informed throughout the process, which is good, but it does not seem to be the case that this was a collaboration between the two.
Veganuary seeming against it is part of the bit. These media outlets hate Veganuary and wouldn't cover it if they thought it was what they wanted. We (FarmKind) have an announcement coming tomorrow explaining the context behind this campaign, but the TL;DR is that it is not encouraging meat eating; it's encouraging donating as another option for people who aren't willing to change their diet, and generating coverage for Veganuary, who have a harder time getting in the media each year without a new hook.
This seems to be contradicted by Wendy's comment above.
I'm pretty concerned (and confused) about the lack of alignment between FarmKind's perspective and Veganuary's on the extent of cooperation between the two ahead of the campaign launch.
I hope that we haven't implied that Veganuary was sort of in on it or something like this, because I wouldn't want to do that. That's not the case.
Thanks for engaging, Aidan. Things may be clearer once we see any follow-up, I guess, but this strategy seems like it could come across as duplicitous, and rather risky not just for the organisations involved but also for the wider EA movement, given the desire to seem trustworthy after the events of the past couple of years.
I get the good intentions here, but it looks to have backfired badly. Obviously I'm not deep in this, but I hope that withdrawing the campaign and a quick apology are at least on the table for you guys. All the best figuring it out!
Thank you, that's good to know! If the campaign isn't encouraging meat-eating, why does it feature competitive meat eating? Are you concerned that it's been reported as a "meat-eating campaign" in several outlets?
So this is . . . ~EA kayfabe? (That term refers to "the portrayal of staged elements within professional wrestling . . . as legitimate or real.").
Completely speculating here, but I wonder how much of the impetus for a campaign like this could be (emphasis on could!) illustrative of a broader disinterest in diet-change work among some EAs. And so, even if Veganuary and adjacent efforts, or even veganism generally, are undermined in public discourse, some EAs might be ok with this because they basically don't think diet change is a serious way to help animals?
Like, to me, if this campaign successfully brings in a lot of donations that otherwise wouldn't be given, then that would be a success, assuming in the interim there aren't major fractures in the movement generally or other harms. But I wonder if some EAs basically round those fractures to zero regardless of how serious they are/may seem.
This could be completely wrong, though! This is a quick take, after all :).
Encouraging such donations could be good, and advocating for diet change doesn't seem to be favoured in EA. Advocating a "moral offsetting" approach to meat consumption is probably controversial, I guess, but within the realms of the plausibly reasonable. There doesn't seem to be anything gained by being negative about veganism, though, and not doing that would seem robustly better.
Edit: perhaps it could be argued that a campaign against veganism may raise attention more effectively than if no criticism were made. That would still seem to me to be an excessively risky and divisive strategy, though. And it makes claims about veganism that don't seem to be generally correct, and says some other silly things, which doesn't seem like a good way to go.
There doesn't seem to be anything gained by being negative about veganism, though, and not doing that would seem robustly better.
Being seen as honest about the problems with veganism raises their credibility with their other recommendations: "Oh yes, we're not like those annoying people you've already rejected, we have a different view".
It doesn't really seem honest to me. It ignores the experiences of all the people who didn't find doing Veganuary particularly problematic, or who even found it positive.
Woah! Agreed. I have a somewhat more positive view of go-vegan/meat-reduction campaigns; but even disregarding that, this doesn't make sense. Current vegans are probably the best targets for a donate-more campaign, and I can tell from experience reading r/vegan that this is unlikely to go down well!
Friendly reminder that there's only 1 day left to apply for this upcoming round of SPAR!
Apply by January 14 to join our largest round yet: 130+ projects with mentors from Google DeepMind, RAND, UK AISI, Apollo Research, SecureBio, MIRI, and more!
This is likely the only SPAR round until this fall.
Work on a part-time AI safety, AI policy, AI security, or biosecurity project. Open to students & professionals; prior research experience not required for all projects.
We've heard from a lot of people who feel they're getting rejected from jobs for being overqualified, which can be pretty frustrating. One thing that can help with this is to think about overqualification as an issue of poor fit for a particular role. Essentially, what feels like a general penalty for past success is usually about more specific concerns that your hiring manager might have, like:
Will you actually be good at this work? You might have years of experience in senior roles, or other impressive credentials, but this doesn't always mean you'll be able to perform well in a more junior role. For instance, if you've been managing teams for years, they may worry you lack recent hands-on experience and don't know current best practices.
Will you stick around? If you've been leading large teams but are applying for an individual contributor role, they might wonder if you'll actually find the work engaging or get bored without the higher-stakes responsibilities. They may worry you're just using this as a stepping stone until something better comes along. Hiring is costly and time-consuming, so they don't want to invest in someone who'll be gone in a few months.
Will you expect more than they can offer? If you've worked in more senior roles, an organization might think you'll be looking for opportunities for growth, benefits, and a salary beyond what the organization is able to offer. If you're likely to demand more than they're able to give, they won't want to waste time advancing you through the process.
If you're genuinely excited about a role but are worried about being perceived as overqualified, the good news is that you can address these concerns in your application (especially your cover letter or application answers). For instance, if you're stepping down in seniority, explain why you actually want to do this work. If you've worked in management and want a return to the hands-on work you're really passionate about, then mention this.
You should also make sure to emphasize the parts of your background that are most relevant to the role, rather than the ones that seem most impressive in general. Your PhD might be impressive, for example, but unless it's closely connected to the role you're applying for, you might want to highlight other parts of your CV instead (like your operational experience if you're applying for an ops role).
The important takeaway is to think about your fit for a specific role rather than your qualification level. Having more experience in a certain area isn't necessarily better if it doesn't help with the type of work you'd actually be doing, or if it implies you'll have expectations that an organization won't be able to match.
I'm running a small fundraise match for Innovate Animal Ag until January 16th. IAA helped accelerate in-ovo sexing in the US, one of Lewis' Ten big wins in 2024 for farmed animals. I think Robert and team have a thoughtful and different approach to welfare that seems tractable. At the least, it's a bet worth placing. I imagine IAA bringing new welfare technologies above the line of commercial viability and providing the fuel for orgs like the Humane League to push forward. Join me in my (small) match!
I made this simple high-level diagram of critical longtermist "root factors", "ultimate scenarios", and "ultimate outcomes", focusing on the impact of AI during the TAI transition.
This involved some adjustments to standard longtermist language:
"Accident Risk" → "AI Takeover"
"Misuse Risk" → "Human-Caused Catastrophe"
"Systemic Risk" → split up into a few modules, focusing on "Long-term Lock-in", which I assume is the main threat.
You can read and interact with it here, where there are (AI-generated) descriptions and pages for things.
Curious to get any feedback!
I'd love it if there could eventually be one or a few well-accepted and high-quality assortments like this. Right now some of the common longtermist concepts seem fairly unorganized and messy to me.
---
Reservations:
This is an early draft. There are definitely parts I find inelegant. I've played with the final nodes instead being things like "Pre-transition Catastrophe Risk" and "Post-Transition Expected Value", for instance. I didn't include a node for "Pre-transition value"; I think this can be added on, but it would involve some complexity that didn't seem worth it at this stage. The lines between nodes were mostly generated by Claude and could use more work.
This also heavily caters to the preferences and biases of the longtermist community, specifically some of the AI safety crowd.
I'll take this post off the frontpage later today. This is just a quick note to say that you can always message me (or use the intercom feature - the chat symbol on the bottom right of your desktop screen) if you'd like to make suggestions or give feedback about the EA Forum.
I can attest that I message @Toby Tremlett🔹 quite a bit and he's always really nice, even when my suggestions are kind of stupid or a little emotional.
Actually, he's polite and nice even when they're really stupid or extremely emotional as well.
I thought this could be relevant to a few people interested or working in bioethics:
The Bioethics Interest Group is one of several dozen Special Interest Groups that operate out of the Office of Intramural Research at the NIH. Its monthly virtual seminars "provide a discussion forum, consider different views, and present research on complex ethical issues in medical research." If you are interested in or working in bioethics, I thought you might find it interesting to sign up for its newsletter so that you have the opportunity to read about and consider attending its seminars.
(Half-baked and maybe just straight-up incorrect about people's orientations)
I worry a bit that groups thinking about the post-AGI future (e.g., Forethought) will not want to push for something like super-optimized flourishing, because this will seem weird and possibly uncooperative with factions that don't like the vibe of super-optimization. This might happen even if these groups thinking about the future do believe in their hearts that super-optimized flourishing is the best outcome.
It is very plausible to me that the situation is "convex", in the sense that it is better for the super-optimizers to optimize fully with their share of the universe, while the other groups do what they want with their share (with rules to prevent extreme suffering, pessimization, etc.). I think this approach might be better for all groups, rather than aiming for a more universal middle ground that leaves everyone disappointed. This bad middle ground might look like a universe that is both not very optimized for flourishing and still super weird and unfamiliar.
It would be very sad if we miss out on the optimized flourishing because we were trying to not seem weird or uncooperative.
Speculatively, I think there could actually just be convergence here, though, once you account for moral uncertainty and for very plausible situations where outcomes that are bad by everyone's lights are as bad as, say, utilitarian nightmares but easier to get others on board with (i.e. extreme power).
Two hours before you posted this, MacAskill posted a brief explanation of viatopianism.
This essay is the first in a series that discusses what a good north star [for post-superintelligence society] might be. I begin by describing a concept that I find helpful in this regard:
Viatopia: an intermediate state of society that is on track for a near-best future, whatever that might look like.
Viatopia is a waystation rather than a final destination; etymologically, it means "by way of this place". We can often describe good waystations even if we have little idea what the ultimate destination should be. A teenager might have little idea what they want to do with their life, but know that a good education will keep their options open. Adventurers lost in the wilderness might not know where they should ultimately be going, but still know they should move to higher ground where they can survey the terrain. Similarly, we can identify what puts humanity in a good position to navigate towards excellent futures, even if we don't yet know exactly what those futures look like.
In the past, Toby Ord and I have promoted the related idea of the "long reflection": a stable state of the world where we are safe from calamity, and where we reflect on and debate the nature of the good life, working out what the most flourishing society would be. Viatopia is a more general concept: the long reflection is one proposal for what viatopia would look like, but it need not be the only one.
I think that some sufficiently-specified conception of viatopia should act as our north star during the transition to superintelligence. In later essays I'll discuss what viatopia, concretely, might look like; this note will just focus on explaining the concept.
. . .
Unlike utopianism, it cautions against the idea of having some ultimate end-state in mind. Unlike protopianism, it attempts to offer a vision for where society should be going. It focuses on achieving whatever society needs to be able to steer itself towards a truly wonderful outcome.
I think I'm largely on board. I think I'd favor doing some amount of utopian planning (aiming for something like hedonium and acausal trade). Viatopia sounds less weird than utopias like that. I wouldn't be shocked if Forethought talked relatively more about viatopia because it sounds less weird. I would be shocked if they pushed us in the direction of anodyne final outcomes. I agree with Peter that stuff is "convex", but I don't worry that Forethought will have us tile the universe with compromisium. But I don't have much private info.
Yeah, agreed on that point. Folks at Forethought aren't necessarily thinking about what a near-optimal future should look like; they're thinking about how to get civilisation to a point where we can make the best possible decisions about what to do with the long-term future.
Question: Should I serve on the board of a non-EA charity?
I have an opportunity through work to help guide a charity doing work on children's education and entertainment in the UK and US. It has an endowment in the tens of millions of pounds.
Has anyone else had experience serving on the board or guiding committee of a non-EA charity? Did you feel like you were able to have a positive influence? Do you have any advice?
I would ask myself something like these questions to figure this out. I'm assuming, from the picture you paint, that you don't think their current work is necessarily wildly impactful?
1. Do I have the time and headspace to take this on? Will it negatively affect other things I do?
2. Do I like the other board members (at least in theory), and will I work well with them?
3. Will this be something energy-giving and enjoyable for me? Some work (even if not that impactful) can almost paradoxically give us more energy for the more impactful stuff. I've noticed this more and more over the years.
4. Is there perhaps an opportunity for me to shape the charity's work towards something more impactful? Influencing the thought world of children has potential. There's a saying attributed to the Jesuits which goes something like "Give us a child till they are 7 and we'll have them for life", so those years are importantly formative.
For those who are interested in what the CEA Online Team[1] is up to, I've set up a new OKRs doc for 2026 and summarized our Q1 2026 plans.
Our team runs this Forum
Thanks Sarah! Is something written up about the CEA donation system? I'm surprised that that's a priority, but obviously know zero details.
Thanks for the question Ben! The main reason that this is a priority is to help EA Funds (which is now part of CEA) grow and diversify their donations, by making it easier to gather info from donors[1] and build relationships with them, and giving us more freedom to optimize the UX of the donation flow. AWF in particular has ambitious 2026 plans and a significant funding gap, and we'd be excited to help them reach their donation goal for this year! :)
GWWC, the primary platform EA Funds has used historically, defaults to opt-out for donor data sharing. As far as I understand, this prevents us from being able to contact the majority of donors. We recently added the option of donating via every.org as well, which is opt-in by default, so that's improved the situation.
Technical Alignment Research Accelerator (TARA) applications are closing in one week!
Apply as a participant or TA by January 23rd to join the 14-week remotely taught, in-person run program (based on the ARENA curriculum) designed to accelerate your path to meaningful technical alignment research!
Built for you to learn around full-time work or study by attending meetings in your home city on Saturdays and doing independent study throughout the week. Finish the program with a project to add to your portfolio, key technical AI safety skills, and connections across APAC.
See this post for more information and apply through our website here.
Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for (at least, for students or early career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week).
The role is remote, pays ~$100/hour, and expects ~5-10 hours/week. He's looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with high taste. Beyond scouting guests, the role also involves helping assemble curricula so he can rapidly get up to speed before interviews.
More details are in the blog post; link to apply (due Jan 23 at 11:59pm PST).
Super sceptical, probably very highly intractable thought that I haven't done any research on: There seem to be a lot of reasons to think we might be living in a simulation besides just Nick Bostrom's simulation argument, like:
All the fundamental constants and properties of the universe are perfectly suited to the emergence of sentient life. This could be explained by the Anthropic principle, or it could be explained by us living in a simulation that has been designed for us.
The Fermi Paradox: there don't seem to be any other civilizations in the observable universe. There are many explanations for the Fermi Paradox, but one additional explanation might be that whoever is simulating the universe created it for us, or they don't care about other civilizations, so haven't simulated them.
We seem to be really early on in human history. Only about 100 billion people have ever lived IIRC, but we expect many trillions to live in the future. This can be explained by the Doomsday argument: that in fact we are in the time in human history where most people will live, because we will soon go extinct. However, this phenomenon can also be explained by us living in a simulation (see next point).
Not only are we really early, but we seem to be living at a pivotal moment in human history that is super interesting. We are about to create intelligence greater than ourselves, expand into space, or probably all die. Like, if any time in history were to be simulated, I think there's a high likelihood it would be now.
If I was pushed into a corner, I might say the probability we are living in a simulation is like 60%, where most evidence seems to point towards us being in a simulation. However, the doubt comes from the high probability that I'm just thinking about this all wrong: like, of course I can come up with a motivation for a simulation to explain any feature of the universe… it would be hard to find something that doesn't line up with an explanation that the simulators are just interested in that particular thing. But in any case, that's still a really high probability of everyone I love potentially not being sentient or even real (fingers crossed we're all in the simulation together). Also, being in a simulation would change our fundamental assumptions about the universe and life, and it'd be really weird if that had no impact on moral decision-making.
But everyone I talk to seems to have a relaxed approach to it, as if it's impossible to make any progress on this and it couldn't possibly be decision-relevant. But really, how many people have worked on figuring it out with a longtermist or EA mindset? Some reasons it might be decision-relevant:
We may be able to infer from the nature of the universe and the natural problems ahead of us what the simulators are looking to understand or gain from the simulation (or at least we might attach percentage likelihoods to different goals). Maybe there are good arguments to aim to please the simulators, or not. Maybe we could end the simulation if there are end-conditions?
Being in a simulation bears on the probability that aliens exist (they probably have a lower probability of existing if we are in a simulation), which helps with long-term grand planning. Like, we wouldn't need to worry about integrating defenses against alien attacks or engaging in acausal trade with aliens.
We can disregard arguments like the Doomsday Argument, lowering our p(doom).
Some questions I'd ask are:
How much effort have we put into figuring out if there is something decision-relevant to do about this from a moral impact perspective? How much effort should we put into this?
How much effort has gone into figuring out if we are, in fact, in a simulation, using empiricism? What might we expect to see in a simulated universe vs a real one? How can we search for and detect that?
Overall, this does sound nuts to me and it probably shouldn't go further than this quick take, but I do feel like there could be something here, and it's probably worth a bit more attention than I think it has gotten (like 1 person doing a proper research project on it at least). Lots of other stuff sounded crazy but now has significant work and (arguably) great progress, like trying to help people billions of years in the future, working on problems associated with digital sentience, and addressing wild animal welfare. There could be something here, and I'd be interested in hearing thoughts (especially a good counterargument to working on this so I don't have to think about it anymore) or learning about past efforts.
All the things you mentioned aren't uniquely evidence for the simulation hypothesis but are equally evidence for a number of other hypotheses, such as the existence of a supernatural, personal God who designed and created the universe. (There are endless variations on this hypothesis, and we could come up with endlessly more.)
The fine-tuning argument is a common argument for the existence of a supernatural, personal God. The appearance of fine-tuning supports this conclusion equally as well as it supports the simulation hypothesis.
Some young Earth creationists believe that dinosaur fossils and other evidence of an old Earth were intentionally put there by God to test people's faith. You might also think that God tests our faith in other ways, or plays tricks, or gets easily bored, and creates the appearance of a long history or a distant future that isn't really there. (I also think it's just not true that this is the most interesting point in history.)
Similarly, the book of Genesis says that God created humans in his image. Maybe he didn't create aliens with high-tech civilizations because he's only interested in beings with high technology made in his image.
It might not be God who is doing this, but in fact an evil demon, as Descartes famously discussed in his Meditations around 400 years ago. Or it could be some kind of trickster deity like Loki, who is neither fully good nor fully evil. There are endless ideas that would slot in equally well to replace the simulation hypothesis.
You might think the simulation hypothesis is preferable because it's a naturalistic hypothesis and these are supernatural hypotheses. But this is wrong: the simulation hypothesis is a supernatural hypothesis. If there are simulators, the reality they live in is stipulated to have different fundamental laws of nature, such as the laws of physics, than exist in what we perceive to be the universe. For example, in the simulators' reality, maybe the fundamental relationship between consciousness and physical phenomena such as matter, energy, space, time, and physical forces is such that consciousness can directly, automatically shape physical phenomena to its will. If we observed this happening in our universe, we would describe it as magic or a miracle.
Whether you call them "simulators" or "God" or an "evil demon" or "Loki", and whether you call it a "simulation" or an "illusion" or a "dream", these are just different surface-level labels for substantially the same idea. If you stipulate laws of nature radically other than the ones we believe we have, what you're talking about is supernatural.
If you try to assume that the physics and other laws of nature in the simulators' reality are the same as in our perceived reality, then the simulation argument runs into a logical self-contradiction, as pointed out by the physicist Sean Carroll. With endlessly nested levels of simulation, computation in the original simulators' reality will run out. Simulations at the bottom of the nested hierarchy, which don't have enough computation to run still more simulations inside them, will outnumber higher-level simulations. Since the simulation argument says, as one of its key premises, that in our perceived reality we will be able to create simulations of worlds or universes filled with many digital minds, but the simulation hypothesis implies this is actually impossible, the simulation argument's conclusion contradicts one of its premises.
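To make the resource point vivid, here is a toy calculation of my own, with entirely made-up parameters (this is not Carroll's actual model): if each level of simulation spends some fraction of its compute on sub-simulations, and simulating one unit of compute costs more than one unit underneath, the available compute collapses geometrically and the nesting bottoms out after only a few levels.

```python
# Toy model of nested simulations sharing one physical compute budget.
# All numbers are illustrative assumptions, not claims about reality.

TOTAL_COMPUTE = 1.0   # physical ("level 0") compute, arbitrary units
SIM_FRACTION = 0.5    # share of compute each level spends on simulations
OVERHEAD = 10.0       # physical ops needed per simulated op
MIN_VIABLE = 1e-6     # below this, a level can't run simulations of its own

level, compute = 0, TOTAL_COMPUTE
while compute >= MIN_VIABLE:
    print(f"level {level}: compute {compute:.2e}")
    # Each level passes down only its sim budget, shrunk by the overhead.
    compute = compute * SIM_FRACTION / OVERHEAD
    level += 1

print(f"level {level}: compute {compute:.2e} "
      "(too little to simulate anything further - the bottom level)")
```

With these arbitrary parameters the hierarchy bottoms out after about five levels; whatever numbers you pick, a bottom level exists, and its inhabitants cannot create simulations, which is exactly what the argument's premise requires them to be able to do.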
There are other strong reasons to reject the simulation argument. Remember that a key premise is that we ourselves or our descendants will want to make simulations. Really? They'll want to simulate the Holocaust, malaria, tsunamis, cancer, cluster headaches, car crashes, sudden infant death syndrome, and Guantanamo Bay? Why? On our ethical views today, we would not see this as permissible, but rather as the most grievous evil. Why would our descendants feel differently?
Less strongly: computation is abundant in the universe but still finite. Why spend computation on creating digital minds inside simulations when there is always a trade-off between doing that and creating digital minds in our universe, i.e. the real world? If we or our descendants think marginally and hold as one of our highest goals maximizing the number of future lives with a good quality of life, using huge amounts of computation on simulations might be seen as going against that goal. Plus, there are endlessly more things we could do with our finite resource of computation, most of which we can't imagine today. Where would creating simulations fall on the list?
You can argue that creating simulations would take a small fraction of overall resources. I'm not sure that's actually true; I haven't done the math. But just because something is a small fraction of overall resources doesn't mean it will likely be done. In an interstellar, transhumanist scenario, our descendants could create a diamond statue of Hatsune Miku the size of the solar system, and this would take a tiny percentage of overall resources, but that doesn't mean it will likely happen. The simulation argument specifically claims that making simulations of early 21st century Earth will interest our descendants more than alternative uses of resources. Why? Maybe they'll be more interested in a million other things.
Overall, the simulation hypothesis is undisprovable but no more credible than an unlimited number of other undisprovable hypotheses. If something seems nuts, it probably is. Initially, you might not be able to point out the specific logical reasons it's nuts. But that's to be expected: the sort of paradoxes and thought experiments that get a lot of attention (that "go viral", so to speak) are the ones that are hard to immediately counterargue.
Philosophy is replete with oddball ideas that are hard to convincingly refute at first blush. The Chinese Room is a prime example. Another random example is the argument that utilitarianism is compatible with slavery. With enough time and attention, refutations may come. I don't think one's inability to immediately articulate the logical counterargument is a sign that an oddball idea is correct. It's just that thinking takes time and, usually, by the time an oddball idea reaches your desk, it's proven to be resistant to immediate refutation. So, trust that intuition that something is nuts.
Strong upvoted, as that was possibly the most compelling rebuttal to the simulation argument I've seen in quite a while, which was refreshing for my peace of mind.
That being said, it mainly targets the idea of a large-scale simulation of our entire world. What about the possibility that the simulation is for a single entity, and that the rest of the world is simulated at a lower fidelity? I had the thought that a way to potentially maximize future lives of good quality would be to contain each conscious life in a separate simulation where they live reasonably good lives catered to their preferences, with the apparent rest of the world being virtual. Granted, I doubt this conjecture because in my own opinion my life doesn't seem that great, but it seems plausible at least?
Also, that line about the diamond statue of Hatsune Miku was very, very amusing to this former otaku.
I would not describe the fine-tuning argument and the Fermi paradox as strong evidence in favour of the simulation hypothesis. I would instead say that they are open questions for which a lot of different explanations have been proposed, with the simulation hypothesis offering only one of many possible resolutions.
As to the "importance" argument, we shouldn't count speculative future events as evidence of the importance of now. I would say the mid-20th century was more important than today, because that's the closest we ever got to nuclear annihilation (plus, like, WW2).
I've thought about this a lot too. My general response is that it is very hard to see what one could do differently at a moment-to-moment level even if we were in a simulation. While it's possible that you or I are alone in the simulation, we can't, realistically, know this. We can't know with much certainty that the apparently sentient beings who share our world aren't actually sentient. And so, even if they are part of the simulation, we still have a moral duty to treat them well, on the chance they are capable of subjective experiences and can suffer or feel happiness (assuming you're a utilitarian), or have rights/autonomy to be respected, etc.
We also have no idea who the simulators are and what purpose they have for the simulation. For all we know, we are a petri dish for some aliens, or a sitcom for our descendants, or a way for people's minds on colony ships travelling to distant galaxies to spend their time while in physical stasis. Odds are, if the simulators are real, they'll just make us forget whatever we figure out, so they can continue the simulation for whatever reasons.
Given all this, I don't see the point in trying to defy them, or doing really anything differently than what you'd do if this were the ground truth reality. Trying to do something like attempting to escape the simulation would most likely fail AND risk getting you needlessly hurt in this world in the process.
If we're alone in the sim, then it doesn't matter what we do anyway, so I focus on the possibility that we aren't alone, and everything we do does, in fact, matter. Give it the benefit of the doubt.
At least, that's the way I see things right now. Your mileage may vary.
Got sent a set of questions from ARBOx to handle async; thought I'd post my answers publicly:
Can you explain more about mundane utility? How do you find these opportunities?
Lots of projects need people and help! E.g. can you contribute to EleutherAI, or close issues in Neuronpedia? Some more ideas:
Contribute to the projects within SK's GitHub follows and stars
Make some contributions within Big list of lists of AI safety project ideas 2025
Reach out to projects that you think are doing cool work and ask if you can help!
BlueDot ideas for SWEs
I'm an experienced software engineer. How can I contribute to AI safety?
The software engineer's guide to making your first AI safety contribution in <1 week
From a non-coding perspective, you could e.g.
Facilitate BlueDot courses
Give people feedback on their research proposals, drafts, etc.
Be accountability partners
Offer to talk to people and share what you know with those who know less than you
Check out these pieces from my colleagues:
How to have an impact when the job market is not cooperating by Laura G Salmeron
Your Goal Isn't Really to Get a Job by Matt Beard
What is your theory of change?
As an 80k advisor, my ToC is "Try and help someone to do something more impactful than if they had not spoken to me."
Mainly, this is helping get people more familiar with/excited about/doing things related to AI safety. It's also about helping them with resources, and sometimes warm introductions to people who can help them even more.
Are there any particular pipelines/recommended programs for control research?
Just the things you probably already know about: MATS and Astra are likely your best bets, but look through these papers to see if there are any low-hanging fruit as future work.
What are the most neglected areas of work in the AIS space?
Hard question, with many opinions! I'm particularly concerned that "making illegible problems legible" is neglected. See Wei Dai's writing about this:
Legible vs. Illegible AI Safety Problems
Problems I've Tried to Legibilize
More groundedly, I'm concerned we're not doing enough work on Gradual Disempowerment and, more broadly, questions of {how to have a flourishing future / what is a flourishing future} even if we avoid catastrophic risks.
In general, AI safety work needs to contend with a collection of subproblems. See davidad's opinion: A list of core AI safety problems
There are many other such opinions, and it's good to scan through them to work out how they're all connected, so that you can see the forest for the trees; and also to work out which problems you're drawn to/compelled by, and seek out what's neglected within those :)
Some questions about ops-roles:
What metrics should I use to evaluate my performance in ops/fieldbuilding roles? I find ops to be really scattered and messy, and so it's hard to point to consistent metrics.
Hard to talk about this in concrete terms, because ops is so varied; every task can have its own set of metrics. Instead, think through this strategically:
Be clear on the theory(ies) of change, and your roles/activities/tasks in it (them). Once you can articulate those things, the metrics worth measuring become a lot clearer.
Sometimes we're not tracking impact because impact evaluation is notoriously difficult. Look for proxies. Red-team them with people you admire.
Fieldbuilding metrics can be easier to generate, but I don't claim to be an expert here; ask folks at BlueDot or the fellowships for better input.
How many people completed the readings?
How many people did I get to sign up for the BlueDot course?
How many of those finished the BlueDot course?
How many people did I get into an Apart Hackathon?
Did any of my people win?
And so on…
Likewise, I have a hard time discerning what "ops" really means. What are the best tangible "ops" skills I should go out of my way to skill up on if I want to work in the fieldbuilding/programmes space? Are there "hard" ops skills I should become really good at (like familiarity with certain software programmes, etc.)?
Ops is usually a "get stuff done" bucket of work. Yes, it can help to have functional experience in an ops domain like "Finance" or "IT/office tech infra/website" (and especially "Legal"), but a LOT of ops can be learned on the job/on your own; AI safety is stacked full of folks who didn't let "I don't know anything about ops" stop them from figuring it out and getting it done.
Under what circumstances should a "technical person" consider switching their career to fieldbuilding?
First things first:
Fieldbuilding is not a consolation prize. Do fieldbuilding if you're really passionate about helping AI go well, and fieldbuilding is your comparative advantage.
And doubling down on that:
It really, really, really helps if fieldbuilders are very competent. A fieldbuilder who doesn't know their shit about AI risk and AI safety can propagate bad ideas among the people they're inducting into the field.
This can have incredibly high costs
Pollutes the commons
Wastes time downstream where all this would need to be corrected
Bounces people who might be able to quickly get up to speed, because their initial contact with these fieldbuilders is of poor quality, poor argumentation, poor epistemics
Conversely, a great fieldbuilder is one who knows how to tend their flock: what they need to prosper and grow, to become competent at thinking about AI safety properly, and to be able to do AI safety things.
How would you recommend going about doing independent project work for upskilling, in place of doing something like SPAR or MATS?
Why not both? In general, I want people to ask themselves this question when making decisions. You can do a lot more than you give yourself credit for.
At the current margins, SPAR, MATS, etc. are probably better than independent work
Some of these fellowships carry pretty high signal to employers (based on evidence that has been generated over time)
There is a lot that these fellowships offer that is sometimes hard to get without them
Research support, mentorship, community engagement, well-scoped projects with deliverables and accountability
Also softer things like physical space and some money
But if you're great at doing stuff independently, go for it! Neel Nanda didn't need a fellowship.
A key idea is to keep your eye on the ball: be productive!
The point is to generate outputs
That make you learn
That show that you have learned
That are related to AI safety
That get feedback
That show that you update based on (relevant/good/high-quality) feedback
lfg!
Mildly against the Longtermism --> GCR shift
Epistemic status: Pretty uncertain, somewhat rambly
TL;DR: replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics.
Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, bio, nukes, etc. based on the logic of longtermism, and more on Global Catastrophic Risks (GCRs) directly. Some data points on this:
Open Phil renaming its EA Community Growth (Longtermism) Team to GCR Capacity Building
This post from Claire Zabel (OP)
Giving What We Can's new Cause Area Fund being named "Risk and Resilience", with the goal of "Reducing Global Catastrophic Risks"
Longview-GWWC's Longtermism Fund being renamed the "Emerging Challenges Fund"
Anecdotal data from conversations with people working on GCR / x-risk / longtermist causes
My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no-one wants a GCR, even if only thinking about people alive today.
Yet, I can't help but feel something is off about this framing. Some concerns (in no particular order):
From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case but not the latter. Just looking at GCRs on their own mostly misses this nuance.
(see Parfit's Reasons and Persons for the full thought experiment)
From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential"
Preventing an extinction-level GCR might move us from 0% to 1% of future potential, but there's 99x more value in focusing on going from the "okay (1%)" to the "great (100%)" future.
See Aird 2020 for more nuances on this point
From a longtermist (~suffering focused) perspective, reducing GCRs might be net-negative if the future is (in expectation) net-negative
E.g. if factory farming continues indefinitely, or due to increasing the chance of an S-Risk
See Melchin 2021 or DiGiovanni 2021 for more
(Note this isn't just a concern for people with suffering-focused ethics)
From a longtermist perspective, a focus on GCRs neglects non-GCR longtermist interventions (e.g. trajectory changes, broad longtermism, patient altruism/philanthropy, global priorities research, institutional reform, etc.)
From a "current generations" perspective, reducing GCRs is probably not more cost-effective than directly improving the welfare of people/animals alive today
I'm pretty uncertain about this, but my guess is that alleviating farmed animal suffering is more welfare-increasing than e.g. working to prevent an AI catastrophe, given the latter is pretty intractable (but I haven't done the numbers)
See discussion here
If GCRs actually are more cost-effective under a "current generations" worldview, then I question why EAs would donate to global health/animal charities (since this is no longer a question of "worldview diversification", just raw cost-effectiveness)
More meta points
From a community-building perspective, pushing people straight into GCR-oriented careers might work short-term to get resources to GCRs, but could lose the long-run benefits of EA/longtermist ideas. I worry this might worsen community epistemics about the motivation behind working on GCRs:
If GCRs only go through on longtermist grounds, but longtermism is false, then impartial altruists should rationally switch towards current-generations opportunities. Without a grounding in cause impartiality, however, people won't actually make that switch.
From a general virtue ethics/integrity perspective, making this change for PR/marketing reasons alone, without an underlying change in longtermist motivation, feels somewhat deceptive.
As a general rule about integrity, I think it's probably bad to sell people on doing something for reason X when actually you want them to do it for Y, and you're not transparent about that.
There's something fairly disorienting about the community switching so quickly from [quite aggressive] "yay longtermism!" (e.g. much hype around the launch of WWOTF) to essentially disowning the word longtermism, with very little mention/admission that this happened or why.
Hi Tom.
I would be curious to know your thoughts on my post arguing that decreasing the risk of human extinction is not astronomically cost-effective.
The same applies to preventing human extinction over a given period. Humans could go extinct just after the period, or go on to an astronomically valuable future, and I believe the former is much more likely.
This also applies to reducing the risk of human extinction.
Thanks for sharing this, Tom! I think this is an important topic, and I agree with some of the downsides you mention, and think they're worth weighing highly; many of them are the kinds of things I was thinking of in this post of mine when I listed these anti-claims:
This isn't mostly a PR thing for me. Like I mentioned in the post, I actually drafted and shared an earlier version of that post in summer 2022 (though I didn't decide to publish it for quite a while), which I think is evidence against it being mostly a PR thing. I think the post pretty accurately captures my reasoning at the time: that I think often people doing this outreach work on the ground were actually focused on GCRs or AI risk and trying to get others to engage on that, and it felt like they were ending up using terms that, for path-dependent reasons, pointed less well at what they were interested in. Further updates towards shorter AI timelines moved me substantially in terms of the amount I favor the term "GCR" over "longtermism", since I think it increases the degree to which a lot of people mostly want to engage people about GCRs or AI risk in particular.
I've upvoted this comment, but I weakly disagree that there's such a shift happening (EVF orgs still seem to be selecting pretty heavily for longtermist projects, the global health and development fund has been discontinued while the LTFF is still around, etc.), and quite strongly disagree that it would be bad if it is:
That "if" clause is doing a huge amount of work here. In practice I think the EA community is far too sanguine about our prospects, post-civilisational collapse, of becoming interstellar (which, from a longtermist perspective, is what matters, not "recovery"). I've written a sequence on this here, and have a calculator which allows you to easily explore the simple model's implications given your beliefs, described in post 3 here, with an implementation of the more complex model available on the repo. As Titotal wrote in another reply, it's easy to believe "lesser" catastrophes are many times more likely, so they could very well be where the main expected loss of value lies.
I think I agree with this, but draw a different conclusion. Longtermist work has focused heavily on existential risk, and in practice on the risk of extinction, IMO seriously dropping the ball on trajectory changes with little more justification than that the latter are hard to think about. As a consequence they've ignored what seem to me the very real loss of expected unit-value from lesser catastrophes, and the to-me-plausible increase in it from interventions designed to make people's lives better (generally lumping those in as "shorttermist"). If people are now starting to take other catastrophic risks more seriously, that might be remedied. (Also relevant to your 3rd and 4th points.)
This seems to be treating "focus only on current generations" and "focus on Pascalian arguments for astronomical value in the distant future" as the only two reasonable views. David Thorstad has written a lot, I think very reasonably, about reasons why the expected value of longtermist scenarios might actually be quite low, but one might still have considerable concern for the next few generations.
Counterpoint: I think the discourse before the purported shift to GCRs was substantially more dishonest. Nanda and Alexander's posts argued that we should talk about x-risk rather than longtermism on the grounds that it might kill you and everyone you know, which is very misleading if you only seriously consider catastrophes that kill 100% of people, and ignore (or conceivably even promote) those that leave >0.01% behind (which, judging by Luisa Rodriguez's work, is around the point beyond which EAs would typically consider something an existential catastrophe).
I basically read Zabel's post as doing the same: not as desiring a shift to GCR focus, but as desiring presenting the work that way, saying "I'd guess that if most of us woke up without our memories here in 2022 [now 2023], and the arguments about potentially imminent existential risks were called to our attention, it's unlikely that we'd re-derive EA and philosophical longtermism as the main and best onramp to getting other people to work on that problem" (emphasis mine).
Nanda, Alexander and Zabel's posts all left a very bad taste in my mouth for exactly that reason.
This is as much an argument that we made a mistake ever focusing on longtermism as that we shouldn't now shift away from it. Oliver Habryka (can't find the link offhand) and Kelsey Piper are two EAs who've publicly expressed discomfort with the level of artificial support WWOTF received, and I'm much less notable, but happy to add myself to the list of people uncomfortable with the business, especially since at the time he was a trustee of the charity that was doing so much to promote his career.
Great post, Tom, thanks for writing!
One thought is that a GCR framing isn't the only alternative to longtermism. We could also talk about caring for future generations.
This has fewer of the problems you point out (e.g. it differentiates between recoverable global catastrophes and existential catastrophes). To me, it has warm, positive associations. And it's pluralistic, connected to indigenous worldviews and environmentalist rhetoric.
It seems worth flagging that whether these alternative approaches are better for PR (or outreach considered more broadly) seems very uncertain. I'm not aware of any empirical work directly assessing this, even though it seems a clearly empirically tractable question. Rethink Priorities has conducted some work in this vein (referenced by Will MacAskill here), but this work, and other private work we've completed, wasn't designed to address this question directly. I don't think the answer is very clear a priori. There are lots of competing considerations, and anecdotally, when we have tested things for different orgs, the results are often surprising. Things are even more complicated when you consider how different approaches might land with different groups, as you mention.
We are seeking funding to conduct work which would actually investigate this question (here), as well as to do broader work on EA/longtermist message testing, and broader work assessing public attitudes towards EA/longtermism (which I don't have linkable applications for).
I think this kind of research is also valuable even if one is very sceptical of optimising for PR. Even if you don't want to maximise persuasiveness, it's still important to understand how different groups are understanding (or misunderstanding) your message.
One point that hasn't been mentioned: GCRs may be many, many orders of magnitude more likely than extinction. For example, it's not hard to imagine a super deadly virus that kills 50% of the world's population, but a virus that manages to kill literally everyone, including people hiding out in bunkers, remote villages, and in Antarctica, doesn't make too much sense: if it were that lethal, it would probably burn out before reaching everyone.
The relevant comparison in this context is not with human extinction but with an existential catastrophe. A virus that killed everyone except humans in extremely remote locations might well destroy humanity's long-term potential. It is not plausible (at least not for the reasons provided) that "GCRs may be many, many orders of magnitude more likely than" existential catastrophes, on reasonable interpretations of "many, many".
(Separately, the catastrophe may involve a process that intelligently optimizes for human extinction, by either humans or non-human agents, so I also think that the claim as stated is false.)
How?
I see it delaying things while the numbers recover, but it's not like humans will suddenly become unable to learn to read. Why would humanity not simply pick itself up and recover?
Two straightforward ways (more have been discussed in the relevant literature) are by making humanity more vulnerable to other threats and by pushing humanity back past the Great Filter (about whose location we should be pretty uncertain).
This is very vague. What other threats? It seems like a virus wiping out most of humanity would decrease the likelihood of other threats. It would put an end to climate change, reduce the motivation for nuclear attacks and ability to maintain a nuclear arsenal, reduce the likelihood of people developing AGI, etc.
Humanity's chances of realizing its potential are substantially lower when there are only a few thousand humans around, because the species will remain vulnerable for a considerable time before it fully recovers. The relevant question is not whether the most severe current risks will be as serious in this scenario, because (1) other risks will then be much more pressing and (2) what matters is not the risk survivors of such a catastrophe face at any given time, but the cumulative risk to which the species is exposed until it bounces back.
The framing "PR concerns" makes it sound like all the people doing the actual work are (and will always be) longtermists, whereas the focus on GCRs is just for the benefit of the broader public. This is not the case. For example, I work on technical AI safety, and I am not a longtermist. I expect there to be more people like me either already in the GCR community, or within the pool of potential contributors we want to attract. Hence, the reason to focus on GCRs is building a broader coalition in a very tangible sense, not just some vague "PR".
Is your claim "Impartial altruists with ~no credence on longtermism would have more impact donating to AI/GCRs over animals/global health"?
To my mind, this is the crux, because:
If yes, then I agree that it totally makes sense for non-longtermist EAs to donate to AI/GCRs
If no, then I'm confused why one wouldn't donate to animals/global health instead?
[I use "donate" rather than "work on" because donations aren't sensitive to individual circumstances, e.g. personal fit. I'm also assuming impartiality because this seems core to EA to me, but of course one could donate to / work on a topic for non-impartial/non-EA reasons]
Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone would in particular kill most of whatever group you're partial towards. (With the caveat that "no credence on longtermism" is underspecified, since we haven't said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)
FWIW, in the (rough) BOTECs we use for opportunity prioritization at Effective Institutions Project, this has been our conclusion as well. GCR prevention is tough to beat for cost-effectiveness even only considering impacts on a 10-year time horizon, provided you are comfortable making judgments based on expected value with wide uncertainty bands.
I think people have a cached intuition that "global health is most cost-effective on near-term timescales", but what's really happened is that "a well-respected charity evaluator that researches donation opportunities with highly developed evidence bases has selected global health as the most cost-effective cause with a highly developed evidence base." Remove the requirement for certainty about the floor of impact that your donation will have, and all of a sudden a lot of stuff looks competitive with bednets on expected-value terms.
(I should caveat that we haven't yet tried to incorporate animal welfare into our calculations and therefore have no comparison there.)
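For what it's worth, here is the shape such a 10-year BOTEC can take, as a toy sketch with entirely made-up numbers (not Effective Institutions Project's actual model); the point is how much the ranking depends on inputs with wide uncertainty bands:

```python
# Toy 10-year BOTEC: expected lives saved per $1M, with assumed inputs.
# Illustrative only - none of these numbers come from a real evaluation.

BUDGET = 1_000_000  # dollars

# Near-term benchmark: roughly GiveWell-style cost per life saved.
COST_PER_LIFE_BEDNETS = 5_500
lives_bednets = BUDGET / COST_PER_LIFE_BEDNETS  # ~180 lives

# GCR prevention: a product of highly uncertain factors.
CATASTROPHE_DEATHS = 4e9      # assumed deaths if the catastrophe occurs
P_CATASTROPHE_10YR = 0.01     # assumed chance it occurs this decade
RISK_CUT_PER_MILLION = 1e-5   # assumed relative risk reduction bought by $1M

lives_gcr = CATASTROPHE_DEATHS * P_CATASTROPHE_10YR * RISK_CUT_PER_MILLION  # ~400

print(f"Bednets:        ~{lives_bednets:,.0f} expected lives per $1M")
print(f"GCR prevention: ~{lives_gcr:,.0f} expected lives per $1M")
# Shrink RISK_CUT_PER_MILLION by one order of magnitude and the ranking
# flips - which is why comfort with wide uncertainty bands is doing
# most of the work in comparisons like this.
```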
Speaking personally, I have also perceived a move away from longtermism, and as someone who finds longtermism very compelling, this has been disappointing to see. I agree it has substantive implications for what we prioritise.
Speaking more on behalf of GWWC, where I am a researcher: our motivation for changing our cause area from "creating a better future" to "reducing global catastrophic risks" really was not based on PR. As shared here:
Essentially, we're aiming to use the term "reducing global catastrophic risks" as a kind of superset that includes reducing existential risk, and that is inclusive of all the potential motivations. For example, when looking for recommendations in this area, we would be happy to include recommendations that only make sense from a longtermist perspective. A large part of the motivation for this was based on finding some of the arguments made in several of the posts you linked (including "EA and Longtermism: not a crux for saving the world") compelling.
Also, our decision to step down from managing the communications for the Longtermism Fund (now the "Emerging Challenges Fund") was based on wanting to be able to more independently evaluate Longview's grantmaking, rather than brand association.
I think reducing GCRs seems pretty likely to wildly outcompete other traditional approaches[1] if we use a slightly broad notion of current generations (e.g. currently existing people), due to the potential for a techno-utopian world making the lives of currently existing people >1,000x better (which heavily depends on diminishing returns and other considerations). E.g., immortality, making them wildly smarter, able to run many copies in parallel, experience insanely good experiences, etc. I don't think BOTECs will be a crux for this unless we start discounting things rather sharply.
IMO, the main axis of variation for EA-related cause prio is "how far down the crazy train do we go", not "person-affecting (current generations) vs otherwise" (though views like person-affecting ethics might be downstream of crazy train stops).
Idk what I think about Longtermism --> GCR, but I do think that we shouldn't lose "the future might be totally insane" and "this might be the most important century in some longer view". And I could imagine a focus on GCRs killing a broader view of history.
That said, if we literally just care about experiences which are somewhat continuous with current experiences, it's plausible that speeding up AI outcompetes reducing GCRs/AI risk. And it's plausible that there are more crazy-sounding interventions which look even better (e.g. extremely low-cost cryonics). Minimally, the overall situation gets dominated by "have people survive until techno-utopia and ensure that techno-utopia happens". And the relative tradeoffs between having people survive until techno-utopia and ensuring that techno-utopia happens seem unclear and will depend on some more complicated moral view. Minimally, animal suffering looks relatively worse to focus on.
Meta: this should not have been a quick take, but a post (references, structure, TL;DR, epistemic status, …).
This sounds like an accusation, when it could so easily have been a compliment. The net effect of comments like this is fewer posts and fewer quick takes.
I actually meant it as a compliment; thanks for pointing out that it can be received differently. I liked this "quick take" and believe it would have been a high-quality post.
I was not aware that my comment would reduce the number of quick takes and posts, but I feel deleting my comment now just because of the downvotes would also be weird. So, if anyone reads this and felt discouraged by the above, I hope you post your things somewhere rather than not at all.
Yeah, that's fair. I wrote this somewhat off the cuff, but because it got more engagement than I expected, I'd make it a full post if I were writing it again.
I'd be keen for great people to apply to the Deputy Director role ($180-210k/y, remote) at the Mirror Biology Dialogues Fund. I spoke a bit about mirror bacteria on the 80k podcast, and James Smith also had a recent episode on it. I generally think this is among the most important roles in the biosecurity space; I've been working with the MBDF team for a while now and am impressed by what they're getting done.
People might be surprised to hear that I put ballpark 1% p(doom) on mirror bacteria alone at the start of 2024. That risk has been cut substantially by the scientific consensus that has formed against building it since then, but there is some remaining risk that the boundaries are not drawn far enough from the brink, such that bad actors could access it. Having a great person in this role would help ensure a wider safety margin.
An informal research agenda on robust animal welfare interventions and adjacent cause prioritization questions
Context: As I started filling out this expression of interest form to be a mentor for Sentient Futures' project incubator program, I came up with the following list of topics I might be interested in mentoring. And I thought it was worth sharing here. :) (Feedback welcome!)
Animal-welfare-related research/work:
What are the safest (i.e., most backfire-proof)[1] consensual EAA interventions? (overlaps with #3.c and may require #6.)
How should we compare their cost-effectiveness to that of interventions that require something like spotlighting or bracketing (or more thereof) to be considered positive?[2] (may require A.)
Robust ways to reduce wild animal suffering
New/underrated arguments regarding whether reducing some wild animal populations is good for wild animals (a brief overview of the academic debate so far here).
Consensual ways of affecting the size of some wild animal populations (contingent planning that might become relevant depending on results from the above kind of research).
How do these and the safest consensual EAA interventions (see 1) interact?
Preventing the off-Earth replication of wild ecosystems.
Uncertainty on moral weights (some relevant context in this comment thread).
Red-teaming of different moral weights that have been explicitly proposed and defended (by Rethink Priorities, Vasco Grilo, …).
How and how much do cluelessness arguments apply to moral weights and inter-species tradeoffs?
What actions are robust to severe uncertainty about inter-species tradeoffs? (overlaps with #1.)
Considerations regarding the impact of saving human lives (c.f. top-GiveWell charities) on farmed and wild animals. (may require 3 and 5.)
The impact of agriculture on soil nematodes and other numerous soil animals, in terms of total population.
Evaluating the backfire risks of different welfare reforms for farmed insects, shrimp, fish, or chickens (see DiGiovanni 2025).
Other things related to deep uncertainty in animal welfare (see DiGiovanni 2025 and Graham 2025 for context).
Red-teaming the cost-effectiveness analyses made by key actors on different animal welfare interventions (especially those relevant to anything listed above).
More fundamental philosophical or psychological stuff relevant to cause prio:
A) Under cluelessness, what forms of bracketing[3] (or different solutions) make most sense to guide our actions?
B) New/underrated arguments for being particularly worried about the suffering of sentient beings (rather than about pleasure or other things).
C) What explains the fact that some EA animal advocates buy suffering-focused ethics and others don't? What are the cruxes? What persuaded them? Are there social backgrounds that determine someone's degree of (non-)sympathy for suffering-focused ethics?
D) How to avoid reducing the credibility of any of the (fairly niche) kinds of work in these two lists?
How do we anticipate very understandable reactions like this one when talking about nematodes and/or indirect effects on wild animals? (E.g., how do we make clear what this work implies and does not imply?)
I.e., most ecologically inert, and most avoidant of substitution effects, funging, and other backfire risks.
See the last paragraph of this post section from Graham and this comment from Stevenson. This post section from DiGiovanni on an adjacent topic is also indirectly relevant.
Some challenges to consequentialist bracketing:
- defining a good criterion regarding which value locations get bracketed in.
- what are these value locations? What's the unit?
Some challenges to metanormative bracketing:
- potential sensitivity to the individuation of normative views.
- are normative views even non-arbitrary units to bracket over?
What does "consensual" mean here (and to some extent above)? Consensual on the part of humans/institutions?
Yup, something a variety of views can get behind. E.g., not "buying beef".
For "consensual EAA interventions" above, I think I was thinking more "not something EAs see as ineffective, like welfare reforms for circus animals". If this turned out to be the safest animal intervention, I suspect this wouldn't convince many EAs to consider it. But if, say, developing alternatives to rodents as snake food turned out to be very safe, this could weigh a lot in its favor for them.
Thanks for sharing, Jim!
Nitpick. Vasco Grilo.
Damn, a friend just made me realize that I had misspelled your first name too. So sorry, aha.
No worries!
Aha oops very sorry, fixed ;)
I've donated about $150,000 over the past couple of years. Here are some of the many (what I believe to be) mistakes in my past giving:
Donating to multiple cause areas. When I first started getting into philosophy more seriously, I adopted a vegan lifestyle and started identifying as EA within only a few weeks of each other. Deciding on my donation allocations across cause areas was painful, as I assign positive moral weights to both humans and animals, and they might even be close in intrinsic value. I felt the urge to apologize to my vegan, non-AI-worrier friends for increasing my ratio of AI safety donations to animal welfare donations, and my non-vegan, non-EA friends and family thought that donating to animals over humans was crazy. Now my view is something like: donations to AI safety are probably orders of magnitude more effective than to animal welfare or global health and development, so I should (and do) allocate 100% to AI safety.
Donating to multiple opportunities within the same cause area. Back in my early EA global health and development days, I found and still find the narrative of "some organizations are 100x more effective than others" pretty compelling, but I internally categorized orgs into two buckets: high EV and low EV. I viewed GiveWell-recommended organizations as broadly "high EV", assuming that even if their point estimates differed, their credence intervals overlapped sufficiently to render the choice between them negligible. This might even be true! However, I do not believe this generalizes to animal welfare and AI safety. Now I've come full circle in a way, and believe that actually, some things are multiple times (or even orders of magnitude) higher EV than other things, and have chosen to shut up and multiply. If you are a smaller donor, it is unlikely that your donation will sufficiently saturate a donation opportunity such that your nth dollar should go elsewhere.
Donating to opportunities that major organizations recommend/fund publicly. Major organizations may face constraints that individual donors do not. Non-profits are limited in the political activity they can engage in. Large funders may face reputational constraints that make certain grantees a poor fit. For instance, CG has noted that right-of-center AI policy groups may not be a good match for their main funder despite potentially doing valuable work, and certain cause areas may be too weird for certain funders.
Donating at the end of the year. Major evaluators often post their public recommendations at the end of November because philanthropic activity spikes in December due to the holidays and the end of the tax year. The best donation opportunities do not only appear in December! If I'm donating to 501(c)(3)s and trying to optimize taxes, I use a DAF so that I can donate in a month other than December. But also, the tax status of an organization is not a proxy for impact. For example, in the US, donating to 501(c)(3)s and 501(c)(4)s may provide tax benefits. Assuming you would donate the funds saved on taxes, it still may be higher EV to donate to non-501(c)(3)/501(c)(4) opportunities and just take the tax hit. Additionally, time discounting may be steep enough that you should make sacrifices (tax or otherwise) to donate now rather than later.
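As a toy illustration of that last trade-off (an assumed tax rate of my own, and not tax advice): if you redonate every dollar of tax savings, a deductible dollar compounds into roughly 1/(1-t) dollars delivered, so a non-deductible opportunity needs to beat that multiplier:

```python
# Toy break-even calculation for deductible vs non-deductible donations.
# Assumes you redonate all tax savings; the rate is illustrative only.

marginal_tax_rate = 0.37  # assumed marginal rate, for illustration

# Each $1 donated deductibly saves $0.37 in tax, which you redonate,
# saving a bit more, and so on - a geometric series summing to 1/(1-t):
dollars_delivered_per_net_dollar = 1 / (1 - marginal_tax_rate)  # ~1.59

# A non-deductible org delivers exactly $1 per net dollar, so it must be
# at least this many times more cost-effective per dollar to come out ahead:
print(f"Break-even effectiveness multiplier: {dollars_delivered_per_net_dollar:.2f}x")
```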
I used to donate mid-year for the reasons you gave. The last couple of years I donated at the end of the year because the EA Forum was running a donation election in early December, and I wanted to publish my "where I'm donating" post shortly before the donation election, and I don't want to donate until after I've published the post. But perhaps syncing with the donation election is less important and I should publish and donate mid-year instead?
After hanging out with the local Moral Ambition group (sadly there's only one in Malmö), I've found a shorthand to express the difference in methodology compared to EA. Both movements aim to find people who already have the "A", and cultivate the other component in them.
Many effective altruism communities target people who already wish to help the world (Altruism), then guide and encourage them to reach further (be more Effective).
Moral Ambition, meanwhile, targets high-achieving professionals and Ivy Leaguers (Ambition), then reminds them that the world is burning and they should help put out the fire (be more Moral).
There is a new "Forget Veganuary" campaign, apparently part-funded by the EA Animal Welfare Fund:
https://www.forgetveganuary.com/
https://www.farmkind.giving/about-us/who#transparency (the "Transparency" link on the campaign page)
Reddit link to a news article that calls this a "meat-eating campaign", and discussion: https://www.reddit.com/r/unitedkingdom/comments/1px018m/veganuary_champion_quits_to_run_meateating/
The idea seems to be to promote a message to not give up animal products, but rather donate to organisations that effectively campaign to improve farm animal welfare (including EA favourites like The Humane League, Fish Welfare Initiative and the Shrimp Welfare Project).
Promoting donating to such organisations seems all well and good, but it puts out very negative messages about being a vegan (which apparently means you will have "annoyed friends and family" and "got bloating from plant protein", etc.). This has got a lot of negative attention from vegan groups that I've seen. The website seems a bit ridiculous in places, e.g. its "expert" views are just those of some eating champions. [Edit: OK, that last bit was the authors being tongue-in-cheek.]
Interestingly, the person who seems to be doing the PR, Toni Vernelli, used to do the PR for Veganuary, and wrote on the Forum defending it less than a year ago: link. It's unclear if they actually changed their mind or have some other motivation to change their stance.
Anyway, it seems like quite a controversial initiative, unnecessarily negative about veganism and quite poorly put together [edit: OK, that last part was unfair; more effort had gone into it than I'd initially realised]. As a donor to the EA Animal Welfare Fund, it's not something I'd expect to be paying towards myself [edit: following discussion, I'll withhold judgement from here until we see how it all plays out].
As someone who worked with Thom to raise almost £2,000 for FarmKind charities for my birthday fundraiser a couple of months ago, I want to say publicly that I am disappointed by FarmKind's communication with the animal movement over this campaign.
As far as I can tell, FarmKind misled the movement by initially saying (or at least heavily implying) that they cooperated with Veganuary on this campaign; and they haven't acknowledged the statement by the CEO of Veganuary which makes clear that FarmKind did not in fact cooperate with Veganuary on this campaign.
Misleading the movement, and then not acknowledging doing so, violates important movement norms relating to transparency and accountability.
I continue to be open to evidence that FarmKind did not mislead the movement, and if this evidence is presented I will retract this criticism and apologise on this comment.
The criticism in this comment is separate from public-facing questions about FarmKindâs campaign, such as âis this campaign likely to harm societal perceptions of veganism, and even if so, how would that trade off against the opportunity to bring more money into the animal movement?â.
Evidence
Thom and Aidan, co-founders of FarmKind, initially said this about FarmKind's cooperation with Veganuary:
Thom referred to "cooperation" and wrote "we [FarmKind and Veganuary] are all on good terms and there is absolutely no infighting" here
Aidan characterised the ostensible conflict between FarmKind and Veganuary, as reported in the initial press articles, as "part of the bit" here and "kayfabe" (fake wrestling) here
But that seems misleading:
Wendy Matthews, CEO of Veganuary, wrote that Veganuary "was not involved in developing the 'Forget Veganuary' campaign and had no role in shaping or approving its messaging or execution" here
Jane Land, co-founder of Veganuary, said that FarmKind's campaign led to "a morale dent" in Veganuary's team here, going on to say (whilst wiping away tears):
Thom himself said at 34:50 in this YouTube interview:
Thom and Aidan have also come close to deciding what is in Veganuary's interests. Given the absence of cooperation between FarmKind and Veganuary, I think this is inappropriate and disrespectful to Veganuary:
Why this matters
I've seen several people (like Lewis Bollard here, and several others in this thread) question FarmKind or wonder about their cooperation with Veganuary, and I've seen some people (like Aidan Kankoyku here) say that there was cooperation, based on what Thom and Aidan initially said. The former deserve an answer, and the latter ought to be told what actually happened, by FarmKind.
I am increasingly worried that FarmKind's campaign, especially their handling of their relationship with Veganuary, is harming the vegan movement's perception of effective altruism and EA animal advocacy (forgive the crude labels). You can see this in this LinkedIn comment by Alistair Currie of the Vegan Society. If you think that this is happening, that it matters, and that some of the vegan movement's concerns are reasonable, more EAs ought to challenge FarmKind publicly.
Next steps
Chris Bryant of Bryant Research apologised to Jane (and by extension Veganuary) here because his initial response to FarmKind's campaign had focused on its public-facing impact (e.g. donations) in a positive way, and he had overlooked the campaign's impact on the people at Veganuary and, by extension, the animal movement. I commend Chris for this response, which I think demonstrates personal integrity and promotes good movement norms. I would strongly encourage FarmKind to do something similar, and would commend them for it.
I am keen to discuss questions about the roles that veganism and offsetting do and ought to play in our movement; I am struggling to do so, and I think many others are, until FarmKind acknowledge their misleading communications and apologise.
Would it not make more sense to run a campaign encouraging the vegan community to donate (and donate more effectively)? The vegan community seems well primed to want to use their money to help animals, more so than meat eaters. So a campaign for this purpose seems like much lower-hanging fruit than an anti-vegan campaign to get meat eaters to donate to help animals. I also suspect anti-vegan meat eaters would simply resonate with the anti-vegan sentiment of the Forget Veganuary campaign rather than actually end up donating (though this is just a hunch). It might also give them "license" to eat more meat, since they can now simply "offset" their consumption, but that sounds a lot like a "start a fire and donate to the fire brigade" kind of situation.
I imagine this is quite a rough place for FarmKind to be in and to navigate going forward. One potential way through might be to:
apologise to the Veganuary founders, CEO, and team for the impact on their brand, decades of work, and current campaign, and for adding to their stresses on the eve of Veganuary 2026, acknowledging that the campaign may have hurt many within the team at a personal level, and that undermining another org in the movement and their campaign was, in hindsight, unethical;
really own and extend that apology to any offence and upset caused within the wider movement;
show real remorse by taking down the campaign asap;
make amends by helping correct the damage to the Veganuary brand and message, by putting a good story to the press about how you called this wrong and that there's value to the vegan diet as well as to donating, good enough that the press covers it. Then do a future fundraiser specifically for Veganuary, or commit a proportion of your future fundraising to them.
None of that's easy, especially when under duress, but it could well be the right thing for all parties long term, and could regain some goodwill from large parts of the movement.
Given the pitfalls of mass communication, I am worried that the "forget Veganuary" piece of this will be a bigger takeaway for most people than "donate to help farmed animals".
Thank you to everyone on the EA Forum who has shared their thoughts and reflections so far.
We would like to clarify that Veganuary was not involved in developing the "Forget Veganuary" campaign and had no role in shaping or approving its messaging or execution. While we were given advance notice that FarmKind was planning a campaign promoting offsetting as an alternative to trying vegan in January, and were kept informed about media timing, we did not have sight of the website content until after it was launched, nor of the final PR framing. We share some of the concerns raised in this discussion about the potential risks associated with this approach.
Our organization supports both dietary change and effective philanthropy. We see the value in open discussion about the most effective ways to reduce harm to animals. We recognize that everyone involved shares a desire to end factory farming, even where we disagree on strategy/tactics.
As this discussion has arisen during our busiest period of the year, our focus is now on executing an impactful Veganuary 2026 and delivering a norm-changing campaign that reduces demand for animal products at scale and incentivizes corporations to shift the food environment toward higher availability and visibility of cruelty-free options. We may not be able to engage deeply in real time now, but we're open to further discussion and evaluation post-January.
I'm not wild about this campaign either. I've shared this feedback privately with Aidan and Thom, but think there's value to doing so publicly, to make clear that EA / the animal movement's moderate wing / FarmKind's funders don't uniformly endorse this approach. (To be clear: I'm writing in my personal capacity and haven't discussed the following with anyone else at Coefficient Giving.)
I'm a huge fan of FarmKind's team. I've personally donated to them and directed funding to them via Coefficient Giving. I thought they did an incredible job during the Dwarkesh fundraiser earlier this year, and I admire their ingenuity and grit in pursuing the very hard challenge of bringing in counterfactually new funds to effective animal advocacy. I appreciate that they meant well with this campaign, which I think they saw as using a playful fake feud with Veganuary to generate media.
But I think this campaign was a mistake, for three reasons:
This feels like an incitement to infighting, which has long plagued the animal movement. In recent years, I've seen the abolitionist / more radical wing of the animal movement take major good-faith steps to reduce this infighting (see, e.g., my session with Wayne Hsiung at this year's AVA). Whether Veganuary was in on this or not, I'm seeing vegan activists reasonably interpreting this as an attack on their advocacy. I think we should have a very high bar for deliberately starting a fight in the movement, and I don't think this meets it.
This feels like an attack on vegans. I think we should also have a very high bar for attacking well-meaning people doing good in the world, whether vegans, EAs, organ donors, aid workers, or longtermists. I appreciate that attacking vegans wasn't the campaign's intent, but I think it was the predictable result, and certainly how the folks in the Daily Mail's comments sections have (gleefully) interpreted it.
This feels dishonest. To be clear: I don't think FarmKind intended it this way, and I think the people behind it are deeply ethical people. But I think our movement is at its best when we hold ourselves to high standards, and that includes not deliberately misleading people. Creating a fake "meat-eating campaign" feels like it crosses the line for me.
Again, this isn't to question the intent or abilities of FarmKind's team. Instead, I'm sharing how I personally feel about this campaign. I hope we can avoid campaigns like this in future, while continuing to pursue the innovation in tactics that the animal movement and EA need.
As @NickLaing has pointed out, I think how people perceive the campaign or interpret its message is a lot more important than the intentions behind it. We can try to spin it however we like, but this is a straightforwardly anti-vegan campaign, maybe not in intent but in actuality. It is absolutely horrible in its attitude towards vegans, even though vegans are probably more likely to donate money to animals than any other group. Here are just a few choice snippets from the site:
1. Someone trying to go vegan had to plan every meal, give up her favourite foods, annoy friends and family, and get bloated. For all that, she helped far fewer animals than someone with an overflowing platter of meat.
2. "Can you survive Veganuary?", implying that it's some terrible trial that someone needs to endure.
3. "Every day can be hard when you're vegan". Hardly selling veganism.
4. Listing celebrities who couldn't "make veganism stick," including references to them feeling weak and struggling with ill health.
Honestly, you'd have a hard time finding a carnivore influencer who more passionately bashes veganism. People should donate money. They should also go vegan. If they can't do both, they should at least do one. But if they can do both, they should do both. That isn't implied anywhere; on the contrary, veganism is portrayed as a waste of time and vegans as weak, misguided, joyless fools.
Hi all,
Thom from FarmKind here. We at FarmKind wanted to provide a bit of context and explanation for the choices we've made around this campaign.
Context
Cooperation: We let Veganuary know about our intention to launch this campaign at the very start of our planning process and have kept them informed throughout. Our campaign provides them with another opportunity to put forward the benefits of diet change. We are all on good terms and there is absolutely no infighting.
Origin: At this time of year, due to the annual Veganuary campaign, many people and the UK press debate the pros and cons of diet change, often with very entrenched views on both sides. This creates a unique opportunity to get people who are currently unwilling to change their diet to consider donating as an alternative entry point into helping farmed animals, something that is extremely hard to get media attention for most of the time.
Goal: The goal of this campaign is to get the question of "should you do Veganuary" more media attention, and shift the focus from "is eating animals bad" to the question of which solution(s) to factory farming an individual will choose to participate in. In other words, we want the debate to be about whether to choose diet change or donating, rather than whether factory farming is a problem worth dealing with or not.
Our funders: FarmKind made the decision to launch this campaign. Organisations and individuals that have provided FarmKind with funding are not endorsing the campaign and it would be a mistake to equate past funding of FarmKind with support for our approach.
Campaign
The campaign encourages people to offset their meat this January by donating to help fix factory farming. As part of this, we hired three top competitive eaters to talk about donating to offset the animal welfare impact of their diet as they undertake one of their typical eating challenges.
By working with individuals who eat meat (but who would be undertaking these meat-eating challenges anyway), we can help reduce suspicion among entrenched meat eaters that our true motive is to make them vegan. It allows us to be authentic in our message that being unwilling to change your diet doesn't mean you can't start helping animals.
Our campaign aims to show that those who are unwilling to change their diet today can and should still begin their lifelong journey of helping animals by donating to charities working to change the food system.
Concerns
We know that some may have concerns about this approach and feel uncomfortable with the idea of paying competitive eaters who are eating meat, even in an effort to help farmed animals. However, to make change we have to start from where people are now. For most people, that starting point is eating and enjoying meat and being unwilling to change their diet.
Some media coverage has suggested that our campaign aims to encourage people to eat meat or that we are running a "meat-eating campaign". This is untrue, and we have corrected them. Tapping into the pre-existing anti-Veganuary media narrative is a feature, not a bug, because it is why they're running stories about effective giving for farmed animals (which they would never touch otherwise) and giving Veganuary free media coverage.
As part of our commitment to being as transparent and effective as we can, we're happy to answer specific questions anyone has about the campaign, but as this campaign is ongoing we may have to answer some questions in the future or privately via email.
Thanks Thom for responding. I wasn't actually aware of who FarmKind were when I wrote my post above. It looks like a very good project overall; thanks for your work in the space.
Your response doesn't answer for me the question of why it was decided to create such an anti-vegan campaign (at least in its webpage). I can see there could be a lot of good done by persuading people who are unlikely to try a vegan diet to donate. But something along the lines of "If you don't want to be vegan but want to help animals, try this instead", or even "If you hate Veganuary, here's how to beat vegans at their own goals" (but with better words...), would seem to suffice. Creating a webpage full of negative messages about being vegan doesn't seem necessary, and seems to me to actually be misinformation, given I'm not aware of anything showing that the typical Veganuary participant's experience is like what is presented.
Having read the article in the Telegraph, I didn't think it was actually that bad: it seemed to be mainly arguing for promoting donations rather than diet change, and didn't actually seem to put veganism down (except for bringing up "vegan dogma"). (Though I wouldn't agree that putting on a meat-eating challenge is ethically OK.) So being negative about veganism doesn't seem to have been necessary to get publicity, which makes it seem even stranger that the campaign web page takes this line.
It doesn't seem to have been picked up by any substantial media outlet other than the right-wing UK press. I'd have thought it would be desirable to get a broader reach, since I'd guess that people on the political left would be more likely to donate, and I wonder if being less adversarial might have worked better.
It would be good to see follow-up analysis of what impact the campaign actually has on donations.
Aidan says here that it is a "bit". That would seem to imply that Veganuary are collaborating with you on this. Can you say if that's accurate? If there's a follow-up, it would seem good to highlight it to people here.
One of the things that people are going to do with a campaign like this is try to see who is funding it. Currently, if you click the "Transparency" link at the bottom of the campaign page, it goes to a list of FarmKind's funders, including the EA Animal Welfare Fund. That is going to at least raise the possibility in people's minds that these funders implicitly endorse the campaign. Unless you've switched to self-funding, it does seem like these funders' money is being used to finance it (including individual donors to the EA AWF). Would it not be normal to check with funders before launching a campaign that's expected to be controversial? Particularly if their own donors might feel attacked by the campaign? It seems like it creates a fair amount of potential for blowback against the EA animal welfare movement.
If there is some complex strategy involving coordination with Veganuary or others, I'd hope it was discussed with a diverse range of experienced people in the animal welfare space and got their endorsement.
I would also say that the campaign web page loses credibility by calling competitive eaters "experts" (I've seen this come up in comments in non-EA spaces). Why would anyone go to such people for expertise on how best to help farm animals through donating? To me, relevant "experts" would be people knowledgeable about welfare campaigns and ethics.
I think there should also be considerably more nuance around the idea of offsetting the impacts of meat-eating. Calling it "like carbon offsetting" seems misleading, as the two seem different in a number of significant ways, which may affect what people decide to do.
Thank you so much for your response, Thom. Would you be able to clarify whether the meat-eating challenge "in which three competitive eaters will consume nothing but animal products for a whole day", as reported in the Telegraph and Daily Mail, was a misrepresentation by these outlets, or was it originally part of the campaign, with FarmKind then changing course in response to the backlash? The articles still have the same headlines, and no corrections have been made with regard to the meat-eating competition in either of the articles, as far as I can tell.
FWIW:
I think it would be more useful to clarify whether Veganuary supported you doing this campaign. If the answer is yes, that seems great! If the answer is no, this seems explicitly not cooperative, and in that case it would be misleading to frame this as a cooperative effort (independent of whether this was good or bad to do). I don't think folks were asking whether Veganuary was informed, but whether they endorsed the idea, didn't endorse it, or anti-endorsed it.
I think this just seems like a clarification worth making here, given how negative the reaction has been to the campaign (from within the movement; hopefully it had a positive reaction externally!)
Thanks for this reply. I agree with most of what you have written here.
I think, though, you've missed some of the biggest problems with this campaign.
1. This seems to undermine vegans and vegetarians (see image above), and their efforts to help animals. It seems straightforwardly fair to interpret this as anti-Veganuary and anti-vegan, especially at a glance.
2. What matters in media is how you are portrayed, not what the truth is. Your initial campaign poster is ambiguous enough that it's easy to interpret as a pro-meat-eating and anti-vegan campaign. I could have interpreted it that way myself; I don't think the media were grossly wrong here to report that.
The Telegraph article is actually pretty good overall and makes points that could be good for animal welfare, although the "clickbaity" title and first paragraph are unfortunate (see above).
Media lasts for a day; correcting it is the right thing to do but doesn't have much of an impact.
I can see what you are trying to do here, and it's quite clever. I love most of your stuff, but this campaign seems like a mistake to me.
After having a quick look at this campaign, it pretty straightforwardly seems misguided and confusing. FarmKind's efforts to appeal to regular people to donate rather than go vegan seem good and make sense. This adversarial campaign looks and feels awful. Two reasons immediately jumped out as to why it feels off:
it undermines and even goads vegans and vegetarians doing their bit for animals
glorifying people who eat lots of meat feels bad in a guttural, almost "Kantian" kind of way, regardless of the utilitarian calculation.
In general I think complex utilitarian arguments struggle to be communicated well in pithy campaigns.
I'm surprised the FarmKind people have made what seems like a pretty straightforward mistake like this, as I've been super impressed by all the other material they have put out.
I think I understand the worries and discomfort people feel about this approach. But I'm not sure how fruitful it is for all of us to have a vibes-based conversation about the possible merits of this campaign. It already exists. It might end up being good, it might end up being bad. We can make it better. If you think some of the risks taken and assumptions made by FarmKind are unaddressed, let's talk about how we can mitigate those. Let's also figure out how we can support FarmKind in doing what they intend to do for animals. And most importantly, let's make sure we learn from this campaign.
How can we learn from this experiment?
Trying new approaches in this complex and relatively new space is great if you thoughtfully measure whether they work. Measurement and evaluation are especially important because there are backfire risks and because this is a deeply underfunded cause area, so we cannot afford to be careless.
It can be easy to falsely attribute successes and failures. So, what are some indicators that this might demand pivoting/repeating? I'd love to hear from FarmKind, The Mission Motor, behavioral scientists, and the ACE researchers who worked on the Better for Animals resource what they think would give us valuable insights.
What is the bar for money raised that would make this worth it? What is the cost of FarmKind's Veganuary campaign, what else could have been done with those funds, and how much money is raised through their platform specifically in response to this campaign?
Can we assess spillover effects?
Are there some PhD students out there who are willing to work with FarmKind to figure out some RCTs to learn some things? E.g. how long do people donate, do they change their diet, what do they think of factory farming, what were their priors, etc.? (See the rough sketch after this list for a sense of the sample sizes involved.)
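For a rough sense of scale, here is a minimal, illustrative power calculation in Python for a donation-conversion RCT. The rates used (a 2% baseline donation rate, an uplift to 3%) are assumptions for the sketch, not FarmKind data:

```python
# Rough sample-size sketch for a donation-conversion RCT.
# The baseline and treated rates below are illustrative assumptions only.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline, treated = 0.02, 0.03  # assumed donation rates: control vs. campaign arm
effect = proportion_effectsize(treated, baseline)  # Cohen's h for two proportions

# People needed per arm for 80% power at a 5% significance level
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(f"~{n_per_arm:,.0f} people per arm")  # roughly 1,900 per arm under these assumptions
```

The point is just that detecting a one-percentage-point uplift on a low base rate takes thousands of participants per arm, which matters when judging whether an RCT is a responsible use of the movement's limited funds.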
How can we mitigate possible harms?
Risk: discussion remains focused on individual diet change, not ending factory farming
Can FarmKind, now that they have the attention, redirect their messaging and no longer talk about diets but instead about the horrors of factory farming?
Can both vegans/Veganuary and FarmKind state that what they care about is a more hospitable world for all, and that industrial agriculture is the enemy?
Risk: moral circle expansion is slowed
Can Toni and FK and participants come out saying something like this: "Don't get us wrong, we are all actually bleeding hearts, we do care about animals, we don't think eating animals the way society does now is necessary, natural, or normal, but we are just being pragmatic. We think being vegan is good, but preaching veganism is not."?
Can they direct some of the funds they raise to high-impact interventions that do things like education programs aimed at fostering compassion and empathy for animals, anti-speciesist policy work, actions promoting moral consideration of animals in public discourse, etc.?
Risk: time is wasted on infighting
Can both Veganuary and FarmKind state that what they agree on and care about is a more hospitable world for all and that industrial agriculture is the enemy?
Can Toni flip-flop some more, and in February say, "You know what, I was wrong. It's not either/or; it should be both, or can be a little bit of each."?
Can FarmKind share the metrics and results of their campaign, and show up in vegan spaces like r/vegan to explain their approach and solicit feedback?
I think AVA is planning to host a discussion about this at their Summit in Canada in May.
Risk: fewer people reduce their animal consumption or do it later
Meat producers can use this in their propaganda; can we use AI to find the conversations about this that misrepresent the arguments and counteract them?
Can Toni and other former vegans come out and say something like, "Actually, after having hung out with all of these meat eaters and learning more about where their food comes from and having seen what it does to their bodies, I think it's actually kinda gross/disgusting/unsympathetic. I'm happy they donate, but for their sake, I hope they eventually put their mouths where their money is."
What would happen if Veganuary went on offense with aggressive angles like:
"We applaud that FarmKind offers all the weak-willed meat-addicts out there a compassion cheat code against animal cruelty. We do hope that the people who listen to Toni and FarmKind's advice 1) also talk with their doctors and nutritionists and 2) learn about the hidden truths about factory farming."
"We agree that there are multiple roads that lead to Rome, and the super-highway is one where we both do no harm and reduce harm as much as possible. So, we actually already recommend that people who participate in Veganuary also donate to high-impact pro-animal charities. Yes, we are even more holier-than-thou than you thought. We hope vegans put their money where their mouths are. And we hope that offsetters eventually put their mouths where their money is, for animals' sake and their own."
"How do you know someone is a meat eater? They will tell you. (And they're more likely to need GLP-1.)"
"If you're not one of these privileged people who can buy humanely raised meat and donate money, remember that beans are healthy, cheap, and cruelty-free."
(I don't particularly endorse any of these messages, but I could see people pulling up a chair and a popcorn bucket to watch this while being exposed to different arguments based on the same premise: that farming cruelty is bad.)
How can we increase the likelihood of success?
Opportunities to increase donation conversion
Is there a possibility for a follow-up press release by FarmKind or a pitch with testimonials of carnists who have made donations?
What would happen if FarmKind dared vegans and Veganuary supporters to donate? Can they do a donation contest with Veganuary? ACE can probably set up a fundraising page for vegans if Veganuary doesn't want to do it on the FarmKind site. (Happy to credit FarmKind for those donations, but I'd like them to go where they are likely to do the most good.)
Can Toni share where she donates to?
Can we leverage the comment sections to encourage people to share where they donate and include donate links?
Opportunities to increase awareness
Can Toni talk about how Veganuary doesn't talk about animals enough and talks too much about health and climate, and how the big problem is factory farming?
Can FarmKind include and promote people in their pitches who also started reducing their meat intake after learning more about factory farming?
Can FarmKind or Toni talk about small-bodied animals and their Shrimpact work? What if Toni says, "Sure, maybe it's okay if some of these people want to eat some red meat and offset it with donations, as long as they don't start eating chicken or salmon, or eggs."
There are probably more, and more productive, ways to help FarmKind and Veganuary and the whole EAA movement in this endeavor. Please share your ideas. Also, what will you do this January: donate, go vegan, or both?
Three final thoughts that I didn't really know where to put:
If we think AI can soonish solve some of the big alt-protein questions (taste, scaling, price, etc.), then we will still need people to stop thinking they need animal products. If we think public discussions will affect alignment, then we need pro-animal messaging to be out there. I'm wondering if this means that hard-to-measure interventions toward increased prevalence of anti-speciesist values might have become more important than I thought they were. On the other hand, if we think AI will solve factory farming, maybe in the meantime we need to focus as much of our time as possible on increasing the welfare of animals who are farmed until then, and that's more likely done through welfare campaigns than promoting veganism. Either way, we should probably be careful in how we talk about vegans, and bring animals up more often, even in meat reduction work. However, I'm very uncertain about all of this and curious what you think.
What could this offsetting approach to donating mean for effective giving? Is there a way to leverage this work to get people to make GWWC pledges, or to get offsetters to think about how they use their donations in general? FarmKind wasn't successful in becoming the Giving Multiplier for animals and pivoted to offsetting, but maybe they can still direct offsetters to the Giving Multiplier?
I work full-time in animal advocacy. I don't think that gives me an excuse to eat animals. I am vegan. I don't think that absolves me from donating to effective charities to reduce as much harm as possible. It's a privilege that I can do all three of these things. In this world, few people can. It seems good to encourage people to do everything they can, while also understanding that that might be limited. So, let's help people help more animals as best they can. We need to understand better what works and work together to make that happen.
Edit: This is my personal take and not Animal Charity Evaluators' opinion.
Just a quick word from me, Nicoll from The Mission Motor (TMM supports Monitoring, Evaluation, and Learning in the animal movement). Thanks, Stien, for your balanced and clear thoughts and for asking for our take.
Based on what I read, I would consider this to be a novel and higher-risk intervention. Many of the more common interventions in the animal space could do with more robust data gathering, but a higher-risk/novel intervention warrants an even stronger focus on MEL.
Common data-gathering instruments, such as surveys, interviews, focus groups, etc. (when asking the right questions), can work well here to gather relevant data. And, saying this with a bit of caution, I don't think more elaborate MEL tools are needed.
Some of the challenges we foresee are reaching particular groups you might want data on (e.g. people who read the campaign materials and don't actively engage, but could change their attitude or behaviour) and saying something sensible about the overall effect of the campaign, particularly as it likely impacts another campaign (Veganuary), and because it requires comparing increased animal welfare through donations against fewer animals living net-negative lives in factory farms as a result of reduced animal consumption.
I think it is possible to overcome these and other challenges, but this might come at too high a cost to still be a responsible use of resources.
To be able to properly comment on credible indicators, I'd love to know the specific Theory of Change, so I won't go into that now.
I totally assume FarmKind has done some MEL work already, but if we can be of assistance, we'd be happy to help!
Really well written, and an incredibly good breakdown of some of the strategic factors here that I wouldn't have come up with myself reading the above.
But I also think you may have partially missed the mark here. Statements like:
are utilitarian in flavor, and really the whole of the comment is. What if you think this sort of thing is just promoting bad norms that sort of feel deontologically wrong?
One way I can see that is as violating a norm of kindness to others. Vegans sacrifice a lot, and having someone from within the movement highlighting the negatives isn't great vibes. "But they're not talking about current vegans, just those potentially thinking about change." Okay, great; try telling a Christian that they should stop recruiting because Christians "annoy friends and family" leading a lifestyle that's a significant burden to everyone, themselves included. I doubt they'll be enthused. To state what I mean here more clearly rather than leaving it to be inferred: casting something that's a big part of someone's life in a negative light generally doesn't make their day better.
But they protest: "No no, you've got us wrong. We really are pro-vegan, we just think this is a more effective way to get eyes on the issue and increase exposure to AW topics." Now I think this is potentially violating some norm of trust or honesty. Maybe if a person comes to care about AW they wouldn't really care in the end, but I know that if I decided to start donating rather than trying for diet change again, just to discover that this was all some ploy to drum up further controversy and reach, I'd feel played and more than a bit disillusioned.
If I put on my utilitarian cap, everything you say above seems right. If I put on my deontologist cap, this campaign just doesn't seem quite right. The utilitarian in me feels compelled to say "but I also don't know what it's like to work in comms around AW, and maybe attention really is some significant bottleneck standing between us and further animal lives saved". The deontologist then responds "yeah, maybe. But is this the type of thing you'd see in a healthy community of animal advocates?" [1]
I realize that you're not endorsing the strategy and are just analyzing it; part of this speaks to the analysis, but part of it is also aimed at those executing as well.
Love this comment so, so much! My only minor disagreement is that I think the forum here isn't a bad place to have a bit of a "vibes-based" conversation about a campaign like this. Then we can move into great analysis like yours right here.
I think promoting good norms and making them more "common knowledge" is one of the few ways that EA Forum conversations can maybe be useful.
As in, I think it's good that "everyone knows that everyone knows" that we should have a strong bias towards being collaborative with other projects with similar goals, and these threads can help a bit with that.
(To be clear, my sense is that FarmKind is already well aware of this and that this is a collaborative campaign, especially after reading their comment. I mean this for the EA Forum reader community as a whole.)
Edit: new comment from FarmKind
Contrarian marketing like this seems like it would only work well if the thing being opposed were extremely well known, which I don't think Veganuary is.
Seems true. Looking at Google Trends, "veganuary" is a lot less searched for than "movember".
And I'd suspect that "movember" isn't all that well known either; compare it to, for example, Black History Month.
This might be a bit pedantic, but I would note that Veganuary is more popular in the UK. If we adjust the Google Trends search to be UK-only, the two look more comparable.
Of course, I suspect Movember is more US-based, so this is now maybe too biased towards Veganuary; and even so, Movember still outpaces Veganuary, but it does look more competitive.
(I don't know if Black History Month is a fair comparable, especially considering it's part of the US education system in a way the other two aren't.)
Again, I don't think this changes your larger point all that much, but figured additional context helps.
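For anyone who wants to reproduce this comparison programmatically rather than eyeballing the Trends UI, here is a minimal sketch using the unofficial pytrends library (an assumption worth flagging: pytrends is a third-party wrapper, not a Google API, and it periodically breaks when Google changes its endpoints):

```python
# Sketch: compare UK search interest for the two campaigns via pytrends.
# pip install pytrends
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-GB", tz=0)
pytrends.build_payload(["veganuary", "movember"], timeframe="today 5-y", geo="GB")

df = pytrends.interest_over_time()  # weekly relative interest, 0-100 scale
print(df[["veganuary", "movember"]].max())  # peak seasonal interest for each term
```

Note that Trends values are relative to the highest point within the query, so the peaks are only comparable when both terms go into the same build_payload call.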
This feels like a very negative take on a lighthearted campaign that is trying to get across an important point. It's important to do outreach to people who disagree with you, even people who think vegans are annoying.
It doesn't seem "lighthearted" to me; it seems quite serious. OK, the browser "game" is quite silly. But if it's meant to be lighthearted, then that seems to have not come across to quite a lot of people... Trying to appeal to people who don't want to adopt a vegan diet is fine, but I don't think attacking another group's effort and the idea of veganism in general is.
No one in this thread is the target audience for the campaign. And you are clearly attacking another group's effort right here!
You're right that we aren't the target audience. I take this as probably evidence in the other direction: I think if EAs on the forum feel uncomfortable about this, the general public is likely to take it even worse than us.
I agree that it's a light-hearted campaign that is clever, with good intentions. I just think it's a mistake and might well do more harm than good. That's OK; this is just one campaign among many great ones from FarmKind.
"I think if EAs on the forum feel uncomfortable about this, the general public is likely to take it even worse than us". I really disagree with this. EAs' values and sensibilities are very different to the average person's. Things that EAs consider horrifically callous are normal to the average person, and vice versa.
Examples of the former: eating meat, keeping all your wealth for yourself, "charity begins at home".
Examples of the latter: measuring impact and saying we shouldn't give resources to organizations that don't perform well against these measurements, donating to help shrimp rather than people, donating to help strangers overseas rather than your local community, expressing support for billionaires who give away some of their wealth.
There hasn't been backlash to this campaign from average people, only from EAs and animal advocates.
Hi Aidan, two points:
Are FarmKind claiming that Veganuary is one of those organisations?
Depends what you mean by "backlash". It's kind of unclear to me what backlash from average (non-vegan) people would look like, especially given I suspect most of them who have read a headline about it think this is just an anti-vegan campaign.
The comments on the Daily Mail piece (which should be taken with a huge pinch of salt, given it's the Daily Mail + online comments in 2025) look quite a lot like backlash to me, though.
"Are FarmKind claiming that Veganuary is one of those organisations?" – No
I think non-EA animal advocates count as being part of the general public in Nick's usage? From what I've seen it's been going down badly with them so far...
Why not go even further with outreach and diss the unpopular issue of animal welfare altogether? Then you can reach a huge crowd of people with your new modified message for good: "animal welfare is irrelevant".
Because the author's objective is to promote animal welfare. They are jettisoning that which is unnecessary, but you need the payload.
Yes, I'm joking, but keeping a payload, any payload, at the cost of the actual principles of your supposed cause is pointless. Like, they could adjust their message to appeal to people who are alienated by appeals to animal welfare at all, and just advocate for meatless Mondays in the name of reducing methane emissions. But that would be pretty ineffective, just like sending this bizarre, conflicted message and discouraging pro-animal advocacy is ineffective.
Oh. I find this negative and personally upsetting.
Effective altruism brought to animal advocacy a strong norm of collaboration, and this feels like undermining years of work. I wrote about it some time ago:
This campaign seems like a well-made one, but I think it contributes to polarization, and I worry about alienating potential talent that is motivated by helping animals. It feels off to use a campaign name that invokes another charity's name in a negative sense; it feels like an attack. Finally, the very adversarial tone toward plant-based choices undermines the work of some of the charities recommended by FarmKind, like Dansk Vegetarisk Forening.
So, overall it feels like optimizing for bringing in money at the expense of collaborativeness, and at the expense of other factors that contribute to the impact of the movement, like not alienating talent.
I hope I'm wrong and that I'm missing some considerations, but I think effective altruists should have moral guardrails that make them unlikely to engage in certain behaviors, and, to me, collaborativeness is one of the virtues that should not be discarded easily.
If anything, it feels a bit like a missed opportunity for some collab with Veganuary, but maybe FarmKind had reached out to Veganuary.
Edit: See Aidan's comment below!
-
This seems right to me. The Telegraph article had a quote from Veganuary that was critical of the campaign. My understanding is that FK has been keeping Veganuary informed throughout the process, which is good, but it does not seem to be the case that this was a collaboration between the two.
Veganuary seeming against it is part of the bit. These media outlets hate Veganuary and wouldn't cover it if they thought it was what Veganuary wanted. We (FarmKind) have an announcement coming tomorrow explaining the context behind this campaign, but the TL;DR is that it is not encouraging meat eating; it's encouraging donating as another option for people who aren't willing to change their diet, and generating coverage for Veganuary, who have a harder time getting in the media each year without a new hook.
This seems to be contradicted by Wendy's comment above.
I'm pretty concerned (and confused) about the lack of alignment between FarmKind's perspective and Veganuary's on the extent of cooperation between the two ahead of the campaign launch.
EDIT:
Thom says at 34:50 in this YouTube interview:
Thanks for engaging, Aidan. Things may be clearer once we see any follow-up, I guess, but this strategy seems like it could come across as duplicitous, and rather risky not just for the organisations involved but also for the wider EA movement, given the desire to seem trustworthy after the events of the past couple of years.
I get the good intentions here, but it looks to have backfired badly. Obviously I'm not deep in this, but I hope that withdrawing the campaign and a quick apology is on the table for you guys at least. All the best figuring it out!
Thank you, that's good to know! If the campaign isn't encouraging meat-eating, why does it feature competitive meat eating? Are you concerned that it's been reported as a "meat-eating campaign" in several outlets?
So this is... ~EA kayfabe? (That term refers to "the portrayal of staged elements within professional wrestling... as legitimate or real".)
Haha, kayfabe is exactly right. Let's not spoil it for the fans.
Completely speculating here, but I wonder how much of the impetus for a campaign like this could be (emphasis on could!) illustrative of a broader disinterest in diet-change work among some EAs. And so, even if Veganuary and adjacent efforts, or even veganism generally, are undermined in public discourse, some EAs might be OK with this because they basically don't think diet change is a serious way to help animals?
Like, to me, if this campaign successfully brings in a lot of donations that otherwise wouldn't be given, then that would be a success, assuming in the interim there aren't major fractures in the movement generally or other harms. But I wonder if some EAs basically round those fractures to zero regardless of how serious they are/may seem.
This could be completely wrong, though! This is a quick take, after all :).
Encouraging such donations could be good, and advocating for diet change doesn't seem to be favoured in EA. Advocating a "moral offsetting" approach to meat consumption is probably controversial, I guess, but within the realm of the plausibly reasonable. There doesn't seem to be anything gained by being negative about veganism, though, and not doing that would seem robustly better.
Edit: perhaps it could be argued that a campaign against veganism may raise attention more effectively than if no criticism were made. That would still seem to me to be an excessively risky and divisive strategy, though. And the campaign makes claims about veganism that don't seem to be generally correct, and says some other silly things, which doesn't seem like a good way to go.
Being seen as honest about the problems with veganism raises their credibility for their other recommendations: "Oh yes, we're not like those annoying people you've already rejected, we have a different view".
It doesn't really seem honest to me. It ignores all the experiences of people who didn't find it particularly problematic, or even found it positive, to do Veganuary.
Thank you for sharing this. I'm personally very surprised to see this campaign from FarmKind after reading "With friends like these" from Lewis Bollard and "professionalization has happened, differences have been put aside to focus on higher goals and the drama overall has gone down a lot" from Joey Savoie.
I would have expected the ideal way to promote donations to animal welfare charities to be less antagonizing towards vegan-adjacent people.
@Vasco Grilo, given that your name is on the https://www.forgetveganuary.com/ campaign and you're active on this forum, I'm curious what you think about this. Were you informed? Edit: they will remove that section from the page
Hi Lorenzo.
I was not informed.
To clarify, it was just in a Google Reviews carousel they also have on the homepage, at the bottom of the page, and it was quickly removed
Woah! Agreed. I have a somewhat more positive view of go-vegan/meat-reduction campaigns; but even disregarding that, this doesn't make sense. Current vegans are probably the best targets for a donate-more campaign, and I can tell from experience reading r/vegan that this is unlikely to go down well!
Friendly reminder that there's only 1 day left to apply for this upcoming round of SPAR!
Apply by January 14 to join our largest round yet: 130+ projects with mentors from Google DeepMind, RAND, UK AISI, Apollo Research, SecureBio, MIRI, and more!
This is likely the only SPAR round until fall this year.
Work on a part-time AI safety, AI policy, AI security, or biosecurity project. Open to students & professionals; prior research experience not required for all projects.
Check out open projects and apply here
We've heard from a lot of people who feel they're getting rejected from jobs for being overqualified, which can be pretty frustrating. One thing that can help with this is to think about overqualification as an issue of poor fit for a particular role. Essentially, what feels like a general penalty for past success is usually about more specific concerns that your hiring manager might have, like:
Will you actually be good at this work? You might have years of experience in senior roles, or other impressive credentials, but this doesn't always mean you'll be able to perform well in a more junior role. For instance, if you've been managing teams for years, they may worry you lack recent hands-on experience and don't know current best practices.
Will you stick around? If you've been leading large teams but are applying for an individual contributor role, they might wonder if you'll actually find the work engaging or get bored without the higher-stakes responsibilities. They may worry you're just using this as a stepping stone until something better comes along. Hiring is costly and time-consuming, so they don't want to invest in someone who'll be gone in a few months.
Will you expect more than they can offer? If you've worked in more senior roles, an organization might think you'll be looking for opportunities for growth, benefits, and a salary beyond what the organization is able to offer. If you're likely to demand more than they're able to give, they won't want to waste time advancing you through the process.
If you're genuinely excited about a role but are worried about being perceived as overqualified, the good news is that you can address these concerns in your application (especially your cover letter or application answers). For instance, if you're stepping down in seniority, explain why you actually want to do this work. If you've worked in management and want a return to the hands-on work you're really passionate about, then mention this.
You should also make sure to emphasize the parts of your background that are most relevant to the role, rather than the ones that seem most impressive in general. Your PhD might be impressive, for example, but unless it's closely connected to the role you're applying for, you might want to highlight other parts of your CV instead (like your operational experience if you're applying for an ops role).
The important takeaway is to think about your fit for a specific role rather than your qualification level. Having more experience in a certain area isn't necessarily better if it doesn't help with the type of work you'd actually be doing, or if it implies you'll have expectations that an organization won't be able to match.
If you want to learn more about this, you can read our full article on overqualification.
I'm running a small fundraising match for Innovate Animal Ag until January 16th. IAA helped accelerate in-ovo sexing in the US, one of Lewis' "Ten big wins in 2024 for farmed animals". I think Robert and team have a thoughtful and different approach to welfare that seems tractable. At least, it's a bet worth placing. I imagine IAA bringing new welfare technologies above the line of commercial viability and providing the fuel for orgs like The Humane League to push forward. Join me in my (small) match!
I made this simple high-level diagram of critical longtermist "root factors", "ultimate scenarios", and "ultimate outcomes", focusing on the impact of AI during the TAI transition.
This involved some adjustments to standard longtermist language.
"Accident Risk" → "AI Takeover"
"Misuse Risk" → "Human-Caused Catastrophe"
"Systemic Risk" → This is split up into a few modules, focusing on "Long-term Lock-in", which I assume is the main threat.
You can read and interact with it here, where there are (AI-generated) descriptions and pages for things.
Curious to get any feedback!
I'd love it if there could eventually be one or a few well-accepted and high-quality assortments like this. Right now some of the common longtermist concepts seem fairly unorganized and messy to me.
---
Reservations:
This is an early draft. There are definitely parts I find inelegant. I've played with the final nodes instead being things like "Pre-transition Catastrophe Risk" and "Post-transition Expected Value", for instance. I didn't include a node for "Pre-transition Value"; I think this can be added on, but it would involve some complexity that didn't seem worth it at this stage. The lines between nodes were mostly generated by Claude and could use more work.
This also heavily caters to the preferences and biases of the longtermist community, specifically some of the AI safety crowd.
I'll take this post off the frontpage later today. This is just a quick note to say that you can always message me (or use the intercom feature, the chat symbol on the bottom right of your desktop screen) if you'd like to make suggestions or give feedback about the EA Forum.
I can attest that I message @Toby Tremlett quite a bit and he's always really nice, even when my suggestions are kind of stupid or a little emotional.
Actually, he's polite and nice even when they're really stupid or extremely emotional as well.
I thought this could be relevant to a few people interested or working in bioethics:
The Bioethics Interest Group is one of several dozen Special Interest Groups that operate out of the Office of Intramural Research at the NIH. Its monthly virtual seminars "provide a discussion forum, consider different views, and present research on complex ethical issues in medical research." If you are interested in or working in bioethics, you might find it worthwhile to sign up for its newsletter so that you have the opportunity to read about and consider attending its seminars.
(Half-baked and maybe just straight-up incorrect about people's orientations)
I worry a bit that groups thinking about the post-AGI future (e.g., Forethought) will not want to push for something like super-optimized flourishing, because this will seem weird and possibly uncooperative with factions that don't like the vibe of super-optimization. This might happen even if these groups do believe in their hearts that super-optimized flourishing is the best outcome.
It is very plausible to me that the situation is "convex", in the sense that it is better for the super-optimizers to optimize fully with their share of the universe while the other groups do what they want with their share (with rules to prevent extreme suffering, pessimization, etc.). I think this approach might be better for all groups, rather than aiming for a more universal middle ground that leaves everyone disappointed. This bad middle ground might look like a universe that is both not very optimized for flourishing and still super weird and unfamiliar.
It would be very sad if we missed out on optimized flourishing because we were trying not to seem weird or uncooperative.
Hmm this is interesting.
Speculatively, I think there could actually just be convergence here, though, once you account for moral uncertainty and for very plausible situations where outcomes that are bad by everyone's lights are as bad as, say, utilitarian nightmares but easier to get others on board for (i.e. extreme power).
Two hours before you posted this, MacAskill posted a brief explanation of viatopianism.
I think I'm largely on board. I think I'd favor doing some amount of utopian planning (aiming for something like hedonium and acausal trade). Viatopia sounds less weird than utopias like that. I wouldn't be shocked if Forethought talked relatively more about viatopia because it sounds less weird. I would be shocked if they pushed us in the direction of anodyne final outcomes. I agree with Peter that the situation is "convex", but I don't worry that Forethought will have us tile the universe with compromisium. But I don't have much private info.
Yeah, agreed on that point. Folks at Forethought aren't necessarily thinking about what a near-optimal future should look like; they're thinking about how to get civilisation to a point where we can make the best possible decisions about what to do with the long-term future.
Actually, Jordan, better-than-"pretty ok" futures are explicitly something that folks at Forethought have been thinking about. Just not in the Viatopia piece. Check this out: https://www.forethought.org/research/better-futures
I should read that piece. In general, I am very into the Long Reflection and I guess also the Viatopia stuff.
Question: Should I serve on the board of a non-EA charity?
I have an opportunity through work to help guide a charity doing work on children's education and entertainment in the UK and US. It has an endowment in the tens of millions of pounds.
Has anyone else had experience serving on the board or guiding committee of a non-EA charity? Did you feel like you were able to have a positive influence? Do you have any advice?
I am a charity trustee (of a much smaller charity).
I would say: go for it! Try to learn a lot from your experience. It's a huge development opportunity for you.
I would ask myself something like the following questions to figure this out. (I'm assuming, from the picture you paint, that you don't think their current work is necessarily wildly impactful.)
1. Do I have the time and headspace to take this on? Will it negatively affect other things I do?
2. Do I like the other board members (at least in theory), and will I work well with them?
3. Will this be something energy-giving and enjoyable for me? Some work (even if not that impactful) can almost paradoxically give us more energy for the more impactful stuff. I've noticed this more and more over the years.
4. Is there perhaps an opportunity for me to shape the charity's work towards something more impactful? Influencing the thought world of children has potential. There's a saying attributed to the Jesuits which goes something like "Give us a child till they are 7 and we'll have them for life", so those years are importantly formative.
Is there a running list of small, impactful & very capacity-constrained giving opportunities somewhere?