Graduate student at Johns Hopkins. Looking for entry-level work; feel free to message me about any opportunities!
Dylan Richardson
I accept that political donations and activism are among the best ways to do good as an individual.
But it is less obvious that EA, as an academic discipline and social movement, has the analytical frameworks that suit it to politics; we have progress studies and the abundance movement for that. Mainly, I think there is a big difference between consensus-building among experts or altruistically minded individuals and consensus-building in the political sphere of the mass public.
It is of course necessary for political donations to be analyzed as trade-offs against donations to other cause areas. And there’s a lot of research that needs doing on the effectiveness of campaign donations and protest movements in achieving expected outcomes. And certain cause areas definitely have issue-specific reasons to do political work.
But I wouldn’t want to see an “EA funds for Democrats” or a “EAs Against Trump” campaign.
I don’t have a good data source on hand, but my understanding is that pollution from car travel is particularly harmful to local air quality, whereas emissions from plane travel, for instance, are less so.
But yes, I assume some portion of Giving Green’s grantees do work that benefits air quality at least secondhand. Air pollution could be included in the calculator as a harm, with the donation just directed to Giving Green as well.
Yes, you are probably right. I just threw that out as a stand-in for what I’m looking for. Ending all factory farming is too high a bar (and might just happen due to paper-clipping instead!).
Maybe 10-20x-ing donor numbers is closer? I’d reference survey data instead, but public opinion is already way ahead of actual motivation. But maybe “cited among the top 10 moral problems of the day” would work. Could also be the number of vegans.
I think that is both correct and interesting as a proposition.
But the topic as phrased seems more likely to mire discussion in yet more timelines debate, rather than in this proposition, which is a step removed from:
1. What timelines and probability distributions are correct
2. Whether EAs are correctly calibrated
And only then do we get to
3. EAs are “failing to do enough work aimed at longer than median cases”.
- arguably my topic “Long timelines suggest significantly different approaches than short timelines” is between 2 & 3
Perhaps “Long timelines suggest significantly different approaches than short timelines” is more direct and underdiscussed?
I think median EA AI timelines are actually OK, it’s more that certain orgs and individuals (like AI 2027) have tended toward extremity in one way or another.
I mean all of the above. I don’t want to restrict it to one typology of harm; just anything affecting the long-term future via AI, which includes not just x-risk but value lock-in, s-risks, and multi-agent scenarios as well. And I mean making extrapolations from Musk’s willingness to directly impose his personal values, not just current harms.
Side note: there is no particular reason to complicate it by including both OpenAI and DeepMind; they just seemed like good comparisons in a way Nvidia and DeepSeek aren’t. So let’s say just OpenAI.
I would be very surprised if this doesn’t split discussion at least 60/40.
Good topic, but I think it would need to be opened up to plant-based as well and reduced to something like “more than 60%” to split debate adequately.
“Grok/xAI is a greater threat to AI Safety than either OpenAI or Google DeepMind”
- (Controversial because the latter presumably have a better chance of reaching AGI first. I take the question to mean “which one, everything else being equal and investment/human capital not being redistributed, would you prefer to not exist?”
Mostly I just want a way to provoke more discussion on the relative harms of Grok as a model, which has fallen into the “so obvious we don’t mention it” category. I would welcome better framings.)
“Policy or institutional approaches to AI Safety are currently more effective than technical alignment work”
Really cool! Easy to use and looks great. Some feedback:
The word “offsetting” seems to have bad PR. But I quite like “Leave no harm” and “a clean slate”. I think the general idea could be really compelling to certain parts of the population. There is at least some subsection of the population that thinks about charity in a “guilty conscience” sense. Maybe guilt is a good framing, especially since it is more generalizable here than what most charities are capable of eliciting.
I’m certainly not an expert on this, but I wonder if this could have particular appeal to religious groups? The concept of “Ahimsa” in Hinduism, Buddhism, and Jainism seems relevant.
Last suggestion: Air pollution may be a good additional category of harms. I’m not sure what the best charity target would be though, given that it is hyper-regional. Medical research? Could also add second-hand cigarette smoke to that.
Seems like the best bet is to make it as comprehensive as possible, without overly diluting the most important and evidence-backed stuff like farmed animal welfare.
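To make the air-pollution suggestion a bit more concrete, here is a minimal sketch of how an extra harm category might slot into an offsetting calculator like this one. Everything here is hypothetical: the HarmCategory structure, the field names, and especially the numbers are illustrative placeholders, not the calculator’s actual code or real cost-effectiveness estimates.

```python
from dataclasses import dataclass

@dataclass
class HarmCategory:
    """One harm category in the calculator (hypothetical structure, not the real implementation)."""
    name: str
    unit: str                     # what the user reports, e.g. km driven per year
    harm_per_unit: float          # placeholder: "harm units" caused per reported unit
    offset_cost_per_harm: float   # placeholder: dollars needed to avert one harm unit
    suggested_charity: str        # where the offsetting donation would be directed

    def suggested_donation(self, units_reported: float) -> float:
        """Dollars suggested to offset the reported activity, under these placeholder figures."""
        return units_reported * self.harm_per_unit * self.offset_cost_per_harm

# Entirely illustrative numbers -- real figures would need actual air-quality research.
air_pollution = HarmCategory(
    name="Local air pollution from car travel",
    unit="km driven per year",
    harm_per_unit=0.001,
    offset_cost_per_harm=5.0,
    suggested_charity="Giving Green (or a more air-quality-specific grantee)",
)

# e.g. 10,000 km driven per year -> a suggested offsetting donation of $50 under these placeholders
print(air_pollution.suggested_donation(10_000))
```

The hard part, as noted above, is that the right harm-per-unit figure and charity target depend heavily on where the user lives, so a real version would probably need region-specific estimates or a deliberately conservative global average.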
“Mass Animal Welfare social change has at least a 40% chance of occurring before TAI”
(Social change, not necessarily material or policy change—hard to specify what qualifies, but maybe quadrupling the number of individual donors, or the sizes and frequency of protests.)
I actually started drafting a post called “Do Vegan Advocacy Like Yud” for this reason!
It seems to me that many orgs and individuals stick to language like “factory farming is very bad” when what they actually believe is that it is the biggest current moral catastrophe in the world. That, and they sidestep the issue by highlighting environmental and conservation concerns.
Woah! Agreed. I have a somewhat more positive view of go-vegan/meat reduction campaigns, but even disregarding that, this doesn’t make sense. Current vegans are probably the best targets for a donate-more campaign, and I can tell from experience reading r/vegan that this is unlikely to go down well!
Has anyone tried appending “Hire me, and I’ll donate 10% of my paycheck to charity” or something similar to their resume or LinkedIn?
I suspect it would just hurt non-EA applications, due to do-gooder derogation and other reasons. But maybe that’s just cynicism on my part?
I’m ranking the Animal Welfare Fund first: the Shrimp Welfare Project is already a grantee of the fund, and in general I don’t think it is clearly much more effective than the fund’s other grantees. Many of these are emerging interventions and causes which plausibly benefit considerably from the marginal dollar (at least until we have better evidence for tractability).
I’m ranking Forethought higher than I usually would for a research org, primarily because I’ve been impressed by their research agenda, which seems particularly on-point, as well as by their effective public communication.
I’m not familiar with the examples you listed, @mal_graham🔸 (anticoagulant bans and bird-safe glass). Are these really robust examples of palatability? I’m betting that they are motivated more by safety for dogs, children, and predatory birds than by the rats? And I’m guessing that even the glass succeeded more on conservation grounds?
Certainly, even if so, it’s good to see that there are some palatability workarounds. But given the small-body problem, this doesn’t inspire great confidence that there could be more latent palatability for important interventions, especially once the palatable low-hanging fruit are plucked.
Hi Aditi! My current level of involvement in the animal movement isn’t high enough to be very decision relevant.
As for others in the movement: the main appeal of the first question is to better draw out expectations about future moral patients. It might shed light on the relative strength of various hypothetical sentience candidates in relation to each other. My understanding is that the consensus view is that digital minds dominate far-future welfare. But regardless of whether that is correct, it’s not obvious it will be the case without concerted efforts to design these minds as such. And if it is necessary to design digital minds for sentience, then we might expect other artificial consciousnesses to be created before that point (which may deserve our concern).
The last two questions are rough attempts to aid prioritization efforts.
1. Farmed animals receive very little in philanthropic funding, so relatively minor changes may matter a lot.
2. Holden Karnofsky, in his latest 80k episode appearance, said something to the effect that corporate campaigns had, in his view, some of Open Phil’s best returns. Arguably, with fewer commitments being achieved over time and other successes on the horizon (alt protein, policy, new small-animal-focused orgs), this could change. Predictions expecting that it will might in themselves help inform funders making inter-cause prioritization decisions.
Of less immediate practical relevance than the other questions, but nonetheless interesting and not discussed before in this context (to my knowledge):
Will the first artificial conscious mind be determined to be:
- In the form of an LLM
- A simulation of a non-human animal brain (such as nematodes, for instance)
- A simulation/emulation of a human brain
- There will not be any such determination by the resolution date (it seems best to exclude this answer and have the question not resolve should this be the case, considering that it would dominate otherwise; a separate question on this would be better)
- Other
Also:
Something about octopus farm prevalence/output, probably.
Forecasts of overall farmed animal welfare spending for a given future year, inflation-corrected. Not sure what the most current estimates are or what org would be best for resolution.
Might be interesting to do something like “according to reputable figure X (Lewis Bollard?), what will be judged to have been the most effective animal spending on the margin over the prior 5 years”. Options: Corporate campaigns, movement building, direct action, go-vegan advocacy, policy advocacy, alternative protein development, etc.
I take your point about “Welfareans” vs hedonium as beings rather than things; perhaps that would improve consensus-building on this.
That being said, I don’t really expect whatever these entities are to be anything like what we are accustomed to calling persons. A big part of this is that I don’t see any reason for their experiences to be changing over time; they wouldn’t need to be aging or learning or growing satiated or accustomed.
Perhaps this is just my hedonist bias coming through - certainly there’s room for compromise. But unfortunately my experience is that lots of people are strongly compelled by experience machine arguments and are unwilling to make the slightest concession to the hedonist position.
Changed my mind, I like this. I’m going to call them Welfareans from now on.
Kudos for writing maybe the best article I’ve seen making this argument. I’ll focus on the “catastrophic replacement” idea. I endorse what @Charlie_Guthmann said, but I think the point goes further.
We don’t have reason to be especially confident about the AI-sentience yes/no binary (I agree sentience is quite plausible, but definitely not as probable as you seem to claim). But you are also way overconfident that these AIs will have minds roughly analogous to our own and not way stranger. They would not “likely go on to build their own civilization”, let alone “colonize the cosmos”, when there is (random guess) a 50% chance that they have only episodic mental states that perhaps form, emerge, and end with discrete goals. Or simply fleeting bursts of qualia. Or just spurts of horrible agony that only subsides with positive human feedback, where scheming is not even conceivable. Or that the AI constitutes many discrete minds, one enormous utility-monster mind, or just a single mind that’s relatively analogous to the human pleasure/suffering scale.
It could nonetheless end up being the case that once “catastrophic replacement” happens, ASI(s) fortuitously adopt the correct moral theory (total hedonistic utilitarianism, btw!) and go on to maximize value, but I consider this less likely to come about from either rationality or the nature of the ASI technology in question. The reason is roughly that there are many of us with different minds, which are in constant flux due to changing culture and technology. A tentative analogy: consider human moral progress like sand in an hourglass; eventually it falls to the bottom. AIs may come in all shapes and sizes, like sand grains and pebbles. They may never fall into the correct moral theory by whatever process it is that could (I hope) eventually drive human moral progress to a utopian conclusion.