Are you interested in AI x-risk reduction and strategy? Do you have experience in comms or policy? Let’s chat!
aigsi.org develops educational materials and ads designed to communicate core AI safety ideas to specific demographics as efficiently as possible, with a focus on producing a correct understanding of why smarter-than-human AI poses a risk of extinction. We plan to increase and leverage understanding of AI and of existential risk from AI to raise the chance that institutions address x-risk.
Early results include ads that achieve $0.10 per click to a website explaining the technical details of why AI experts are worried about extinction risk from AI, and $0.05 per engagement on ads that share the simple ideas at the core of the problem.
Personally, I’m good at explaining existential risk from AI to people, including policymakers. At an e/acc event, I changed the minds of three of the four people I talked to.
Previously, I got 250k people to read HPMOR and sent 1.3k copies to winners of math and computer science competitions (including dozens of IMO and IOI gold medalists); took the GWWC pledge; and created a small startup that donated >$100k to effective nonprofits.
I have a background in ML and strong intuitions about the AI alignment problem. I grew up running political campaigns and have a bit of a security mindset.
My website: contact.ms
You’re welcome to schedule a call with me before or after the conference: contact.ms/ea30
Note that we’ve only received a speculation grant from the SFF and haven’t received any s-process funding. This should be a downward update on the value of our work and an upward update on the marginal value of a donation to our work.
I’m waiting for feedback from SFF before actively fundraising elsewhere, but I’d be excited to get in touch with potential funders and volunteers. Please message me if you want to chat! My email is ms@contact.ms, and you can also DM me on the EA Forum or find me elsewhere under the same handle.
On other organizations, I think:
MIRI’s work is very valuable. I’m optimistic about what I know about their comms and policy work. As Malo noted, they work with policymakers, too. Since 2021, I’ve donated over $60k to MIRI. I think they should be the default choice for donations unless they say otherwise.
OpenPhil risks increasing polarization and making it impossible to pass meaningful legislation. But while they make what are, IMO, obviously bad decisions, not everything they and Dustin fund is bad. E.g., Horizon might place people who actually care about others in positions where they could have a huge positive impact on the world. I’m not sure; I would love to see Horizon fellows become better informed on AI x-risk than they currently are, but I’ve donated $2.5k to the Horizon Institute for Public Service this year.
I’d be excited about the Center for AI Safety getting more funding. SB-1047 was the closest we’ve gotten to a very good thing, AFAIK, and it was a coin toss whether it would be signed. They seem very competent. I think the occasional potential lack of rigor and other concerns don’t outweigh their results. I’ve donated $1k to them this year.
By default, I’m excited about the Center for AI Policy. A mistake they plausibly made makes me somewhat uncertain about how experienced they are with DC and whether they are capable of avoiding downside risks, but I think the people who run it are smart and have very reasonable models. I’d be excited about them having as much money as they can spend and hiring more experienced and competent people.
PauseAI is likely to be net-negative, especially PauseAI US. I wouldn’t recommend donating to them. Some of what they’re doing is exciting (and there are people who would be a good fit to join them and improve their overall impact), but they’re incapable of avoiding actions that might, at some point, badly backfire.
I’ve helped them where I could, but they don’t have good epistemics, and they’re fine with using deception to achieve their goals.
E.g., at some point, their website presented the view that it’s more likely than not that bad actors would use AI to hack everything, shut down the internet, and cause societal collapse (but not extinction). If you say this sort of thing to people with some exposure to cybersecurity, they’ll dismiss everything else you say, and it’ll be much harder to make the case for AI x-risk in the future. PauseAI Global’s leadership updated when I had a conversation with them and edited the claims, but I’m not sure they have mechanisms to avoid making confidently wrong claims. I haven’t seen evidence that PauseAI is capable of presenting their case for AI x-risk competently (though it’s been a while since I’ve looked).
I think PauseAI US is especially incapable of avoiding actions with downside risks, including deception[1], and donations to them are net-negative. To Michael, I would recommend, at the very least, donating to PauseAI Global instead of PauseAI US; to everyone else, I’d recommend ideally donating somewhere else entirely.
Stop AI’s views include the idea that a CEV-aligned AGI would be just as bad as an unaligned AGI that causes human extinction. I wouldn’t be able to pass their ITT, but yep, people should not donate to Stop AI. The Stop AGI person participated in organizing the protest described in the footnote.
[1] In February this year, PauseAI US organized a protest against OpenAI “working with the Pentagon”, while OpenAI had only collaborated with DARPA on open-source cybersecurity tools and was in talks with the Pentagon about veteran suicide prevention. Most participants wanted to protest OpenAI because of AI x-risk, not because of the Pentagon work, but those I talked to said they felt it was deceptive once they discovered the actual nature of OpenAI’s collaboration with the Pentagon. Also, Holly threatened me while trying to prevent the publication of a post about this, and then publicly lied about our conversations in a way that can easily be falsified by looking at the messages we exchanged.