Executive summary: The author plans to donate $40,000 in 2025 to PauseAI US based on a largely unchanged view that AI misalignment is the biggest existential risk and that pausing frontier AI—ideally a global ban on superintelligence until proven safe—is the least-bad path, alongside updated concerns about non-alignment problems and AI-for-animals.
Key points:
Prioritization is mostly unchanged: existential risk is a big deal, AI misalignment risk is the biggest, and within AI x-risk, policy/advocacy is much more neglected than technical research.
The donation goal is to increase the chances of a global ban on developing superintelligent AI until it is proven safe; moratoria are preferred to “softer” safety regulations, though certain regulations (e.g., whistleblower protections, compute monitoring, GPU export restrictions) are still supported as useful steps, with public advocacy and leading-country regulations as intermediate goals.
There is no good plan: “pause AI” is judged the least-bad option; P(doom) is ~50%, and if humanity survives it will likely be due to luck.
Updates since last year include greater concern about “non-alignment problems” and a renewed view that “AI-for-animals” may be more cost-effective on the margin despite lower probability because it is highly neglected.
Confidence increased that we should pause frontier AI and that peaceful protests probably help; evidence on disruptive protests is mixed; trust standards are higher, with SFF the most trusted grantmaker.
2025 giving: $40,000 to PauseAI US (valued for protests and messaging campaigns); positive views on MIRI (with a “stable preference bonus” and SFF match up to $1.3M) and Palisade (SFF match up to $900K); tentatively most favorable 501(c)(4) is ControlAI, with open questions about ARI, AI Policy Network, congressional campaigns, and Encode.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.