Executive summary: This exploratory post uses a simple quantitative model to estimate how much one should donate to AI safety charities to morally “offset” the harm of purchasing a large language model (LLM) subscription—concluding, with wide uncertainty, that offsetting may require about $0.87 in donations per $1 spent, though the author remains ambivalent about whether moral offsetting itself is a coherent ethical practice.
Key points:
The post treats LLM subscriptions as potentially harmful because they increase AI company revenue, accelerating AI capabilities and associated existential risks.
Using a basic Squiggle model, the author links subscription spending to company valuation, valuation to expenditures, and expenditures to overall harm, then compares this to the estimated cost of offsetting that harm through AI safety donations (see the sketch after this list).
Core model inputs include (a) revenue-to-valuation ratios (estimated at ~7x), (b) valuation-to-expenditure ratios (5–20%), (c) total frontier AI company spending (~$66B in 2026), and (d) the cost of fully offsetting AI company harms (roughly 10–100× current AI safety budgets).
The model’s mean result is $0.87 per dollar spent, but the median is just $0.06—indicating high uncertainty and fat-tailed risk: the harm could be minor or very large.
The author acknowledges major limitations: large uncertainties in the offset cost, the assumption that donations scale linearly with safety impact, and unresolved ethical concerns about whether moral offsetting is valid at all.
Personally, the author decides to donate $1 to AI safety for every $1 spent on LLMs, as a rough, good-faith gesture rather than a confident moral calculation.
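To make the chain of ratios concrete, here is a minimal Monte Carlo sketch in Python of the calculation described above. The post's actual model is written in Squiggle and is not reproduced here; the `lognormal_from_90ci` helper, the distribution widths, and the `current_safety_budget` figure are placeholder assumptions of mine rather than values from the post, so the printed mean and median will not match the post's $0.87 / $0.06 exactly.

```python
# Hedged sketch of the offset-per-dollar chain: subscription revenue -> valuation
# -> expenditure -> share of total frontier harm -> share of total offset cost.
# Distribution shapes and current_safety_budget are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def lognormal_from_90ci(low, high, size):
    """Draw from a lognormal whose 5th/95th percentiles are roughly (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

# (a) an extra $1 of revenue adds roughly $7 of valuation (~7x multiple)
revenue_to_valuation = lognormal_from_90ci(4, 12, n)
# (b) companies spend roughly 5-20% of their valuation
valuation_to_expenditure = lognormal_from_90ci(0.05, 0.20, n)
# (c) total frontier AI company spending, ~$66B in 2026
total_frontier_spend = 66e9
# (d) fully offsetting frontier AI harms costs ~10-100x current AI safety budgets;
#     the current-budget figure below is a placeholder assumption
current_safety_budget = 2e9
offset_cost_total = current_safety_budget * lognormal_from_90ci(10, 100, n)

# Extra expenditure induced by my $1, as a fraction of all frontier spending,
# scaled by the total cost of offsetting that spending's harm.
extra_expenditure = 1.0 * revenue_to_valuation * valuation_to_expenditure
offset_per_dollar = (extra_expenditure / total_frontier_spend) * offset_cost_total

print(f"mean:   ${offset_per_dollar.mean():.2f} per $1 of subscription")
print(f"median: ${np.median(offset_per_dollar):.2f} per $1 of subscription")
```

Multiplying independent lognormal factors is what produces the heavy right tail the post describes: rare high-ratio draws pull the mean well above the median.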
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.