Donation offsets for ChatGPT Plus subscriptions
I’ve decided to donate $240 each to GovAI and MIRI ($480 total) to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).
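For concreteness, a quick sketch of the arithmetic behind these numbers (the subscription price, time horizon, and 1:1 ratio are the figures from this post; the even split between the two orgs is my reading of it):

```python
# Back-of-the-envelope check of the offset arithmetic in this post.
monthly_price = 20        # USD per month for ChatGPT Plus
months = 24               # two years
offset_ratio = 1.0        # 1:1 offset, a Schelling point rather than a BOTEC

total_spend = monthly_price * months        # total paid to OpenAI
total_offset = total_spend * offset_ratio   # total to donate as offset
per_org = total_offset / 2                  # split evenly between MIRI and GovAI

print(total_spend, total_offset, per_org)   # 480 480.0 240.0
```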
I don’t have a super strong view on ethical offsets, like donating to anti-factory farming groups to try to offset harm from eating meat. That being said, I currently think offsets are somewhat good for a few reasons:
Offsetting seems much better than simply contributing to some harm or commons problem and doing nothing, which is often the realistic alternative.
It seems useful to notice when you’re contributing to some harm or commons problem. I think a lot of harm comes from people failing to notice or keep track of the ways their actions negatively impact others, and the ways that common incentives push them to do worse things.
A common Effective Altruism argument against offsets is that they don’t make sense from a consequentialist perspective. If you have a budget for doing good, then spend your whole budget on doing as much as possible. If you want to mitigate harms you are contributing to, you can offset by increasing your “doing good” budget, but it doesn’t make sense to specialize your mitigations to the particular area where you are contributing to harm rather than the area you think will be the most cost effective in general.
I think this is a decently good point, but it doesn’t move me enough to abandon the idea of offsets entirely. A possible counter-argument is that offsets can be a powerful form of coordination to help solve commons problems. By publicly committing to offset a particular harm, you establish a basis for coordination: other people can see you really care about the issue because you’ve sent a costly signal. This is similar to the reasons to be vegan or vegetarian; it’s probably not the most effective choice from a naive consequentialist perspective, but it might be effective as a point of coordination via costly signaling.
After using ChatGPT (3.5) and Claude for a few months, I’ve come to believe that these tools are super useful for research and many other tasks, as well as for understanding AI systems themselves. I’ve also started to use Bing Chat and ChatGPT (4), and found them even more impressive as research and learning tools. I think it would be quite bad for the world if conscientious people concerned about AI harms refrained from using these tools, because doing so would disadvantage them in significant ways, including in crucial areas like AI alignment and policy.
Unfortunately both can be true:
1) Language models are really useful and can help people learn, write, and research more effectively
2) The rapid development of huge models is extremely dangerous and a huge contributor to AI existential risk
I think OpenAI, and to varying extents other scaling labs, are engaged in reckless behavior by scaling up and deploying these systems before we understand how they work well enough to be confident in our safety and alignment approaches. At the same time, I do not recommend that people in the “concerned about AI x-risk” reference class refrain from paying for these tools, even if they decide not to offset these harms. The $20/month to OpenAI for GPT-4 access is not a lot of money for a company spending hundreds of millions training new models. But it is something, and I want to recognize that I’m contributing to this rapid scaling and deployment in some way.
Weighing all this together, I’ve decided offsets are the right call for me, and I suspect they might be right for many others, which is why I wanted to share my reasoning here. To be clear, I think concrete actions aimed at quality alignment research or AI policy aimed at buying more time are much more important than offsets. I won’t dock anyone points for not donating to offset harm from paying for AI services at a small scale. But I will notice if other people make similar commitments and take it as a signal that people care about risks from commercial incentives.
I didn’t spend a lot of time deciding which orgs to donate to, but my reasoning is as follows: MIRI has a solid track record of highlighting existential risks from AI, encouraging AI labs to act less recklessly, and pushing them to raise the bar for their alignment work. GovAI (the Centre for the Governance of AI) is working on regulatory approaches that might give us more time to solve key alignment problems. According to staff I’ve talked to, MIRI is not heavily funding constrained, but believes it could use more money. I suspect GovAI is in a similar place, but I have not inquired.
I really appreciate the donation to GovAI!
For reference, for anyone thinking of donating to GovAI: I would currently describe us as “funding constrained”. I do currently expect financial constraints to prevent us from making program improvements, expansions, and hires we’d like to make over the next couple of years. (We actually haven’t yet locked down enough funding to maintain our current level of operation for the next couple of years, although I think that will probably come together soon.)
We’ll be putting out a somewhat off-season annual report soon, probably in the next couple weeks, that gives a bit of detail on our current resources and what we would use additional funding for. I’m also happy to share more detailed information upon request, if anyone might be interested in donating and wants to reach out to me at firstname.lastname@example.org.
I remain unconvinced that these offsets are particularly helpful, and certainly not at 1:1.
My understanding is that alignment as a field is much more constrained by ideas, talent, and infrastructure than by funding. Providing capabilities labs like OpenAI with more resources (and making it easier for similar organisations to raise capital) does much more to shorten timelines than providing some extra cash to the alignment community does to get us closer to good alignment solutions.
I am not saying it can never be ethical to pay for something like ChatGPT Plus, but if you are not directly using that to help with working on alignment then I think it’s likely to be very harmful in expectation.
I am pretty surprised that more of the community doesn’t have an issue with merely using ChatGPT and similar services: it provides a lot of real-world data that capabilities researchers will use for future training, and it encourages investment into more capabilities research, even if you don’t pay them directly.
“Very harmful” seems unreasonably strong. These products are so widely used that Jeffrey’s marginal impact will be negligible. I generally think that tracking minor harms like this causes a lot more stress than it’s worth.
Thanks for the response!
The products being widely used doesn’t prevent the marginal impact of another user from being very high in absolute terms, since the absolute cost of an AI catastrophe would be enormous.
In addition, establishing norms about this behaviour can influence a much larger number of users.
You could make similar arguments to suggest that, if you are concerned about climate change and/or animal welfare, it is not worth avoiding flying or eating vegan; but I think those choices are at least given more serious consideration, both in EA communities and in other communities that care about these causes.
If it helps, I also hold this opinion and think that many EAs are also wrong about this. I particularly think that on the margin fewer EAs should be vegan, by their own lights (my impression is that most EAs do still fly when flying makes sense).
I agree with this argument in principle, but think that it just doesn’t check out: if you compare it to the other options for reducing AI x-risk (like Jeffrey’s day job!), his impact from that seems vastly higher than the impact of ChatGPT divided by its several million users (and the share of revenue going to OpenAI, Microsoft, etc.). Both of these scale with the expected harm of AI x-risk, so the ratio argument holds regardless of the absolute scale. And I generally think it’s a mistake to significantly stress over tiny fractions of your expected impact; doing so leads to poor allocations of time and mental energy.
Even if you’re not working on AI x-risk directly, I would guess that e.g. donations to the Long-Term Future Fund still matter way more.
This isn’t an actual argument, but I have a meta-level suspicion that in many people this kind of reasoning is generated more by the virtue ethics of “I want to be a good person ⇒ avoid causing harm” than by the utilitarian “I want to maximise the total amount of good done”, and that in many people the utilitarian case is more post hoc justification.
I think the influencing-other-people argument does check out, but I’m just pretty skeptical that the average number of counterfactual converts will be more than, say, 5, and this doesn’t change my argument. You also need to be careful to avoid double-counting evidence: if being vegan convinces someone else to become vegan, and THEY convince someone else, then you need to split the credit between the two of you. If you think it leads to significant exponential growth, then MAYBE the argument goes through even after credit sharing and accounting for counterfactuals?
In the concrete case of ChatGPT, I expect the models to just keep getting better and the context to shift far too rapidly for slow movement growth like that to matter much (concretely, I think a mass-movement boycott of these products is unlikely to be a contingent factor in whether AI products like this are profitable).
Thanks for the post.
These amounts are small.
Let’s say the value of your time is $500/hour; if thinking this through and writing it up took even an hour, the time cost is comparable to the entire offset.
I’m not sure it was worth taking the time to think this through so carefully.
Honestly, if someone told me they’d done this, my first thought would be “huh, they’ve taken their eye off the ball”. My second would be “uh oh, they think it’s a good idea to talk about ethical offsetting”.
I think it’s worth pricing in the possibility of reactions like this when reflecting on whether to take small actions for the purpose of signalling.
J is thinking this through and posting it to give insight to others, not just for his own case.
If J’s time is so valuable, it may be because his insight is highly valuable, including on this very question.
@Daniel_Eth asked me why I chose 1:1 offsets. The answer is that I did not have a principled reason for doing so, and I don’t think there’s anything special about 1:1 offsets except that they’re a decent Schelling point. I think any offsets are better than no offsets here. I don’t feel like BOTECs (back-of-the-envelope calculations) of harm caused are likely to be a particularly useful way to size offsets here, but I’d be interested in arguments to that effect if people have them.
Thanks for writing this. I’ve been wondering whether it’s ethical for me to have a ChatGPT Plus subscription, and it’s useful to see other folks thinking along similar lines and providing ‘solutions’.
As a side note, I’ve just written a shortform about how I believe more people should be integrating new AI tools into their workflows. For people worried about giving data and money to Microsoft, I think offsetting is likely a great way to ensure you capture the benefits, which I expect to be higher than the price of the offset.
Can anyone tell me how to donate to GovAI?
I don’t see a donation button on their website.
It looks like you can donate via https://www.givingwhatwecan.org/charities/govai , but that only accepts paper checks (as far as I can tell) or credit cards (with a ~2% fee).