I remain unconvinced that these offsets are particularly helpful, and certainly not at 1:1.
My understanding is that alignment as a field is much more constrained by ideas, talent, and infrastructure than by funding. Providing capabilities labs like OpenAI with more resources (and making it easier for similar organisations to raise capital) seems to do much more to shorten timelines than providing some extra cash to the alignment community today does to get us closer to good alignment solutions.
I am not saying it can never be ethical to pay for something like ChatGPT Plus, but if you are not using it directly to help with alignment work, then I think doing so is likely to be very harmful in expectation.
I am pretty surprised that more of the community don't take issue with merely using ChatGPT and similar services, even for free—usage provides a lot of real-world data that capabilities researchers will use for future training, and it encourages investment into further capabilities research, even if you don't pay the lab directly.
Thanks for the response!
The products being widely used doesn't prevent the marginal impact of one more user from being very high in absolute terms, since the absolute cost of an AI catastrophe would be enormous.
In addition, establishing norms about behaviour can influence a much larger number of users than your own decision alone.
You could make similar arguments that, if you are concerned about climate change or animal welfare, it is not worth avoiding flying or eating vegan—but those choices are at least given more serious consideration, both in EA communities and in other communities that care about those causes.