I’ve decided to donate $240 to each of GovAI and MIRI, offsetting the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).
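The arithmetic can be sketched as follows (assuming the $240 goes to each organization, so the combined donation matches the total spend; the function name is illustrative, not from the original):

```python
def offset_plan(monthly_cost: float, months: int, n_orgs: int):
    """Return (total spend to offset, donation per organization)."""
    total = monthly_cost * months
    return total, total / n_orgs

# $20/month for 24 months, split across GovAI and MIRI
total, per_org = offset_plan(20, 24, 2)
print(total, per_org)  # 480, 240.0
```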
These amounts are small.
Let’s say the value of your time is $500/hour.
I’m not sure it was worth taking the time to think this through so carefully.
To be clear, I think concrete actions aimed at quality alignment research or AI policy aimed at buying more time are much more important than offsets.
Agree.
By publicly committing to offset a particular harm, you establish a basis for coordination: other people can see you really care about the issue, because the commitment is a costly signal.
...
I won’t dock anyone points for not donating to offset harm from paying for AI services at a small scale. But I will notice if other people make similar commitments and take it as a signal that people care about risks from commercial incentives.
Honestly, if someone told me they’d done this, my first thought would be “huh, they’ve taken their eye off the ball”. My second would be “uh oh, they think it’s a good idea to talk about ethical offsetting”.
I think it’s worth pricing in the possibility of reactions like these when deciding whether to take small actions like this for signalling purposes.
Thanks for the post.
> These amounts are small.
>
> Let’s say the value of your time is $500/hour.
>
> I’m not sure it was worth taking the time to think this through so carefully.

> Agree.

> Honestly, if someone told me they’d done this, my first thought would be “huh, they’ve taken their eye off the ball”. My second would be “uh oh, they think it’s a good idea to talk about ethical offsetting”.
>
> I think it’s worth pricing in the possibility of reactions like this when reflecting on whether to take small actions like this for the purpose of signalling.
But:

- J is thinking this through and posting it to give insight to others, not just for his own case.
- If J’s time is so valuable, it may be because his insight is highly valuable, including on this very question.