I do basically agree we don’t have bargaining power, and that they most likely don’t care about having a good relationship with us.
The reason for the diplomatic "line of retreat" in the OP is more that:
it's hard to be sure how adversarial a situation you're in, and it seems like generally good practice to be clear about what would change your mind (in case you've overestimated the adversarialness)
it's helpful for showing others, who might not exactly share my worldview, that I'm "playing fairly."
I’d probably imagine no-one much at OpenAI really losing sleep over my decision either way, so I’d tend to do just whatever seemed best to me in terms of the direct consequences.
I’m not sure about “direct consequences” being quite the right frame. I agree the particular consequence-route of “OpenAI changes their policies because of our pushback” isn’t plausible enough to be worth worrying about, but, I think indirect consequences on our collective epistemics are pretty important.