Instead of asking, “Is it net good or net bad”, I think it’s much more interesting to catalogue and understand all the ways it’s both good and bad.
Some negative takeaways:

- OpenAI & Microsoft are bullish on releasing risky technologies quickly.
- The market seems to encourage this behavior.
- Google seems to have been encouraged to do similar work, faster.
- This is likely to inspire more people to invest in this sort of thing and to found companies in the space.
Good things (as you mention):

- It's really good for failures to happen publicly.
- It might be indicative of a slow takeoff. My hunch is that we generally want as much AI progress to happen as possible before any hard takeoff, though I'd prefer it all to happen slowly rather than quickly.
Something I'm confused about is why Microsoft hasn't retracted Bing Chat by this point.

It's also highlighted for me the failure mode of "secondary releases": even if a first release is done safely and responsibly, other actors may release their highly imperfect models "just to have a chance". This in turn could pressure the first actor to take more aggressive steps.
It seems like you should write these up as answers.
Related to using the Virtue of Discernment:
https://www.lesswrong.com/posts/W2iwHXF9iBg4kmyq6/the-practice-and-virtue-of-discernment