This is a neat idea, and unlike many safety policy ideas it has scaling built in.
However, I think the evidence from the original GPL suggests that this wouldn’t work. Large companies are extremely careful simply not to use GPL software, and that includes writing their own closed-source implementations instead.* Cases like the Skype one are the exception, and they make other companies even more careful not to touch GPL code. All of this has caused GPL licensing to fall massively over the last decade.** I can’t find stats, but I predict that GPL projects see much less usage and dev activity than permissively licensed alternatives.
It’s difficult to imagine software so good and so hard to replicate that Google would invite our virus into their proprietary repo. Sure, AI might be different from [Yet Another Cool AGPL Parser] – but then who has a bigger data moat and more AI engineering talent than big tech to just implement it for themselves?
* https://www.theregister.com/2011/03/31/google_on_open_source_licenses/
** https://opensource.com/article/17/2/decline-gpl
This is a helpful counterpoint. From big tech companies’ perspective, I think that GPL (and especially AGPL) is close to the worst-case scenario, since it destroys the ability to have proprietary software and can pose an existential risk to the company by empowering their competitors. Most of the specific clauses we discuss are not nearly so dangerous—they at most impose some small overhead on using or releasing the code. Corrigibility is the only clause that I can see being comparably dangerous: depending on the mechanism used to create future versions of the license, companies may feel they are giving too much control over their future to a third party.
I think I generalised too quickly in my comment; I saw “virality” and “any later version” and assumed the worst. But of course we can take into account AGPL backfiring when we design this licence!
One nice side effect of even a toothless AI Safety Licence: it puts a reminder about safety at the top of every repo. Sure, no one reads licences (and people often ignore health and safety rules when they get in the way, even at their own risk). But maybe it makes things a bit more tangible, the way a LICENSE.md gives the law a foothold in the minds of devs.
I’m not sure how well the analogy holds. With GPL, for-profit companies would lose their profits. With the AI Safety analogue, they’d be able to keep 100% of their profits, so long as they followed XYZ safety protocols (which push them towards goals they already want – none of the major tech companies wants to cause human extinction).
I think you’re right, see my reply to Ivan.