Great write-up. However, the approaches suggested here sound too timid to be effective. Thank you for creating this post.
Nik Samoylov
Karma: 93
Updates from Campaign for AI Safety
Updates from Campaign for AI Safety
Updates from Campaign for AI Safety
Updates from Campaign for AI Safety
Updates from Campaign for AI Safety
Updates from Campaign for AI Safety
Update from Campaign for AI Safety
Nik Samoylov’s Quick takes
Together with a few volunteers, we prepared a policy document for the Campaign for AI Safety that serves as the campaign's list of demands.
It is called "Strong and appropriate regulation of advanced AI to protect humanity". It is currently geared towards Australian and US policy-makers, and I expect this is not its final version.
I would appreciate any comments!
If you divvy up AI companies' revenue from diffusion models at current pricing, then yes.
But if creators' consent is sought before training, and if they have a chance to bargain with AI companies individually or collectively (the choice of bargaining form needs to rest with the creators), then payouts may be meaningful.