Windfall Clause (under Global Catastrophic Risk (AI))
Justification:
Important as a wiki topic to give a short description of this policy proposal along with relevant links, papers, and discussion, as it seems like a significant output of the AI governance literature.
Tag, as potential future posts may discuss or critique the idea (e.g. the second post below).
Posts that it could apply to:
Hi Peter!
Thank you for the write-up!
You’re currently getting downvoted (unfortunately, I think!), but I thought I would try to flesh out some reasons why this might be the case, potentially to spur discussion:
1. Whether intentional or not, the ‘flat earth’ images do not present your ideas favourably and do not seem necessary for the claims you are making.
2. There is not much structure to the post. I think readers would appreciate an introduction and a conclusion explaining what you are trying to address and how you’ve addressed it.
3. Some of the explanations are quite confusing (at least to me), e.g. it’s not clear exactly what you mean by
Does this mean ‘higher utility/welfare’?
4. I don’t think the post is sufficiently self-contained to make a credible case on its own.
I’m also keen to hear whether people agree or disagree with the above!