Thank you for publishing this post. How is this different from what Optimism is trying to achieve? Also, what happens if the public good is difficult to monitor? Reductions in existential risk are hard to observe. How will the protocol pay out if there is large uncertainty about the effects of an intervention, even after the fact?
It is not particularly different from what Optimism is trying to achieve, but importantly it is actually on-chain. Optimism’s initial test round was run informally, using a Google Doc to notarize votes and project nominations, and disbursement was manual. As things scale, that isn’t a sustainable way to fund future rounds, especially once more money is involved.
That’s a good point: observing risk reduction is hard, and it is a can of worms I didn’t open in the article. I am relying on sensible wisdom-of-the-crowd decisions implemented by groups of experienced assessors and forecasters. We’d like to develop some broad traffic-light metrics to help guide voters, but that will ultimately require more research and development. What do you mean by “difficult to monitor”? Broad goods like “risk reduction research” may be hard to monitor, but individual contributions or nominated projects can still be assessed even when the overarching progress is hard to measure.
The payout is tied to the design decisions made when the round is instantiated and to the votes. The responsibility lies with the badge holders to assess those uncertainties and, if necessary, to halt funding streams. See the discussion with ofer.