I am pretty excited about the potential for this idea, but I am a bit concerned about the incentives it would create. For example, I'm not sure how much I would trust a bibliography, summary, or investigation produced via bounty. I would be worried about omissions of evidence that conflicts with the work's conclusions, since it would be quite hard for even a paid arbitrator to check for such omissions without putting in a large amount of work themselves. I think the reason this is not currently much of a concern is precisely that there is no external incentive to produce such works; as a result, you can pretty much assume that research on the Forum is done in good faith and is complete to the best of the author's ability.
Potential ways around this that come to mind:
Maybe linking user profiles on this platform to the EA Forum (kind of like the Alignment Forum and LessWrong sharing accounts) would provide sufficient trust in good intentions?
Maybe even without that, the self-selection effect is strong enough that we can still mostly rely on trust in good intentions?
Maybe this only slightly limits the scope of what the platform can be used for, and preserves most of its usefulness?
Good ideas. I have a few more:
Have a feature that lets bounty posters charge a fee to anyone who submits work. The fee would help compensate the arbitrator who has to review the submissions, and would discourage people from submitting bad work in the hope of fooling someone into awarding them the bounty (the first sketch after this list illustrates such a gate).
Instead of awarding the bounty to whoever first provides a summary/investigation, award it to the person who provides the best summary/investigation at the end of some time period. That way, if someone thinks the current submissions omit important information, or are badly written, they can take the prize for themselves by submitting a better one (illustrated in the second sketch below).
Similar to your first suggestion: have a feature that restricts people from submitting answers unless they pass certain basic criteria, e.g. "You aren't eligible unless you are verified to have at least 50 karma on the EA Forum or LessWrong." This would ensure that only people from within the community can contribute to certain questions (also part of the first sketch).
Use adversarial meta-bounties: at the end of a contest, offer a bounty to anyone who can convince the judge/arbitrator to change their mind about the decision they have made (the second sketch includes a simple challenge window for this).
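To make the fee and eligibility ideas concrete, here is a minimal sketch of a submission gate, assuming the platform can verify a linked forum account. All names here (`BountyQuestion`, `lookup_forum_karma`, the particular fee and karma numbers) are hypothetical; neither the EA Forum nor LessWrong is assumed to actually expose a karma-lookup API.

```python
from dataclasses import dataclass

@dataclass
class BountyQuestion:
    bounty: float
    submission_fee: float  # paid by each submitter; helps fund the arbitrator
    min_karma: int         # e.g. 50 karma on the EA Forum or LessWrong

def lookup_forum_karma(username: str) -> int:
    """Stub: assumes some way of checking karma on a linked forum account.
    This is a placeholder, not a real API."""
    raise NotImplementedError

def can_submit(question: BountyQuestion, username: str, fee_paid: float) -> bool:
    """Gate submissions on both suggestions: a fee and a karma threshold."""
    if fee_paid < question.submission_fee:
        return False  # the fee deters low-effort attempts to fool the arbitrator
    if lookup_forum_karma(username) < question.min_karma:
        return False  # restricts answers to established community members
    return True
```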
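And a rough sketch of how the best-submission-at-a-deadline rule might combine with the adversarial meta-bounty, assuming the arbitrator assigns each submission a numeric score and the challenge window has a fixed length. Both assumptions are mine, not part of the proposal, and how a successful challenge actually pays out is left open.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Submission:
    author: str
    content: str
    arbitrator_score: float = 0.0  # assumed: assigned by the arbitrator on review

@dataclass
class Contest:
    deadline: datetime
    challenge_window: timedelta = timedelta(days=7)  # assumed length
    submissions: list[Submission] = field(default_factory=list)

    def provisional_winner(self, now: datetime) -> Submission | None:
        """At the deadline, the *best* submission wins, not the first one,
        so a better late entry can displace an early, incomplete one."""
        if now < self.deadline or not self.submissions:
            return None
        return max(self.submissions, key=lambda s: s.arbitrator_score)

    def challenge_is_open(self, now: datetime) -> bool:
        """Adversarial meta-bounty: after the decision, anyone may try to
        change the arbitrator's mind until the window closes."""
        return self.deadline <= now < self.deadline + self.challenge_window
```

The point of the challenge window is that the judge's decision only becomes final once it has survived a period in which overturning it is itself rewarded.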
Another incentive mechanism I have seen: some forums allow users not only to upvote good answers but to attach other rewards to them. Stack Overflow has bounties, and Reddit has coins.