Does EA need a reputation system (or systems)?
Reputation systems are typically used by online platforms to enable higher levels of trust between users.
1) My sense is that within EA there is a norm that we Do Favors For Each Other; i.e., EAs often seem to have the subgoal 'try to help other EAs, within reason'. This is both correct and lovely.
2) This norm may come under significant pressure as the community continues to scale. Will it be sustainable when the community has grown 10x? 100x? 1000x?
If both of these propositions are correct, then an EA reputation system may be worth thinking about. EA presents some interesting challenges as a big-tent social movement, spread across many different online and offline platforms. Some initial ideas of what a reputation system could look like:
Yet Another Webpage: eahub.org already supports profile pages for EAs, with links to FB, this forum, LessWrong, etc. If most EAs have a page on eahub, with up-to-date links to their other online personas, maybe that's enough?
A Score: something like karma or the reputation systems Reddit/Stack Exchange use, but able to deal with the multi-platform nature of EA. There are significant technical and social challenges with scoring systems even when they are confined to a single platform.
A Web of Trust: something like the PGP web of trust, where EAs could essentially vouch for each other (a rough sketch follows below).
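To make the web-of-trust idea concrete, here is a minimal Python sketch of vouching with transitive trust, bounded by hop count. Everything in it is hypothetical (the class name, the methods, the three-hop default); it illustrates the shape of the data, not a proposed implementation.

```python
from collections import deque

class WebOfTrust:
    """A toy directed "vouch" graph; all names here are illustrative."""

    def __init__(self):
        self.vouches = {}  # voucher -> set of people they vouch for

    def vouch(self, voucher: str, vouchee: str) -> None:
        """Record that `voucher` personally vouches for `vouchee`."""
        self.vouches.setdefault(voucher, set()).add(vouchee)

    def is_trusted(self, root: str, target: str, max_hops: int = 3) -> bool:
        """Trust `target` if a chain of at most `max_hops` vouches
        connects them to `root` (the person asking)."""
        frontier = deque([(root, 0)])
        seen = {root}
        while frontier:
            person, depth = frontier.popleft()
            if person == target:
                return True
            if depth == max_hops:
                continue  # don't extend chains past the hop limit
            for vouchee in self.vouches.get(person, ()):
                if vouchee not in seen:
                    seen.add(vouchee)
                    frontier.append((vouchee, depth + 1))
        return False

# Usage: Alice vouches for Bob, Bob for Carol, so Alice trusts Carol
# at two hops, but not someone outside any vouching chain.
wot = WebOfTrust()
wot.vouch("alice", "bob")
wot.vouch("bob", "carol")
assert wot.is_trusted("alice", "carol")
assert not wot.is_trusted("alice", "mallory")
```

The hop limit is the interesting design choice: it keeps trust local rather than letting a single vouch leak across the whole network.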
We all know how many problems there are with reputation and status seeking. It would risk lowering epistemic standards, cementing power users, and making it harder for outsiders and newcomers to get any traction for their ideas.
If we do something like this, it should be for very specific capabilities, like reliability or skill or knowledge in a particular domain, rather than generic reputation. That would make it more useful and avoid some of the problems.
That was probably the most load-bearing thought in my web-of-trust-based social network project. The lack of specificity about what endorsements mean is the reason Twitter doesn't work (but would if it allowed and encouraged having a lot more alts), and I believe that once you've distinguished the kinds of trust, you'll have a very different, much more useful kind of thing.
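Following that thought, here is a hedged sketch of what domain-scoped endorsements could look like: every vouch names a specific capability instead of feeding one generic score. The domain labels and function names are made up for illustration.

```python
from collections import defaultdict

# (voucher, domain) -> set of people they endorse for that capability.
endorsements = defaultdict(set)

def endorse(voucher: str, vouchee: str, domain: str) -> None:
    """Record an endorsement scoped to one named capability."""
    endorsements[(voucher, domain)].add(vouchee)

def endorsers_for(vouchee: str, domain: str) -> set:
    """Everyone who has endorsed `vouchee` within a given domain."""
    return {voucher for (voucher, d), vouchees in endorsements.items()
            if d == domain and vouchee in vouchees}

endorse("alice", "bob", "operations reliability")
endorse("carol", "bob", "ML knowledge")

# Bob's endorsements stay separate rather than collapsing into one score:
assert endorsers_for("bob", "operations reliability") == {"alice"}
assert endorsers_for("bob", "ML knowledge") == {"carol"}
```

Keying endorsements by domain is what distinguishes the kinds of trust: an endorsement for operations reliability says nothing about ML knowledge, and vice versa.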