As a mathematician I think this is cool and interesting and I’d be glad to know what comes out of these experiments.
As a citizen I’m concerned about the potential to increase gatekeeping, groupthink and polarisation, and most of all about the major privacy risk. Like, if I open a new account and upvote a single other user, can I now figure out exactly who they have upvoted? Even if I can’t in this manner, what can I glean from looking, as your example suggests, at the most trusted individuals in a community, etc.?
As it is currently set up, you could start a blank account and give someone a single upvote and then you would see something pretty similar to their trust graph. You would see whom they trust.
It could, I guess, be used to find attack vectors for a person: someone they trust who could be compromised. That does not seem like a real problem in the contexts where this system would realistically be implemented over the short to medium term, but it is something to keep in mind as we iterate on the system with more users on board.
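To make the probe concrete, here is a minimal sketch, assuming a simple one-hop propagation rule; the DAMPING constant, the data layout and all the names here are my own illustration, not the actual implementation:

```python
# Hypothetical sketch: how a single upvote from a fresh account could
# expose most of a target's trust graph, assuming trust propagates one
# hop with a damping factor. Illustration only, not the real system.

DAMPING = 0.5  # assumed attenuation for indirect trust

def effective_trust(direct_trust, me):
    """direct_trust: {user: {trusted_user: weight}} built from upvotes."""
    scores = dict(direct_trust.get(me, {}))
    for friend, w in direct_trust.get(me, {}).items():
        for indirect, w2 in direct_trust.get(friend, {}).items():
            scores[indirect] = scores.get(indirect, 0.0) + DAMPING * w * w2
    return scores

# Existing users and their (normally non-obvious) trust edges.
direct_trust = {
    "alice": {"bob": 1.0, "carol": 2.0},
    "bob":   {"dave": 1.0},
}

# A fresh probe account upvotes only alice...
direct_trust["probe"] = {"alice": 1.0}

# ...and the scores it now sees essentially mirror alice's trust graph.
print(effective_trust(direct_trust, "probe"))
# {'alice': 1.0, 'bob': 0.5, 'carol': 1.0}
```

Under this toy rule, the scores the probe account sees are just a scaled copy of alice's outgoing trust edges, which is the sense in which a single upvote reveals "something pretty similar to their trust graph".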
I think the privacy risk is more prominent when you don’t know that the information is being made public. A system like this would obviously tell users that their upvotes count towards the trust system, and people would upvote with that in mind. I guess we could make it all hidden, similar to YouTube’s algorithm, where the trust graph is not publicly available and only works in the background to reweight the upvotes you see based on the people you trust (so it changes the scores, but you don’t see how, and other people can’t see yours). But I believe that would be more concerning and more likely to be abused than having the information always available to everyone.
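For contrast, a rough sketch of that hidden variant, where the trust weights stay server-side and only reweight the vote counts each viewer sees; the scoring rule is my own guess at how such a thing might work, not a description of YouTube or of this proposal:

```python
# Hypothetical sketch of "hidden" personalization: each viewer sees a
# vote count reweighted by how much they trust each upvoter, and the
# trust values themselves are never exposed through any API.

def displayed_score(upvoters, viewer_trust):
    """upvoters: users who upvoted a post.
    viewer_trust: {user: weight} for the viewer, kept server-side."""
    return sum(viewer_trust.get(u, 0.0) for u in upvoters)

upvoters = ["alice", "bob", "mallory"]
my_trust = {"alice": 1.0, "bob": 0.5}   # I have never upvoted mallory

print(displayed_score(upvoters, my_trust))  # 1.5, not the raw count of 3
```

Even in this hidden form, the number a viewer sees is still a deterministic function of the trust weights, so a patient observer can probe it; that is essentially the point made below about the graph being effectively public.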
As for the gatekeeping and groupthink, I think that already happens regardless of any trust system, and is a consequence of tribalism in the real world rather than its cause. Honestly, I see it as beneficial that people who despise each other get more separated; it is usually the forced interaction between opposing groups that leads to violent outcomes.
I have put some thought into the privacy aspect, and there are ways to make it non-trivial or even fairly difficult to extract someone’s trust graph, but nothing which actually hides it perfectly. That’s why the network would have to be opt-in, and likely would not cover negative votes.
I’d be interested to hear the unpacked version of your worries about “gatekeeping, groupthink and polarisation”.
It does seem like an important point that your trust graph is effectively public even if you don’t expose it in the API.
The literature on differential privacy might be helpful here. I think I may know a few people in the field, although none of them are close.
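For instance, one standard mitigation from that literature is to add calibrated noise to every personalized score before it is shown; a toy sketch, with EPSILON and SENSITIVITY as placeholder values I picked for illustration:

```python
# Toy sketch: Laplace noise on displayed scores, in the spirit of
# differential privacy. This only makes extracting a trust graph
# noisier and slower; repeated queries still leak information unless
# a privacy budget is enforced on top.
import random

EPSILON = 1.0       # assumed privacy parameter
SENSITIVITY = 1.0   # assumed max effect of one trust edge on a score

def noisy_score(true_score):
    # The difference of two exponentials is Laplace(0, SENSITIVITY / EPSILON).
    rate = EPSILON / SENSITIVITY
    noise = random.expovariate(rate) - random.expovariate(rate)
    return true_score + noise

print(noisy_score(1.5))  # e.g. 1.73; varies from call to call
```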
I don’t have time to write it up in detail, but basically I’m referring to two ways the system could decide what a user sees:
If it is determined by the trust graph of forum moderators, it would contribute to groupthink, because dissenting voices would be silenced.
If it is decided by the user’s own trust graph, you would get polarisation, because opposing groups wouldn’t see each other’s content.
Both lead to gatekeeping, because new users aren’t trusted by anyone, so their content can’t get through; a small example after this list makes that concrete.
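As a toy illustration of that last point, under a viewer-trust-weighted score like the one sketched earlier, a post upvoted only by accounts that no established user trusts yet displays as zero for every existing viewer (again my own illustrative model, not the proposal itself):

```python
# Toy illustration of the cold-start / gatekeeping worry: upvotes from
# users the viewer has no trust path to contribute nothing to the
# score that viewer sees.

def displayed_score(upvoters, viewer_trust):
    return sum(viewer_trust.get(u, 0.0) for u in upvoters)

established_viewer_trust = {"alice": 1.0, "bob": 0.5}

# A newcomer's post, upvoted only by other newcomers:
print(displayed_score(["newbie1", "newbie2"], established_viewer_trust))  # 0.0
```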