No conversation that I have been a part of yet. But it is of course something that would be very interesting to discuss.
Henrik Karlsson
There is a family resemblance to the way something like Twitter is set up. There are a few differences:
Their algorithm seems to do a lot of things, some of which seem to point in this direction and a lot that point in other directions. The aim of their algorithm is not to rank information based on the graph of your likes, but to use likes and everything else to maximize time spent.
EigenKarma allows you to port your trust graph between different communities, if they are connected.
You can control what you do with the information in your trust graph, i.e. how you use it to inform algorithmic decisions, such as rank-ordering content.
When you like a tweet, it is a more public-facing act. You can pile on with your tribe, liking a particular message to push the algorithm to spread it. An upvote in EigenKarma is a more private act: it is an update of your personal trust graph. It will affect the trust graphs of people who trust you, but mainly indirectly, in how it affects future processes that rely on their trust graphs.
Though I should add that, the way it is set up on the Discord bot, you can see what people upvote.
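To give a rough sense of the mechanics, here is a toy sketch of the kind of transitive propagation I have in mind: trust flows out from you through the people you have upvoted, decaying with each hop. The data model, function name, and decay parameter are just my illustration here, not the actual EigenKarma implementation.

```python
from collections import defaultdict

# upvotes[a][b] = how many times account a has upvoted account b.
# This schema is illustrative only, not EigenKarma's real data model.
upvotes = {
    "you":   {"alice": 3, "bob": 1},
    "alice": {"carol": 5},
    "bob":   {"dave": 2},
    "carol": {},
    "dave":  {},
}

def personal_trust(seed, upvotes, decay=0.5, depth=3):
    """Propagate trust outward from `seed` through the upvote graph.

    Each hop passes on a decayed share of the truster's score,
    split in proportion to how often they upvoted each account.
    """
    trust = defaultdict(float)
    frontier = {seed: 1.0}
    for _ in range(depth):
        next_frontier = defaultdict(float)
        for account, score in frontier.items():
            outgoing = upvotes.get(account, {})
            total = sum(outgoing.values())
            if total == 0:
                continue
            for target, weight in outgoing.items():
                passed = decay * score * weight / total
                next_frontier[target] += passed
                trust[target] += passed
        frontier = next_frontier
    return dict(trust)

print(personal_trust("you", upvotes))
# alice and bob get direct trust; carol and dave get some indirectly,
# because people you trust have upvoted them.
```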
How does this affect the formation of bubbles? I’m not sure. My guess is that it should reduce some of the incentives that drive the tribe-forming behaviors at Twitter.
I’m also not sure that bubbles are a massive problem, especially for the types of communities that would realistically be integrated into the system. This last point is loosely held, and I invite strong criticism; it is something we are paying attention to as we run trials with larger groups. You could combine EigenKarma with other types of designs that counteract these problems if they turn out to be severe (though I haven’t worked through that idea deeply).
As it is currently set up, you could start a blank account, give someone a single upvote, and then see something pretty similar to their trust graph. You would see whom they trust.
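Continuing the toy sketch above (again, illustrative only, not the real implementation), a fresh account whose only upvote goes to one person ends up with a scaled-down copy of that person's trust graph:

```python
# Reusing `personal_trust` and `upvotes` from the sketch above:
# a brand-new account whose only upvote is of alice.
upvotes["newcomer"] = {"alice": 1}

print(personal_trust("newcomer", upvotes))
# {'alice': 0.5, 'carol': 0.25} -- a scaled copy of what alice trusts,
# which is why her trust graph becomes visible to you.
```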
It could, I guess, be used to figure out attack vectors for a person: someone trusted who could be compromised. This does not seem like something that would be problematic in the contexts where this system would realistically be implemented over the short to medium term. But it is something to keep in mind as we iterate on the system with more users onboard.
I think maybe the word “filter,” which I use, gives the impression that it is about hiding information. The system is more likely to be used to rank-order information, so that information that has been deemed valuable by people you trust is more likely to bubble up to you. It is supposed to be a way to augment your ability to sort through information and social cues to find competent people and trustworthy information, not a system to replace it.
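To illustrate what rank-ordering rather than filtering looks like, here is one more toy sketch, with made-up trust numbers standing in for the output of something like the propagation above:

```python
# How much *you* trust each account (made-up numbers, e.g. the output
# of a propagation step like the earlier sketch).
trust = {"alice": 0.375, "bob": 0.125, "carol": 0.19, "dave": 0.06}

# Which accounts upvoted each item.
items = {
    "post-1": ["carol", "dave"],
    "post-2": ["dave"],
    "post-3": ["someone-unknown"],   # nobody in your trust graph
}

def rank(items, trust):
    """Order items by the total trust you place in their upvoters."""
    scores = {item: sum(trust.get(v, 0.0) for v in voters)
              for item, voters in items.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank(items, trust))
# ['post-1', 'post-2', 'post-3'] -- post-3 is not hidden,
# it simply surfaces last because nobody you trust endorsed it.
```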