Thanks for your post! It was interesting and I’m curious to learn more.
Naturally, if EigenKarma were used by everyone, which we are not aiming for, a lot of people would coalesce around charismatic leaders too. But to the extent that happens, these dysfunctional bubbles are isolated from the more well-functioning parts of the trust graph.
My main worry reading this post was that EigenKarma might create social bubbles and group-think. Not just in the few isolated cases of charismatic leaders that you mention, but more generally too. For example, a lot of social networks that curate content based on people you already follow seem to have that dynamic (I’m unsure how big a problem this really is, but it gets mentioned a lot).
E.g., if I identify with Red Tribe and upvote Red Tribe posts, I will increasingly see other posts upvoted by Red Tribe (and not Blue Tribe). That would make it harder to learn new information or have my priors challenged.
How is EigenKarma different from social networks that use curation algorithms based on people’s previous likes and follows? Is this an issue in practice, and if so, how would you try to mitigate it?
[These are intended as genuine questions to check whether I am reading your post correctly :)]
In my understanding, EigenKarma only creates bubbles if it also acts as a default content filter. If, for example, it is just displayed near usernames, it shouldn’t have this effect but would still retain its use as a signal of trustworthiness.
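To make that distinction concrete, here is a minimal sketch of the two ways the same trust score could be used. This is not the actual EigenKarma API; the names and trust values are invented for illustration. Only the second mode can wall off a bubble, since it is the only one that hides content:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    title: str

# Hypothetical trust scores taken from the viewer's personal trust graph.
trust = {"alice": 0.9, "bob": 0.4, "mallory": 0.02}

posts = [Post("alice", "Trust graphs 101"),
         Post("bob", "Open questions"),
         Post("mallory", "Buy my coin")]

# Display mode: annotate each post with the author's trust score.
# Nothing is hidden, so this mode alone should not create bubbles.
annotated = [(p.title, trust.get(p.author, 0.0)) for p in posts]

# Filter mode: drop posts from authors below a trust threshold.
# This is the mode that can wall a community off.
visible = [p for p in posts if trust.get(p.author, 0.0) >= 0.1]
```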
Also, sometimes creating a bubble—a protected space—is exactly what you want to achieve, so it might be the correct tool to use in specific contexts.
This is the first time I’ve read about this, so please correct me if I’m misunderstanding.
Personally, I find the idea very interesting.
Can you explain with an example when a bubble would be a desirable outcome?
One class of examples could be when there’s an adversarial or “dangerous” environment. For example:
Bots generating low-quality content.
Voting rings.
Many newcomers entering at once, outnumbering the locals by a lot. Example: I wouldn’t be comfortable directing many people from Rational Animations to the EA Forum and LW, but a karma system based on EigenKarma might make this much less dangerous.
Another class of examples could be when a given topic requires some complex technical understanding. In that case, a community might want to see only posts put forward by people who have demonstrated a certain level of technical knowledge, and it could use EigenKarma to select them. Of course, there must be some way to enable the discovery of new users, but how much of a problem this is depends on implementation details. For example, you could have an unfiltered tab and a filtered one, or you could give higher visibility to new users (both ideas are sketched below). There could be many potential solutions.
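As a rough sketch of those two implementation ideas, a filtered/unfiltered split plus a visibility boost for new accounts might look something like this. The field layout and numbers are made up, not taken from any actual implementation:

```python
# Hypothetical post records: (author, viewer's trust in author, account age in days).
posts = [("alice", 0.8, 900), ("newbie", 0.0, 3), ("bob", 0.3, 400)]

def filtered_tab(posts, threshold=0.2):
    """Show only authors the viewer's trust graph already vouches for."""
    return [p for p in posts if p[1] >= threshold]

def discovery_tab(posts, boost=0.5, max_age_days=30):
    """Rank everything, but boost recent accounts so new users can be
    discovered and start earning trust."""
    def score(post):
        _, trust, age = post
        return trust + (boost if age <= max_age_days else 0.0)
    return sorted(posts, key=score, reverse=True)
```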
Right, the first class covers the use cases that the OP put forward, and vote brigading is something that the admins here handle.
The second class is more what I was asking about, so thank you for explaining why you would want a conversation bubble. I think if you’re going to go that far for that reason, you could consider an entrance quiz. People who want to “join the conversation” could take the quiz, perhaps after working through a recommended reading list, to gain entrance to your bubble.
I don’t know how aversive people would find that, but if lack of technical knowledge were a real issue, that would be one approach to handling it while still widening the group of conversation participants.
There is a family resemblance with the way something like Twitter is set up. There are a few differences:
Their algorithm seems to do a lot of things, some of which point in this direction and many of which point in other directions. The aim of their algorithm is not to rank information based on the graph of your likes, but to use likes and everything else to maximize time spent.
EigenKarma allows you to port your trust graph between different communities, if they are connected.
You can control what you do with the information in your trust graph, i.e., how you use it to inform algorithmic decisions, such as rank-ordering content.
When you like a tweet, it is a more public-facing act. You can pile on with your tribe, liking a particular message to push the algorithm to spread it. An upvote in EigenKarma is a more private act: it is an update of your personal trust graph. It will affect the trust graphs of people who trust you, but mainly indirectly, in how it affects future processes that rely on their trust graph (a rough sketch of this propagation follows at the end of this comment).
Though I should add that, the way it is set up in the Discord bot, you can see what people upvote.
How does this affect the formation of bubbles? I’m not sure. My guess is that it should reduce some of the incentives that drive the tribe-forming behaviors at Twitter.
I’m also not sure that bubbles are a massive problem, especially for the types of communities that would realistically be integrated into the system. This last point is loosely held, and I invite strong criticism of it; it is something we are paying attention to as we run trials with larger groups. You could combine EigenKarma with other types of designs that counteract these problems if they are severe (though I haven’t worked through that idea deeply).
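For readers who want the propagation dynamic above in concrete form, here is a minimal sketch assuming a random-walk-with-restart rule in the spirit of personalized PageRank. The actual EigenKarma formula may differ, and the edge weights here are invented:

```python
# Outgoing trust edges: truster -> {trustee: share of trust}, derived from upvotes.
edges = {
    "you":   {"alice": 0.7, "bob": 0.3},
    "alice": {"carol": 1.0},
    "bob":   {"alice": 0.5, "dave": 0.5},
}

def personal_trust(seed, edges, damping=0.85, iters=50):
    """Score each user by the probability of reaching them on a random walk
    that starts at `seed`, follows trust edges, and restarts at `seed` with
    probability 1 - damping."""
    scores = {seed: 1.0}
    for _ in range(iters):
        nxt = {seed: 1.0 - damping}
        for truster, score in scores.items():
            for trustee, weight in edges.get(truster, {}).items():
                nxt[trustee] = nxt.get(trustee, 0.0) + damping * score * weight
        scores = nxt
    return scores

print(personal_trust("you", edges))  # carol inherits trust only via alice
```

The indirectness falls out of the math: your upvote edits only your own outgoing edges, and everyone else’s scores move only insofar as their walks pass through you.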
Tbh, I feel like too much exposure to the bottom 50% of outgroup content is the main thing driving polarisation on Twitter, and it seems to me that EAs are the sort of people to upvote/promote the outgroup content that’s actually good.
In general, I feel like worries about social media bubbles are overstated. If people on the EA Forum split into factions that think little enough of each other that people end up in bubbles, I feel like this means we should split up into different movements.
Yeah, it’d be cool if @Henrik Karlsson and team could come up with a way to defend against social bubbles while still having a trust mechanic. Is there some way to model social bubbles and show that EigenKarma or some other mechanism could prevent them while still having the property of trusting people who deserve trust?
For instance, maybe users of the social network are shown anyone who has trust, and “trust” is universal throughout the community, not just something that you have when you’re connected to someone else? Would that prevent the social bubble problem, while still allowing users to filter out low-quality content from untrusted users?
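One way to make that idea precise, assuming the same random-walk model as the sketch earlier in the thread: restart the walk uniformly at every user instead of at one viewer. That yields a single community-wide trust score per person, so there is no viewer-specific bubble, at the cost of viewer-specific nuance. Again, the weights are invented:

```python
# Same hypothetical edge format as the earlier sketch.
edges = {
    "you":   {"alice": 0.7, "bob": 0.3},
    "alice": {"carol": 1.0},
    "bob":   {"alice": 0.5, "dave": 0.5},
}

def global_trust(edges, damping=0.85, iters=50):
    """PageRank-style universal trust: the walk restarts uniformly at random,
    so every viewer sees the same score for a given user."""
    users = set(edges) | {t for out in edges.values() for t in out}
    scores = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        nxt = {u: (1.0 - damping) / len(users) for u in users}
        for truster, score in scores.items():
            for trustee, weight in edges.get(truster, {}).items():
                nxt[trustee] += damping * score * weight
        scores = nxt
    return scores

print(global_trust(edges))
```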