Good comment.
But I think I don’t really buy the first point. We could come up with some kind of electorate that’s frustrating but better than the whole world. Forum users weighted by forum karma is a semi-democratic system that’s better than any you suggest and pretty robust to takeover (though the forum would get a lot of spam).
My issue with that is I don’t believe the forum makes better decisions than OpenPhil. Heck, we could test it: get the forum to vote on its allocation of funds each year, then compare in 5 years to what OpenPhil did and see which we’d prefer.
I bet we’d prefer OpenPhil’s slate from 5 years ago over the forum average from then.
So yeah, mainly I buy your second point that democratic approaches would lead to less effective resource allocation.
(As an aside, there is democratic power here. When we all turned on SBF, that was democratic power: it turns out his donations did not buy him cover after the fact, and I think that was good.)
In short: I think the current system works. It annoys me a bit, but I can’t come up with a better one.
The proposal of forum users weighted by karma can be taken over if you have a large group of new users all voting for each other. You could require a minimum number of comments, lag karma scores by a year or more, require new comments within the past few months, and so on to make a takeover harder, but if a large enough group is invested in a takeover and willing to put in the time and effort, I think they could do it. I suppose if the karma lags are long enough and the engagement requirements demanding enough, they might lose interest and be unable to coordinate the takeover.
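To make those requirements concrete, here’s a minimal sketch of them as an eligibility filter. Everything here is hypothetical: the `User` fields and all three thresholds are invented for illustration, not calibrated against any real takeover attempt:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class KarmaEvent:
    amount: int
    earned_at: datetime

@dataclass
class User:
    comment_count: int
    last_comment_at: datetime
    karma_events: list  # list[KarmaEvent]

# Illustrative thresholds only.
MIN_COMMENTS = 20                     # minimum lifetime comments
KARMA_LAG = timedelta(days=365)       # only karma earned over a year ago counts
ACTIVITY_WINDOW = timedelta(days=90)  # must have commented within ~3 months

def voting_weight(user: User, now: datetime) -> int:
    """Vote weight under the lagged-karma rules; 0 if the engagement tests fail."""
    if user.comment_count < MIN_COMMENTS:
        return 0
    if now - user.last_comment_at > ACTIVITY_WINDOW:
        return 0
    # A cohort of new accounts upvoting each other earns its karma *now*, so a
    # long lag means their mutual upvotes buy no voting power for a year or more.
    return sum(e.amount for e in user.karma_events if now - e.earned_at > KARMA_LAG)
```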
You could stop counting karma starting from ~now (or some specific date), but that would mean severely underweighting legitimate newcomers. EDIT: But maybe you could just repeat the cutoff in the future without announcing ahead of time when it will happen or what the rules will be, so newcomers can eventually have a say, but it’ll be harder to game.
You could also try to cluster users by voting patterns to identify and stop takeovers, but this would be worrying, since it could be used to target legitimate EA subgroups.
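For what it’s worth, even a crude stand-in for clustering shows how easily the dual-use problem arises. The sketch below just flags pairs of accounts whose upvotes go overwhelmingly to each other; the vote-record format and the 0.5 reciprocity threshold are assumptions made up for illustration:

```python
from collections import defaultdict
from itertools import combinations

def suspicious_pairs(votes, min_votes=10, reciprocity=0.5):
    """Flag user pairs who send most of their upvotes to each other.

    `votes` is an iterable of (voter, recipient) upvote records.
    """
    given = defaultdict(int)  # total upvotes each user has cast
    to = defaultdict(int)     # upvotes sent from a specific voter to a recipient
    for voter, recipient in votes:
        given[voter] += 1
        to[(voter, recipient)] += 1
    flagged = []
    for a, b in combinations(sorted(given), 2):
        if given[a] < min_votes or given[b] < min_votes:
            continue
        # Fraction of each user's votes that went to the other.
        if (to[(a, b)] / given[a] > reciprocity and
                to[(b, a)] / given[b] > reciprocity):
            flagged.append((a, b))
    return flagged
```

Note that a small, tightly-knit but perfectly legitimate EA subgroup would trip exactly the same flag, which is the worry above.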
As I say, this doesn’t seem like the actual problem: even if we did get the right group, I wouldn’t trust them to be better than OpenPhil.
I was trying to highlight a bootstrapping problem, but by no means meant it to be the only problem.
It’s not crazy to me to create some sort of formal system to weigh the opinions of high-karma forum posters, though as you say that is only semi-democratic, and so reintroduces some of the issues Cremer et al were trying to solve in the first place.
I am open-minded about whether it would be better than OpenPhil, assuming the chosen posters get the time to invest in making decisions well (standard practice for sortition).
I agree that some sort of periodic rules reveal could significantly mitigate corruption issues. Maybe each generation of the chosen council could pick new rules that determine the subsequent one.
A simpler version of this is a system of membership, where existing members can nominate new members. Maybe every year some percentage of the membership gets chosen randomly and given the opportunity to nominate someone. In addition to a process for becoming a member, there could also be processes for achieving higher levels of seniority (with more senior members granted greater input into membership decisions), for nudging people who’ve lost interest in EA to let their membership lapse, and for kicking out people found guilty of wrongdoing.
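To make the moving parts concrete, here’s a toy model of that annual nomination round. The `Member` fields, the 10% slot rate, and the seniority-weighted draw are all made-up parameters, not a concrete proposal:

```python
import random
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    seniority: int = 1   # higher tiers get more say in membership decisions
    active: bool = True  # lapses if the member drifts away from EA

def annual_nomination_round(members, slot_fraction=0.10, rng=random):
    """Randomly pick active members who each get to nominate one new person."""
    active = [m for m in members if m.active]
    if not active:
        return []
    k = max(1, int(len(active) * slot_fraction))
    # Weight the draw by seniority so senior members get more slots on average.
    return rng.choices(active, weights=[m.seniority for m in active], k=k)
```

A real system would still need the seniority-promotion, lapse, and expulsion processes wrapped around this, but the core draw is simple.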
I assume there are a lot of membership-based organizations which could be studied: Rotary International, the Red Cross, national fraternities & sororities, etc.
A membership system might sound like a lot of overhead, but I think we’re already doing an ad-hoc, informal version of something like this. As NegativeNuno put it: “Influencing OP decisions requires people to move to the Bay area and become chummy friends with its grants officers.” My vague impression is that at least a few grantmakers like this system, and believe it is a good and necessary way for people to build trust. So if we step back and acknowledge that “building trust” is an objective, and it’s currently being pursued in an ad-hoc way which is probably not very robust, we can ask: “is there a better way to achieve that objective?”
How much are you thinking karma would be “worth”? It’s not that hard for an intelligent person to simulate being an EA if the incentives are right. If significant money were involved, you’d have to heavily restrict the list of organizations users could vote for, which undercuts the point of a semi-democratic process in the first place.
E.g., if climate change were not out of bounds and karma were worth $10 a point, arguably the most impactful thing a non-EA, moderately bright university student could do for climate change would be . . . mine karma by pretending to be an EA. I haven’t tried, but 40 to 60 karma per hour from someone consciously trying to mine karma sounds plausible.
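Spelling out the implied wage from those (admittedly made-up) numbers:

```python
# Implied hourly wage for karma mining, using the illustrative figures above
# ($10 per karma point, 40-60 karma per hour). Both inputs are guesses.
dollars_per_point = 10
karma_per_hour_low, karma_per_hour_high = 40, 60
print(karma_per_hour_low * dollars_per_point,   # 400 dollars/hour
      karma_per_hour_high * dollars_per_point)  # 600 dollars/hour
```

That is far above what most students can earn any other way, so the incentive to game the system would be real.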
I do potentially like the idea of karma giving the right to direct a very small amount of funding, as much for the information value as anything else.