Thank you for doing this, and huge thanks to the anonymous donor!
Three quick questions on the implementation side of things:
How will you verify the conditions for the bonuses? E.g. as far as I know, anyone can claim to organize a local EA group, it's not clear what counts as an "EA org" or "EA grant", and I don't think lists of EAG(x) volunteers are public.
Do you have plans to mitigate some of the main drawbacks of quadratic funding, e.g. its vulnerability to collusion?
How do you define "community member"? E.g. what prevents someone from making 100 accounts, or sharing this on X and encouraging hundreds of random people to vote for their favourite project?
Apologies if these were already answered somewhere; I'm really curious to see the results of this experiment!
Appreciate the questions! In general, I'm not super concerned about adversarial action this time around, since:
I generally trust people in the community to do the right thing
The money can't be withdrawn to your own pocket, so the worst case is that some people get to direct more funding than they properly deserve
The total funding at stake is relatively small
We reserve the right to modify this if we see people trying to exploit things
Specifically:
I plan to mostly rely on self-reports, plus maybe quick sanity checks that a particular person actually exists.
Though, if we're scaling this up for future rounds, a neat solution I just thought of would be to require people to buy in a little bit, e.g. they have to donate $10 of their own money to unlock the funds. This would act as a stake towards telling the truth: if we determine that someone is misrepresenting their qualifications, then they lose their stake too.
Haha, I love that post (and left some comments from our past experience running QF). We don't have clever tricks planned to address those shortcomings; I do think collusion, and especially usability, are problems with QF in general (though Vitalik has a proposal on bounded pairwise QF that might address collusion?)
We're going with QF because it's a Schelling point / rallying flag for getting people interested in weird funding mechanisms. It's not perfect, but it's been tested enough in the wild for us to have some literature behind it, while not having much actual exposure within EA. If we run this again, I'd be open to mechanism changes!
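For readers unfamiliar with the mechanism, here is a minimal sketch of the textbook QF matching formula (an illustration of the general idea, not Manifund's actual implementation), which also shows concretely why sybil accounts and collusion are the main worry:

```python
import math

def qf_match(contributions):
    """Standard quadratic funding: a project's matched total is
    (sum of the square roots of individual contributions)^2.
    The subsidy from the matching pool is that total minus the
    raw donations themselves."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total - sum(contributions)

# One donor giving $100 earns no subsidy:
print(qf_match([100]))       # (sqrt(100))^2 - 100 = 0

# The same $100 split across 100 sock-puppet accounts:
print(qf_match([1] * 100))   # (100 * sqrt(1))^2 - 100 = 9900
```

In the second case the matching pool tops the project up by $9,900 on only $100 of real money, which is exactly the incentive for fake accounts and vote-splitting that the question above raises.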
We don't permit people to create a bunch of accounts to claim the bonus multiple times; we'd look to prevent this by tracking signup behavior on Manifund. Also, all donation activity is done in public, so I think there will be outside scrutiny of weird funding patterns.
Meanwhile, I think sharing this on X and encouraging one's followers to participate is pretty reasonable: while we're targeting EA Community Choice at medium-to-highly engaged EAs, I do also hope that this draws some new folks into our scene!
I'd also consider erring on the side of being clear and explicit about the norms you expect people to follow. For instance, someone who only skims the explanation of QF (or lacks the hacker/lawyer instinct for finding exploits!) may not realize that behavior which is pretty innocuous in other contexts can reasonably be seen as collusive and corrupting in the QF context. In analogous contexts, I suspect that logrolling-type behaviors ("vote" for my project by allocating a token amount of your available funds, and I'll vote for yours!) would be seen as fine by most of the general population.[1]
Indeed, I'm not 100% sure where you would draw the line on coordination / promotional activities.
I would have assumed logrolling-type behaviours were basically fine here (at least if the other person also somewhat cared about your charity), so +1 that explicit norms are good.