Appreciate the questions! In general, I’m not super concerned about adversarial action this time around, since:
I generally trust people in the community to do the right thing
The money can’t be withdrawn to your own pocket, so the worst case is that some people get to direct more funding than they properly deserve
The total funding at stake is relatively small
We reserve the right to modify this if we see people trying to exploit things
Specifically:
I plan to mostly rely on self-reports, plus maybe quick sanity checks that a particular person actually exists.
Though if we’re scaling this up for future rounds, a neat solution I just thought of would be to require people to buy in a little bit, e.g. they have to donate $10 of their own money to unlock the funds. This would act as a stake towards telling the truth: if we determine that someone is misrepresenting their qualifications, they lose their stake too.
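For concreteness, here’s a minimal sketch of how that buy-in could work; the `Participant` record and the $10 threshold below are just illustrative assumptions, not anything Manifund has actually built:

```python
from dataclasses import dataclass

STAKE_REQUIRED = 10.0  # assumed threshold: dollars of the participant's own money

@dataclass
class Participant:
    name: str
    stake_paid: float = 0.0
    funds_unlocked: bool = False

    def buy_in(self, amount: float) -> None:
        """Donate `amount` of your own money; the bonus funds unlock once the stake is met."""
        self.stake_paid += amount
        if self.stake_paid >= STAKE_REQUIRED:
            self.funds_unlocked = True

    def forfeit_for_misrepresentation(self) -> float:
        """If self-reported qualifications turn out to be false, re-lock the bonus
        and forfeit the stake; returns the amount forfeited."""
        self.funds_unlocked = False
        forfeited, self.stake_paid = self.stake_paid, 0.0
        return forfeited
```

The nice property is that the stake is itself a donation, so honest participants lose nothing, while misrepresenting your qualifications carries a real (if small) cost.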
Haha, I love that post (and left some comments from our past experience running QF). We don’t have clever tricks planned to address those shortcomings; I do think collusion and especially usability are problems with QF in general (though Vitalik has a proposal for pairwise-bounded QF that might address collusion?)
We’re going with QF because it’s a Schelling point/rallying flag for getting people interested in weird funding mechanisms. It’s not perfect, but it’s been tested enough in the wild that there’s some literature behind it, while it hasn’t had much actual exposure within EA. If we run this again, I’d be open to mechanism changes!
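For anyone newer to QF, here’s a rough sketch of the textbook matching formula (in a real round, matches get scaled down to fit the matching pool, so treat this as the idealized version):

```python
import math

def qf_match(contributions: list[float]) -> float:
    """Idealized quadratic funding match for one project:
    (sum of square roots of individual contributions)^2, minus what was actually donated.
    Real rounds usually scale this down so all matches fit within a fixed matching pool."""
    donated = sum(contributions)
    return sum(math.sqrt(c) for c in contributions) ** 2 - donated

print(qf_match([1.0] * 100))  # 100 donors giving $1 each -> $9,900 of match
print(qf_match([100.0]))      # 1 donor giving $100       -> $0 of match
```

This is also where the collusion worry comes from: each additional distinct donor raises the match superlinearly, so even token reciprocal donations can move real matching money. As I understand it, Vitalik’s pairwise-bounded variant tackles this by capping how much subsidy any single pair of donors can jointly generate.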
We don’t permit people to create a bunch of accounts to claim the bonus multiple times; we’d look to prevent this by tracking the signup behavior on Manifund. Also, all donation activity is done in public, so I think there will be other scrutiny of weird funding patterns.
Meanwhile, I think sharing this on X and encouraging their followers to participate is pretty reasonable; while we’re targeting EA Community Choice at medium-to-highly engaged EAs, I do also hope that this will draw some new folks into our scene!
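For illustration, here’s a minimal sketch of the kind of signup-pattern check I have in mind; the specific fields (IP, email domain) are made-up examples, not how Manifund actually tracks signups:

```python
from collections import defaultdict

def flag_possible_duplicates(signups: list[dict]) -> list[list[str]]:
    """Group accounts that share an obvious signup fingerprint (here, signup IP
    plus email domain -- purely illustrative fields) so a human can review them
    before any bonus is granted."""
    groups: dict[tuple[str, str], list[str]] = defaultdict(list)
    for s in signups:
        key = (s["ip"], s["email"].split("@")[-1])
        groups[key].append(s["username"])
    return [users for users in groups.values() if len(users) > 1]

signups = [
    {"username": "alice",  "ip": "1.2.3.4", "email": "alice@example.com"},
    {"username": "alice2", "ip": "1.2.3.4", "email": "alt@example.com"},
    {"username": "bob",    "ip": "5.6.7.8", "email": "bob@example.org"},
]
print(flag_possible_duplicates(signups))  # [['alice', 'alice2']]
```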
I’d also consider erring on the side of being clear and explicit about the norms you expect people to follow. Someone who only skims the explanation of QF (or lacks the hacker/lawyer instinct for finding exploits!) may not realize that behavior which is pretty innocuous in other contexts can reasonably be seen as collusive and corrupting in the QF context. For instance, in analogous contexts, I suspect that logrolling-type behaviors (“vote” for my project by allocating a token amount of your available funds, and I’ll vote for yours!) would be seen as fine by most of the general population.[1]
[1] Indeed, I’m not 100% sure where you would draw the line on coordination / promotional activities.
I would have assumed logrolling-type behaviours were basically fine here (at least if the other person also somewhat cared about your charity), so +1 that explicit norms are good.