Thanks for the post; I really enjoyed this idea. Please let us know about its progress. I'd like to see something analogous for academic research and published papers.
As for your open questions:
Personal counseling, or online comprehensive guides?
I guess comprehensive guidelines are better, but they're not mutually exclusive alternatives, right?
RCTs are the gold standard for measuring impact, but they're expensive and complicated. What easier alternatives do we allow? What level of evidence and reliability do we accept?
Maybe you should let your contestants and committee have this discussion; of course you can’t RCT everything—e.g., no one would do it for parachutes. Actually, it might even be an easier problem than finding a common metric to compare different interventions—e.g., take a look at GiveWell’s blog.
Evaluation process: Are charities required to file documents, do we pay for third-party analysts? Should we have a committee? How transparent can/should we be about the results?
Yes, yes, a lot. Suggestion: evaluate contestants in a two-, perhaps three-level process:
a) approval voting on an online platform: you can either have an open website where people vote for free (after identifying themselves), or you can charge voters a fee (two pros: voters will have some skin in the game, and the fees will partially fund your costs);
b) individual reviewers rate and select the n most-voted contestants;
c) your committee decides the winners.
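The first two stages of the funnel above are mechanical enough to sketch. Here's a minimal illustration in Python; all names and the tie-breaking behavior are my own assumptions, not a proposal for how the contest must actually work, and stage (c) stays a human judgment call:

```python
# Hypothetical sketch of the two/three-level selection funnel:
# (a) approval voting, (b) reviewer ratings of the n most-voted,
# (c) committee decision (not modeled -- it's a judgment call).

def approval_tally(ballots):
    """Stage (a): each ballot is a set of approved contestants; count approvals."""
    counts = {}
    for ballot in ballots:
        for contestant in ballot:
            counts[contestant] = counts.get(contestant, 0) + 1
    return counts

def shortlist(counts, n):
    """Pass the n most-approved contestants on to the reviewers (ties broken arbitrarily)."""
    return sorted(counts, key=counts.get, reverse=True)[:n]

def reviewer_stage(shortlisted, reviewer_scores):
    """Stage (b): average each shortlisted contestant's reviewer ratings."""
    return {c: sum(reviewer_scores[c]) / len(reviewer_scores[c])
            for c in shortlisted}

# Illustrative run with made-up ballots:
ballots = [{"A", "B"}, {"B"}, {"B", "C"}, {"A", "C"}]
counts = approval_tally(ballots)          # {"A": 2, "B": 3, "C": 2}
top2 = shortlist(counts, 2)               # "B" first, then a 2-approval contestant
ratings = reviewer_stage(top2, {"A": [4, 5], "B": [3, 5], "C": [2, 4]})
```

The point of the sketch is only that stages (a) and (b) produce an ordered shortlist the committee can then deliberate over; the design questions (who may vote, whether voting is paid, how reviewers are chosen) are the interesting part.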
Risks: I guess reputational risk might be your real issue. Some sort of worst-case scenario: you lose your money and resources to this awful winner… but the real problem is that it goes viral, and everyone starts associating “Effective Altruism” with something like “Canadian Satanists chanting religious poetry in rich schools”. Even my illustrative example sucks. Is there any way of letting EA get the benefits of the exposure, but not its risks?
BTW, do you plan to do it just once, or do you intend people to expect it to become a periodic contest? Signalling that it could be repeated could influence future projects (in case you end up being successful).