As an ex-intern I should probably be excluded, but here are a few ideas:
Perhaps we should have a bounty for the article that best addresses a misconception in AI safety—ideally judged by members of AI safety organisations. I know that the AI safety community has generally agreed that public outreach is not a priority, but I think we should make an exception for addressing misconceptions as otherwise these could poison the well. One downside of this bounty is that if people post these publicly, they may not be able to submit them elsewhere.
This would be meta too—but perhaps a prize for the best article arguing for a gap in the AI safety landscape. I’ve been chatting to people about this recently and lots of people have lots of ideas about what needs to be done, but this would provide motivation for people to write these up and to invest more effort than they would otherwise.
Perhaps we could have a bounty for the most persuasive article aimed at critics of AI safety research. I guess the way to test this would be to see which article critics find most persuasive (but we would want to avoid the trap where the incentive would be to agree with critics on almost everything and then argue that they should shift one little point; not saying that this is invalid, but it shouldn’t be the dominant strategy). (Here’s a link to an experiment that was run to convince participants to donate to charity: https://schwitzsplinters.blogspot.com/2019/10/philosophy-contest-write-philosophical.html )
There seems to be a shortage of video content on AI safety. It would be good for Robert Miles to have competition to spur him on to even greater heights. So perhaps there could be a prize for the best explainer video. (Potential downside: this could result in YouTube being flooded with low-quality content—although I suspect that videos that aren’t particularly good are likely to just be ignored.)
This is still a vague idea, but it’s a shame that we can’t see into the future. What if there was a competition where we asked people to argue that a certain claim will seem obvious, or at least much more plausible, in retrospect (say in 5 years)? In 5 years, the post that achieved this best would be awarded the prize. This wouldn’t just be about making a future prediction, like prediction markets entail, but about providing a compelling argument for it, where the argument is supposed to seem compelling in retrospect. As an example, imagine if someone had predicted in advance that longtermism would become dominant among highly-engaged EAs or that we’d realise that we’d overrated earning to give.
Perhaps a prediction market on what area of AI safety a panel of experts will consider to be most promising in 5 years?
Seeing as Substack seems to be the new hot thing, perhaps we could create a Substack fellowship for EAs? The fellowships Substack offers don’t just provide funding, but other benefits too. Perhaps Substack might agree to provide these benefits to these fellows if EA were to fund the fellowship.