Thanks so much for doing all of this work! I agree that the initiative was valuable. I also think that the new additions and mentions are exceptional ideas and worthy of more exploration.
I want to share my feedback and reflections as a competition participant and post reader. Note that these are weakly held views, shared quickly, and mainly for transparency.
I was personally surprised by how few awards and mentions were given and the relatively small overall value of the pay-out. I thought that there were easily 20+ good ideas proposed, maybe more.
I would have probably liked it if more submissions had at least been explicitly classed as having ‘high potential’ or similar. Part of this view comes from recently hearing how EA is flooded with funding, which leads me to feel that foregrounding good ideas is increasingly important. I suppose I worry that we might miss out on potential value from what I perceived to be a very valuable and fruitful innovation ideation exercise.
Related to that, I’d really like to see funders/stakeholders proactively nudge the development of any ideas that they liked. IMO many deserve a full forum post and further examination, and some should be funded for a trial. I would discourage people from assuming that anyone who proposed an idea will proactively try to progress it in the absence of feedback and commitments for funding, even if it was awarded a prize.
In future rounds or similar competitions, it could be valuable to give even very short feedback on submissions to indicate your receptiveness and reasoning. As it stands, those who took time out of their work and leisure to contribute ideas but didn’t win an award may feel that they gained insufficient impact/reward/insight for that work. If so, that does not optimally motivate them to invest time in future ideation projects (though they may anyway if sufficiently intrinsically motivated). I think that giving even a small amount of feedback could reduce that risk: it would show the person offering the idea that it was engaged with and give them an update on its fit for funding etc. I am interested to hear other people’s responses.
I was also surprised by the low number of awards. I was expecting ~5-10x as many winners (50-100).
It’s also interesting to note the low correlation between comment karma and awards. Among the winners, the 3 (out of 6) that were public submissions had a mean karma of 20 [as of posting this comment] and a minimum of 18; among the honourable mentions, the 9 (out of 15) that were public had a mean of 39 (suggesting these were perhaps somewhat weighted “by popular demand”) and a minimum of 16. None of the winners were in the top 75 highest-rated comments; 8 of the 9 publicly posted honourable mentions were (including 4 in the top 11).
There are 6 winners and 15 honourable mentions listed in the OP (21 total). For comparison, the top 21 public submissions had a mean karma of 52 (minimum 38); the top 50 a mean of 40 (minimum 28); and the top 100 a mean of 31 (minimum 18). There are also 86 public submissions not amongst the awardees that have higher karma than the lowest-karma award winner. See the spreadsheet for details.
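For transparency, here is a minimal sketch of how figures like those above could be recomputed from a CSV export of that spreadsheet. The file name and column names (`karma`, `status`) are hypothetical assumptions for illustration, not the sheet’s actual layout.

```python
import csv

# Load a hypothetical CSV export of the linked spreadsheet of public submissions.
with open("submissions.csv", newline="") as f:
    rows = list(csv.DictReader(f))

def summarise(subset):
    """Return count, mean, and minimum karma for a list of submission rows."""
    karma = [int(r["karma"]) for r in subset]
    return {"n": len(karma), "mean": round(sum(karma) / len(karma), 1), "min": min(karma)}

# Assumed status labels; adjust to however the sheet actually tags awardees.
winners  = [r for r in rows if r["status"] == "winner"]
mentions = [r for r in rows if r["status"] == "honourable mention"]
others   = [r for r in rows if r["status"] == "none"]

print("public winners:", summarise(winners))
print("public honourable mentions:", summarise(mentions))

# Count non-awarded public submissions with more karma than the lowest-karma winner.
lowest_winner = min(int(r["karma"]) for r in winners)
print("non-awardees above lowest winner:", sum(int(r["karma"]) > lowest_winner for r in others))
```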
Given that half of the winners were private entries (2/3 if you count the one that was only posted publicly 2 weeks after the deadline), as were 40% of the honourable mentions, one explanation could be that private entries were generally of higher quality.
I note that the karma figures are confounded by posting date (and possibly by the popularity of the poster), and a better system for displaying submissions would likely have produced different results, as per the considerations Nathan Young outlines in the second-most-upvoted comment on the initial competition announcement. Karma is also an imperfect measure, so maybe the discrepancy isn’t that surprising.
Maybe giving feedback on the ideas could’ve been outsourced by also offering monetary rewards for especially insightful comments assessing an idea’s usefulness. :D
(by the way, I read all public suggestions and I remember liking many of your ideas, Peter)