I do think it’s possible that we might award more prizes retroactively if we find that we’ve received a lot of valuable submissions! Maybe an “honorable mentions” category.
Ah, I think my worry is that it feels difficult for me to find a rating standard that actually tracks quality. If I give a couple of examples, people may feel limited to making their work look like those examples. I might say “make your distillation 1,000 words and explain two papers and I’ll give you a prize,” but 1,500 words on one paper might have made the optimal submission, and I would have limited what people could submit. I find it hard to quantify a bar on writing since everyone has such different approaches. I think the real bar is something more like “the judges who know more about AI Safety than me believe that you have communicated this idea really well,” and because of that it feels wrong for me to try to say “and if you do x you will definitely win something.”
If they already get a prize, I wouldn’t call it “honorable mentions” because that unnecessarily diminishes it in my eyes. Just have anything that seems like it would get a B- in school be in the same category as the $250 prize?
Ah, I think my worry is that it feels difficult for me to find a rating standard that actually tracks quality.
Ah, interesting, I have the opposite intuition! :D I completely agree that you shouldn’t give advice about the length of the distillations, but the criteria you mention here just seem really useful, and I’d be surprised if, e.g., you found something clearly presented and accessible and I didn’t.
Depth of understanding
Clarity of presentation
Rigor of work
Concision/Length (longer papers will need to present more information than shorter papers)
Originality of insight
Accessibility
And I feel like somebody who has spent ~40 hours reading and discussing AI Safety material (e.g. as part of the AGI Safety Fundamentals course) could do a reasonably coherent job of rating the understanding and rigor. Originality seems maybe the trickiest, as you probably have to have some grasp of which ideas/framings are already in the water and which aren’t.