Here are a few ideas that could be helpful for an AI contest:
- make the goal specific to developments in AI safety, rather than AI in general
- if asking for a prediction's probability, choose a prediction that is specific to AI safety (for example, "When would P(AGI is aligned with at least one human's values | AGI is developed) > 0.50?"); one way to write this out in notation is sketched after this list.
- if asking for specific content, choose content helpful to AI safety (for example, "What cause area of AI safety research do you feel is currently neglected, tractable, and important, and why?").
- browse the comments on the FTX contest announcement post for ideas and complaints about that prize's requirements (for example, I think someone suggested that entrants should have up to a year to prepare submissions for a substantial reward; that makes a lot of sense and would encourage more outside entries and original research from experts in the space).
- commit to concrete submission standards that you feel are minimum requirements for you to read each submission, whatever those might be (academic credentials, format, content requirements, research approach, etc.), and publish them along with the formal announcement. Then commit to reading each entry that meets those standards.
- guarantee that the prize money goes to some contestant, rather than making the award optional and contingent on you deciding an entry deserves it. The grand prize should go to an entrant; I think that's fair and honest in a competition.
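For concreteness, here is one way the example prediction above could be written in probability notation. This is a minimal sketch: the event names, the credence function P_t, and the reading of "when" as the earliest date a forecaster's conditional credence crosses the threshold are my assumptions, not part of the original question.

```latex
% Sketch of the example prediction as a formal question (assumed reading).
% Aligned   = "AGI is aligned with at least one human's values"
% Developed = "AGI is developed"
% P_t denotes a forecaster's credence at date t (hypothetical notation).
\[
  t^{\ast} \;=\; \min \left\{\, t \;:\; P_t\big(\text{Aligned} \mid \text{Developed}\big) > 0.50 \,\right\}
\]
```

Making the resolution criterion this explicit would help entrants agree on what counts as an answer before they start forecasting.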
I think it's a good idea to continue the prize to the extent that it encourages AI safety research directly. My impression of the original prize was that it could encourage AGI development without necessarily encouraging AI safety development, because its questions required more knowledge and consideration of AGI development than of AGI safety.