The announcement raises the possibility that “a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem.” But it seems to me that if someone successfully argues for this position, they won’t be able to win any of the offered prizes.
Thanks for clarifying that this is in fact the case, Nick. I get how setting a benchmark—in this case an essay’s persuasiveness at shifting the probabilities you assign to different AGI / extinction scenarios—makes it easier to judge across the board. But as someone who works in this field, I can’t say I’m excited by the competition or feel it will help advance things.
Basically, I don’t know if this prize is incentivising the things that matter most. Here’s why:
1) The focus is squarely on the likelihood of things going wrong against different timelines. It has nothing to do with the solutions space.
2) But solutions are still needed, even if the likelihood decreases or increases by a large amount, because the impact would be so high.
3) Take Proposition 1: humanity going extinct or drastically curtailing its future due to loss of control of AGI. I can see how a paper which changes your probabilities from 15% to either 7% or 35% would lead to FTX changing the amount invested in this risk relative to other x-risks—this is good. However, I doubt it’d lead to a full-on disinvestment, and you’d presumably still want to fund the best solutions, and still be worried if the solutions to hand looked weak.
4) Moreover, capabilities advancements have rapidly shifted priors on when AGI / transformative AI will be developed, and will likely continue to do so iteratively. Once this competition is done, new research could have shifted the dial again. The solutions space will likely be the same.
5) So long as the gap between capabilities and alignment advancements persists, solutions are more likely to come from the AI governance space than from the AI alignment research space, at least for now.
6) The solutions space is still pretty sparse in terms of governance of AI. But given the argument in 2), I think this is a big risk and one where further work should be stimulated. There’s likely loads of value left on the table, with people sitting on ideas, especially people outside the EA community who have worked in governance, non-proliferation negotiations, etc.
I’d be more assured if this competition encouraged submissions on how “a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem”. The way the prize criteria are written, if I had an argument for taking a new approach to AI alignment (incl. ‘it’s likely intractable’), I wouldn’t submit to this competition, as I’d think it isn’t for me. But arguments on the achievability of alignment—even its theoretical possibility—are central to what gets funded in this field, and have flow-through effects for AI governance interventions. This feels like a missed opportunity, and a much bigger loss than the governance interventions bit.
Basically, we probably need more solutions on the table regardless of changes in the probability of AGI being developed sooner or later, and this competition won’t draw them out.
It would be good to know why this was the focus, if you have time, or at least it’s something to consider if you decide to run another competition off the back of this.
(Sorry if any of this seems a bit rough as feedback; I think it’s better not to be a nodding dog, especially for things of such high consequence.)