Thanks for laying out these points! Having engaged with many people’s thinking on these issues, I’m similarly unconvinced by the very unfavourable odds many seem to assign, so I really look forward to the discussion here.
I’m particularly curious about this point, because when I think about AI risk scenarios I put considerable stock in the potential for very direct government interventions once the risks become more obvious and more clearly a near-term problem:
Some decisive demonstration of danger is achieved, and AIs also help to create a successful campaign to persuade key policymakers to aggressively work toward a standards and monitoring regime. (This could be a very aggressive regime if some particular government, coalition or other actor has a lead in AI development that it can leverage into a lot of power to stop others’ AI development.)
AI already seems to me to be clearly among the top geopolitical priorities for the US, and when that is the case, the space of available policy options seems fairly unrestricted.