Altman was critical of Toner’s recent paper, discussed ousting her, and wanted to expand the board. The board disagreed on which people to add, leading to a stalemate. Ilya suddenly changed position, and the board took abrupt action.
They don’t offer an explanation of what the ‘dishonesty’ would have been about.
How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? AI technologies are evolving rapidly and enable a wide range of civilian and military applications. Private sector companies lead much of the innovation in AI, but their motivations and incentives may diverge from those of the state in which they are headquartered. As governments and companies compete to deploy ever more capable systems, the risks of miscalculation and inadvertent escalation will grow. Understanding the full complement of policy tools to prevent misperceptions and communicate clearly is essential for the safe and responsible development of these systems at a time of intensifying geopolitical competition.
In this brief, we explore a crucial policy lever that has not received much attention in the public debate: costly signals.
A New York Times article suggests a more nuanced picture: https://archive.li/lrLzK
This is the paper in question, which I think will be getting a lot of attention now: https://cset.georgetown.edu/publication/decoding-intentions/