The current results show that, of everyone who has voted so far, I'm the most in favor of accelerating AI. I voted for "no regulations, no subsidy" and "Ok to be a capabilities employee at a less safe lab".
However, I should clarify that I support laissez-faire policy for AI development only as a temporary state of affairs, not as a permanent policy recommendation. This is because the overall impact and risks of existing AI systems are comparable to, or less than, those of technologies like smartphones, which I also favor leaving basically unregulated. But I expect future AI capabilities will be greater.
After AI agents get significantly better, my favored proposals for managing AI risks are to implement liability regimes (perhaps modeled on Gabriel Weil's proposals) and to grant AIs economic rights (such as the right to own property, enter contracts, make tort claims, etc.). Beyond these, I don't see any obvious policies I'd support that would slow down AI development, and in practice I'm already worried that even these policies would go too far in constraining AI's potential.
I’d like to point out that Ajeya Cotra’s report was about “transformative AI”, which had a specific definition:
My personal belief is that a median timeline of ~2050 for this specific development is still reasonable, and I don’t think the timelines in the Bio Anchors report have been falsified. In fact, my current median timeline for TAI, by this definition, is around 2045.