I think the context of the Jack Clark quote matters:
What if we’re right about AI timelines? What if we’re wrong?
Recently, I’ve been thinking a lot about AI timelines and I find myself wanting to be more forthright as an individual about my beliefs that powerful AI systems are going to arrive soon – likely during this Presidential Administration. But I’m struggling with something – I’m worried about making short-timeline-contingent policy bets. So far, the things I’ve advocated for are things which are useful in both short and long timeline worlds. Examples here include:
Building out a third-party measurement and evaluation ecosystem.
Encouraging governments to invest in further monitoring of the economy so they have visibility on AI-driven changes.
Advocating for investments in chip manufacturing, electricity generation, and so on.
Pushing on the importance of making deeper investments in securing frontier AI developers.
All of these actions are minimal “no regret” actions that you can do regardless of timelines. Everything I’ve mentioned here is very useful to do if powerful AI arrives in 2030 or 2035 or 2040 – it’s all helpful stuff that either builds institutional capacity to see and deal with technology-driven societal changes, or equips companies with resources to help them build and secure better technology.
But I’m increasingly worried that the “short timeline” AI community might be right – perhaps powerful systems will arrive towards the end of 2026 or in 2027. If that happens we should ask: are the above actions sufficient to deal with the changes we expect to come? The answer is: almost certainly not!
[Section that Mikhail quotes.]
Loudly talking about and perhaps demonstrating specific misuses of AI technology: If you have short timelines you might want to ‘break through’ to policymakers by dramatizing the risks you’re worried about. If you do this you can convince people that certain misuses are imminent and worthy of policymaker attention – but if these risks subsequently don’t materialize, you could seem like you’ve been Chicken Little and claimed the sky is falling when it isn’t – now you’ve desensitized people to future risks. Additionally, there’s a short- and long-timeline risk here where by talking about a specific misuse you might inspire other people in the world to pursue this misuse – this is bound up in broader issues to do with ‘information hazards’.
These are incredibly challenging questions without obvious answers. At the same time, I think people are rightly looking to people like me and the frontier labs to come up with answers here. How we get there is going to be, I believe, by being more transparent and discursive about these issues and honestly acknowledging that this stuff is really hard and we’re aware of the tradeoffs involved. We will have to tackle these issues, but I think it’ll take a larger conversation to come up with sensible answers.
In context, Jack Clark seems to be arguing that he should be considering short-timeline ‘regretful’ actions more seriously.
An even more neglected problem: low-floating fruit. Seagrass produces fruit[1], and the fruit of one species (Halophila decipiens) has been found hanging at depths of 190 feet (58 meters)[2]. This is an absurdly submerged fruit, not even reachable for giraffes. Somebody should be on this.
[1] https://en.wikipedia.org/wiki/Seagrass#Sexual_recruitment
[2] https://ocean.si.edu/ocean-life/plants-algae/seagrass-and-seagrass-beds