Hi, I’m Max :)
background in cognitive science & biology
most worried about AI going badly for technical & coordination reasons
vegan for the animals
forecasts at Metaculus: https://www.metaculus.com/accounts/profile/110500/
currently exploring AI governance roles; research contractor at Rethink Priorities’ AI Governance and Strategy team
I think that’s a valid worry, and I also don’t expect the standards to end up specifying how to solve the alignment problem. :P I’d still be pretty happy about the proposed standard-setting efforts, because I expect standards to have massive effects that can be more or less useful for
a) directing research in directions that reduce long-term risks (e.g. pushing for more mechanistic interpretability),
b) limiting how quickly an agentic AI can escape our control (e.g. via regulating internet access, making manipulation harder),
c) enabling strong(er) international agreements (e.g. shared standards could become the basis for international monitoring of AI development and deployment).