A list of ideas:
- We need breadth-first AI safety plans
- Is it possible to persuade AI companies to slow down?
- AI companies owe the public an explanation of what they will do with ASI once they have it
- A frontier AI company should shut down as a costly signal that x-risk is a big deal
- Some AI safety trolley problems
- Pausing AI is the best general solution to every non-alignment problem with ASI (?)
- You only get one shot at making ASI safe, even in a gradual takeoff
- AI safety regulations would not be that onerous, and I don’t understand why people believe otherwise
- A compilation of evidence that AI companies can’t be trusted to abide by voluntary commitments
- Literature review on the effectiveness of disruptive or violent protests
- Protest cost-effectiveness BOTEC
- What evidence would convince us that LLMs are conscious?
- Pascal’s Mugging is rarely relevant
What do you think of: https://www.lesswrong.com/posts/35vPhn5fiKgA5gEj8/kabir-kumar-s-shortform?commentId=82HgY4iEkfC2MSggJ
I wrote a response on your shortform