I’m also practicing how to give good presentations and introductions to AI Safety. You can see my YouTube channel here:
You might also be interested in one of my older presentations, number 293, which is closer to what you are working on.
Feel free to book a half-hour chat about this topic with me on this link:
This seems to be of questionable effectiveness. Brief answers/challenges:
Evaluations are a key input to ineffective governance. The safety frameworks presented by the frontier labs amount to "safety-washing" and are more appropriately regarded as roadmaps toward an unsurvivable future.
Disagreement on AI capabilities underpins performative disagreements on AI risk. As far as I know, no substantial disagreement of this kind has been published recently; I'd like sources for your claim, please.
We don’t need more situational awareness of what current frontier models can and cannot do in order to respond appropriately, and no decision-relevant conclusions can be drawn from evaluations in the style of Cybench and Re-Bench.