My goal is to do work that counterfactually reduces AI risk from loss-of-control scenarios. My perspective is shaped by my experience as the founder of a VC-backed AI startup, which gave me a firsthand understanding of the urgent need for safety.
I have a B.S. in Artificial Intelligence from Carnegie Mellon and am currently a CBAI Fellow at MIT/Harvard. My primary project is ForecastLabs, where I'm building predictive maps of the AI landscape to improve strategic foresight.
I subscribe to Crocker's Rules (http://sl4.org/crocker.html), inspired by Daniel Kokotajlo, and am especially interested to hear unsolicited constructive criticism.
This is a very well-thought-out post, and thank you for writing it. We cannot depend on warning shots, and a mindset that assumes people will "wake up" because of warning shots is unproductive and misguided.
I believe that to spread awareness, we need more work that does the following:
- Package existing warning-shot demos into stories people can understand, reshaping their worldview on the risks of AI
- Communicate these packaged stories to stakeholders
- Find more convincing warning-shot demos
This cannot be overstated: let's put in the work to make this happen, ensuring that our understanding of AI risk is built from first principles so we are resilient to negative feedback.