Collected Thoughts on AI Safety
Here are some of my thoughts on AI timelines:
Who believes what about AI timelines
Why I have longer timelines than the BioAnchors report
Reasons to expect gradual takeoff rather than sudden takeoff
Why market activity and data constraints lengthen my timelines
Three scenarios for AI progress
And here are some thoughts on other AI Safety topics:
Questions about Decision Transformers and DeepMind's Gato
Why AI policy seems valuable (selected quotes from Richard Ngo)
Why AI alignment prizes are so difficult
Generally speaking, I believe in longer timelines and slower takeoff speeds. But short timelines seem more dangerous, so I'm open to alignment work tailored to short-timeline scenarios. Right now, I'm looking for research opportunities on risks from large language models.