Here’s Teddy Tantum Collins’ LinkedIn, a recent interview and short bio.
Main topic is AI but we could also talk about other things.
What should I ask?
I’d be fascinated to hear a White House insider comment on the likelihood that AI safety will become a partisan issue, splitting along party lines in the US. Specifically, whether Democrats or Republicans are more likely to adopt anti-AI policies such as a ‘pause/stop AI’ moratorium, advocate stronger government regulation, or morally stigmatize AI research as evil and reckless.
Personally, I think the chances that AI safety remains a bipartisan issue are pretty close to zero, but I’m not sure which party is likely to advocate stronger constraints on the AI industry.
How unusual does he think the current policy interest in AI safety is? Will this be a temporary window or an ever-increasing level of interest?
Best policy idea for AI safety? Best one I won’t have heard of? Best 10? (Any policy ideas floating around in AI safety that are bad/doomed?) If we live in a world where people can accidentally kill everyone by making powerful AI, what policy levers should we pull?
Takes on the plan to track AI hardware, require licensing for large training runs, monitor those runs with capability evals, red-teaming, and audits, and pause any run with concerning eval results? Takes on other plans, like a training compute cap that gradually grows over time, or the underspecified-but-evocative ‘IAEA for AI’?
How concerned is he about China’s AI progress and how plausible is it that China could win the race to AGI/ASI?
How important does he think it is to be friendly and cooperate with China, and other countries on AI?
How important does he think U.S. high-skilled immigration reform is? (of particular interest to me)
On the one hand, allowing more high-skilled people into the US means they don’t go to China, which seems good. More talent could also help us solve safety issues.
On the other hand, some argue that more high-skilled talent could speed up AI progress and therefore be bad overall for safety.