Yes we’ve heard this concern as well, and it’s a fair one. The challenge is that public outreach on AI has already begun (witness Elon Musk’s warnings) and holding back won’t stop that.
Our approach is to engage with people across the political spectrum (framing the issue accordingly) and reinforce the message that, when it comes to ASI risks, we're quite literally all in this together.
As for the specific government actions we'd be advocating for, this is something we are currently defining, but the three areas we've flagged as most likely to help human success this century are technology governance, societal resilience, and global coordination.
Hi Dony,
Great questions! My name is Wyatt Tessari and I am the founder.
To take your questions in order:
1) We are doing that right now. Consultation is a top priority for us before we start our advocacy efforts. It's also part of the reason we're reaching out here.
2) Our main comparative advantage is that (to the best of our knowledge) no one else in the political/advocacy sphere is openly talking about the issue in Canada. If there are organisations better placed than us, where are they? We'd gladly join or collaborate with them.
3) There are plenty of risks—causing fear or misunderstanding, getting hijacked by personalities or adjacent causes, provoking backlash or counterproductive behaviour—but the reality is they exist anyway. The general public will eventually clue in to the stakes around ASI and AI safety, and the best we can do is enter the debate early, frame it as constructively as possible, and provide people with tools (petitions, campaigns) that offer an effective outlet for their concerns.
4) This is a tough question. There would likely be a number of metrics—feedback from AI and governance experts, popular support (or lack thereof), and a healthy dose of ongoing critical thought. But if you (or anyone else reading this) have better ideas, we'd love to hear them.
In any case, thanks again for your questions, and we'd love to hear more (that's how we're hoping to grow...).