Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (n.b. we already have AGI).
yanni kyriacos
Why I worry about EA leadership, explained through two completely made-up LinkedIn profiles
I hate reading articles like this, but we need to, over and over and over.
Well done.
Apply to be a Safety Engineer at Lockheed Martin!
A periodic reminder that you can just email politicians and then meet them (see screenshot below).
My previous take on writing to politicians got some traction, so I figured I’d post the email I send below.
I am going to make some updates, but this is the latest version:
---

Hi [Politician],
My name is Yanni Kyriacos, and I live in Coogee, just down the road from your electorate.
If you’re up for it, I’d like to meet to discuss the risks posed by AI. In addition to my day job building startups, I do community / movement building in the AI Safety / AI existential risk space. You can learn more about AI Safety ANZ by joining our Facebook group here or the PauseAI movement here. I am also a signatory of Australians for AI Safety—a group that has called for the Australian government to set up an AI Commission (or similar body).
Recently I worked with Australian AI experts (such as those at Good Ancestors Policy) in making a submission to the recent Safe and Responsible AI consultation process. In the submission, we called on the government to acknowledge the potential catastrophic and existential risks from artificial intelligence. More on that can be found here.
There are many immediate risks from already existing AI systems like ChatGPT or Midjourney, such as disinformation or improper implementation in various businesses. In the not-so-distant future, certain safety nets will need to be activated (such as a Universal Basic Income policy) in the event of mass unemployment due to displacement of jobs with robots and AI systems.
But of greatest concern is the speed at which we are marching towards AGI (artificial general intelligence) – systems that will have cognitive abilities at or above human level.
Half of AI researchers believe that there is a 10% or greater chance that the invention of artificial superintelligence will mean the end of humanity. Among AI safety scientists, this chance is estimated to be an average of 30%. And these levels of risk aren’t just a concern for the far-distant future: prediction markets such as Metaculus suggest this kind of AI could be invented within the next term of government.
Notable examples of individuals sounding the alarm are Prof. Geoffrey Hinton and Prof. Yoshua Bengio, both Turing Award winners and pioneers of the deep learning methods that are currently achieving the most success. The existential risk of AI has been acknowledged by hundreds of scientists, the UN, the US and recently the EU.
To make a long story short: we don’t know how to align AI with the complex goals and values that humans have. When a superintelligent system is realised, there is a significant risk it will pursue a misaligned goal without us being able to stop it. And even if such a superhuman AI remains under human control, the person (or government) wielding such a power could use this to drastically, irreversibly change the world. Such an AI could be used to develop new technologies and weapons, manipulate masses of people or topple governments.
The advancements in the AI landscape have progressed much faster than anticipated. In 2020, it was estimated that an AI would pass university entrance exams by 2050. This goal was achieved in March 2023 by the system GPT-4 from OpenAI. These massive, unexpected leaps have prompted many experts to request a pause in AI development through an open letter to major AI companies. The letter has been signed over 33,000 times so far, including many AI researchers and tech figures.
Unfortunately, it seems that companies are not willing to jeopardise their competitive position by voluntarily halting development. A pause would need to be imposed by a government. Luckily, there seems to be broad support for slowing down AI development: a recent poll indicates that 63% of Americans support regulation to prevent AI companies from building superintelligent AI. At the national level, a pause is also challenging, because countries have incentives not to fall behind in AI capabilities. That’s why we need an international solution.
The UK organised an AI Safety Summit on November 1st and 2nd at Bletchley Park. We hoped that during this summit, leaders would work towards sensible solutions that prevent the very worst of the risks that AI poses. As such, I was excited to see that Australia signed the Bletchley Declaration, agreeing that this risk is real and warrants coordinated international action. However, the recent policy statements by Minister Husic don’t seem to align with the urgency that experts are seeing. The last safe moment to act could be very soon.
The Summit has not yet produced an international agreement or policy. We have seen proposals being written by the US Senate, and even AI company CEOs have said there is “overwhelming consensus” that regulation is needed. But no proposal so far has seriously considered ways to slow down or prevent a superintelligent AI from being created. I am afraid that lobbying efforts by AI companies to keep regulation at a minimum are turning out to be highly effective.
It’s essential that the government follows through on its commitment at Bletchley Park to create a national or regional AI safety body. We have such bodies for everything from the risk of plane crashes to the risk of tsunamis. We urgently need one to ensure the safety of AI systems.
Anyway, I’d love to discuss this more in person or via Zoom if you’re in town soon.
Let me know what you think.
Cheers,
Yanni
An Update On The Campaign For AI Safety Dot Org
[Question] Am I taking crazy pills? Why aren’t EAs advocating for a pause on AI capabilities?
I have some questions for the people at 80,000 Hours
[Question] Do people who ask for anonymous feedback via admonymous.co actually get good feedback?
[Question] Who is testing AI Safety public outreach messaging?
Regarding 2 - hammers love nails. EAs, as hammers, love research, so they are biased towards seeing the need for more research (after all, it is what smart people do). Conversely, EAs are less likely (personality-wise) to be comfortable with advocacy and protests (smart people don’t do this). It is the wrong type of nail.
RIP to any posts on anything earnest over the last 48 hours. Maybe in future we don’t tag anything as April Fools and otherwise have a complete blackout on serious posts 😅
Be skeptical of EAs giving advice on things they’ve never actually been successful in themselves
I think if you work in AI Safety (or want to), it is very important to be extremely skeptical of your motivations for working in the space. This applies to being skeptical of interventions within AI Safety as well.
For example, EAs (like most people!) are motivated to do things they’re (1) good at and (2) see as high status (i.e. people very quietly ask themselves ‘would someone I perceive as high status approve of my belief or action?’). Based on this, I am worried that many EAs (1) find protesting AI labs (and advocating for a Pause in general) cringy and/or awkward, and (2) ignore the potential impact of organisations such as PauseAI.
We might all literally die soon because of misaligned AI, so what I’m recommending is that anyone seriously considering AI Safety as a career path spend a lot of time on the question of ‘what is really motivating me here?’
I have written 7 emails to 7 Politicians aiming to meet them to discuss AI Safety, and already have 2 meetings.
Normally, I’d put this kind of post on Twitter, but I’m not on Twitter, so it is here instead. I just want people to know that if they’re worried about AI Safety, believe more government engagement is a good thing, and can hold a decent conversation (i.e. you understand the issue and are a good verbal/written communicator), then this could be an underrated path to high impact.
Another great thing about it is that you can choose how many emails to send and how many meetings to have, so it can be done alongside a “day job”.
[Question] Imagine AGI killed us all in three years. What would have been our biggest mistakes?
[GIF] A feature I’d love on the forum: while posts are read back to you, the part of the text being read is highlighted. This exists on Naturalreaders.com, and I’d love to see it here (great for people with wandering minds like me).
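For anyone curious how this could work under the hood, here’s a rough sketch using the browser’s built-in Web Speech API, which fires a boundary event for each word as it is spoken. This is not the forum’s actual code; the element id "post-body" and the whole approach are just assumptions for illustration.

```typescript
// Illustrative sketch only: highlight each word of a post as it is read aloud,
// using the Web Speech API's word-boundary events.
const postBody = document.getElementById("post-body") as HTMLElement;
const text = postBody.textContent ?? "";

const escapeHtml = (s: string): string =>
  s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");

const utterance = new SpeechSynthesisUtterance(text);

// "boundary" fires at each word; charIndex marks where the spoken word starts.
utterance.onboundary = (event: SpeechSynthesisEvent) => {
  const start = event.charIndex;
  const wordLength =
    event.charLength || Math.max(1, text.slice(start).search(/\s|$/));
  const end = start + wordLength;
  // Wrap the word currently being spoken in <mark> so it is visibly highlighted.
  postBody.innerHTML =
    escapeHtml(text.slice(0, start)) +
    "<mark>" + escapeHtml(text.slice(start, end)) + "</mark>" +
    escapeHtml(text.slice(end));
};

// Remove the highlight once playback finishes.
utterance.onend = () => {
  postBody.textContent = text;
};

speechSynthesis.speak(utterance);
```

The same idea should carry over to whatever text-to-speech service the forum actually uses, as long as it exposes word-level timing.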
I think acting on the margins is still very underrated. For example, I think 5x the amount of advocacy for a Pause on capabilities development of frontier AI models would be great. I also think in 12 months’ time it would be fine for me to reevaluate this take and say something like ‘ok, that’s enough Pause advocacy’.
Basically, you shouldn’t feel ‘locked in’ to any view. And if you’re starting to feel like you’re part of a tribe, then that could be a bad sign you’ve been psychographically locked in.
I met Australia’s Assistant Minister for Defence last Friday. I asked him to write an email to the Minister in charge of AI, asking him to establish an AI Safety Institute. He said he would. He also seemed on board with not having fully autonomous AI weaponry.
All because I sent one email asking for a meeting + had said meeting.
Advocacy might be the lowest hanging fruit in AI Safety.