Crossposting a comment: As co-author of one of the mentioned pieces, I’d say it’s really great to see the AGI xrisk message mainstreaming. It isn’t going nearly fast enough, though. Some (Hawking, Bostrom, Musk) have already spoken out about the topic for close to a decade. So far, that hasn’t been enough to change common understanding. Those, such as myself, who hope that some form of coordination could save us, should give all they have to make this go faster. Additionally, those who think regulation could work should develop robust regulation proposals, which are currently lacking. And those who can should work on international coordination, which is also currently lacking.
A lot of work to be done. But the good news is that the window of opportunity is opening, and a lot of people who currently aren’t working on this could be. This could be a path to victory.
Otto
I hope that this article sends the signal that pausing the development of the largest AI models is good, that informing society about AGI xrisk is good, and that we should find a coordination method (regulation) to make sure we can effectively stop training models that are too capable.
What I think we should do now is:
1) Write good hardware regulation policy proposals that could reliably pause the development towards AGI.
2) Campaign publicly to get the best proposal implemented, first in the US and then internationally.
This could be a path to victory.
I don’t know if everyone should drop everything else right now, but I do agree that raising awareness about AI xrisk should be a major cause area. That’s why I quit my work on the energy transition about two years ago to found the Existential Risk Observatory, and this is what we’ve been doing since (resulting in about ten articles in leading Dutch newspapers, this one in TIME, perhaps the first comms research, a sold-out debate, and a passed parliamentary motion in the Netherlands).
Two significant things are missing from the list of what people can do to help:
1) Please, technical people, work on AI Pause regulation proposals! There is basically one paper now, possibly because everyone else thought a pause was too far outside the Overton window. Now we’re discussing a pause anyway, and I personally think it might be implemented at some point, but we don’t have proper AI Pause regulation proposals, which is a really bad situation. Researchers (both policy and technical), please fix that, fix it publicly, and fix it soon!
2) You can start institutes or projects that aim to inform the societal debate about AI existential risk. We’ve done that, and I would say it has worked pretty well so far. Others could do the same thing. Funders should be able to choose from a range of AI xrisk communication projects to spend their money most effectively. That is currently really not the case.
I agree that this strategy is underexplored. I would prioritize work in this direction as follows:
1) What kind of regulation would be sufficiently robust to slow down, or even pause, all AGI capabilities actors? This should include research/software regulation, hardware regulation, and data regulation. I think a main reason why many people consider this strategy unlikely to work is that they don’t believe any practical regulation would be sufficiently robust. But to my knowledge, that key assumption has never been properly investigated. It’s time we did.
2) How could we practically implement sufficiently robust regulation? What would be required to do so?
3) How can we inform sufficiently large portions of society about AI xrisk to get robust regulation implemented? We are planning to do more research on this topic at the Existential Risk Observatory this year (we already have some first findings).
If you want to spend money quickly on reducing carbon dioxide emissions, you can buy emission rights and destroy them. In schemes such as the EU ETS, destroyed emission rights should lead directly to emission reduction. Technically, this has already been implemented. It is probably even cheaper to buy and destroy rights in similar schemes in other regions.
Great work, thanks a lot for doing this research! As you say, this is still very neglected. Also happy to see you’re citing our previous work on the topic. And interesting finding that fear is such a driver! A few questions:
- Could you share which three articles you’ve used? Perhaps this is in the dissertation, but I didn’t have the time to read that in full.
- Since it’s only one article per emotion (fear, hope, mixed), perhaps some other article property (other than emotion) could also have led to the difference you find?
- What follow-up research would you recommend?
- Is there anything orgs like ours (Existential Risk Observatory), or these days MIRI, which also focuses on comms, should do differently?
As a side note, we’re conducting research right now on how awareness has developed since our first two measurements (7% and 12% in early and mid 2023, respectively). We might also look into the existence and dynamics of a tipping point.
Again, great work, hope you’ll keep working in the field in the future!
Great idea, congrats on the founding and looking forward to working with you!
Enough happened to write a small update about the Existential Risk Observatory.
First, we made progress in our core business: informing the public debate. We have published two more op-eds (in Dutch, one with a co-author from FLI) in a reputable, large newspaper. Our pieces warn against existential risk, especially from AGI, and propose low-hanging-fruit measures the Dutch government could take to reduce risk (e.g. extra AI safety research).
A change with respect to the previous update is that we see serious, leading journalists become interested in the topic. One leading columnist has already written a column about AI existential risk in a leading newspaper. Another journalist is planning to write a major article about it. This same person proposed having a debate about AI xrisk at the leading debate center, which would be well positioned to influence yet others, and he proposed to use his network for the purpose. This is definitely not a fully fledged, informed societal debate yet, but it does update our expectations in relevant ways:
The idea of op-eds translating into broader media attention is realistic.
That attention is generally constructive, and not derogatory.
Most of the informing takes place in a social, personal context.
In our experience, the process is really to inform leaders of the societal debate, who then inform others. We have, for example, organized an existential risk drink where thought leaders, EAs, and journalists could talk to each other, which worked very well. Key figures should hear accurate existential risk information from different sides. Social proof is key. Being honest, sincere, and coherent, and trying to receive as well as send, goes a long way, too.
Another update is that we will receive funding from the SFF and are in serious discussions with two other funds. We are really happy that this shows that our approach, reducing existential risk by informing the public debate, has backing in the existential risk community. We are still resource-constrained, but also massively manpower- and management-constrained. Our vision is a world where everyone is informed about existential risk. We cannot achieve this vision alone, but would like other institutes (new and existing) to join us in the communication effort. That we have received funding for informing the societal debate is evidence that others can, too. We are happy to share what we are doing and how others could do the same, for example in talks for local EA groups or at events.
Our targets for this year remain the same:
Publish at least three articles about existential risk in leading media in the Netherlands.
Publish at least three articles about existential risk in leading media in the US.
Receive funding for stability and future upscaling.
We will start working on next year’s targets in Q4.
Anyway, I posted this here because I think it somewhat resembles the policy of buying and closing coal mines. You’re deliberately creating scarcity. Since there are losers when you do that, policymakers might respond. I think creating scarcity in carbon rights is more efficient and much easier to implement than creating scarcity in coal, but it does suffer from some of the same drawbacks.
Congratulations on a great prioritization!
Perhaps the research that we (Existential Risk Observatory) and others (e.g. @Nik Samoylov, @KoenSchoen) have done on effectively communicating AI xrisk, could be something to build on. Here’s our first paper and three blog posts (the second includes measurement of Eliezer’s TIME article effectiveness—its numbers are actually pretty good!). We’re currently working on a base rate public awareness update and further research.
Best of luck and we’d love to cooperate!
Recordings are now available!
As someone who worked in sustainable energy technology for ten years (wind energy, modeling, smart charging, activism) before moving into AI xrisk, I’d say my favorite neglected topic is carbon emission trading schemes (ETSs).
ETSs such as those implemented by the EU, China, and others have a waterbed effect. The total amount of emissions is capped, and trading sets the price of those emissions for all sectors under the scheme (in the EU: electricity and heavy industry, expanding to other sectors). That means two things, illustrated by the toy sketch below:
Reducing emissions within sectors under an ETS is pointless, climate-wise: the rights freed up are simply bought and used elsewhere.
Reducing the number of emission rights within an ETS should directly lead to lower emissions, without any need to understand the technologies involved.
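To make the waterbed effect concrete, here’s a minimal toy sketch in Python. All numbers are made up, and a real ETS (with allowance banking and market reserves) is far more complex; the point is only the cap logic.

```python
# Toy illustration of the ETS waterbed effect. Numbers are made up;
# real ETS markets (allowance banking, reserves) are far more complex.

CAP = 100  # emission rights issued by the regulator (MtCO2)

def total_emissions(sector_demands, cap):
    """Under a binding cap, total emissions equal the cap: rights freed
    up by one sector are bought and used by another."""
    return min(sum(sector_demands), cap)

# Baseline: electricity and industry together want to emit more than the cap.
print(total_emissions([60, 70], CAP))       # -> 100 (the cap binds)

# Electricity halves its emissions through hard engineering work...
print(total_emissions([30, 70], CAP))       # -> 100 (industry absorbs the freed rights)

# ...whereas buying and destroying 20 rights lowers the total directly.
print(total_emissions([60, 70], CAP - 20))  # -> 80
```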
It’s just crazy to think about all the good-hearted campaigning, awareness creation, hard engineering work, money, etc. that is being directed at decreasing emissions in sectors covered by an ETS. To the best of my understanding, as long as the ETS is working correctly, this effort is completely meaningless. At the same time, I knew of exactly one person in my country, the Netherlands, trying to reduce ETS emission rights. This was the only person potentially achieving something actually useful for the climate.
If I wanted to do something neglected in the climate space, I would try to inform all those people currently wasting their energy that what they should really do is try to reduce the number of ETS emission rights and let the market figure out the rest. (Note that several of the trajectories recommended above, such as working on nuclear power, reducing industry emissions, and deep geothermal energy (depending on use case), all fall under an ETS (at least in the EU), and improvements would therefore not benefit the climate.)
If countries or regions have an ETS, successful emission reduction should really start (and basically stop) there. It’s also quite a neglected area, so there’s plenty of low-hanging fruit!
Hi Vasco, thank you for taking the time to read our paper!
Although we did not specify this in the methodology section, we addressed the “mean variation in likelihood” between countries and surveys throughout the research, such as in section 4.2.2. I hope this answers your question. This aspect should indeed have been better specified in the methodology section.
If you have any more questions, do not hesitate to ask.
Thanks Peter for the compliment! If there is something in particular you’re interested in, please let us know and perhaps we can take it into account in future research projects!
Great idea to look into this!
It sounds a lot like what we have been doing at the Existential Risk Observatory (posts from us, website). We’re more than willing to give you input insofar as that helps, and perhaps also to coordinate. In general, we think this is a really positive action and the space is wide open. So far, we have good results. We also think there is ample space for other institutes to do this.
Let’s coordinate further by email; you can reach us at info@existentialriskobservatory.org. Looking forward to learning from each other!
Hey I wasn’t saying it wasn’t that great :)
I agree that the difficult part is getting to general intelligence, also where data is concerned. Compute, algorithms, and data availability are all needed to get to this point. It seems really hard to know beforehand what kind, and how much, of algorithms and data one would need. I agree that basically only one source of data, text, could well be insufficient. There was a post I read on a forum somewhere (could have been here) from someone who gave GPT-3 questions with instructions like ‘let all odd rows of your answer be empty’. GPT-3 failed at all these kinds of assignments, showing a lack of comprehension. Still, the ‘we haven’t found the asymptote’ argument from OpenAI (intelligence does increase with model size, and that increase doesn’t seem to stop, implying that we’ll hit AGI eventually) is not completely unconvincing either. It bothers me that no one can completely rule out that large language models might hit AGI just by scaling them up. It doesn’t seem likely to me, but from a risk management perspective, that’s not the point. An interesting perspective I’d never heard before, from intelligent people, is that AGI might actually need embodiment to gather the relevant data. (They also think it would need social skills first, which is also an interesting thought.)
While it’s hard to know how much (and what kind of) algorithmic improvement and data is needed, it seems doable to estimate the amount of compute needed: namely, what’s in a brain, plus or minus a few orders of magnitude. It’s hard for me to imagine that evolution can be beaten by more than a few orders of magnitude in algorithmic efficiency (the other way around is somewhat easier to imagine, but still unlikely on a hundred-year timeframe). I think people have focused on compute because it’s the most forecastable part, not because it would be the only part that’s important.
Still, there is a large gap between what I think are essentially thought experiments (relevant ones, though!) leading to concepts such as AGI and the singularity, and actual present-day AI. I’m definitely interested in ideas filling that gap. I think ‘AGI Safety from First Principles’ by Richard Ngo is a good attempt; I guess you’ve read that too, since it’s part of the AGI Safety Fundamentals curriculum? What did you think about it? Do you know any similar or even better papers on the topic?
It could be that belief too, yes! I think I’m a bit exceptional in the sense that I have no problem imagining human beings achieving really complex stuff, but also no problem imagining human beings failing miserably at what appear to be really easy coordination issues. My first thought when I heard about AGI, recursive self-improvement, and human extinction was: ‘ah yeah, that sounds like exactly the kind of thing engineers/scientists would do!’ I guess some people believe engineers/scientists could never make AGI (I disagree), while others think they could, but would not be stupid enough to screw up badly enough to actually cause human extinction (I disagree).
Thanks for the reply, and for trying to attach numbers to your thoughts!
So our main disagreement lies in (1). I think this is a common source of disagreement, so it’s important to look into it further.
Would you say that the chance to ever build AGI is similarly tiny? Or is it just the next hundred years? In other words, is this a discussion about possibility or about timelines?
High impact startup idea: make a decent carbon emissions model for flights.
Current models simply use a flight’s own emissions, which makes direct flights look low-emission. But in reality, some of these flights wouldn’t even exist if people could be spread more efficiently over existing indirect flights, which is also why indirect flights are cheaper. Emission models should be relative to the counterfactual.
The startup can be for-profit. If you’re lucky, better models already exist in the scientific literature. Ideal for the AI-for-good crowd.
My guess is that a few person-years of work could have a big impact on carbon emissions here. A rough sketch of what I mean by counterfactual accounting follows below.
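A minimal sketch in Python, where all function names and numbers are hypothetical, purely to illustrate the difference between per-seat averaging and counterfactual attribution:

```python
# Hypothetical sketch of counterfactual flight-emission accounting.
# All names and numbers are illustrative assumptions, not a real model.

def naive_per_passenger(flight_kg_co2: float, passengers: int) -> float:
    """What current calculators do: average the flight's own emissions."""
    return flight_kg_co2 / passengers

def counterfactual_per_passenger(direct_flight_kg: float, passengers: int,
                                 extra_kg_per_spare_indirect_seat: float) -> float:
    """Extra emissions per passenger caused by flying direct, assuming the
    direct flight only operates because these passengers chose it, and that
    spare seats exist on indirect flights that would fly anyway (adding a
    passenger to such a flight burns only a little extra fuel)."""
    counterfactual_kg = passengers * extra_kg_per_spare_indirect_seat
    return (direct_flight_kg - counterfactual_kg) / passengers

# Per-seat averaging: a 150-passenger direct flight emitting 30 t CO2
# looks reasonably green.
print(naive_per_passenger(30_000, 150))               # -> 200.0 kg
# Counterfactual: a spare seat on flights flying anyway costs ~20 kg, so
# choosing the direct flight causes ~180 kg of extra emissions per person.
print(counterfactual_per_passenger(30_000, 150, 20))  # -> 180.0 kg
```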
Thanks Gabriel! Sorry for the confusion. TE stands for The Economist, so this item: https://www.youtube.com/watch?v=ANn9ibNo9SQ
It’s definitely good to think about whether a pause is a good idea. Together with Joep from PauseAI, I wrote down my thoughts on the topic here.
Since then, I have been thinking a bit more about the pause, comparing it to a more frequently mentioned option: applying model evaluations (evals) to see how dangerous a model is after training.
I think the difference between the supposedly more reasonable approach of evals and the supposedly more radical approach of a pause is actually smaller than it seems. Evals aim to detect dangerous capabilities. What will need to happen when those evals find that a model has indeed developed such capabilities? Then we’ll need to implement a pause. Choosing between evals and a pause is mostly a choice about timing, not between fundamentally different approaches.
With evals, however, we’ll move right up to the brink, look straight into the abyss, and plan to halt at the last possible moment. Unfortunately, though, we’re in thick mist and can’t see the abyss. (This is true even when we apply evals, since we don’t know which capabilities will prove existentially dangerous, and since an existential event may occur even before the evals are run.)
And even if we knew where to halt: we’ll need to make sure that the leading labs practically succeed in pausing themselves (there may be thousands of people working there), that the models aren’t leaked, that we implement the policy that’s needed, that we sign international agreements, and that we gain support from the general public. This is all difficult work that will realistically take time.
Pausing isn’t as simple as pressing a button; it’s a social process. No one knows how long that process of getting everyone on the same page will take, but it could be quite a while. Is it wise to start that process at the last possible moment, namely when the evals turn red? I don’t think so. The sooner we start, the higher our chance of survival.
Also, there’s a separate point that I think is not yet sufficiently addressed: we don’t know how to implement a pause beyond a few years’ duration. If hardware and algorithms improve, frontier models could democratize. While I believe this problem can be solved by international (peaceful) regulation, I also think this will be hard, and we will need good plans (hardware or data regulation proposals) for how to do this in advance. We currently don’t have these, so I think working on them should be a much higher priority.