The thing to watch is whether the media attention translates into action: more than a few hundred people working on the problem as such rather than getting distracted, and governments prioritizing it when it conflicts with competing goals (like racing to the precipice). One might have thought Covid-19 meant that GCBR pandemics would stop being neglected, but that doesn’t seem right. The Biden administration has asked for Congressional approval of a pretty good pandemic prevention bill (very similar to what EAs have suggested), but it has been rejected because it’s still seen as a low priority. And engineered pandemics remain off the radar, with little improvement even after a recent massive pandemic.
AIS has always had outsized media coverage relative to people actually doing something about it, and that may continue.
I actually do go over the talks from the past several EAGs on YouTube every so often and find it works better. Some important additional benefits are turning on speedup and subtitles, being able to skip forward or bail more easily if a talk turns out to be bad, and not being blocked from watching two good talks scheduled at the same time.
In contrast, a lot of people really love in-person meetings compared to online video or phone.
As Tom says, sorry if I wasn’t clear.
I disagree with the idea that short AI timelines are not investable (although I agree interest rates are a bad and lagging indicator vs AI stocks). People foreseeing increased expectations of AI sales as a result of scaling laws, shortish AI timelines, and the eventual magnitude of success have already made a lot of money investing in Nvidia, DeepMind, and OpenAI. Incremental progress increases those expectations, and they can increase even in worlds where AGI winds up killing or expropriating all investors, so long as there is some expectation that enough investors will think ownership continues to matter. In practice I know lots of investors expecting near-term TAI who are betting on it (in AI stocks, not interest rates, because the returns are better). They are also more attracted to cheap 30-year mortgages and similar sources of mild cheap leverage. They put weight on worlds where society is not completely overturned and property rights matter after AGI, as well as during an AGI transition (e.g. a coalition of governments wanting to build AGI is more likely to succeed earlier and more safely with more compute and talent available to it, so it has reason to make credible promises that those who provide such resources will actually be compensated post-AGI; there is also the philanthropic value of being able to donate such resources).
And at the object level, from reading statements from investors and talking to them, investors weighted by trading in AI stocks (and overwhelmingly so for the far larger bond market setting interest rates) largely don’t have short AI timelines (at least not with enough confidence to be willing to invest on them) or expect explosive growth in AI capabilities. There are investors like Cathie Wood who do, with tens or hundreds of billions of dollars of capital, but they are few enough relative to the investment opportunities available that they are not setting, e.g., the prices for the semiconductor industry. I don’t see the point of indirect arguments from interest rates for the possibility that investors or the market as a whole could believe in AGI soon but only in versions where owning the AI chips or AI developers won’t pay off, when at the object level that possibility is known to be false.
If you haven’t read this piece by Ajeya Cotra, “Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover,” I would highly recommend it. Some of the posts on AI alignment here (aimed at a general audience) might also be helpful.
How much should governments pay to prevent catastrophes? Longtermism’s limited role
This tweet seems like vague backtracking on the long timelines.
Well, Musk was the richest, and he notably pulled out, after which the money mostly seems not to have materialized. I haven’t seen a public breakdown of the commitments those sorts of statements were based on.
The kinds of examples people used to motivate frame-problem stories in the days of 20th-century GOFAI are routinely solved by AI systems today.
I was going from this: “The DICE baseline emissions scenario results in 83 million cumulative excess deaths by 2100 in the central estimate. Seventy-four million of these deaths can be averted by pursuing the DICE-EMR optimal emissions path.” I didn’t get into deaths vs DALYs (excess deaths are concentrated among those with less life left to live), chances of scenarios, etc., and gave ‘on the order of’ for slack.
“But I don’t see why we’re talking about scale. Are you defining neglectedness as a ratio of <people potentially killed in worst case>/<dollars spent>?”
Mean, not worst case, and not just death. That’s the shape of the most interesting form to me. You could say that cash transfers to every 1000-person town in a country with a billion people (under a uniform cash transfer program) are a millionfold less impactful and a million times more neglected than cash transfers to the country as a whole, cancelling out, but those semantics aren’t really going to be interesting to me.
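As a minimal sketch of that cancellation (the $1/person program size is a made-up figure, purely for illustration):

```python
# Illustrative only: scale and neglectedness cancel for a uniform cash transfer program.
national_population = 1_000_000_000   # the billion-person country from the example
town_population = 1_000               # one 1000-person town
national_spend = 1_000_000_000        # assumed: $1/person nationally (made-up figure)

# The town's share of spending is a millionth of the national program...
town_spend = national_spend * town_population / national_population   # $1,000

# ...and its scale (people affected) is also a millionth of the national scale,
# so the <people affected>/<dollars spent> ratio is identical.
assert national_population / national_spend == town_population / town_spend
```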
I think it’s fairly clear that there is a vast difference between the work that those concerned with catastrophic AI safety as such have been doing vs random samples of Google staff, and that in relevant fields (e.g. RLHF, LLM red-teaming, or AI forecasting) they are quite noticeable as a share of global activity. You may disagree. I’ll leave the thread at that.
In this 2022 ML survey the median credence on extinction-level catastrophe from AI is 5%, with 48% of respondents giving at least 10%. Some generalist forecaster platforms put the number significantly lower; some forecasting teams or researchers with excellent forecasting records and more knowledge of the area put it higher (with, I think, the tendency being for more information to yield higher forecasts, which is also my own expectation). That scale looks like hundreds of millions of deaths or equivalent this century to me, although certainly many disagree. The argument below goes through with 1%.
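For concreteness, a rough back-of-envelope on how those credences translate into expected deaths; the ~8 billion population figure is my assumption, not from the survey:

```python
# Back-of-envelope expected-death figures implied by the survey credences above.
world_population = 8_000_000_000   # assumed: roughly 8 billion people

for credence in (0.05, 0.01):      # survey median, and the 1% the argument below uses
    expected_deaths = credence * world_population
    print(f"{credence:.0%} credence -> ~{expected_deaths:,.0f} expected deaths")
# 5% -> ~400,000,000 and 1% -> ~80,000,000, i.e. hundreds of millions down to tens of millions
```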
Expected damages from climate change over the century in the IPCC reports and published papers (which assume no drastic technological advance, in tension with forecasts about AI development) come to several percent of world product and on the order of 100M deaths.
Global absolute poverty affects most of a billion people, with larger numbers somewhat above those poverty lines, and life expectancy many years shorter than wealthy country averages, so it gets into the range of hundreds of millions of lives lost equivalent. Over half a million die from malaria alone each year.
So without considering distant future generations or really large populations or the like, the scales look similar to me, with poverty and AI ahead of climate change but not vastly (with a more skeptical take on AI risk, poverty ahead of the other two).
“Conversely, Alphabet alone had operating expenses for 2022 of $203B, and they’re fairly keen not to end the world, so you could view all of that as AI safety expenditure.”
How exactly could that be true? Total FTEs working on AI alignment, especially scalable alignment, are a tiny, tiny fraction. Google DeepMind has a technical safety team with a few handfuls of people; central Alphabet has none as such. Safety teams at OpenAI and Anthropic are on the same order of magnitude. Aggregate expenditure on AI safety is a few hundred million dollars, orders of magnitude lower.
$200B includes a lot of aid aimed at other political goals more than humanitarian impact, with most of a billion people living at less than $700/yr, while the global economy is over $100,000B and cash transfer programs within rich countries run to many trillions of dollars. That’s the neglectedness that bumps up global aid interventions relative to local rich-country help for the local relative poor.
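Here's a rough ratio sketch using the figures above; the ~800M count for people under that poverty line is my reading of "most of a billion," not a sourced number:

```python
# Ratios using the rough figures mentioned above.
global_aid = 200e9            # ~$200B in global aid (much aimed at political goals, not humanitarian impact)
world_economy = 100_000e9     # global economy over $100,000B
extreme_poor = 800e6          # assumed: "most of a billion" people living on less than $700/yr

print(f"Aid as a share of world output: {global_aid / world_economy:.1%}")          # ~0.2%
print(f"Aid per person in extreme poverty: ~${global_aid / extreme_poor:,.0f}/yr")  # ~$250, vs. trillions in rich-country transfers
```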
You can get fairly arbitrarily bad cost-effectiveness in any area by taking money and wasting it on things that generate less value than the money. E.g. spending 99.9% on digging holes and filling them in, and 0.1% on GiveDirectly. But just handing over the money to the poor is a relevant, attainable baseline.
Helping the global poor is neglected, and that accounts for most of bednets’ outperformance. GiveDirectly, just giving cash, is thought by GiveWell/GHW to be something like 100x better on direct welfare than rich-country consumption (although indirect effects reduce that gap), vs 1000x+ for bednets. So most of the log gains come from doing stuff with the global poor at all. Then bednets get a lot of their gains from positive externalities (protecting one person also protects others around them), and you’re left with a little bit of ‘being more confident about bednets than some potential users, based on more investigation of the evidence (as with vaccines)’, and some effects like patience/discounting.
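A small illustration of the "most of the log gains" point, using the round multipliers above:

```python
import math

# Decompose the bednet multiplier (vs. rich-country consumption) on a log scale.
cash_multiplier = 100      # GiveDirectly vs. rich-country consumption (direct welfare, per GiveWell/GHW)
bednet_multiplier = 1000   # bednets vs. rich-country consumption (the "1000x+" figure)

share_from_reaching_global_poor = math.log10(cash_multiplier) / math.log10(bednet_multiplier)
print(f"~{share_from_reaching_global_poor:.0%} of the log gains come from helping the global poor at all")  # ~67%
```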
Really exceptional intervention-within-area picks can get you a multiplier, but it’s hard to get to the level of difference you see on cause selection, and especially so when you compare attempts to pick out the best in different causes.
Here’s an example of a past case where a troll (who also trolled other online communities) made up multiple sock-puppet accounts and assorted lies about sources for various arguments trashing AI safety, e.g. claiming to have been at events they were not at and to have heard bad things, inventing nonexistent experts who supposedly rejected various claims, creating fake testimonials of badness, smearing people who discovered the deception, etc.
But the stocks are the more profitable and capital-efficient investment, so that’s where you see effects on market prices first (if much at all) for a given number of traders buying the investment thesis. That’s the main investment I see short-timelines believers making on this basis (including me), and it has in fact yielded a lot of excess returns since EAs started to identify it in the 2010s.
I don’t think anyone here is arguing against the no-trade theorem, and the point isn’t that prices will never be swayed by anything, but that you can have a sizable amount of money invested on the AGI thesis before it sways prices. Yes, price changes don’t need to be driven by volume if no one wants to trade against them. But plenty of traders not buying AGI would trade against AGI-driven valuations, e.g. against the high P/E ratios that would ensue. Rohin is saying that the majority of investment capital that doesn’t buy AGI won’t sit on the sidelines but will trade against the AGI-driven bet, e.g. by selling assets at elevated P/E ratios. At the moment there is enough money trading against AGI bets that market prices are not in line with AGI-bet valuations. I recognize that means the outside-view EMH heuristic of going with the side trading more dollars favors no AGI, but I think based on the object level that the contrarian view here is right.
It’s just a simple illustration that you can have correct minorities that have not yet been able to grow by profit or imitation to correct prices. And the election mispricings also occurred in uncapped crypto prediction markets (although the hassle of executing very quickly there surely deterred or delayed institutional investors), which is how some made hundreds of thousands or millions of dollars there.
If investors with $1T thought AGI was coming soon, and therefore tried to buy up a portfolio of semiconductor, cloud, and AI companies (a much more profitable and capital-efficient strategy than betting on real interest rates), they could only buy a small fraction of those industries at current prices. There is a larger pool of investors who would sell at prices much higher than current ones, balancing out that minority.
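As a back-of-envelope for why $1T of conviction capital wouldn't set prices here, the combined market-cap figure below is an assumed, made-up order of magnitude, not a sourced number:

```python
# Illustrative only: the combined market cap is an assumption, not a sourced figure.
agi_believer_capital = 1e12     # the hypothetical $1T of investors acting on short timelines
combined_market_cap = 10e12     # assumed: order of $10T across semiconductor, cloud, and AI companies

print(f"Fraction buyable at current prices: {agi_believer_capital / combined_market_cap:.0%}")  # ~10%, a minority position
```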
Yes, it’s weighted by capital and views on asset prices, but a small portion of the relevant capital trying to trade (with risk, and years in advance) on a thesis affecting many trillions of dollars of market cap still isn’t enough to drastically change asset prices against the counter-trades of other investors.
There is almost no discussion of AGI prospects by financial analysts, consultants, etc. (generally, if they mention it at all, they just say they’re not going to consider it). E.g. they don’t report probabilities that it will happen or make any estimates of the profits it would produce.
Rohin is right that AGI by the 2030s is a contrarian view, and that there’s likely less than $1T of investor capital that buys that view and selects investments based on it.

I, like many EAs, made a lot of money betting in prediction markets that Trump wouldn’t overturn the 2020 election. The most informed investors had plenty of incentive to bet, and many did, but in the short term they were swamped by partisan ‘dumb money.’ The sane speculators have proportionally a bit more money to correct future mispricings after that event, but not much more. AI bets have done very well over the last decade, but they’re still not enough for the most informed people to become a large share of the relevant pricing views on these assets.
Same here.
They still have not published. You can email Jan Brauner and Fabienne Sandkuehler for it.
I think there are whole categories of activity that are not being tried by the broader world, but that people focused on the problem attend to, with big impacts in both bio and AI. It has its own diminishing returns curve.