Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://metr.org/hiring
Ben_West🔸
If you manage to convince an investor that timelines are very short without simultaneously convincing them to care a lot about x-risk, I feel like their immediate response will be to rush to invest briefcases full of cash into the AI race, thus helping make timelines shorter and more dangerous.
I'm the corresponding author for a paper that Holly is maybe subtweeting, and I was worried about this before publication, but I don't really feel like those fears were realized.
Firstly, I don't think there are actually very many people who sincerely think that timelines are short but aren't scared by that. I think what you are referring to is people who think "timelines are short" means something like "AI companies will 100x their revenue in the next five years", not "AI companies will be capable of instituting a global totalitarian state in the next five years." There are some people who believe the latter and aren't bothered by it, but in my experience they are pretty rare.
Secondly, when VCs get the "AI companies will 100x their revenue in the next five years" version of short timelines, they seem to want to invest in LLM-wrapper startups, which makes sense because almost all VC firms lack the AUM to invest in the big labs.[1] I think there are plausible ways in which this makes timelines shorter and more dangerous, but it seems notably different from investing in the big labs.[2]
Overall, my experience has mostly been that getting people to take short timelines seriously is very close to synonymous with getting them to care about AI risk.
- ^
Caveat that ~everyone has the AUM to invest in publicly traded stocks. I didn't notice any bounce in share price for e.g. NVDA when we published, and I would be kind of surprised if there were a meaningful effect, but it's hard to say.
- ^
Of course, there's probably some selection bias in terms of who reaches out to me. Masayoshi Son probably feels like he has better info than what I could publish, but by that same token my publishing stuff doesn't cause much harm.
Do you think that distancing is ever not in the interest of both parties? If so, what is special about Anthropic/EA?
(I think it's plausible that the answer is that distancing is always good; the downside risks of tying your reputation to someone always exceed the upside. But I'm not sure.)
Thanks for doing this Saulius! I have been wondering about modeling the cost-effectiveness of animal welfare advocacy under assumptions of relatively short AI timelines. It seems like one possible way of doing this is to change the "Yearly decrease in probability that commitment is relevant" numbers in your sheet (cells I28:30). Do you have any thoughts on that approach?
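For concreteness, here is a toy version of the adjustment I have in mind. This is my own sketch, not your actual sheet; the function name and the numbers are made up for illustration:

```python
# Toy illustration (not Saulius's actual spreadsheet): discount future years of a
# welfare commitment by a yearly probability that the commitment stops being
# relevant (e.g. because of transformative AI). All numbers are placeholders.

def expected_relevant_years(horizon_years: int, yearly_irrelevance_prob: float) -> float:
    """Expected number of years the commitment still matters over the horizon,
    assuming an independent chance each year that it becomes irrelevant."""
    p_relevant = 1.0
    total = 0.0
    for _ in range(horizon_years):
        p_relevant *= 1.0 - yearly_irrelevance_prob
        total += p_relevant
    return total

# Comparing a gentler discount with a "short timelines" style discount:
print(expected_relevant_years(30, 0.02))  # ~22.3 expected relevant years
print(expected_relevant_years(30, 0.15))  # ~5.6 expected relevant years
```

If I understand the sheet correctly, increasing the yearly numbers in those cells would be roughly equivalent to moving from the first case to the second.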
You had never thought through "whether artificial intelligence could be increasing faster than Moore's law." Should we conclude that AI risk skeptics are "insular, intolerant of disagreement or intellectual or social non-conformity (relative to the group's norms), and closed-off to even reasonable, relatively gentle criticism"?
I have to say, the bad part supports my observation!
Steven was responding to this:
The community of people most focused on keeping up the drumbeat of near-term AGI predictions seems insular, intolerant of disagreement or intellectual or social non-conformity (relative to the group's norms), and closed-off to even reasonable, relatively gentle criticism
None of Steven's bullet points support this. Many of them say the exact opposite of this.
More seriously, I didn't really think through precisely whether artificial intelligence could be increasing faster than Moore's law.
Fair enough, but in that case I feel kind of confused about what your statement "Progress does not seem like a fast exponential trend, faster than Moore's law" was intended to imply.
If the claim you are making is "AGI by 2030 will require some growth faster than Moore's law", then the good news is that almost everyone agrees with you, but the bad news is that everyone already agrees with you, so this point is not really cruxy to anyone.
Maybe you have an additional claim like "...and growth faster than Moore's law is unlikely"? If so, I would encourage you to write that, because I think that is the kind of thing that would engage with people's cruxes!
If you drew a chart for the GPT models on ARC-AGI-2, it would mostly just be a flat line. It's only with the o3-low and o1-pro models that we see scores above 0%
… which is what (super)-exponential growth looks like, yes?
Specifically: We've gone from o1 (low) getting 0.8% to o3 (low) getting 4% in ~1 year, which is ~2 doublings per year (i.e. 4x Moore's law). Forecasting from so few data points sure seems like a cursed endeavor to me, but if you want to do it, I don't see how you can rule out Moore's-law-or-faster growth.
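For reference, the rough arithmetic behind those numbers, using just the two scores quoted above (exact figures will vary depending on which runs you count):

$$\frac{\log_2(4\% / 0.8\%)}{1\ \text{yr}} \approx 2.3\ \text{doublings/yr}, \quad \text{vs. Moore's law at} \approx 0.5\ \text{doublings/yr},$$

i.e. roughly 4-5x the Moore's-law rate.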
I would be curious to know what the best benchmarks are which show a sub-Moore's-law trend.
Progress does not seem like a fast exponential trend, faster than Moore's law and laying the groundwork for an intelligence explosion
Moore's law is ~1 doubling every 2 years. Barnes' law is ~4 doublings every 2 years.
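To make the gap concrete, here is what those two rates imply over an illustrative six-year window (the window length is arbitrary; this is just compounding the stated doubling rates):

$$\underbrace{2^{6/2} = 8\times}_{\text{Moore's law: }1\text{ doubling}/2\text{ yr}} \qquad \text{vs.} \qquad \underbrace{2^{12} = 4096\times}_{\text{Barnes' law: }4\text{ doublings}/2\text{ yr}}$$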
This post is focused on what the government can do, but I'm curious if you have thoughts about what the private sector can do to meet the government where it is.
I imagine that Palantir is making a killing off of adapting generative AI to work for government requirements, but I assume there are still gaps in the marketplace? Do you have a sense of what these gaps are? Is there some large segment of the government which would use generative AI if only it were compliant with standard X?
the other hand though some leadership jobs might not be the right job fit if they're not up for that kind of critique
Yeah, this used to be my take, but a few iterations of trying to hire for jobs which exclude shy awkward nerds from consideration, when the EA candidate pool consists almost entirely of shy awkward nerds, have made the cost of this approach quite salient to me.
There are trade-offs to everything 🤷‍♂️
Only the most elite 0.1 percent of people can even have a meaningful "public private disconnect" as you have to have quite a prominent public profile for that to even be an issue.
Hmm yeah, that's kinda my point? Like, complaining about your annoying coworker anonymously online is fine, but making a public blog post like "my coworker Jane Doe sucks for these reasons" would be weird; people get fired for stuff like that. And referencing their wedding website would be even more extreme.
(Of course, most people's coworkers aren't trying to reshape the lightcone without public consent, so idk, maybe different standards should apply here. I can tell you that a non-trivial number of people I've wanted to hire for leadership positions in EA have declined for reasons like "I don't want people critiquing my personal life on the EA Forum", though.)
fwiw, I think in any circle I've been a part of, critiquing someone publicly based on their wedding website would be considered weird/a low blow. (Including corporate circles.) [1]
- ^
I think there is a level of influence at which everything becomes fair game, e.g. Donald Trump can't really expect a public/private communication disconnect. I don't think that's true of Daniela, although I concede that her influence over the light cone might not actually be that much lower than Trump's.
Sad to see such a cult-like homogeneity of views. I blame Eliezer.
My guess is that the people quoted in this article would be sad if e.g. 80k started telling people not to work at Anthropic. But maybe I'm wrong; it would be good to know if so!
(And also yes, "people having unreasonably high expectations for epistemics in published work" is definitely a cost of dealing with EAs!)
Great points. I don't want to imply that they contribute nothing back; I will think about how to reword my comment.
I do think 1) community goods are undersupplied relative to some optimum, 2) this is in part because people aren't aware of how useful those goods are to orgs like Anthropic, and 3) that in turn is partially downstream of messaging like what OP is critiquing.
I'm sympathetic to wanting to keep your identity small, particularly if you think the person asking about your identity is a journalist writing a hit piece. But if everyone takes funding, staff, etc. from the EA commons and doesn't share that they got value from that commons, the commons will predictably be under-supported in the future.
I hope Anthropic leadership can find a way to share what they do and don't get out of EA (e.g. in comments here).
Thanks for all your work, Joey! If it is the case that your counterfactual impact is lower now, it is coming down from a high place, because I have been impressed with AIM for a while, and my impression is that you were pivotal in founding and running it.
This is cool! I like BHAGs in general and this one in particular. Do you have a target for when you want to get to 1M pledgers?