Most of my stuff (even the stuff of interest to EAs) can be found on LessWrong: https://www.lesswrong.com/users/daniel-kokotajlo
No, alas. However I do have this short summary doc I wrote back in 2021: The Master Argument for <10-year Timelines (Google Docs)
And this sequence of posts making narrower points: AI Timelines (LessWrong)
Perhaps this should be a top-level comment.
The XPT forecasters are so in the dark about compute spending that I just pretend they gave more reasonable numbers. I’m honestly baffled how they could be so bad. The most aggressive of them thinks that in 2025 the most expensive training run will be $70M, and that it’ll take 6+ years to double thereafter, so that in 2032 we’ll have reached $140M training run spending… do these people have any idea how much GPT-4 cost in 2022?!?!? Did they not hear about the investments Microsoft has been making in OpenAI? And remember that’s what the most aggressive among them thought! The conservatives seem to be living in an alternate reality where GPT-3 proved that scaling doesn’t work and an AI winter set in in 2020.
I haven’t considered all of the inputs to Cotra’s model, most notably the 2020 training computation requirements distribution. Without forming a view on that, I can’t really say that ~53% represents my overall view.
Sorry to bang on about this again and again, but it’s important to repeat for the benefit of those who don’t know: the training computation requirements distribution is by far the most cruxy input to the whole thing; it’s the input that matters most to the bottom line and is the most subjective. If you hold fixed everything else Ajeya inputs, but change this distribution to something I think is reasonable, you get something like 2030 as the median (!!!). Meanwhile, if you change the distribution to be even more extreme than Ajeya picked, you can push timelines arbitrarily far into the future.
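To make the cruxiness concrete, here’s a minimal Monte Carlo sketch of the basic shape of the argument. This is not Cotra’s actual model, and every number in it (the compute growth rate, the distribution parameters) is a made-up placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2023, 2101)
# Hypothetical largest-training-run compute (log10 FLOP), growing 0.5 OOM/yr.
log10_flop_available = 25 + 0.5 * (years - 2023)

def median_arrival_year(median_req_oom, sigma=3.0, n=20_000):
    """Median year a (made-up) normal-in-log10 FLOP requirement is first met."""
    needed = rng.normal(median_req_oom, sigma, n)        # log10 FLOP needed
    idx = np.searchsorted(log10_flop_available, needed)  # first year it's met
    year = np.where(idx < len(years),
                    years[np.minimum(idx, len(years) - 1)], 2100)
    return int(np.median(year))

for req in (30, 34, 38):  # shift the median requirement by 4 OOMs each way
    print(f"median requirement 1e{req} FLOP -> median year {median_arrival_year(req)}")
```

Under these toy assumptions, shifting the median requirement by a few orders of magnitude moves the median arrival year by nearly a decade per shift, and fattening the right tail pushes it out arbitrarily far; that is the sense in which this one input dominates the bottom line.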
Investigating this variable seems to have been beyond the scope of what the XPT forecasters did, so this whole exercise is IMO merely that: a nice exercise, practice for the real deal, which is thinking hard about the compute requirements distribution.
Another nice story! I consider this to be more realistic than the previous one about open-source LLMs. In fact I think this sort of ‘soft power takeover’ via persuasion is a lot more probable than most people seem to think. That said, I do think that hacking and R&D acceleration are also going to be important factors, and my main critique of this story is that it doesn’t discuss those elements and implies that they aren’t important.
In addition to building more data centers, MegaAI starts constructing highly automated factories to produce the components those data centers need. These factories are largely or entirely designed by the AI or its subsystems, with minimal human involvement. While a select few humans are still essential to the construction process, they know little about the development or the purpose it serves.
It would be good to insert some paragraphs, I think, about how FriendlyFace isn’t just a single model but rather a series of models being continually improved in various ways, and how FriendlyFace itself is doing an increasing fraction of the work involved in that improvement. By the time new automated factories are being built that humans don’t really understand, presumably basically all of the actual research is being done by FriendlyFace, and presumably it’s far smarter than it was at the beginning of the story.
I think it mostly means that you should be looking for quick wins. When calculating the effectiveness of an intervention, don’t assume things like “over the course of an 85-year lifespan this person will be healthier due to better nutrition now” or “this person will have better education and thus more income 20 years from now.” Instead just ask: how much good does this intervention accomplish in the next 5 years? (Or, if you want to get fancy, use e.g. a 10%/yr discount rate, as in the sketch below.)
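A toy calculation of what a 10%/yr discount rate does (illustrative numbers only; “one unit of good per year” is just a stand-in):

```python
def discounted_value(annual_benefit, years, rate=0.10):
    """Present value of a constant annual benefit under a fixed discount rate."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

print(round(discounted_value(1.0, 5), 2))   # 3.79 -- the next 5 years alone
print(round(discounted_value(1.0, 85), 2))  # 10.0 -- 85 years is only ~2.6x that, not 17x
```

Under that rate, benefits beyond the next decade or two contribute very little, which is why the 85-year-lifespan framing overstates the value of slow-payoff interventions.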
See Neartermists should consider AGI timelines in their spending decisions (EA Forum).
FWIW I think that it’s pretty likely that AGI etc. will happen within 10 years absent strong regulation, and moreover that if it doesn’t, the ‘crying wolf’ effect will be relatively minor, enough that even if I had 20-year medians I wouldn’t worry about it compared to the benefits.
Normally I’d say yes, but my AGI timelines are now 50% in ~4 years, so there isn’t much time for R&D to make a difference. I’d recommend interventions that pay off quickly, therefore. Bed nets, GiveDirectly, etc.
Ouch, I wasn’t aware of those rules, they do seem quite restrictive. If it’s a website rather than an app, how easy would it be to set it up so that you can access it with a single button press? I guess you can have favorites, default sites, etc.
But really, before anyone actually goes and invests significant effort into building this, you should coordinate with me + other people in this comment section.
Minimum acceptable features are the bits prior to “That’s it really.”
Oh dang. How about: When you press the button it doesn’t donate the money right away, but just adds it to an internal tally, and then once a quarter you get a notification saying ‘time to actually process this quarter’s donations, press here to submit your face for scanning, sorry bout the inconvenience’
Oh dang. I definitely want it to be the former, not the latter. Maybe we can get around the iOS platform constraints somehow, e.g. when you press the button it doesn’t donate the money right away, but just adds it to an internal tally, and then once a quarter you get a notification saying ‘time to actually process this quarter’s donations, press here to submit your face for scanning, sorry bout the inconvenience’
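A minimal sketch of that deferred-tally logic, with hypothetical names; a real iOS app would hand the settlement step off to the platform’s actual payment and authentication flow:

```python
from datetime import date

class DonationTally:
    """Button presses accumulate locally; money only moves at settlement."""

    def __init__(self, amount_per_press=1.00):
        self.amount_per_press = amount_per_press
        self.pending = 0.0

    def press(self):
        self.pending += self.amount_per_press  # no payment yet, just a local tally

    def settle(self, today=None):
        """Once a quarter, return the amount to hand to the real payment flow."""
        today = today or date.today()
        if today.month in (1, 4, 7, 10) and self.pending > 0:
            amount, self.pending = self.pending, 0.0
            return amount  # caller triggers the face-scan / payment step here
        return 0.0
```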
Everyone please downvote this comment of mine if they want to support the app idea but don’t want to give me karma as a byproduct of my polling strategy; this cancels out the karma I get from the OP.
Good idea. Everyone please downvote this comment of mine if they want to support the app idea but don’t want to give me karma as a byproduct of my polling strategy.
Hmmm. I really don’t want the karma; I was using it as a signal of how good the idea is. Like, creating this app is only worth someone’s time and money if it becomes a popular app that lots of people use. So if it only gets like 20 karma then it isn’t worth it, and arguably even if it gets 50 karma it isn’t worth it. But if it blows up and hundreds of people like it, that’s a signal that it’s going to be used by lots of people.
Maybe I should have just asked “Comment in the comments if you’d use this app; if at least 30 people do so then I’ll fund this app.” Idk. If y’all think I should do something like that instead I’m happy to do so.
ETA: Edited the OP to remove the vote-brigady aspect.
Simple charitable donation app idea
Yep! I love when old threads get resurrected.
Not sure I’m assuming that. Maybe. The way I’d put it is, selection pressure towards grabby values seems to require lots of diverse agents competing over a lengthy period, with the more successful ones reproducing more / acquiring more influence / etc. Currently we have this with humans competing for influence over AGI development, but it’s overall fairly weak pressure. What sorts of things are you imagining happening that would strengthen the pressure? Can you elaborate on the sort of scenario you have in mind?
Also, if you search LW and Astral Codex Ten for comments I’ve made, you might find some useful ones.