I want to make my predictions about the short-term future of AI, partially sparked by this entertaining video about the nonsensical AI claims made by the Zoom CEO. I am not an expert on any of the following, of course; I’m mostly writing for fun and for future vindication.
The AI space seems to be drowning in unjustified hype, with very few LLM projects having a path to consistent profitability, and applications that are severely limited by the problem of hallucinations and the general fact that LLMs are poor at general reasoning (compared to humans). It seems like LLM progress is slowing down as labs run out of public data and resource demands become too high. I predict GPT-5, if it is released, will be impressive to people in the AI space, but it will still hallucinate, will still be limited in generalisation ability, will not be AGI, and the average Joe will not much notice the difference. Generative AI will be big business and will play a role in society and people’s lives, but over the next decade it will be much less transformative than the introduction of the internet or social media.
I expect that sometime in the next decade it will be widely agreed that AI progress has stalled, that most of the current wave of AI bandwagon-jumpers will be quietly ignored or shelved, and that the current wave of LLM hype might look like a financial bubble that burst (à la the dotcom bubble, but not as big).
Both AI doomers and accelerationists will come out looking silly, but both will argue that we are only an algorithmic improvement away from godlike AGI. Both movements will still be obscure Silicon Valley things that the average Joe only vaguely knows about.
I’m hearing this claim everywhere. I’m curious to know why you think so, given that OpenAI hasn’t released GPT-5.
Sam has said multiple times that GPT-5 is going to be much better than GPT-4. It could just be hype, but that would hurt his reputation as soon as GPT-5 is released.
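In any case, we’ll probably know soon.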
I think you should update approximately not at all from Sam Altman saying GPT-5 is going to be much better. Every CEO says every new version of their product is much better—building hype is central to their job.
That’s true for many CEOs (like Elon Musk), but Sam Altman did not over-hype any of the big OpenAI launches (ChatGPT, GPT-3.5, GPT-4, GPT-4o, DALL-E, etc.).
It’s possible that he’s doing it for the first time now, but I think it’s unlikely.
But let’s ignore Sam’s claims. Why do you think LLM progress is slowing down?
Would you be interested in making quantitative predictions on the revenue of OpenAI/Anthropic in upcoming years, and/or when various benchmarks like these will be saturated (and OSWorld, released since that series was created), and/or when various Preparedness/ASL levels will be triggered?
Both AI doomers and accelerationists will come out looking silly, but will both argue that we are only an algorithmic improvement away from godlike AGI.
A common view is a median timeline to AGI of around 2035-2050, with substantial (e.g. 25%) probability mass in the next 6 years or so.
This view is consistent with both thinking:
LLM progress is likely (>50%) to stall out.
LLMs are plausibly going to quickly scale into very powerful AI.
(This is pretty similar to my view.)
I don’t think many people think “we are only an algorithmic improvement away from godlike AGI”. In fact, I can’t think of anyone who thinks this. Some people think that one substantial algorithmic advance plus continued scaling and general algorithmic improvement could get us there, but the continuation of those other improvements is key.
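I think you’re probably wrong, but I hope you’re right.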
Upvoted for making your prediction; disagree-voted because I think it’s wrong.
Even if we expect AI progress to be “super fast”, it won’t always be “super fast”. Sometimes it’ll be “extra, super fast” and sometimes it’ll merely be “very fast”.
I think that some people are over-updating on AI progress now only being “very fast”, thinking that this can only happen within a model where AI is about to cap out, whilst I don’t think this is the case at all.
Why I disagree that this video is insightful/entertaining: the YouTuber quite clearly has very little knowledge of the subject they are discussing. It’s actually quite reasonable for the Zoom CEO to simply say that fixing hallucinations will “occur down the stack”, given that Zoom is not the one developing the AI models, and would instead be building the infrastructure and environments that the AI systems operate within.
From what I watched of the video, she also completely misses the real reason the CEO’s claims are ridiculous: if you have an AI system with a level of capability that allows it to replicate a person’s actions in the workplace, then why would we go to the extra effort of having Zoom calls between these AI clones?
I.e., it would be much more efficient to build information systems that align with the strengths and comparative advantages of the AI systems. Presumably this would not involve having “realistic clones of real human workers” talking to each other, but rather a network of AI systems that communicate using protocols and data formats designed to be as robust and efficient as possible.
FWIW, if I were the CEO of Zoom, I’d be pushing hard on the “human-in-the-loop” idea, e.g. building in features that allow you to send out AI agents to fetch information and complete tasks in real time as you’re having meetings with your colleagues. That would actually be a useful product that helps keep Zoom interesting and relevant.
With regards to AI progress stalling, I think it depends on what you mean by “stalling”, but I think this is basically impossible if you mean “literally will not meaningfully improve in a way that is economically useful”.
When I first learned how modern AI systems worked, I was astonished at how absurdly simple and inefficient they are. In the last ~2 years there has been a move towards things like MoE architectures & RNN hybrids, but this is really only scratching the surface of what is possible with more complex architectures. We should expect a steady stream of algorithmic improvements that will push down inference costs and make more real-world applications viable. There’s also Moore’s Law, but everyone already talks about that quite a lot.
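To make that concrete, here is a minimal, hypothetical sketch of top-k mixture-of-experts routing (the sizes and names are made up for illustration, not any lab’s actual design); the point is just that only a couple of experts run per token, which is where the inference-cost savings come from.

```python
# Toy top-k mixture-of-experts layer: each token is routed to k of n_experts
# feed-forward blocks, so most expert parameters sit idle on any given token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # the router
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):                             # x: (tokens, d_model)
        scores = self.gate(x)                         # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)    # each token picks its k best experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e              # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

x = torch.randn(5, 64)
print(TinyMoE()(x).shape)  # torch.Size([5, 64]); only k of the 8 experts ran per token
```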
Also, if you buy the idea that “AI systems will learn tasks that they’re explicitly trained for”, then incremental progress is almost guaranteed. I think it’s hilarious that everyone in industry and Government is very excited about general-purpose AI and its capacity for automation, but there is basically no large-scale effort to create high-quality training data to expedite this process.
The fact that pre-training + chatbot RLHF is adequate to build a system with any economic value is dumb luck. I would predict that if we actually dedicated a not-insignificant chunk of society’s efforts towards training DL systems to perform important tasks, we would make quite a lot of progress very quickly. Perhaps a central actor like the CCP will do this at some stage, but until then we should expect incremental progress as small-scale efforts gradually build up datasets and training environments.
I think you’re mostly right, especially about LLMs and current hype (though I do think a couple of innovations beyond current technology could get us AGI), but I want to point out that AI progress has not been entirely fruitless. The most salient example in my mind is AlphaFold, which is actually used for research, drug discovery, etc.
I want to say just “trust the market”, but unfortunately, if OpenAI has a high but not astronomical valuation, then even if the market is right, that could mean “almost certainly will be quite useful and profitable, chance of near-term AGI almost zero”, or it could mean “probably won’t be very useful or profitable at all, but a 1-in-1000 chance of near-term AGI supports a high valuation nonetheless”, or many things in between those two poles. So I guess we are sort of stuck with our own judgment?
For publicly traded US companies there are ways to figure out the variance of their future value, not just the mean, mostly by looking at option prices. Unfortunately, OpenAI isn’t publicly traded and (afaik) has no liquid options market, but maybe other players (Nvidia? Microsoft?) can be more helpful there.
If you know how to do this, maybe it’d be useful to do it. (Maybe not though; I’ve never actually seen anyone defend “the market assigns a non-negligible probability to an intelligence explosion”.)
It’s not really my specific area, but I had a quick look. (Frankly, this is mostly me just thinking out loud to see if I can come up with anything useful, and I don’t promise that I succeed.)
Yahoo Finance has Nvidia option prices with expirations in Dec 2026. We’re mostly interested in upside potential rather than downside, so we look at call options, for which we see data up to strike prices of 280.[fn 1]
In principle I think the next step is to do something like invert Black-Scholes (perhaps (?) adjusting for the difference between European- and American-style options, assuming that these options are the latter), but that sounds hard, so let’s see if I can figure out something simpler from first principles:
The 280 strike Dec 2026 call option is the right to buy Nvidia stock, on Dec 18th 2026, for a price of $280. Nvidia’s current price is ~$124, so these options only have value if the stock more than doubles by then. They’re currently trading at $14.50, while the 275 call trades at $15.
The value of a particular option is the integral of the option’s payoff profile multiplied by the stock price’s probability density. If we want something like “probability the stock is at least X on date Y”, the ideal option payoff profile would be an indicator function with a step at X, but we can’t exactly get that. Instead, by buying a call struck at A and selling a call struck at B, we get a zero function up to A, then a linear increase from A to B, then a constant function from B. Picking A and B close together seems like the best approximation. It means looking at prices for very low-volume options, but looking at the nearby prices including for higher-volume options, they look superficially in line, so I’ll go with it.
More intuitively, if the stock was definitely going to be above both A and B, then the strike-A option would be B − A more valuable than the strike-B option (that is, the right to buy a stock worth $10 for a price of $1 is worth exactly $3 more than the right to do so for $4). If the stock was definitely going to be below both A and B, then both options would be worthless.
So the values of the two options differ by (B − A) × P(the price is above B), plus some awkward term for when the price is between A and B, which you can hopefully make ignorable by making that interval small.
From this I hesitantly conclude that the options markets suggest that P(NVDA >= 280-ish) = 10%-ish?
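For concreteness, here is that arithmetic as a tiny Python sketch (the prices are the ones quoted above; it ignores discounting, early exercise, and the awkward between-A-and-B term):

```python
# Approximate P(stock >= B at expiry) from a tight call spread:
#   P ~= (C_A - C_B) / (B - A)
# where C_A, C_B are the call prices at strikes A < B. This ignores the small
# term for prices that land between A and B, plus discounting and
# American-exercise effects.
def prob_above(strike_a, strike_b, call_a, call_b):
    return (call_a - call_b) / (strike_b - strike_a)

# Quoted Dec 2026 NVDA calls from above: 275 strike at $15.00, 280 strike at $14.50.
p = prob_above(275, 280, 15.00, 14.50)
print(f"P(NVDA >= ~280 by Dec 2026) ~= {p:.0%}")  # ~10%
```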
[fn 1]: It looks like there are more strike prices than that, but all the ones after 280 I think aren’t applicable: you can see a huge discontinuity in the prices from 280 to 290, and all the “last trade date” fields for the higher options are from before mid-June, so I think these options don’t exist anymore and come from before the 10-to-1 stock split.
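Appreciate the concreteness in the predictions!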
Which examples do you think of when you say this? (Not necessarily disagreeing, I’m just interested in the different interpretations of ‘LLMs are poor at general reasoning’.)
I also think that LLM reasoning can be significantly boosted with scaffolding, i.e. most hard reasoning problems can be split up into a handful of easier reasoning problems; this can be done recursively until your LLM can solve a subproblem, then you build back up the full solution. So whilst scale might not get us to a level of general reasoning that qualifies as AGI, perhaps GPT-5 (or 6) plus scaffolding can.
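To illustrate what I mean, here is a rough, hypothetical sketch of that kind of recursive scaffold; `ask_llm` is a stand-in for whatever chat-completion call you would actually use, and the prompts and stopping rule are purely illustrative, not a real API.

```python
# Hypothetical recursive-decomposition scaffold. `ask_llm` is a placeholder,
# not a real library call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to whichever LLM API you use")

def solve(problem: str, depth: int = 0, max_depth: int = 3) -> str:
    # First, see whether the model can solve the problem directly.
    attempt = ask_llm(f"Solve this if you can; otherwise reply UNSURE:\n{problem}")
    if "UNSURE" not in attempt or depth >= max_depth:
        return attempt

    # Otherwise split it into easier subproblems and recurse on each.
    subproblems = ask_llm(
        f"Break this problem into 2-4 easier subproblems, one per line:\n{problem}"
    ).splitlines()
    sub_solutions = [solve(s, depth + 1, max_depth) for s in subproblems if s.strip()]

    # Finally, build the full solution back up from the sub-solutions.
    return ask_llm(
        "Combine these sub-solutions into a solution to the original problem:\n"
        f"Problem: {problem}\nSub-solutions:\n" + "\n".join(sub_solutions)
    )
```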
FWIW, even if AGI arrives ~ 2050 I still think it* would be the thing I’d want to work on right now. I would need to be really confident it wasn’t arriving before then for me not to want to work on it.
I took a break from engaging with EA topics for like a month or two, and I think it noticeably improved my mental health and productivity, as debating here frequently was actually stressing me out a lot. Which is weird, because the stakes for me posting here are incredibly low: I’m pseudonymous, have no career or personal attachments to EA groups, and I’m highly skeptical that EA efforts will have any noticeable effect on the future of humanity. I can’t imagine how stressful these discussions are for people with the opposite positions!
I still have plenty of ideas I want to write up, so I’m not going anywhere, but I’ll try to be more considered in where I put my effort.
You are a good and smart commenter, but that is probably generally a sign that you could be doing something more valuable with your time than posting on here. In your case though, that might not actually be true, since you also represent a dissenting perspective that makes things a bit less of an echo chamber on topics like AI safety, and it’s possible that does have some marginal influence on what orgs and individuals actually do.
As a pseudonymous poster with a non-EA job who created his account a few months after yours, I’ve needed to update on the value of prolonged debate on at least meta topics. There was a lot going on in late 2022 / early 2023 where I think having outside voices deeply engaging with debates in the comments was of significant value. I think that is considerably less true for, e.g., the whole Nonlinear controversy in late 2023 / early 2024.
I actually don’t really debate on the forums for this very reason. I too am EA-adjacent (yes I’m aware that’s a bit of a meme!) and do not work in the EA sphere. I share insights and give feedback, but generally if people disagree I’m happy to leave it at that. I have a very stressful (non-EA) job and so rarely have the bandwidth for something that has no real upside like forum debate. I may make exceptions if someone seems super receptive, but I totally understand why you feel how you do.
Edit: The bug I mentioned below has since been fixed. The default values still do not seem to match with the figures of RP’s report here, and I believe there is also an error in said report that underestimates the impact by ~a factor of 2. See the extended discussion on this post for details.
I would advise being careful with RP’s Cross-cause effectiveness tool as it currently stands, especially with regards to the chicken campaign. There appears to be a very clear conversion error which I’ve detailed in the edit to my comment here. I was also unable to replicate their default values from their source data, but I may be missing something.
I think comments like these are valuable when they are made after the relevant parties have all had enough time to respond, the discussion is largely settled, and readers are in a position to make up their minds about the nature, magnitude and importance of the problems reported, by having access to all the information that is likely to emerge from the exchange in question. Instead, your comment cautions people to be careful in using a tool based on some issues you found and reported less than two days ago, when the discussion appears to be ongoing and some of the people involved have not even expressed an opinion, perhaps because they haven’t yet seen the thread or had enough time to digest your criticisms. Maybe these criticisms are correct and we should indeed exercise the degree of caution you advise when using the tool, but it seems not unlikely that we’ll be in a better epistemic position to know this, say, a week or so from now, so why not just wait for all the potential evidence to become available?
In the linked thread, the website owners have confirmed that there is indeed an error in the website. If you try to make calculations using their site as currently made you will be off by a factor of a thousand. They have confirmed this and have stated that this will be fixed soon. When it is fixed I will edit the shortform.
Would you prefer that for the next couple of days, during the heavily publicised AW vs GHD debate week, in which this tool has been cited multiple times, people continue to use it as is despite it being bugged and giving massively wrong results? Why are you not more concerned about flawed calculations being spread than about me pointing out that flawed calculations are being spread?
In your original shortform, you listed three separate criticisms, but your reply now focuses on just one of those criticisms, in a way that makes it look that my concerns would be invalidated if one granted the validity of that specific criticism. This is the sort of subtle goalpost moving that makes it difficult to have a productive discussion.
Why are you not more concerned about flawed calculations being spread than about me pointing out that flawed calculations are being spread?
Because there is an asymmetry in the costs of waiting. Waiting a week or so to better understand the alleged problems of a tool that will likely be used for years is a very minor cost, compared to the expected improvement in that understanding that will occur over that period.
(ETA: I didn’t downvote any of your comments, in accordance with my policy of never downvoting comments I reply to, even if I believe I would normally have downvoted them. I mention this only because your most recent comment was downvoted just as I posted this one.)
I list exactly 2 criticisms. One of them was proven correct, the other I believe to be correct also but am waiting on a response.
I agree there is an asymmetry in the cost of waiting, but it cuts the other way. If these errors are corrected a week from now, after the debate week has wrapped up, then everybody will have stopped paying attention to the debate, and it will become much harder to correct any BS arising from the faulty tool.
Do you truly not care that people are accidentally spreading misinformation here?
Do you truly not care that people are accidentally spreading misinformation here?
Why do you attribute to me a view I never stated and do not hold? If I say that one cost is greater than another, it doesn’t mean that I do not care about the lesser cost.
I’d probably agree with this if the tool were not relevant for Debate Week and/or RP hadn’t highlighted this tool in a recent post for Debate Week. So there’s a greater risk of any errors cascading into the broader discussion in a way that wouldn’t be practically fixable by a later notice that the tool was broken.
I’m a little disheartened at all the downvotes on my last post. I believe an EA public figure used scientifically incorrect language in his public arguments for x-risk, and I put quite a bit of work into explaining why in a good-faith and scientifically sourced manner. I’m particularly annoyed that a commenter (with relevant expertise!) was at one point heavily downvoted just for agreeing with me (I saw him at −16 at one point). Fact-checking should take precedence over fandoms.
But… your post was quite inaccurate and strawmanned people extensively?
Eliezer and other commenters compellingly demonstrated this in the comments. I don’t think you should get super downvoted, but your post includes a lot of straightforwardly false sentences, so I think the reaction makes sense.
But… your post was quite inaccurate and strawmanned people extensively?
I am legitimately offended by this accusation. I had an organic chemist fact-check the entire thing, and I have included his fact-check into the actual post. Yudkowsky admitted that at least one of his claims was wrong. I explained my exact reasoning for the other problems, and he did not debunk those.
If you can point to any further factual errors in my post after the recent edits, I’m happy to edit those as well.
Yudkowsky’s response persuaded me that he didn’t intend to say factually incorrect things. (It is also unsourced and full of factual errors, but I don’t know if it’s worth doing a fact check of a mere comment). But even if he was just badly putting forth an analogy, he is still saying scientifically incorrect things.
I think this sets a pretty terrible precedent about the response to public figures making errors.
Sure. Here are some quotes from the original version of your post:
As a result, I am now certain that the statement “proteins are held together by van der Waals forces rather than covalent bonds” is false. It’s false even if you put hydrogen bonding in the “Van der Waals forces” category, which would be misleading in this context. Nobody who knew about the actual structure of proteins would use the phrase “covalently bonded alternatives to biology”. The entire field of organic chemistry arises from carbon’s thirst for covalent bonds.
This paragraph clearly shows you misunderstood Eliezer. Different proteins are held together almost exclusively by non-covalent forces.
Nobody who knew about the actual structure of proteins would use the phrase “covalently bonded alternatives to biology”.
This is also evidently false, since like dozens of people I know have engaged with Drexler’s and Eliezer’s thoughts on this space, many of whom have a pretty deep understanding of chemistry, and would use a similar (or the same) phrase. You seem to be invoking some expert consensus that doesn’t exist. Indeed, multiple people with PhD-level chemistry backgrounds have left comments saying they understood Eliezer’s point here.
Anybody with a chemistry or biology background who hears someone confidently utter the phrase “covalently bonded equivalents to biology” will immediately have their bullshit alarm triggered, and will probably dismiss everything else you say as well. This also goes for anyone with enough skeptical instinct to google claims to ensure they have the bare minimum of scientific backing.
This is also false. The point makes sense, many people with chemistry or biology background get it, as shown above.
Look, I appreciate the post about the errors in the quantum physics sequence, but you are again vastly overstating the expert consensus here. I have talked with literally 10+ physics PhDs about the quantum physics sequence. Of course there are selection effects, but most people liked it and thought it was great. Yes, it was actually important to add a renormalization term, as you said in your critique, but really none of the points brought up in the sequences depended on it at all.
Like look, when people read your post without actually reading Eliezer’s reply, they get the very strong sense that you are claiming that Eliezer is making an error at the level of high-school biology. That somehow he got so confused about chemistry that he didn’t understand that a single protein of course is made out of covalent bonds.
But this is really evidently false and kind of absurd. As you can see in a lot of Eliezer’s writing, and also his comment level response, Eliezer did not at any point get confused that proteins are made out of covalent bonds. Indeed, to me and Eliezer it seemed so obvious that proteins internally are made out of covalent bonds that I did not consider the possibility that people could somehow interpret this as a claim about the atoms in proteins being held together by Van der Waals forces (how do you even understand what Van der Waals forces are, but don’t understand that proteins are internally covalently bonded?). But that misinterpretation seems really what your post was about.
Now, let me look at the most recent version of your post:
Well, it still includes:
As a result, I am now certain that the statement “proteins are held together by van der Waals forces rather than covalent bonds” is false. It’s false even if you put hydrogen bonding in the “Van der Waals forces” category, which would be misleading in this context. Nobody who knew about the actual structure of proteins would use the phrase “covalently bonded alternatives to biology”. The entire field of organic chemistry arises from carbon’s thirst for covalent bonds.
This still seems wrong, though like, you did add some clarifications around it that make it more reasonable.
You did add a whole new section which is quite dense in wrong claims:
> human technology figuring out how to use covalent bonds and metallic bonds, where biology sticks to ionic bonds and proteins held together by van der Waals forces
This is just straight-up, explicitly false. Biology does not “stick to ionic bonds and proteins”. As I pointed out, biology is made up of covalent bonds at its very core, and uses them all the time.
Look, this is doubling down on a misinterpretation which at this point you really should have avoided. We are talking about what you call the tertiary structure here. At the level of tertiary structures, and the bonds between proteins, biology does almost solely stick to ionic bonds and proteins held together by Van der Waals forces.
It is the case that sometimes the tertiary structures also use covalent bonds, like in the case of lignin, and I think that’s a valid point. It’s however not one you had made in your post at all, and just one that Eliezer acknowledges independently. The most recent version of your post now does have a fraction of a sentence, in a quote by your chemistry fact-checker, saying that sometimes tertiary protein structures do use covalent bonds, and I think that’s an actual real point that responds to what Eliezer is saying. A post I wouldn’t downvote would be one that had that as its main focus, since I think there is a valid critique to be made that biology is in some circumstances capable of using covalent bonds for tertiary structures (as Eliezer acknowledges), but it’s not the one you made.
The phrase “covalently bonded equivalents to biology” implicitly states that biology is not covalently bonded. This is false.
Look man, I think you really know by now what Eliezer means by this. Eliezer is talking about alternatives to biology where most of the tertiary structure leverages covalent bonds.
The context of this claim is that Yudkowsky is trying to come up with a new name for deadly Drexler-style nanomachines. He has chosen “covalently bonded bacteria”, implying that “covalently bonded bacteria” and normal bacteria are different things. Except that’s not true, because bacteria is completely full of covalent bonds.
This is also doubling down on the same misunderstanding. The machinery and tertiary structure of bacteria do not use covalent bonds very much. This is quite different from most current nanomachine designs, which the relevant books hypothesize would be substantially more robust than present biological machinery due to leveraging mostly covalent bonds.
Okay, I just saw this one, but ribosomes are not “general” assemblers, and they cannot replicate “most other products of biology”. They do literally one thing, and that is read instructions and link together amino acids to form proteins.
I don’t understand what you are talking about here. Basically everything in biology is either made out of proteins or manufactured by proteins. If you can make proteins, you can basically make anything. Proteins are the way most things get done in a cell. The sentence above reads as confused as saying “a CPU is not a general purpose calculator. It does exactly one thing, and that is to read instructions and return the results”. Yes, ribosomes read instructions and link together amino acids to form proteins, and that is how biological systems generally assemble things.
For the next two, let’s establish the principle that if you say “X is held together by Y instead of Z”, you are implicitly making the statement that “X is not held together by Z”, or perhaps that “Z is irrelevant compared to Y when talking about how X is held together”, or that “Y is the dominant structural force compared to Z”. Otherwise you would not have used the words “instead of”. Would you utter the phrase “animal bodies are held together by flesh instead of skeletons?”
This one is confused on multiple levels. The meaning of “X is held together by something” of course depends on what level of organization of X you are talking about.
Both of the following sentences are correct:
“Different proteins are held together by Van der Waals and other non-covalent forces instead of covalent bonds.”
“A protein is held together by covalent bonds instead of Van der Waals forces and other non-covalent bonds.”
Those are fine sentences to say. Yes, it’s plausible to get misunderstood, and that’s a bit sad, but it doesn’t mean you were wrong.
Would you utter the phrase “animal bodies are held together by flesh instead of skeletons?”
This is a random nitpick, but animal bodies are indeed internally held together by flesh instead of skeletons. The skeleton itself is not connected. Bones only provide local structural support against bending and breaking. If I removed your flesh your bones would mostly disconnect and fall into a heap on the ground. Your bones are generally not under much of any strain at any given moment in time, instead you are more like a tensegrity structure where the vast majority of your integrity comes from tension, which comes from your tendons and muscles.
This implicitly makes the statement “proteins are not held together by strong covalent bonds”, which is false. Or it could be saying “strong covalent bonds are irrelevant compared to Van der Waals forces when talking about how proteins are held together”, which is also false. Edit: Or it is saying that “Van der Waals forces are the dominant structural force in proteins”, which is also false, because this is materially dependent, and some proteins have covalent disulfide links as their dominant structural force.
Even with the correction this is still inaccurate. It is correct that non-covalent bonds are the dominant forces for the structure of proteins. Yes, there are some exceptions, like lignin, and that matters, and as I said, I would have upvoted a post talking about that. Yes, it’s structurally dependent. But if you aggregate across structures it’s true, and it seems reasonable to describe them as the dominant force.
Hey, I want to thank you for going through the post. I think you’ve done a good job, and I appreciate it. I’ll try to go through and give a similar effort in the replies. Note that I don’t want to pressure you to do a re-reply, although you can if you want. I just want to say my piece and defend myself, and I’m happy to let the readers decide from our duelling accounts.
Actually, I will skip ahead first, because I think it illuminates the most where the disagreement lies.
This is a random nitpick, but animal bodies are indeed internally held together by flesh instead of skeletons. The skeleton itself is not connected. Bones only provide local structural support against bending and breaking. If I removed your flesh your bones would mostly disconnect and fall into a heap on the ground. Your bones are generally not under much of any strain at any given moment in time, instead you are more like a tensegrity structure where the vast majority of your integrity comes from tension, which comes from your tendons and muscles.
Yes, if I removed your flesh, your bones would fall to the ground. But similarly, if I removed your skeleton, your flesh would also fall to the ground. Holding a body together is a partnership between bones, muscles and flesh, and if you remove any one, the rest break.
This is kind of the whole of my original point here. Yes, it’s perfectly fine to zoom out onto the tertiary structure of a protein, and discuss the make-up of the crosslinks there, if you are clear that’s what you’re doing. But without the primarily covalent backbone of the system, there is no tertiary structure. So, starting at the beginning:
This paragraph clearly shows you misunderstood Eliezer. Different proteins are held together almost exclusively by non-covalent forces.
I just disagree. Proteins are held together by a combination of covalent bonds and non-covalent forces. If you went in and removed all the covalent bonds, the protein would collapse into nothingness. If you removed all the non-covalent bonds, you would still have that covalent primary structure backbone, which would then snap back into place and reform all the other bonds, rebuilding the protein. (I mean, not every single time, because sometimes undoing denaturation has an energy penalty that is too high). In that sense, it really makes no sense to say that it’s held together “almost exclusively by non-covalent forces”.
It is true that often non-covalent forces (typically hydrophobic interactions, only sometimes Van der Waals forces) are the dominant structural force of the 3D structure as a whole. Of course, other times covalent bonds are, as is the case in keratin-type proteins.
This is also evidently false, since like dozens of people I know have engaged with Drexler’s and Eliezer’s thoughts on this space, many of whom have a pretty deep understanding of chemistry, and would use a similar (or the same) phrase.
I spent a very, very long time investigating Drexlerian nanotech, and I definitely never saw anything like “covalently bonded equivalents of biology”. I think that would be a pretty bad way to describe it, because, as has been established, biology uses covalent bonds at every level. I could see a case for “strictly covalently bonded” though.
You seem to be invoking some expert consensus that doesn’t exist. Indeed, multiple people with PhD-level chemistry backgrounds have left comments saying they understood Eliezer’s point here.
I don’t want to discount either the people who agreed with me or those who didn’t. I saw a chemist saying they agreed with me and getting downvoted, a protein chemist saying they thought the wording was wrong but they liked the “spaghetti” analogy, and an organic chemist also agreeing with me. Generally the consensus seems to be that the language was badly worded or wrong, but some people found the underlying point defensible or even good. I do agree that some experts are okay with the language.
I think it’s worth pointing out that the commentators here and on LessWrong are disproportionately likely to be Eliezer fans, and to be willing to give him the benefit of the doubt. This is not the case for a random person watching a TED talk.
This is also false. The point makes sense, many people with chemistry or biology background get it, as shown above.
You are right that it is an overstatement, I will edit that. However, I maintain that many experts who encounter these badly worded claims will dismiss your argument as a result.
Look, I appreciate the post about the errors in the quantum physics sequence, but you are again vastly overstating the expert consensus here. I have talked with literally 10+ physics PhDs about the quantum physics sequence. Of course there are selection effects, but most people liked it and thought it was great. Yes, it was actually important to add a renormalization term, as you said in your critique, but really none of the points brought up in the sequences depended on it at all.
I think you are underestimating the selection effects here. The physics PhDs who thought it sucked are not on LessWrong; they got turned away by the overconfident mistakes. As a physics PhD myself… eh, it’s better than most pop-science stuff, but that’s a very low bar. There are plenty more errors in there, and the underlying argument about MWI is pretty bad, but I’ll save that for a future post.
Like look, when people read your post without actually reading Eliezer’s reply, they get the very strong sense that you are claiming that Eliezer is making an error at the level of high-school biology. That somehow he got so confused about chemistry that he didn’t understand that a single protein of course is made out of covalent bonds.
But this is really evidently false and kind of absurd. As you can see in a lot of Eliezer’s writing, and also his comment level response, Eliezer did not at any point get confused that proteins are made out of covalent bonds. Indeed, to me and Eliezer it seemed so obvious that proteins internally are made out of covalent bonds that I did not consider the possibility that people could somehow interpret this as a claim about the atoms in proteins being held together by Van der Waals forces (how do you even understand what Van der Waals forces are, but don’t understand that proteins are internally covalently bonded?). But that misinterpretation seems really what your post was about.
I don’t think it really matters whether he did or did not truly know that the primary structure was covalent. The problem was that at no point, in all of the quotes I found of him discussing the matter, did he clarify that he was talking only about the tertiary structure, or “strictly covalent bonding”, or crosslinks between protein folds.
Intentionally or unintentionally, an uninformed listener would come away with the interpretation: “biology does not use this super-duper strong force called ‘covalent bonds’.”
Imagine reading those quotes from the perspective of someone who knows nothing about biology, and tell me that that is not the obvious implication of what he says.
I’m happy to give the benefit of the doubt when chatting between friends, but he is using this terminology on podcasts and TED talks. Factual rigour matters.
And, for the record, I don’t think it’s that unthinkable that he didn’t know the primary structure of proteins was covalent; it’s not that hard a mistake to make. Unlike, say, quantum physics, organic chemistry was never a subject I’ve seen him delve deeply into, and the only source he ever cited on the subject was Drexler.
Look, this is doubling down on a misinterpretation which at this point you really should have avoided. We are talking about what you call the tertiary structure here. At the level of tertiary structures, and the bonds between proteins, biology does almost solely stick to ionic bonds and proteins held together by Van der Waals forces.
Just no. Wait a second, and actually re-read the statement I am responding to. It’s a flat statement that humans utilise covalent bonds but “biology doesn’t”. Obviously, biology does “utilise covalent bonds”, in that it’s made out of covalent bonds. If he only wanted to talk about tertiary structure, he should have said “tertiary structure”, and not made a flat statement about all of biology.
Look man, I think you really know by now what Eliezer means by this. Eliezer is talking about alternatives to biology where most of the tertiary structure leverages covalent bonds.
If he means this, he should say that, instead of a different thing that makes no sense. If you say “covalently bonded bacteria”, that’s the same thing as a regular bacterium.
This is also doubling down on the same misunderstanding. The machinery and tertiary structure of bacteria do not use covalent bonds very much. This is quite different from most current nanomachine designs, which the relevant books hypothesize would be substantially more robust than present biological machinery due to leveraging mostly covalent bonds.
When you say “bacteria”, you are talking about the entire bacteria, not just the machinery and tertiary structure. And the entire bacteria does include covalent bonds, or it would fall apart.
Also, they do utilise tertiary covalent bonds for the parts of the cell that need to be very strong.
The relevant books focus on the structural differences: diamondoid nanofactories were hypothesized to be very stable, so that atomically precise placement of molecules could enable new structures that were not available in biology.
I don’t understand what you are talking about here. Basically everything in biology is either made out of proteins or manufactured by proteins. If you can make proteins, you can basically make anything. Proteins are the way most things get done in a cell. The sentence above reads as confused as saying “a CPU is not a general purpose calculator. It does exactly one thing, and that is to read instructions and return the results”. Yes, ribosomes read instructions and link together amino acids to form proteins, and that is how biological systems generally assemble things.
I see where you’re coming from here. It may be a simple matter of bad phrasing on his part leading to a misleading statement. For example, the latter statement:
It should not be very hard for a superintelligence to repurpose ribosomes to build better, more strongly bonded, more energy-dense tiny things that can then have a quite easy time killing everyone
implies that it is the ribosome itself which would directly print out Drexlerian nanotech, which of course is impossible. I get that he was probably trying to say the ribosomes will create proteins which in turn create Drexlerian nanotech (probably also impossible, but I’ll litigate that another day). The phrasing here overstates the ease of this process: uninformed readers will come away thinking “oh, ribosomes are general assemblers, so they can generally assemble nanotech”, which is not true.
This one is confused on multiple levels. The meaning of “X is held together by something” of course depends on what level of organization of X you are talking about.
I accept this if you specify the level you are talking about, or it’s obvious to both you and your audience which level you are talking about. Neither of these apply to Eliezer’s statements.
If you don’t specify the level, then the statement is either meaningless, or it’s talking about the entire structure. I’m sorry, I am just never going to accept that you can say “instead of covalent bonds”, talking generally, when the backbone is covalent bonds.
I think the main takeaway is that you should really avoid using the phrase “held together” altogether, unless it’s paired with a more precise descriptor of what structure you are talking about, primarily for all the reasons explored in the post.
Even with the correction this is still inaccurate. It is correct that non-covalent bonds are the dominant forces for the structure of proteins. Yes, there are some exceptions, like lignin, and that matters, and as I said, I would have upvoted a post talking about that. Yes, it’s structurally dependent. But if you aggregate across structures it’s true, and it seems reasonable to describe them as the dominant force.
He never used the phrase “non-covalent bonds”. He only ever said Van der Waals forces, which is wrong: as multiple people pointed out, if you are forced to name one thing as primarily making the 3D structure, it would be hydrophobic interactions.
I think the exceptions are very important, as they negate the whole argument he was making. If biology can make densely covalently bonded things, why doesn’t it make all of its structures covalently bonded? My guess is simply that it’s extremely useful to utilise all of the forces available to you, rather than sticking to stiff, rigid, and inflexible structures. But I’ll save this for a more detailed research post further down the line.
As a conclusion, I’ll just say this: I think that Yudkowsky did not sufficiently scientifically vet his arguments before sending them out to the general public. As a result, he said things that gave his audience a misleading picture of the advantages and disadvantages of biology vs nanotech. They were scientifically badly worded, ending up as phrases that were at best misleading and at worst just incorrect. While this whole experience has been pretty exhausting, I hope that it will at least lead to improved scientific terminology in the future.
I will eventually write a more detailed analysis of biology vs Drexler, and you’d better believe that one will be extensively fact-checked before it goes out.
I will do another round of edits tomorrow, and I’ll probably let that be it; there is only so much time in the world one can devote to posting.
It does look like there is an interpretation of EY’s basic claims which is roughly reasonable, and one which is clearly false and unreasonable, and you assumed he meant the clearly unreasonable thing and attacked that.
I think absent further evidence, it’s fair for others to say “he couldn’t have possibly meant that” and move on.
As someone in the ‘general public rather than chemistry/physics PhD’ group, which Eliezer is saying he’s targeting, I definitely thought he meant that.
That’s fair enough, and levels of background understanding vary (I don’t have a relevant PhD either), but then the criticism should be about this point being easily misunderstood, rather than making a big deal about the strawman position being factually wrong. In which case it would also be much more constructive than adversarial criticism.
I think part of titotal’s point is that it’s not the ‘strawman’ interpretation but the straightforward one, and having it framed that way would understandably be frustrating. It sounds like he also disagrees with Eliezer’s actual, badly communicated argument [edit: about the size of potential improvements on biology] anyway though, based on the response to Habryka.
Yeah, I think it would have been much better for him to say “proteins are shaped by...” rather than “proteins are held together by...”, and to give some context for what that means. Seems fair to criticize his communication. But the quotes and examples in the linked post are more consistent with him understanding that and wording it poorly, or assuming too much of his audience, rather than him not understanding that proteins use covalent bonds.
The selected quotes do give me the impression Eliezer is underestimating what nature can accomplish relative to design, but I haven’t read any of them in context so that doesn’t prove much.
What is the best practice for dealing with biased sources? For example, if I’m writing an article critical of EA and cite a claim made by Émile Torres, would it be misleading not to mention that they have an axe to grind?
Partially depends on the nature of the claim, methinks. Generally, I do not think you have any obligation to delve into collateral matters—just enough to put the reader on notice of the possible bias.
really wanted to meet my other founding members and start a community based on ideas like rationalism, Stoicism, and effective altruism
It doesn’t look like he was part of the EA movement proper (which is very clear about nonviolence), but could EA principles have played a part in his motivations, similarly to SBF?
I personally think people overrate people’s stated reasons for extreme behaviour and underrate the material circumstances of their life. In particular, loneliness.
As one counterexample, EA is really rare in humans, but does seem more fueled by principles than situations.
(Otoh, if situations make one more susceptible to adopting some principles, is either really the “true cause”? Like, plausibly me being abused as a child made me want to reduce suffering more, like this post describes. But it doesn’t seem coherent to say that means the principles are overstated as an explanation for my behavior.
I dunno why loneliness would be different; my first thought is that loneliness means one has less of a community to appeal to, so there are fewer conformity biases preventing such a person from developing divergent or (relatively) extreme views; the fact that they can find some community around said views and face conformity pressures towards them is also a factor, of course; and that actually would be an ‘unprincipled’ reason to adopt a view, so I guess for that case it does make sense to say, “it’s more situation(-activated biases) than genuine (less-biasedly arrived at) principles”.
An implication, in my view, is that this isn’t particularly about extreme behavior; less-biased behavior is just rare across the spectrum. (Also, if we narrow in on people who are trying to be less biased, their behavior might be extreme; e.g., Rationalists trying to prevent existential risk from AI seems deeply weird from the outside.))
This is a guy who got back surgery that was covered by his health insurance and then murdered the CEO of a different health insurance company. While EAs are always keen to self-flagellate over any possible bad thing that might have some tangential connection, I really think this one can be categorized under ‘crazy’ and ‘psychedelics’. To the extent he was motivated by ideology it doesn’t seem to be EA—the slogan he carved onto the bullet casings was a general anti-capitalist anti-insurance one.
Well, they could have. A lot of things are logically possible. Unless there is some direct evidence that he was motivated by EA principles, I don’t think we should worry too much about that possibility.
I don’t see a viable connection here, unless you make “EA principles” vague enough to cover an extremely wide space (e.g., considering ~consequentialism an “EA principle”).
I want make my prediction about the short-term future of AI. Partially sparked by this entertaining video about the nonsensical AI claims made by the zoom CEO. I am not an expert on any of the following of course, mostly writing for fun and for future vindication.
The AI space seems to be drowning in unjustified hype, with very few LLM projects having a path to consistent profitabilitiy, and applications that are severely limited by the problem of hallucinations and the general fact that LLM’s are poor at general reasoning (compared to humans). It seems like LLM progress is slowing down as they run out of public data and resource demands become too high. I predict gpt-5, if it is released, will be impressive to people in the AI space, but it will still hallucinate, will still be limited in generalisation ability, will not be AGI and the average joe will not much notice the difference. Generative AI will be big business and play a role in society and peoples lives, but in the next decade will be much less transformative than, the introduction of the internet or social media.
I expect that sometime in the next decade it will be widely agreed that AI progress has stalled, that most of the current wave of AI bandwagon jumpers will be quietly ignored or shelved, and that the current wave of LLM hype might look like a financial bubble that burst (ala dotcom bubble but not as big).
Both AI doomers and accelerationists will come out looking silly, but will both argue that we are only an algorithmic improvement away from godlike AGI. Both movements will still be obscure silicon valley things that the average joe only vaguely knows about.
I’m hearing this claim everywhere. I’m curious to know why you think so, given that OpenAI hasn’t released GPT-5.
Sam said multiple times that GPT-5 is going to be much better than GPT-4. It could be just hype but this would hurt his reputation as soon as GPT-5 is released.
In any case, we’ll probably know soon.
I think you should update approximately not at all from Sam Altman saying GPT-5 is going to be much better. Every CEO says every new version of their product is much better—building hype is central to their job.
That’s true for many CEOs (like Elon Musk) but Sam Altman did not over-hype any of the big OpenAI launches (ChatGPT, gpt3.5, gpt4, gpt4o, dall-e, etc.).
It’s possible that he’s doing it for the first time now, but I think it’s unlikely.
But let’s ignore Sam’s claims. Why do you think LLM progress is slowing down?
Would you be interested in making quantitative predictions on the revenue of OpenAI/Anthropic in upcoming years, and/or when various benchmarks like these will be saturated (and OSWorld, released since that series was created), and/or when various Preparedness/ASL levels will be triggered?
A common view is a median around 2035-2050 with substantial (e.g. 25%) mass in the next 6 years or so.
This view is consistent with both thinking:
LLM progress is likely (>50%) to stall out.
LLMs are plausibly going to quickly scale into very powerful AI.
(This is pretty similar to my view.)
I don’t think many people think “we are only an algorithmic improvement away from godlike AGI”. In fact, I can’t think of anyone who thinks this. Some people think that 1 substantial algorithmic advance + continued scaling/general algorithmic improvement, but the continuation of other improvements is key.
I think you’re probably wrong, but I hope you’re right.
Upvoted for making your prediction. Disagree vote because I think it’s wrong.
Even if we expect AI progress to be “super fast”, it won’t always be “super fast”. Sometimes it’ll be “extra, super fast” and sometimes it’ll merely be “very fast”.
I think that some people are over-updating on AI progress now only being “very fast” thinking that it this can only happen within the model where AI is about to cap out, whilst I don’t think this is the case at all.
Why I disagree that this video insightful/entertaining: The YouTuber quite clearly has very little knowledge of the subject they are discussing—it’s actually quite reasonable for the Zoom CEO to simply say that fixing hallucinations will “occur down the stack”, given that they are not the ones developing AI models, and would instead be building the infrastructure and environments that the AI systems operate within.
From what I watched of the video, she also completely misses the real reason that the CEOs claims are ridiculous; if you have an AI system with a level of capability that allows it to replicate a person’s actions in the workplace, then why would we go to the extra effort of having Zoom calls between these AI clones?
I.e. It would be much more efficient to build information systems that align with the strengths & comparative advantages of the AI systems - presumably this would not involve having “realistic clones of real human workers” talking to each other, but rather a network of AI systems that communicate using protocols and data formats that are designed to be as robust and efficient as possible.
FWIW if I were the CEO of Zoom, I’d be pushing hard on the “Human-in-the-loop” idea. E.g. building in features that allow you send out AI agents to fetch information and complete tasks in real time as you’re having meetings with your colleagues. That would actually be a useful product that helps keep Zoom interesting and relevant.
With regards to AI progress stalling, I think it depends on what you mean by “stalling”, but I think this is basically impossible if you mean “literally will not meaningfully improve in a way that is economically useful”
When I first learned how modern AI systems worked, I was astonished at how absurdly simple and inefficient they are. In the last ~2 years there has been a move towards things like MoE architectures & RNN hybrids, but this is really only scratching the surface of what is possible with more complex architectures. We should expect a steady stream of algorithmic improvements that will push down inference costs and make more real-world applications viable. There’s also Moore’s Law, but everyone already talks about that quite a lot.
Also, if you buy the idea that “AI systems will learn tasks that they’re explicitly trained for”, then incremental progress is almost guaranteed. I think it’s hilarious that everyone in industry and Government is very excited about general-purpose AI and its capacity for automation, but there is basically no large-scale effort to create high-quality training data to expedite this process.
The fact that pre-training + chatbot RLHF is adequate to build a system with any economic value is dumb luck. I would predict that if we actually dedicated a not-insignificant chunk of society’s efforts towards training DL systems to perform important tasks, we would make quite a lot of progress very quickly. Perhaps a central actor like the CCP will do this at some stage, but until then we should expect incremental progress as small-scale efforts gradually build up datasets and training environments.
I think you’re mostly right, especially about LLMs and current hype (though I do think a couple innovations beyond current technology could get us AGI). but I want to point out that AI progress has not been entirely fruitless. The most salient example in my mind is AlphaFold which is actually used for research, drug discovery etc.
I want to say just “trust the market”, but unfotunately, if OpenAI has a high but not astronomical valuation, then even if the market is right, that could mean “almost certainly will be quite useful and profitable, chance of near-term AGI almost zero’ or it could mean “probably won’t be very useful or profitable at all, but 1 in 1000 chance of near-term AGI supports high valuation nonetheless” or many things inbetween those two poles. So I guess we are sort of stuck with our own judgment?
For publically-traded US companies there are ways to figure out the variance of their future value, not just the mean, mostly by looking at option prices. Unfortunately, OpenAI isn’t publically-traded and (afaik) has no liquid options market, but maybe other players (Nvidia? Microsoft?) can be more helpful there.
If you know how to do this, maybe it’d be useful to do it. (Maybe not though; I’ve never actually seen anyone defend “the market assigns a non-negligible probability to an intelligence explosion”.)
It’s not really my specific area, but I had a quick look. (Frankly, this is mostly me just thinking out loud to see if I can come up with anything useful, and I don’t promise that I succeed.)
Yahoo Finance has option prices with expirations in Dec 2026. We’re mostly interested in upside potential rather than downside, so we look at call options, for which we see data up to strike prices of 280.[fn 1]
In principle I think the next step is to do something like invert Black-Scholes (perhaps (?) adjusting for the difference between European- and American-style options, assuming that these options are the latter), but that sounds hard, so let’s see if I can figure out something simpler from first principles:
The 280 strike Dec 2026 call option is the right to buy Nvidia stock, on Dec 18th 2026, for a price of $280. Nvidia’s current price is ~$124, so these options only have value if the stock more than doubles by then. They’re currently trading at $14.50, while the 275 call trades at $15.
The value of a particular option is the integral of the option’s payoff profile multiplied by the stock price’s probability density. If we want something like “probability the stock is at least X on date Y”, the ideal option payoff profile would be an indicator function with a step at X, but we can’t exactly get that. Instead, by buying a call struck at A and selling a call struck at B, we get a zero function up to A, then a linear increase from A to B, then a constant function from B. Picking A and B close together seems like the best approximation. It means looking at prices for very low-volume options, but the nearby prices, including for higher-volume options, look superficially in line, so I’ll go with it.
More intuitively, if the stock was definitely going to be above both A and B, then the strike-A option would be B - A more valuable than the strike-B option (that is, the right to buy a stock worth $10 for a price of $1 is worth exactly $3 more than the right to do so for $4). If the stock was definitely going to be below both A and B, then both options would be worthless.
So the values of the two options differ by (B - A) × P(the price is above B), plus some awkward term for when the price is between A and B, which you can hopefully make ignorable by making that interval small.
From this I hesitantly conclude that the options markets suggest that P(NVDA >= 280-ish) = 10%-ish?
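To make the arithmetic above concrete, here is a minimal sketch in Python of the call-spread estimate, using the quoted Dec 2026 prices. The helper function is just an illustration of the reasoning in this comment, not a live-data tool, and it ignores discounting and the between-strikes term.

```python
def prob_above(strike_a: float, strike_b: float,
               call_price_a: float, call_price_b: float) -> float:
    """Approximate P(stock >= strike_b at expiry) from a tight call spread.

    Buying the strike-A call and selling the strike-B call pays roughly
    (B - A) whenever the stock finishes above B, so the difference in the
    two call prices divided by (B - A) approximates that probability.
    """
    return (call_price_a - call_price_b) / (strike_b - strike_a)

# NVDA Dec 2026 calls quoted above: 275 strike at $15.00, 280 strike at $14.50.
print(prob_above(275, 280, 15.00, 14.50))  # ~0.10, i.e. the "10%-ish" figure
```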
[fn 1]: It looks like there are more strike prices than that, but all the ones above 280 I think aren’t applicable: you can see a huge discontinuity in the prices from 280 to 290, and all the “last trade date” fields for the higher strikes are from before mid-June, so I think these options don’t exist anymore and come from before the 10-to-1 stock split.
Appreciate the concreteness in the predictions!
Which examples do you think of when you say this? (Not necessarily disagreeing, I’m just interested in the different interpretations of ‘LLMs are poor at general reasoning’.)
I also think that LLM reasoning can be significantly boosted with scaffolding; i.e., most hard reasoning problems can be split up into a handful of easier reasoning problems, and this can be done recursively until your LLM can solve a subproblem, then you build back up the full solution. So whilst scale might not get us to a level of general reasoning that qualifies as AGI, perhaps GPT-5 (or 6) plus scaffolding can.
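For concreteness, here is a rough sketch of that recursive scaffolding loop. Everything in it is hypothetical: call_llm stands in for whatever model API you would actually use, and the prompts are only illustrative, so treat it as a sketch of the idea rather than a working agent.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    raise NotImplementedError("wire this up to an actual model")

def solve(problem: str, depth: int = 0, max_depth: int = 3) -> str:
    """Recursively split a hard problem into easier subproblems, solve the
    leaves directly, then build the full solution back up."""
    answer = call_llm(f"Can you solve this directly? Answer yes or no.\n{problem}")
    if depth >= max_depth or answer.strip().lower().startswith("yes"):
        return call_llm(f"Solve:\n{problem}")
    subproblems = call_llm(f"Split this into easier subproblems, one per line:\n{problem}").splitlines()
    partial = [solve(p, depth + 1, max_depth) for p in subproblems if p.strip()]
    return call_llm("Combine these partial solutions into a full solution:\n" + "\n".join(partial))
```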
Thanks for writing this up!
FWIW, even if AGI arrives ~ 2050 I still think it* would be the thing I’d want to work on right now. I would need to be really confident it wasn’t arriving before then for me not to want to work on it.
*AI Safety.
I took a break from engaging with EA topics for like a month or two, and I think it noticeably improved my mental health and productivity, as debating here frequently was actually stressing me out a lot. Which is weird, because the stakes for me posting here are incredibly low: I’m pseudonymous, have no career or personal attachments to EA groups, and I’m highly skeptical that EA efforts will have any noticeable effect on the future of humanity. I can’t imagine how stressful these discussions are for people with the opposite positions!
I still have plenty of ideas I want to write up, so I’m not going anywhere, but I’ll try to be more considered in where I put my effort.
You are a good and smart commenter, but that is probably generally a sign that you could be doing something more valuable with your time than posting on here. In your case though, that might not actually be true, since you also represent a dissenting perspective that makes things a bit less of an echo chamber on topics like AI safety, and it’s possible that does have some marginal influence on what orgs and individuals actually do.
As a pseudonymous poster with a non-EA job who created his account a few months after yours, I’ve needed to update on the value of prolonged debate on at least meta topics. There was a lot going on in late 2022 / early 2023 where I think having outside voices deeply engaging with debates in the comments was of significant value. I think that is considerably less true for, e.g., the whole Nonlinear controversy in late 2023 / early 2024.
Debating still takes time and energy which reduces the time and energy available elsewhere.
I actually don’t really debate on the forums for this very reason. I too am EA-adjacent (yes I’m aware that’s a bit of a meme!) and do not work in the EA sphere. I share insights and give feedback, but generally if people disagree I’m happy to leave it at that. I have a very stressful (non-EA) job and so rarely have the bandwidth for something that has no real upside like forum debate. I may make exceptions if someone seems super receptive, but I totally understand why you feel how you do.
Edit: The bug I mentioned below has since been fixed. The default values still do not seem to match with the figures of RP’s report here, and I believe there is also an error in said report that underestimates the impact by ~a factor of 2. See the extended discussion on this post for details.
I would advise being careful with RP’s Cross-cause effectiveness tool as it currently stands, especially with regards to the chicken campaign.
There appears to be a very clear conversion error which I’ve detailed in the edit to my comment here. I was also unable to replicate their default values from their source data, but I may be missing something.

I think comments like these are valuable when they are made after the relevant parties have all had enough time to respond, the discussion is largely settled, and readers are in a position to make up their minds about the nature, magnitude and importance of the problems reported, by having access to all the information that is likely to emerge from the exchange in question. Instead, your comment cautions people to be careful in using a tool based on some issues you found and reported less than two days ago, when the discussion appears to be ongoing and some of the people involved have not even expressed an opinion, perhaps because they haven’t yet seen the thread or had enough time to digest your criticisms. Maybe these criticisms are correct and we should indeed exercise the degree of caution you advise when using the tool, but it seems not unlikely that we’ll be in a better epistemic position to know this, say, a week or so from now, so why not just wait for all the potential evidence to become available?
In the linked thread, the website owners have confirmed that there is indeed an error in the website. If you try to make calculations using their site as currently made you will be off by a factor of a thousand. They have confirmed this and have stated that this will be fixed soon. When it is fixed I will edit the shortform.
Would you prefer that for the next couple of days, during the heavily publicised AW vs GHD debate week, in which this tool has been cited multiple times, people continue to use it as is despite it being bugged and giving massively wrong results? Why are you not more concerned about flawed calculations being spread than about me pointing out that flawed calculations are being spread?
In your original shortform, you listed three separate criticisms, but your reply now focuses on just one of those criticisms, in a way that makes it look as though my concerns would be invalidated if one granted the validity of that specific criticism. This is the sort of subtle goalpost moving that makes it difficult to have a productive discussion.
Because there is an asymmetry in the costs of waiting. Waiting a week or so to better understand the alleged problems of a tool that will likely be used for years is a very minor cost, compared to the expected improvement in that understanding that will occur over that period.
(ETA: I didn’t downvote any of your comments, in accordance with my policy of never downvoting comments I reply to, even if I believe I would normally have downvoted them. I mention this only because your most recent comment was downvoted just as I posted this one.)
I list exactly 2 criticisms. One of them was proven correct, the other I believe to be correct also but am waiting on a response.
I agree with the asymmetry in the cost of waiting, but the other way. If these errors are corrected a week from now, after the debate week has wrapped up, then everybody will have stopped paying attention to the debate, and it will become much harder to correct any BS arising from the faulty tool.
Do you truly not care that people are accidentally spreading misinformation here?
Why do you attribute to me a view I never stated and do not hold? If I say that one cost is greater than another, it doesn’t mean that I do not care about the lesser cost.
I’d probably agree with this if the tool were not relevant for Debate Week and/or RP hadn’t highlighted this tool in a recent post for Debate Week. So there’s a greater risk of any errors cascading into the broader discussion in a way that wouldn’t be practically fixable by a later notice that the tool was broken.
I’m a little disheartened at all the downvotes on my last post. I believe an EA public figure used scientifically incorrect language in his public arguments for x-risk, and I put quite a bit of work into explaining why in a good faith and scientifically sourced manner. I’m particularly annoyed that a commenter (with relevant expertise!) was at one point heavily downvoted just for agreeing with me (I saw him at −16 at one point). Fact checking should take precedence over fandoms.
But… your post was quite inaccurate and strawmanned people extensively?
Eliezer and other commenters compellingly demonstrated this in the comments. I don’t think you should get super downvoted, but your post includes a lot of straightforwardly false sentences, so I think the reaction makes sense.
I am legitimately offended by this accusation. I had an organic chemist fact-check the entire thing, and I have included his fact-check into the actual post. Yudkowsky admitted that at least one of his claims was wrong. I explained my exact reasoning for the other problems, and he did not debunk those.
If you can point to any further factual errors in my post after the recent edits, I’m happy to edit those as well.
Yudkowsky’s response persuaded me that he didn’t intend to say factually incorrect things. (It is also unsourced and full of factual errors, but I don’t know if it’s worth doing a fact check of a mere comment). But even if he was just badly putting forth an analogy, he is still saying scientifically incorrect things.
I think this sets a pretty terrible precedent about the response to public figures making errors.
Sure. Here are some quotes from the original version of your post:
This paragraph clearly shows you misunderstood Eliezer. Different proteins are held together almost exclusively by non-covalent forces.
This is also evidently false, since like dozens of people I know have engaged with Drexler’s and Eliezer’s thoughts on this space, many of whom have a pretty deep understanding of chemistry, and would use similar (or the same) phrasing. You seem to be invoking some expert consensus that doesn’t exist. Indeed, multiple people with PhD-level chemistry backgrounds have left comments saying they understood Eliezer’s point here.
This is also false. The point makes sense, many people with chemistry or biology background get it, as shown above.
Look, I appreciate the post about the errors in the quantum physics sequence, but you are again vastly overstating the expert consensus here. I have talked with literally 10+ physics PhDs about the quantum physics sequence. Of course there are selection effects, but most people liked it and thought it was great. Yes, it was actually important to add a renormalization term, as you said in your critique, but really none of the points brought up in the sequences depended on it at all.
Like look, when people read your post without actually reading Eliezer’s reply, they get the very strong sense that you are claiming that Eliezer is making an error at the level of high-school biology. That somehow he got so confused about chemistry that he didn’t understand that a single protein of course is made out of covalent bonds.
But this is really evidently false and kind of absurd. As you can see in a lot of Eliezer’s writing, and also his comment-level response, Eliezer did not at any point get confused about whether proteins are made out of covalent bonds. Indeed, to me and Eliezer it seemed so obvious that proteins internally are made out of covalent bonds that I did not consider the possibility that people could somehow interpret this as a claim about the atoms in proteins being held together by Van der Waals forces (how do you even understand what Van der Waals forces are, but don’t understand that proteins are internally covalently bonded?). But that misinterpretation really seems to be what your post was about.
Now, let me look at the most recent version of your post:
Well, it still includes:
This still seems wrong, though like, you did add some clarifications around it that make it more reasonable.
You did add a whole new section which is quite dense in wrong claims:
Look, this is doubling down on a misinterpretation which at this point you really should have avoided. We are talking about what you call the tertiary structure here. At the level of tertiary structures, and the bonds between proteins, biology does almost solely stick to ionic bonds and proteins held together by Van der Waals forces.
It is the case that sometimes the tertiary structures also use covalent bonds, like in the case of lignin, and I think that’s a valid point. It’s however not one you had made in your post at all, and just one that Eliezer acknowledges independently. The most recent version of your post now does have a fraction of a sentence, in a quote by your chemistry fact-checker, saying that sometimes tertiary protein structures use covalent bonds, and I think that’s an actual real point that responds to what Eliezer is saying. A post I wouldn’t downvote would be one that had that as its main focus, since I think there is a valid critique to be made that biology is in some circumstances capable of using covalent bonds for tertiary structures (as Eliezer acknowledges), but it’s not the one you made.
Look man, I think you really know by now what Eliezer means by this. Eliezer is talking about alternatives to biology where most of the tertiary structure leverages covalent bonds.
This is also doubling down on the same misunderstanding. The machinery and tertiary structure of bacteria do not use covalent bonds very much. This is quite different from most current nanomachine designs, which the relevant books hypothesize would be substantially more robust than present biological machinery due to leveraging mostly covalent bonds.
I don’t understand what you are talking about here. Basically everything in biology is either made out of proteins or manufactured by proteins. If you can make proteins, you can basically make anything. Proteins are the way most things get done in a cell. The sentence above reads as confused as saying “a CPU is not a general purpose calculator. It does exactly one thing, and that is to read instructions and return the results”. Yes, ribosomes read instructions and link together amino acids to form proteins, and that is how biological systems generally assemble things.
This one is confused on multiple levels. The meaning of “X is held together by something” of course depends on what level of organization of X you are talking about.
Both of the following sentences are correct:
“Different proteins are held together by Van der Waals and other non-covalent forces instead of covalent bonds.”
“A protein is held together by covalent bonds instead of Van der Waals forces and other non-covalent bonds.”
Those are fine sentences to say. Yes, it’s plausible to get misunderstood, and that’s a bit sad, but it doesn’t mean you were wrong.
This is a random nitpick, but animal bodies are indeed internally held together by flesh instead of skeletons. The skeleton itself is not connected. Bones only provide local structural support against bending and breaking. If I removed your flesh your bones would mostly disconnect and fall into a heap on the ground. Your bones are generally not under much of any strain at any given moment in time; instead, you are more like a tensegrity structure where the vast majority of your integrity comes from tension, which comes from your tendons and muscles.
Even with the correction this is still inaccurate. It is correct that non-covalent bonds are the dominant forces for the structure of proteins. Yes, there are some exceptions, like lignin, and that matters, and as I said, I would have upvoted a post talking about that. Yes, it’s structurally dependent. But if you aggregate across structures it’s true, and it seems reasonable to describe them as the dominant force.
Hey, I want to thank you for going through the post. I think you’ve done a good job, and I appreciate it. I’ll try to go through and give a similar effort in the replies. Note that I don’t want to pressure you to do a re-reply, although you can if you want. I just want to say my piece and defend myself, and I’m happy to let the readers decide from our duelling accounts.
Actually, I will skip ahead first, because I think it illuminates the most where the disagreement lies.
Yes, if I removed your flesh, your bones would fall to the ground. But similarly, if I removed your skeleton, your flesh would also fall to the ground. Holding a body together is a partnership between bones, muscles and flesh, and if you remove any one, the rest break.
This is kind of the whole of my original point here. Yes, it’s perfectly fine to zoom out onto the tertiary structure of a protein, and discuss the make-up of the crosslinks there, if you are clear that’s what you’re doing. But without the primarily covalent backbone of the system, there is no tertiary structure. So, starting at the beginning:
I just disagree. Proteins are held together by a combination of covalent bonds and non-covalent forces. If you went in and removed all the covalent bonds, the protein would collapse into nothingness. If you removed all the non-covalent bonds, you would still have that covalent primary structure backbone, which would then snap back into place and reform all the other bonds, rebuilding the protein. (I mean, not every single time, because sometimes undoing denaturation has an energy penalty that is too high). In that sense, it really makes no sense to say that it’s held together “almost exclusively by non-covalent forces”.
It is true that often non-covalent forces (typically hydrophobic interactions, only sometimes Van der Waals forces) are the dominant structural force of the 3D structure as a whole. Of course, other times covalent bonds are, as is the case in keratin-type proteins.
I spent a very, very long time investigating Drexlerian nanotech, and I definitely never saw anything like “covalently bonded equivalents of biology”. I think that would be a pretty bad way to describe it, because, as has been established, biology uses covalent bonds at every level. I could see a case for “strictly covalently bonded” though.
I don’t want to discount either the people who agreed with me or those who didn’t. I saw a chemist saying they agreed with me and getting downvoted, a protein chemist saying they thought the wording was wrong but they liked the “spaghetti” analogy, and an organic chemist also agreeing with me. Generally the consensus seems to be that the language was badly worded or wrong, but some people found the underlying point defensible or even good. I do agree that some experts are okay with the language.
I think it’s worth pointing out that the commentators here and on Lesswrong are disproportionately likely to be Eliezer fans, and be willing to give him the benefit of the doubt. This is not the case for a random person watching a TED talk.
You are right that it is an overstatement, I will edit that. However, I maintain that many experts who encounter these badly worded claims will dismiss your argument as a result.
I think you are underestimating the selection effects here. The physics PhDs who thought it sucked are not on LessWrong; they got turned away by the overconfident mistakes. As a physics PhD myself… Eh, it’s better than most pop-science stuff, but that’s a very low bar. There’s plenty more errors in there, and the underlying argument about MWI is pretty bad, but I’ll save that for a future post.
I don’t think it really matters whether he did or did not truly know that the primary structure was covalent. The problem was that at no point, in all of the quotes I found of him discussing the matter, did he clarify that he was talking only about the tertiary structure, or “strictly covalent bonding”, or crosslinks between protein folds.
Intentionally or unintentionally, an uninformed listener would get the interpretation: “biology does not use this super duper strong force called ‘covalent bonds’”.
Imagine reading those quotes from the perspective of someone who knows nothing about biology, and tell me that that is not the obvious implication of what he says.
I’m happy to give the benefit of the doubt when chatting between friends, but he is using this terminology on podcasts and TED talks. Factual rigour matters.
And, for the record, I don’t think it’s that unthinkable that he didn’t know the primary structure of a protein was covalent; it’s not that hard a mistake to make. Unlike say quantum physics, organic chemistry was never a subject I’ve seen him delve deeply into, and the only source he ever cited on the subject was Drexler.
Just no. Wait a second, and actually re-read the statement I am responding to. It’s a flat statement that humans utilise covalent bonds but “biology doesn’t”. Obviously, biology does “utilise covalent bonds”, in that it’s made out of covalent bonds. If he only wanted to talk about tertiary structure, he should have said “tertiary structure”, and not made a flat statement about all of biology.
If he means this, he should say that, instead of a different thing that makes no sense. If you say “covalently bonded bacteria”, that’s the same thing as a regular bacteria.
When you say “bacteria”, you are talking about the entire bacteria, not just the machinery and tertiary structure. And the entire bacteria does include covalent bonds, or it would fall apart.
Also, they do utilise tertiary covalent bonds for the parts of the cell that need to be very strong.
The relevant books focus on the structural differences: diamondoid nanofactories were hypothesized to be very stable, so that atomically precise placement of molecules could enable new structures that were not available in biology.
I see where you’re coming from here. It may be a simple matter of bad phrasing on his part leading to a misleading statement. For example, the latter statement:
implies that it is the ribosome itself which would directly print out drexlerian nanotech, which of course is impossible. I get that he was probably trying to say the ribosomes will create proteins which in turn create drexlerian nanotech (probably also impossible, but I’ll litigate that another day). The phrasing here overstates the ease of this process: uninformed readers will come away thinking “oh, ribosomes are general assemblers, so they can generally assemble nanotech”, which is not true.
I accept this if you specify the level you are talking about, or it’s obvious to both you and your audience which level you are talking about. Neither of these apply to Eliezer’s statements.
If you don’t specify the level, then the statement is either meaningless, or it’s talking about the entire structure. I’m sorry, I am just never going to accept that you can say “instead of covalent bonds”, talking generally, when the backbone is covalent bonds.
I think the main takeaway is that you should really avoid using the phrase “held together” altogether, unless it’s paired with a more precise descriptor of what structure you are talking about, primarily for all the reasons explored in the post.
He never used the phrase “non-covalent bonds”. He only ever said Van der Waals forces, which is wrong: as multiple people pointed out, if you were forced to name one thing primarily responsible for the 3D structure, it would be hydrophobic interactions.
I think the exceptions are very important, as they negate the whole argument he was making. If biology can make densely covalently bonded things, why doesn’t it make all of its structures covalently bonded? My guess is simply that it’s extremely useful to utilise all of the forces available to you, rather than sticking to stiff, rigid, and inflexible structures. But I’ll save this for a more detailed research post further down the line.
As a conclusion, I’ll just say this: I think that Yudkowsky did not sufficiently scientifically vet his arguments before sending them out to the general public. As a result, he said things that gave his audience a misleading picture of the advantages and disadvantages of biology vs nanotech. They were badly worded scientifically, ending up as phrases that were at best misleading and at worst just incorrect. While this whole experience has been pretty exhausting, I hope that it will at least lead to improved scientific terminology in the future.
I will eventually write a more detailed analysis of biology vs Drexler, and you’d better believe that one will be extensively fact-checked before it goes out.
I will do another round of edits tomorrow, and I’ll probably let that be it, there is only so much time in the world one can devote to posting.
It does look like there is an interpretation of EY’s basic claims which is roughly reasonable and one which is clearly false and unreasonable, and you assumed he meant the clearly unreasonable thing and attacked that. I think absent further evidence, it’s fair for others to say “he couldn’t have possibly meant that” and move on.
As someone in the ‘general public rather than chemistry/physics PhD’ group, which Eliezer is saying he’s targeting, I definitely thought he meant that.
That’s fair enough, and levels of background understanding vary (I don’t have a relevant PhD either), but then the criticism should be about this point being easily misunderstood rather than making a big deal about the strawman position being factually wrong. In which case it would also be much more constructive than adversarial criticism.
I think part of titotal’s point is it’s not the ‘strawman’ interpretation but the straightforward one, and having it framed that way would understandably be frustrating. It sounds like he also disagrees with Eliezer’s actual, badly communicated argument [edit: about the size of potential improvements on biology] anyway though? Based on the response to Habryka
Yeah, I think it would have been much better for him to say “proteins are shaped by...” rather than “proteins are held together by...”, and to give some context for what that means. Seems fair to criticize his communication. But the quotes and examples in the linked post are more consistent with him understanding that and wording it poorly, or assuming too much of his audience, rather than him not understanding that proteins use covalent bonds.
The selected quotes do give me the impression Eliezer is underestimating what nature can accomplish relative to design, but I haven’t read any of them in context so that doesn’t prove much.
Yeah I can empathise, it’s hard isn’t it. Though I am glad people get to freely vote. I’m sad for you.
What is the best practice for dealing with biased sources? For example, if I’m writing an article critical of EA and cite a claim made by Émile Torres, would it be misleading to not mention that they have an axe to grind?
Partially depends on the nature of the claim, methinks. Generally, I do not think you have any obligation to delve into collateral matters—just enough to put the reader on notice of the possible bias.
According to this article, CEO shooter Luigi Mangione:
Doesn’t look like he was part of the EA movement proper (which is very clear about nonviolence), but could EA principles have played a part in his motivations, similarly to SBF?
I read this more like the guy was lonely and wanted community so was looking for some kind of secular religion to provide grounding to his life.
I personally think people overrate people’s stated reasons for extreme behaviour and underrate the material circumstances of their life. In particular, loneliness https://time.com/6223229/loneliness-vulnerable-extremist-views/
(would genuinely be interested to hear counter arguments to this! I’m not a researcher so honestly no idea how to go about testing that hypothesis)
As one counterexample, EA is really rare in humans, but does seem more fueled by principles than situations.
(Otoh, if situations make one more susceptible to adopting some principles, is any really the “true cause”? Like plausibly me being abused as a child made me want to reduce suffering more, like this post describes. But it doesn’t seem coherent to say that means the principles are overstated as an explanation for my behavior.
I dunno why loneliness would be different; my first thought is that loneliness means one has less of a community to appeal to, so there are fewer conformity biases preventing such a person from developing divergent or (relatively) extreme views; the fact that they can find some community around said views and have conformity pressures towards them is also a factor of course; and that actually would be an ‘unprincipled’ reason to adopt a view, so I guess for that case it does make sense to say, “it’s more situation(-activated biases) than genuine (less-biasedly arrived at) principles”.
An implication in my view is that this isn’t particularly about extreme behavior; less biased behavior is just rare across the spectrum. (Also, if we narrow in on people who are trying to be less biased, their behavior might be extreme; e.g., Rationalists trying to prevent existential risk from AI seems deeply weird from the outside.))
This is a guy who got back surgery that was covered by his health insurance and then murdered the CEO of a different health insurance company. While EAs are always keen to self-flagellate over any possible bad thing that might have some tangential connection, I really think this one can be categorized under ‘crazy’ and ‘psychedelics’. To the extent he was motivated by ideology it doesn’t seem to be EA—the slogan he carved onto the bullet casings was a general anti-capitalist anti-insurance one.
Well, they could have. A lot of things are logically possible. Unless there is some direct evidence that he was motivated by EA principles, I don’t think we should worry too much about that possibility.
I don’t see a viable connection here, unless you make “EA principles” vague enough to cover an extremely wide space (e.g., considering ~consequentialism an “EA principle”).