Hmm, I think that part definitely has relevance. Clearly we would trust Eliezer less if his response to that past writing was “I just got unlucky in my prediction, I still endorse the epistemological principles that gave rise to this prediction, and would make the same prediction, given the same evidence, today”.
If someone visibly learns from forecasting mistakes they make, that should clearly update us positively on them not repeating the same mistakes.
I suppose one of my main questions is whether he has visibly learned from the mistakes, in this case.
For example, I wasn’t able to find a post or comment to the effect of “When I was younger, I spent years of my life motivated by the belief that near-term extinction from nanotech was looming. I turned out to be wrong. Here’s what I learned from that experience and how I’ve applied it to my forecasts of near-term existential risk from AI.” Or a post or comment acknowledging his previous over-optimistic AI timelines and what he learned from them when formulating his current, seemingly short, AI timelines.
(I genuinely could be missing these, since he has so much public writing.)
Eliezer writes a bit about his early AI timeline and nanotechnology opinions here, though it sure is a somewhat obscure reference that takes a bunch of context to parse:
Luke Muehlhauser reading a previous draft of this (only sounding much more serious than this, because Luke Muehlhauser): You know, there was this certain teenaged futurist who made some of his own predictions about AI timelines -
Eliezer: I’d really rather not argue from that as a case in point. I dislike people who screw up something themselves, and then argue like nobody else could possibly be more competent than they were. I dislike even more people who change their mind about something when they turn 22, and then, for the rest of their lives, go around acting like they are now Very Mature Serious Adults who believe the thing that a Very Mature Serious Adult believes, so if you disagree with them about that thing they started believing at age 22, you must just need to wait to grow out of your extended childhood.
Luke Muehlhauser (still being paraphrased): It seems like it ought to be acknowledged somehow.
Eliezer: That’s fair, yeah, I can see how someone might think it was relevant. I just dislike how it potentially creates the appearance of trying to slyly sneak in an Argument From Reckless Youth that I regard as not only invalid but also incredibly distasteful. You don’t get to screw up yourself and then use that as an argument about how nobody else can do better.
Humbali: Uh, what’s the actual drama being subtweeted here?
Eliezer: A certain teenaged futurist, who, for example, said in 1999, “The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.”
Humbali: This young man must surely be possessed of some very deep character defect, which I worry will prove to be of the sort that people almost never truly outgrow except in the rarest cases. Why, he’s not even putting a probability distribution over his mad soothsaying—how blatantly absurd can a person get?
Eliezer: Dear child ignorant of history, your complaint is far too anachronistic. This is 1999 we’re talking about here; almost nobody is putting probability distributions on things, that element of your later subculture has not yet been introduced. Eliezer-2002 hasn’t been sent a copy of “Judgment Under Uncertainty” by Emil Gilliam. Eliezer-2006 hasn’t put his draft online for “Cognitive biases potentially affecting judgment of global risks”. The Sequences won’t start until another year after that. How would the forerunners of effective altruism in 1999 know about putting probability distributions on forecasts? I haven’t told them to do that yet! We can give historical personages credit when they seem to somehow end up doing better than their surroundings would suggest; it is unreasonable to hold them to modern standards, or expect them to have finished refining those modern standards by the age of nineteen.
Though there’s also a more subtle lesson you could learn, about how this young man turned out to still have a promising future ahead of him; which he retained at least in part by having a deliberate contempt for pretended dignity, allowing him to be plainly and simply wrong in a way that he noticed, without his having twisted himself up to avoid a prospect of embarrassment. Instead of, for example, his evading such plain falsification by having dignifiedly wide Very Serious probability distributions centered on the same medians produced by the same basically bad thought processes.
But that was too much of a digression, when I tried to write it up; maybe later I’ll post something separately.
While it also includes some other points, I do read it as a pretty straightforward “Yes, I was really wrong. I didn’t know about cognitive biases, and I did not know about the virtue of putting probability distributions on things, and I had not thought enough about the art of thinking well. I would not make the same mistakes today.”
How would the forerunners of effective altruism in 1999 know about putting probability distributions on forecasts? I haven’t told them to do that yet!
Did Yudkowsky actually write these sentences?
If Yudkowsky thinks, as this suggests, that people in EA think or do things because he tells them to, then that alone makes it worth questioning whether people give him the right amount of credibility.
I am not sure I understand the question. Yeah, this is a quote from the linked post, so he did write those sentences.
Also, yeah, it seems like Eliezer has had a very large effect on whether this community uses things like probability distributions, models things in a Bayesian way, makes lots of bets, and pays attention to things like forecasting track records. I don’t think he gets to take full credit for those norms, but my guess is that he is the single individual who most gets to take credit for them.
I don’t see how he has encouraged people to pay attention to forecasting track records. People who have encouraged that norm make public bets or go on public forecasting platforms and make predictions about questions that can resolve in the short term. Bryan Caplan does this; I think Greg Lewis and David Manheim are superforecasters.
I thought the upshot of this piece and the Jotto post was that Yudkowsky is in fact very dismissive of people who make public forecasts. “I consider naming particular years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain’s native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them.” This seems like the opposite of encouraging people to pay attention to forecasting; rather, it dismisses the whole enterprise of forecasting.
I wanted to make sure I’m not missing something, since this shines a negative light on him, IMO.
There’s a difference between saying, for example, “You can’t expect me to have done X then—nobody was doing it, and I haven’t even written about it yet, nor was I aware of anyone else doing so”—and saying “… nobody was doing it because I haven’t told them to.”
This isn’t about credit. It’s about self-perception and social dynamics.
I mean… it is true that Eliezer really did shape the culture in the direction of forecasting and predictions and that kind of stuff. My best guess is that without Eliezer, we wouldn’t have a culture of doing those things (and like, the AI Alignment community as is probably wouldn’t exist). You might disagree with me and him on this, in which case sure, update in that direction, but I don’t think it’s a crazy opinion to hold.
My best guess is that without Eliezer, we wouldn’t have a culture of [forecasting and predictions]
The timeline doesn’t make sense for this version of events at all. Eliezer was uninformed on this topic in 1999, at a time when Robin Hanson had already written about gambling on scientific theories (1990), prediction markets (1996), and other betting-related topics, as you can see from the bibliography of his Futarchy paper (2000). Before Eliezer wrote his sequences (2006-2009), the Long Now Foundation already had Long Bets (2003), and Tetlock had already written Expert Political Judgment (2005).
If Eliezer had not written his sequences, forecasting content would have filtered through to the EA community from contacts of Hanson. For instance, through blogging by other GMU economists like Caplan (2009). And of course, through Jason Matheny, who worked at FHI, where Hanson was an affiliate. He ran the ACE project (2010), which led to the science behind Superforecasting, a book that the EA community would certainly have discovered.
Hmm, I think these are good points. My best guess is that I don’t think we would have a strong connection to Hanson without Eliezer, though I agree that that kind of credit is harder to allocate (and it gets fuzzy what we even mean by “this community” as we extend into counterfactuals like this).
I do think the timeline here provides decent evidence in favor of less credit allocation (and I think against the stronger claim “we wouldn’t have a culture of [forecasting and predictions] without Eliezer”). My guess is in terms of causing that culture to take hold, Eliezer is probably still the single most-responsible individual, though I do now expect (after having looked into a bunch of comment threads from 1996 to 1999 and seeing many familiar faces show up) that a lot of the culture would show up without Eliezer.
Speaking for myself, Eliezer has played no role in encouraging me to give quantitative probability distributions. For me, that was almost entirely due to people like Tetlock and Bryan Caplan, both of whom I would have encountered regardless of Eliezer. I strongly suspect this is true of lots of people who are in EA but don’t identify with the rationalist community.
More generally, I do think that Eliezer and other rationalists overestimate how much influence they have had on wider views in the community. E.g., I have not read the Sequences, and I just don’t think they play a big role in the internal story of a lot of EAs.
For me, even people like Nate Silver or David MacKay, who aren’t part of the community, have played a bigger role in encouraging quantification and probabilistic judgment.
This is my impression and experience as well.
“My best guess is that I don’t think we would have a strong connection to Hanson without Eliezer”
Fwiw, I found Eliezer through Robin Hanson.
Yeah, I think this isn’t super rare, but overall still much less common than the reverse.
I’ll currently take your word for that because I haven’t been here nearly as long. I’ll mention that some of these contributions I don’t necessarily consider positive.
But the point is, is Yudkowsky a (major) contributor to a shared project, or is he a ruler directing others, as his quote suggests? How does he view himself? How do the different communities involved view him?
P.S. I disagree with whoever (strong-)downvoted your comment.
Yudkowsky often says he hopes people will form their own opinions instead of just listening to him; I can find references if you want.
I also think he lately finds it worrying that he’s got to be the responsible adult. Easy references: search for “Eliezer” in List of Lethalities.
I think this strengthens my point, especially given how it is written in the post you linked. Telling people you’re the responsible adult, or the only one who notices things, still means telling them you’re smarter than them and they should just defer to you.
I’m trying to account for my biases in these comments, but I encourage others to go to that post, search for “Eliezer” as you suggested, and form their own views.
Telling people you’re the responsible adult, or the only one who notices things, still means telling them you’re smarter than them and they should just defer to you.
Those are four very different claims. In general, I think it’s bad to collapse all (real or claimed) differences in ability into a single status hierarchy, for the reasons stated in Inadequate Equilibria.
Eliezer is claiming that other people are not taking the problem sufficiently seriously, not claiming ownership of it, not trying to form their own detailed models of the full problem, and not applying enough rigor and clarity to make real progress on it.
He is specifically not saying “just defer to me”, and in fact is saying that he and everyone else is going to die if people rely on deference here. A core claim in AGI Ruin is that we need more people with “not the ability to read this document and nod along with it, but the ability to spontaneously write it from scratch without anybody else prompting you”.
Deferring to Eliezer means that Eliezer is the bottleneck on humanity solving the alignment problem, which means we die. The thing Eliezer claims we need is a larger set of people who arrive at true, deep, novel insights about the problem on their own—without Eliezer even mentioning the insights, much less spending a ton of time trying to persuade anyone of them—and write them up.
It’s true that Eliezer endorses his current stated beliefs; this goes without saying, or he obviously wouldn’t have written them down. It doesn’t mean that he thinks humanity has any path to survival via deferring to him, or that he thinks he has figured out enough of the core problems (or could ever conceivably do so on his own) to give humanity a significant chance of surviving. Quoting AGI Ruin:
It’s guaranteed that some of my analysis is mistaken, though not necessarily in a hopeful direction. The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it[.]
The end of the “death with dignity” post is also alluding to Eliezer’s view that it’s pretty useless to figure out what’s true merely via deferring to Eliezer.
Thanks, those are some good counterpoints.
Eliezer is cleanly just a major contributor. If he went off the rails tomorrow, some people would follow him (and the community would be better with those few gone), but the vast majority would say “wtf is that Eliezer fellow doing”. I don’t think he sees himself as the leader of the community either.
Probably Eliezer likes Eliezer more than EA/Rationality likes Eliezer, because Eliezer really likes Eliezer. If I were as smart & good at starting social movements as Eliezer, I’d probably also have an inflated ego, so I don’t take it as too unreasonable of a character flaw.
More than Philip Tetlock (author of Superforecasting)?
Yes, definitely much more than Philip Tetlock, given that our community had strong norms of forecasting and making bets before Tetlock had done most of his work on the topic (Expert Political Judgment was out, but as far as I can tell it was not a major influence on people in the community, though I am not totally confident of that).
Does that particular quote from Yudkowsky not strike you as slightly arrogant?
I am generally strongly against a culture of fake modesty. If I want people to make good decisions, they need to be able to believe things about themselves that might sound arrogant to others. Yes, it sounds arrogant to an external audience, but it also seems true, and whether it is true should be the dominant factor in whether it is good to say.
FWIW I think “it was 20 years ago” is a good reason not to take these failed predictions too seriously, and “he has disavowed these predictions after seeing they were false” is a bad reason to discount them.