I mean… it is true that Eliezer really did shape the culture in the direction of forecasting and predictions and that kind of stuff. My best guess is that without Eliezer, we wouldn’t have a culture of doing those things (and like, the AI Alignment community as is probably wouldn’t exist). You might disagree with me and him on this, in which case sure, update in that direction, but I don’t think it’s a crazy opinion to hold.
My best guess is that without Eliezer, we wouldn’t have a culture of [forecasting and predictions]
The timeline doesn’t make sense for this version of events at all. Eliezer was uninformed on this topic in 1999, at a time when Robin Hanson had already written about gambling on scientific theories (1990), prediction markets (1996), and other betting-related topics, as you can see from the bibliography of his Futarchy paper (2000). Before Eliezer wrote his sequences (2006-2009), the Long Now Foundation already had Long Bets (2003), and Tetlock had already written Expert Political Judgment (2005).
If Eliezer had not written his sequences, forecasting content would have filtered through to the EA community via Hanson’s contacts: for instance, through blogging by other GMU economists like Caplan (2009), and through Jason Matheny, who worked at FHI, where Hanson was an affiliate. Matheny ran the ACE program (2010), which led to the science behind Superforecasting, a book the EA community would certainly have discovered.
Hmm, I think these are good points. My best guess is that we wouldn’t have a strong connection to Hanson without Eliezer, though I agree that that kind of credit is harder to allocate (and it gets fuzzy what we even mean by “this community” as we extend into counterfactuals like this).
I do think the timeline here provides decent evidence in favor of allocating less credit (and, I think, against the stronger claim that “we wouldn’t have a culture of [forecasting and predictions] without Eliezer”). My guess is that in terms of causing that culture to take hold, Eliezer is probably still the single most responsible individual, though I do now expect (after having looked into a bunch of comment threads from 1996 to 1999 and seeing many familiar faces show up) that a lot of the culture would have shown up without Eliezer.
Speaking for myself, Eliezer has played no role in encouraging me to give quantitative probability distributions. For me, that was almost entirely due to people like Tetlock and Bryan Caplan, both of whom I would have encountered regardless of Eliezer. I strongly suspect this is true of lots of people who are in EA but don’t identify with the rationalist community.
More generally, I do think that Eliezer and other rationalists overestimate how much influence they have had on wider views in the community. E.g., I have not read the sequences, and I just don’t think they play a big role in the internal story of a lot of EAs.
For me, even people like Nate Silver or David MacKay, who aren’t part of the community, have played a bigger role in encouraging quantification and probabilistic judgment.
I’ll currently take your word for that because I haven’t been here nearly as long. I’ll mention that some of these contributions I don’t necessarily consider positive.
But the point is, is Yudkowsky a (major) contributor to a shared project, or is he a ruler directing others, like his quote suggests? How does he view himself? How do the different communities involved view him?
P.S. I disagree with whoever (strong-)downvoted your comment.
Yudkowsky often hopes people will form their own opinions instead of just listening to him; I can find references if you want.
I also think he lately finds it worrying that he’s got to be the responsible adult. Easy references: Search for “Eliezer” in List Of Lethalities.
I also think he lately finds it worrying that he’s got to be the responsible adult. Easy references: Search for “Eliezer” in List Of Lethalities
I think this strengthens my point, especially given how it is written in the post you linked. Telling people you’re the responsible adult, or the only one who notices things, still means telling them you’re smarter than them and they should just defer to you.
I’m trying to account for my biases in these comments, but I encourage others to go to that post, search for “Eliezer” as you suggested, and form their own views.
Telling people you’re the responsible adult, or the only one who notices things, still means telling them you’re smarter than them and they should just defer to you.
Those are four very different claims. In general, I think it’s bad to collapse all (real or claimed) differences in ability into a single status hierarchy, for the reasons stated in Inadequate Equilibria.
Eliezer is claiming that other people are not taking the problem sufficiently seriously: not claiming ownership of it, not trying to form their own detailed models of the full problem, and not applying enough rigor and clarity to make real progress on the problem.
He is specifically not saying “just defer to me”, and in fact is saying that he and everyone else is going to die if people rely on deference here. A core claim in AGI Ruin is that we need more people with “not the ability to read this document and nod along with it, but the ability to spontaneously write it from scratch without anybody else prompting you”.
Deferring to Eliezer means that Eliezer is the bottleneck on humanity solving the alignment problem, which means we die. The thing Eliezer claims we need is a larger set of people who arrive at true, deep, novel insights about the problem on their own (without Eliezer even mentioning the insights, much less spending a ton of time trying to persuade anyone of them) and who write them up.
It’s true that Eliezer endorses his current stated beliefs; this goes without saying, or he obviously wouldn’t have written them down. It doesn’t mean that he thinks humanity has any path to survival via deferring to him, or that he thinks he has figured out enough of the core problems (or could ever conceivably do so, on his own) to give humanity a significant chance of surviving. Quoting AGI Ruin:
It’s guaranteed that some of my analysis is mistaken, though not necessarily in a hopeful direction. The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it[.]
The end of the “death with dignity” post is also alluding to Eliezer’s view that it’s pretty useless to figure out what’s true merely via deferring to Eliezer.
Eliezer is cleanly just a major contributor. If he went off the rails tomorrow, some people would follow him (and the community would be better with those few gone), but the vast majority would say “wtf is that Eliezer fellow doing”. I don’t think he sees himself as the leader of the community either.
Probably Eliezer likes Eliezer more than EA/Rationality likes Eliezer, because Eliezer really likes Eliezer. If I were as smart & good at starting social movements as Eliezer, I’d probably also have an inflated ego, so I don’t take it as too unreasonable of a character flaw.
“speaking for myself, eliezer has played no role in encouraging me to give quantitative probability distributions”
This is my impression and experience as well.
“My best guess is that I don’t think we would have a strong connection to Hanson without Eliezer”
Fwiw, I found Eliezer through Robin Hanson.
Yeah, I think this isn’t super rare, but overall still much less common than the reverse.
Thanks, those are some good counterpoints.