‘Here’s one data point I can offer from my own life: Through a mixture of college classes and other reading, I’m pretty confident I had already encountered the heuristics and biases literature, Bayes’ theorem, Bayesian epistemology, the ethos of working to overcome bias, arguments for the many worlds interpretation, the expected utility framework, population ethics, and a number of other ‘rationalist-associated’ ideas before I engaged with the effective altruism or rationalist communities.’
I think some of this is just a result of the community being founded partly by analytic philosophers (though as a philosopher, I would say that!).
I think it’s normal to encounter some of these ideas in undergrad philosophy programs. At my undergrad back in 2005-09 there was a whole upper-level undergraduate course in decision theory. I don’t think that’s true everywhere all the time, but I’d be surprised if it was wildly unusual. I can’t remember if we covered population ethics in any class, but I do remember discovering Parfit on the Repugnant Conclusion in my 2nd year of undergrad because one of my ethics lecturers said Reasons and Persons was a super-important book.

In terms of the Oxford phil scene where the term “effective altruism” was born, the main titled professorship in ethics at that time was held by John Broome, a utilitarianism-sympathetic former economist who had written famous stuff on expected utility theory. I can’t remember if he was the PhD supervisor of anyone important to the founding of EA, but I’d be astounded if some of the phil. people involved in that had not been reading his stuff and talking to him about it. Most of the phil. physics people at Oxford were gung-ho for many worlds; it’s not a fringe view in philosophy of physics as far as I know. (Though I think Oxford was kind of a centre for it and there was more dissent elsewhere.)

As far as I can tell, Bayesian epistemology, in at least some senses of that term, is a fairly well-known approach in philosophy of science. Philosophers specializing in epistemology might more often ignore it, but they know it’s there. And not all of them ignore it! I’m not an epistemologist, but my doctoral supervisor was, and it’s not unusual for his work to refer to Bayesian ideas in modelling how to evaluate evidence. (E.g. in, uhm, defending the fine-tuning argument for the existence of God, which might not be the best use, but still!: https://www.yoaavisaacs.com/uploads/6/9/2/0/69204575/ms_for_fine-tuning_fine-tuning.pdf). (John was my supervisor, not Yoaav.)
A high interest in bias stuff might genuinely be more of an Eliezer/LessWrong legacy, though.
‘…the main titled professorship in ethics at that time was held by John Broome, a utilitarianism-sympathetic former economist who had written famous stuff on expected utility theory. I can’t remember if he was the PhD supervisor of anyone important to the founding of EA, but I’d be astounded if some of the phil. people involved in that had not been reading his stuff and talking to him about it.’
Indeed, Broome co-supervised the doctoral theses of both Toby Ord and Will MacAskill. And Broome was, in fact, the person who advised Will to get in touch with Toby, before the two had met.
Speaking for myself, I was interested in a lot of the same things in the LW cluster (Bayes, approaches to uncertainty, human biases, utilitarianism, philosophy, avoiding the news) before I came across LessWrong or EA. The feeling was much more “I found people who can describe these ideas well” than “oh, these are interesting and novel ideas to me.” (I had the same realization when I learned about utilitarianism: much more a feeling of “this is the articulation of clearly correct ideas; believing otherwise seems dumb.”)
That said, some of the ideas on LW that seemed more original to me (AI risk, logical decision theory stuff, heroic responsibility in an inadequate world) do seem both substantively true and extremely important, and it took me a long time to be convinced of this.
(There are also other ideas that I’m less sure about, like cryonics and many worlds.)
Veering entirely off-topic here, but how does the many worlds hypothesis tie in with all the rest of the rationality/EA stuff?
[replying only to you with no context]
EY pointed out the many worlds hypothesis as a thing that even modern science, specifically physics (which is considered a very well-functioning science; it’s not like social psychology), is missing.
And he used this as an example to get people to stop trusting authority, including modern science, which many people around him seem to trust.
I think this is a reasonable reference.
Can’t say any of that makes sense to me. I have the feeling there’s some context I’m totally missing (or he’s just wrong about it). I may ask you about this in person at some point :)