I think maybe people mistake the process of research, which takes time and hard work
I think you’re confusing “hard work” with the disclosure of wisdom.
Take a look at the history of philosophy and you’ll find plenty of hard work in medieval scholasticism, or in Marxist dialectical materialism. Heidegger was one of the greatest philosophers… and he was a Nazi and organized book burnings. Sartre supported Stalinism. They were true scholars who worked very hard, with extraordinary intellectual capacity and commendable academic careers.
Wisdom is something else entirely. It stems from an unbiased perspective and risks breaking with paradigms. “Effective Altruism” might be close to this. For the first time, there’s a movement for social change centered on a behavioral trait, detached from old traditions and political constraints.
This is a very strange critique. The claim that research takes hard work does not logically imply a claim that hard work is all you need for research. In other words, to say hard work is necessary for research (or for good research) does not imply it is sufficient. I certainly would never say that it is sufficient, although it is necessary.
Indeed, I explicitly discuss other considerations in this post, such as the “rigour and scrutiny” of the academic process and what I see as “the basics of good epistemic practice”, e.g. open-minded discussion with people who disagree with you. I talk about specific problems I see in academic philosophy research that have nothing to do with whether people are working hard enough or not. I also discuss how, from my point of view, ego concerns can get in the way, and love for research itself — and maybe I should have added curiosity — seems to be behind most great research. But, in any case, this post is not intended to give an exhaustive, rigorous account of what constitutes good research.
If picking examples of academic philosophers who did bad research or came to bad conclusions is intended to discredit the whole academic enterprise, I discussed that form of argument at length in the post and gave my response to it. (Incidentally, some members of the Bay Area rationalist community might see Heidegger’s participation in the Nazi Party and his involvement in book burnings as evidence that he was a good decoupler, although I would disagree with that as strongly as I could ever disagree about anything.)
I think accounting for bias is an important part of thinking and research, but I see no evidence that effective altruism is any better at being unbiased than anyone else. Indeed, I see many troubling signs of bias in effective altruist discourse, such as disproportionately valuing the opinion of other effective altruists and not doing much to engage seriously and substantively with the opinions of experts who are not affiliated with effective altruism.
I think effective altruism is as much attached to intellectual tradition and as much constrained by political considerations as pretty much anything else. No one can transcend the world with an act of will. We are all a part of history and culture.
I think accounting for bias is an important part of thinking and research, but I see no evidence that effective altruism is any better at being unbiased than anyone else. Indeed, I see many troubling signs of bias in effective altruist discourse, such as disproportionately valuing the opinion of other effective altruists and not doing much to engage seriously and substantively with the opinions of experts who are not affiliated with effective altruism.
Though I don’t claim that EAs are without bias, I think there’s lots of evidence that they have less bias than non-EAs. For instance, most EAs genuinely ask for feedback, often specifically asking for critical feedback, and typically update their opinions when they think it is justified. Compare the EA Forum with typical chat spaces. EAs also more often admit mistakes, such as on many EA-aligned organization websites. When making a case for funding something, EAs will often include reasons against funding it. EAs often include credences, which I think helps clarify things and promotes more productive feedback. EAs tend to have a higher standard of transparency. And EAs take counterfactuals seriously, rather than just trying to claim the highest number for impact.
These are all good points and genuine examples of virtuous behaviours often exemplified by people in EA, but we’ve got to ask what we’re comparing to. Typical chat spaces, like Reddit? Good grief, of course the EA Forum is better than that! But the specific point of comparison I was responding to was academia.
You said:

I see no evidence that effective altruism is any better at being unbiased than anyone else.
So that’s why I compared to non-EAs. But ok, let’s compare to academia. As you pointed out, there are many different parts of academia. I have been a graduate student or professor at five institutions, but in only two countries and only one field (engineering, though I have published some work outside of engineering). As I said in the other comment, academia is much more rigorously referenced than the EA Forum, but the disadvantage of this is that academia pushes you to be precisely wrong rather than approximately correct. P-hacking is a big deal in social sciences academia, but not really in engineering, and I think EAs are more aware of the issues than the average academic. Of course there’s a lot of diversity within EA and on the EA Forum.

Perhaps one comparison that could be made is the accuracy of domain experts versus superforecasters (many of whom are EAs, and superforecasting is similar to the common EA position of the better-calibrated generalist). I’ve heard people argue both sides, but I would say they have similar accuracy in most domains. I think that EA is quicker to update, one example being taking COVID seriously in January and February 2020 (at least in my experience, much more seriously than academia was taking it at the time). As for the XPT, I’m not sure how to characterize it, because I would guess that a higher percentage of the GCR experts were EAs than of the superforecasters. At least in AI, the experts had a better track record of predicting faster AI progress, which was generally the position of EAs.

As for getting to the truth in new areas, academia has some reputation for discussing strange new ideas, but I think EA is significantly more open to them. It has certainly been my experience publishing in AI and other GCR areas (and other people’s experience publishing in AI) that it’s very hard to find a journal that is a fit for strange new ideas (versus publishing in energy, for example, where I’ve published dozens of papers as well). I think this is an extremely important part of epistemics.

I think the best combination is subject matter expertise combined with techniques to reduce bias, and I think you get that combination with subject matter experts in EA. It’s true that many other EAs then defer to those EAs who have expertise in the particular field (maybe you disagree with what counts as subject matter expertise and association with EA/using techniques to reduce bias, but I would count people like Yudkowsky, Christiano, Shulman, Bostrom, Cotra, Kokotajlo, and Hanson (though he has long timelines)). So I guess I would expect that on the question of AI timelines, the average EA would be more accurate than the average academic in AI.
Yes, I said “anyone else”, but that was in the context of discussing academic research. But if we were to think more broadly and adjust for demographic variables like level of education (or years of education) and so on, as well as maybe a few additional variables like how interested someone is in science or economics, I don’t really believe that people in effective altruism would do particularly better in terms of reducing their own bias.
I don’t think people in EA are, in general or across the board, particularly good at reducing their bias. If you do something like bring up a clear methodological flaw in a survey question, there is a tendency for some people to circle the wagons and try to deflect or downplay the criticism rather than simply acknowledge the mistake and try to correct it.
I think some people (not all and not necessarily most) in EA sometimes (not all the time and not necessarily most of the time) criticize others for perceived psychological bias or poor epistemic practices and act intellectually superior, but then make these sorts of mistakes (or worse ones) themselves, and there’s often a lack of self-reflection or a resistance to criticism, disagreement, and scrutiny.
I worry that perceiving oneself as intellectually superior can lead to self-licensing, that is, people think of themselves as more brilliant and unbiased than everyone else, so they are overconfident in their views and overly dismissive of legitimate criticism and disagreement. They are also less likely to examine themselves for psychological bias and poor epistemic practices.
But what I just said about self-licensing is just a hunch. I worry that it’s true. I don’t know whether it’s true or not.
I have a very hard time believing that the average or median person in EA is more aware of issues like p-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don’t know why you would think that.
I’m not sure exactly what you were referencing Eliezer Yudkowsky as an example of — someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices, such as:
Expressing extremely strong views, getting proven wrong, and never discussing them again — not admitting he was wrong, not doing a post-mortem on what his mistakes were, just silence, forever
Responding to criticism of his ideas by declaring that the critic is stupid or evil (or similarly casting aspersions), without engaging in the object-level debate, or only engaging superficially
Being dismissive of experts in areas where he is not an expert, and not backing down from his level of extreme confidence and his sense of intellectual superiority even when he makes fairly basic mistakes that an expert in that area would not make
Thinking of himself as possibly the smartest person in the world, and definitely the smartest person in the world working on AI alignment, to whom nobody else is even close; and, going back many years, thinking of himself as, in that sense, the most important person in the world, and possibly the most important person who has ever lived, on whom the fate of the entire world personally and solely depends (in any other subject area, such as pandemics or asteroids, and in any serious, credible intellectual community outside the rationalist community or EA, this would be seen as a sign of complete delusion)
I have a very hard time believing that the average or median person in EA is more aware of issues like p-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don’t know why you would think that.
Maybe “aware” is not the right word now. But I do think that EAs updated more quickly that the replication crisis was a big problem. I think this is somewhat understandable, as academics have strong incentives to get a statistically significant result in order to publish papers, and they also have more faith in the peer review process. Even now, I would guess that EAs have more appropriate skepticism of social science results than the average social science academic.
I’m not sure exactly what you were referencing Eliezer Yudkowsky as an example of — someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices,
I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh. However, I do think he is much more aware of biases than the average academic, and has, at least historically, updated his opinions a lot, such as realizing early on that AI might not be all positive (and acknowledging that he had been wrong).
Thank you for clarifying your views and getting into the weeds.
But I do think that EAs updated more quickly that the replication crisis was a big problem. … Even now, I would guess that EAs have more appropriate skepticism of social science results than the average social science academic.
I don’t know how you would go about proving that to someone who (like me) is skeptical.
I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh.
The sort of problems with Yudkowsky’s epistemic practices that I’m referring to have existed for much longer than the last few years. Here’s an example from 2017. Another significant example, from around 2015-2017, is that he quietly changed his view from being skeptical of deep learning as a path to AGI (still leaning toward symbolic AI, or GOFAI, as the path to AGI) to being all-in on deep learning, but he never publicly explained why.[1] This couldn’t be more central to his life’s work, so that’s very odd.
This blog post from 2015 criticizes some of the irrationalities in Yudkowsky’s Sequences, which were written in 2006-2009.
If you go back to Yudkowsky’s even earlier writings from the late 1990s and early 2000s, some of the very same problems are there.
So, really, these are problems that go back at least 7 years or so, and arguably much longer than that, even as long as about 25 years.

Around 2015-2017, I talked to Yudkowsky about this in a Facebook group about AI x-risk, which is part of why I remember it so vividly.
Another example of how EA is less biased is EA-associated news sources. Improve the News is explicitly about separating fact from opinion, and Future Perfect and Sentinel news focus on more important issues, e.g. malaria and nuclear war, rather than plane crashes.
I am not familiar with the other two things you mentioned, but I’m very familiar with Future Perfect and overall I like it a lot. I think it was a good idea for Vox to start that.
But Future Perfect is a small subset of what Vox does overall, and what Vox does — mainly explainer journalism, which is important, and which Vox does well — is just one part of news overall.
Future Perfect is great, but it’s also kind of just publishing articles about effective altruism on a news site — not that there’s anything wrong with that — rather than an improvement on the news overall.
If you put the Centre for Effective Altruism or the attendees of the Meta Coordination Forum or the 30 people with the most karma on the EA Forum in charge of running the New York Times, it would be an utter disaster. Absolute devastation, a wreck, an institution in ruins.
Of course, I agree that EA contains extravagant, Byzantine, and biased approaches, influenced by all sorts of traditions. But there is one approach that is original, unique, and that opens a window for social change. In the world of conventional academic knowledge, there is nothing but highly intelligent people trying to build successful careers.
The critique of “undisciplined iconoclasm” is welcome. There is never enough improvement when there is so much to gain.

And “love” is a real phenomenon, a part of human behavior, that deserves analysis and understanding. It is not ornamental, nor a vague idealistic reference, nor a “reductio ad absurdum”.
I think many people like myself once detected something nearly divine in effective altruism’s emphasis on sacrificing personal consumption to help the world’s poorest people, not for any kind of recognition or external reward, but just to do it. That is an act of basically selfless love.
In the world of conventional academic knowledge, there is nothing but highly intelligent people trying to build successful careers.
This is a pretty weird thing to say. You understand that “academic knowledge” encompasses basically all of science, right? I know plenty of academics, and I can’t think of anyone I know IRL who is not committed to truthseeking, often with significantly more rigor than is found in effective altruism.
You understand that “academic knowledge” encompasses basically all of science, right?
Obviously, I was not referring to the empirical sciences, but, as is clear from the context, to the social sciences, which have a certain capacity to influence moral culture.
You have the impression that the work of academic professionals is rigorously focused on the truth. I think that there are some self-evident truths about social progress that are not currently being addressed in academia.
I don’t think that EA is a complete ideology today, but it is founded on a great novelty: conceiving of social change in terms of a trait of human behavior (altruism).
I’m not sure I’m able to follow anything you’re trying to say. I find your comments quite confusing.
I don’t agree with your opinion that academia is nothing but careerism and, presumably, that effective altruism is something more than that. I would say effective altruism and academia are roughly equally careerist and roughly equally idealistic. I also don’t agree that effective altruism is more epistemically virtuous than academia, or more capable of promoting social change, or anything like that.