I think accounting for bias is an important part of thinking and research, but I see no evidence that effective altruism is any better at being unbiased than anyone else. Indeed, I see many troubling signs of bias in effective altruist discourse, such as disproportionately valuing the opinion of other effective altruists and not doing much to engage seriously and substantively with the opinions of experts who are not affiliated with effective altruism.
Though I don't claim that EAs are without bias, I think there's lots of evidence that they have less bias than non-EAs. For instance, most EAs genuinely ask for feedback, often specifically asking for critical feedback, and typically update their opinions when they think it is justified. Compare the EA Forum with typical chat spaces. EAs also more often admit mistakes, for example on many EA-aligned organizations' websites. When making a case for funding something, EAs will often include reasons for not funding it. EAs often include credences, which I think helps clarify things and promotes more productive feedback. EAs tend to have a higher standard of transparency. And EAs take counterfactuals seriously, rather than just trying to claim the highest number for impact.
These are all good points and genuine examples of virtuous behaviours often exemplified by people in EA, but we've got to ask what we're comparing to. Typical chat spaces, like Reddit? Good grief, of course the EA Forum is better than that! But the specific point of comparison I was responding to was academia.
I see no evidence that effective altruism is any better at being unbiased than anyone else.
So that's why I compared to non-EAs. But ok, let's compare to academia. As you pointed out, there are many different parts of academia. I have been a graduate student or professor at five institutions, but in only two countries and only one field (engineering, though I have published some outside of engineering). As I said in the other comment, academia is much more rigorously referenced than the EA Forum, but the disadvantage of this is that academia pushes you to be precisely wrong rather than approximately correct. P-hacking is a big deal in social-science academia, but not really in engineering, and I think EAs are more aware of the issues than the average academic. Of course there's a lot of diversity within EA and on the EA Forum.

Perhaps one comparison that could be made is the accuracy of domain experts versus superforecasters (many of whom are EAs, and it's similar to the common EA notion of a better-calibrated generalist). I've heard people argue both sides, but I would say they have similar accuracy in most domains. I think that EA is quicker to update, one example being taking COVID seriously in January and February 2020; at least in my experience, EAs took it much more seriously than academia did. As for the XPT, I'm not sure how to characterize it, because I would guess that a higher percentage of the GCR experts were EAs than of the superforecasters. At least in AI, the experts had a better track record of predicting faster AI progress, which was generally the position of EAs.

As for getting to the truth in new areas, academia has some reputation for discussing strange new ideas, but I think EA is significantly more open to them. It has certainly been my experience publishing in AI and other GCR areas (and other people's experience publishing in AI) that it's very hard to find a journal fit for strange new ideas (versus publishing in energy, for example, where I've also published dozens of papers). I think this is an extremely important part of epistemics.

I think the best combination is subject matter expertise combined with techniques to reduce bias, and I think you get that combination with subject matter experts in EA. It's true that many other EAs then defer to those EAs who have expertise in the particular field (maybe you disagree with what counts as subject matter expertise and association with EA/use of techniques to reduce bias, but I would count people like Yudkowsky, Christiano, Shulman, Bostrom, Cotra, Kokotajlo, and Hanson (though he has long timelines)). So I would expect that on the question of AI timelines, the average EA would be more accurate than the average academic in AI.
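For what it's worth, the accuracy comparisons I have in mind here are typically scored with something like the Brier score (mean squared error of probability forecasts against what actually happened). A minimal sketch, with entirely made-up forecasts and outcomes, purely to illustrate how such a comparison is scored:

```python
# Minimal Brier-score sketch. The forecasts and outcomes below are made up
# purely to illustrate how forecasting accuracy comparisons are scored.
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 0, 1, 0]                     # what actually happened
superforecasters = [0.7, 0.2, 0.3, 0.6, 0.1]   # hypothetical generalist credences
domain_experts = [0.9, 0.4, 0.1, 0.5, 0.3]     # hypothetical expert credences

print("superforecasters:", brier_score(superforecasters, outcomes))
print("domain experts:  ", brier_score(domain_experts, outcomes))
```

Lower is better; a forecaster who always answers 0.5 scores 0.25 regardless of outcomes, so anything well below that reflects genuine calibration and discrimination.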
Yes, I said 'anyone else', but that was in the context of discussing academic research. But if we were to think more broadly and adjust for demographic variables like level of education (or years of education) and so on, as well as maybe a few additional variables like how interested someone is in science or economics, I don't really believe that people in effective altruism would do particularly better in terms of reducing their own bias.
I don't think people in EA are, in general or across the board, particularly good at reducing their bias. If you do something like bring up a clear methodological flaw in a survey question, there is a tendency for some people to circle the wagons and try to deflect or downplay the criticism rather than simply acknowledge the mistake and try to correct it.
I think some people (not all and not necessarily most) in EA sometimes (not all the time and not necessarily most of the time) criticize others for perceived psychological bias or poor epistemic practices and act intellectually superior, but then make these sorts of mistakes (or worse ones) themselves, and there's often a lack of self-reflection or a resistance to criticism, disagreement, and scrutiny.
I worry that perceiving oneself as intellectually superior can lead to self-licensing, that is, people think of themselves as more brilliant and unbiased than everyone else, so they are overconfident in their views and overly dismissive of legitimate criticism and disagreement. They are also less likely to examine themselves for psychological bias and poor epistemic practices.
But what I just said about self-licensing is just a hunch. I worry that it's true. I don't know whether it's true or not.
I have a very hard time believing that the average or median person in EA is more aware of issues like P-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don't know why you would think that.
I'm not sure exactly what you were referencing Eliezer Yudkowsky as an example of. Someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices, such as:
Expressing extremely strong views, getting proven wrong, and never discussing them again: not admitting he was wrong, not doing a post-mortem on what his mistakes were, just silence, forever
Responding to criticism of his ideas by declaring that the critic is stupid or evil (or similarly casting aspersions), without engaging in the object-level debate, or only engaging superficially
Being dismissive of experts in areas where he is not an expert, and not backing down from his level of extreme confidence and his sense of intellectual superiority even when he makes fairly basic mistakes that an expert in that area would not make
Thinking of himself as possibly literally the smartest person in the world, and definitely the smartest person in the world working on AI alignment, to whom nobody else is even close; and, going back many years, thinking of himself as, in that sense, the most important person in the world, and possibly the most important person who has ever lived, because the fate of the entire world depends on him, personally, and only him (in any other subject area, such as pandemics or asteroids, and in any serious, credible intellectual community outside the rationalist community or EA, this would be seen as a sign of complete delusion)
I have a very hard time believing that the average or median person in EA is more aware of issues like P-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don't know why you would think that.
Maybe 'aware' is not the right word, then. But I do think that EAs updated more quickly on the replication crisis being a big problem. I think this is somewhat understandable, as academics have strong incentives to get a statistically significant result in order to publish papers, and they also have more faith in the peer review process. Even now, I would guess that EAs have more appropriate skepticism of social-science results than the average social-science academic.
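To make that incentive concrete, here is a minimal simulation sketch (with hypothetical numbers) of the selective-reporting pattern behind P-hacking: test enough pure-noise variables against an outcome and a few will come out 'statistically significant' by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 100    # hypothetical sample size
n_predictors = 40   # many candidate variables, all pure noise

# Outcome and predictors are independent noise, so any "effect" is spurious.
outcome = rng.normal(size=n_subjects)
predictors = rng.normal(size=(n_predictors, n_subjects))

# Test each predictor separately and keep only the "significant" ones.
p_values = [stats.pearsonr(x, outcome)[1] for x in predictors]
significant = [p for p in p_values if p < 0.05]

print(f"{len(significant)} of {n_predictors} noise predictors reach p < 0.05")
# Roughly 5% pass by chance alone; reporting only those is the
# selective-reporting pattern usually called P-hacking.
```

A paper that reports only the handful of chance 'hits' can look rigorous while containing no signal at all, which is why the incentive to find significance matters so much.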
I'm not sure exactly what you were referencing Eliezer Yudkowsky as an example of. Someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices,
I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh. However, I do think he is much more aware of biases than the average academic, and has, at least historically, updated his opinions a lot, such as early on realizing that AI might not be all positive (and recognizing he was wrong).
Thank you for clarifying your views and getting into the weeds.
But I do think that EAs updated more quickly on the replication crisis being a big problem. … Even now, I would guess that EAs have more appropriate skepticism of social-science results than the average social-science academic.
I don't know how you would go about proving that to someone who (like me) is skeptical.
I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh.
The sorts of problems with Yudkowsky's epistemic practices that I'm referring to have existed for much longer than the last few years. Here's an example from 2017. Another significant example, from around 2015-2017, is that he quietly changed his view on the path to AGI, from being skeptical of deep learning and still leaning toward symbolic AI or GOFAI to being all-in on deep learning, but never publicly explained why.[1] This couldn't be more central to his life's work, so that's very odd.
This blog post from 2015 criticizes some of the irrationalities in Yudkowsky's Sequences, which were written in 2006-2009.
If you go back to Yudkowsky's even earlier writings from the late 1990s and early 2000s, some of the very same problems are there.
So, really, these are problems that go back at least 7 years or so, and arguably much longer than that, even as long as about 25 years.
[1] Around 2015-2017, I talked to Yudkowsky about this in a Facebook group about AI x-risk, which is part of why I remember it so vividly.
Another example of how EA is less biased is the EA-associated news sources. Improve the News is explicitly about separating fact from opinion, and Future Perfect and Sentinel news focus on more important issues, e.g. malaria and nuclear war, rather than plane crashes.
I am not familiar with the other two things you mentioned, but I'm very familiar with Future Perfect and overall I like it a lot. I think it was a good idea for Vox to start that.
But Future Perfect is a small subset of what Vox does overall, and what Vox does (mainly explainer journalism, which is important, and which Vox does well) is just one part of news overall.
Future Perfect is great, but it's also kind of just publishing articles about effective altruism on a news site (not that there's anything wrong with that) rather than an improvement on the news overall.
If you put the Centre for Effective Altruism or the attendees of the Meta Coordination Forum or the 30 people with the most karma on the EA Forum in charge of running the New York Times, it would be an utter disaster. Absolute devastation, a wreck, an institution in ruins.