I feel that something went wrong, epistemically, but I’m not entirely sure what it was.
My memory is that, a few years ago, there was a strong feeling within the longtermist portion of the EA community that reducing AI risk was far and away the most urgent problem. I remember there being a feeling that the risk was very high, that short timelines were more likely than not, and that the emergence of AGI would likely be a sudden event. I remember it being an open question, for example, whether it made sense to encourage people to get ML PhDs, since, by the time they graduated, it might be too late. There was also, in my memory, a sense that all existing criticisms of the classic AI risk arguments were weak. It seemed plausible that the longtermist EA community would pretty much just become an AI-focused community. Strangely, I’m a bit fuzzy on what my own views were, but I think they were at most only a bit out of step.
This might be an exaggerated memory. The community is also, obviously, large enough for my experience to be significantly non-representative. (I’d be interested in whether the above description resonates with anyone else.) But, in any case, I am pretty confident that there’s been a real shift in average views over the past three years: credences in discontinuous progress and very short timelines have decreased; people’s concerns about AI have become more diverse; a broad portfolio approach to longtermism has become more popular; and, overall, there’s less of a doom-y vibe.
One explanation for the shift, if it’s real, is that the community has been rationally and rigorously responding to available evidence, and the available evidence has simply changed. I don’t think this could be the whole explanation, though. As I wrote in response to another question, many of the arguments for continuous AI progress, which seem to have had a significant impact over the past couple years, could have been published more than a decade ago—and, in some cases, were. An awareness of the differences between the ML paradigm and the “good-old-fashioned-AI” (GOFAI) paradigm has been another source of optimism, but ML had already largely overtaken GOFAI by the time Superintelligence was published. I also don’t think that much novel evidence for long timelines has emerged over the past few years, beyond the fact that we still don’t have AGI.
It’s possible that the community’s updated views, including my own updated views, are wrong: but even in this case, there needs to have been an epistemic mishap somewhere down the line. (The mishap would just be more recent.) I’m unfortunately pretty unsure of what actually happened. I do think that more energy should have gone into critiquing the classic AI risk arguments, porting them into the ML paradigm, etc., in the few years immediately after Superintelligence was published, and I do think that there’s been too much epistemic deference within the community. As Asya pointed out in a comment on this post, I think that misperception has also been an important issue: people have often underestimated how much uncertainty and optimism prominent community members actually have about AI risk. Another explanation—although this isn’t a very fundamental explanation—is that, over the past few years, many people with less doom-y views have entered the community and had an influence. But I’m still confused, overall.
I think that studying and explaining the evolution of views within the community would be an interesting and valuable project in its own right.
[[As a side note, partly in response to the comment below: It’s possible that the community has still made pretty much the right prioritization decisions over the past few years, even if there have been significant epistemic mistakes. Especially since AI safety/governance were so incredibly neglected in 2017, I’m less confident that the historical allocation of EA attention/talent/money to AI risk has actually substantially overshot the optimal level. We should still be nervous, though, if it turns out that the right decisions were made despite significantly miscalibrated views within the community.]]
I’d be interested in whether the above description resonates with anyone else.
FWIW, it mostly doesn’t resonate with me. (Of course, my experience is no more representative than yours.) Like you, I’d be curious to hear from more people.
I think what matches my impression most is that:
There has been a fair amount of arguably dysfunctional epistemic deference (more at the very end of this comment); and
Concerns about AI risk have become more diverse. (Though I think even this has been a mix of some people, such as Allan Dafoe, raising genuinely new concerns, and others, such as Paul Christiano, explaining more publicly the concerns which, for all I know, they’ve always had.)
On the other points, my impression is that if there were consistent and significant changes in views, they must have happened mostly among people I rarely interact with personally, or more than three years ago.
One shift in views that has had major real-world consequences is Holden Karnofsky, and by extension Open Phil, taking AI risk more seriously. He posted about this in September 2016, so presumably he changed his mind over the months prior to that.
I started to engage more deeply with public discussions on AI risk, and had my first conversations with EA-ish researchers in the area, in mid 2016. As far as I can remember, the main contours of the views prominent today were already discernible then. (Of course, a lot of detail has been added since. E.g. today I encounter people who make fairly specific claims about how, say, GPT-3 is evidence for TAI soon, which obviously wasn’t possible in 2016. Though people did talk about AlphaGo when it came out.) E.g. there was a “MIRI view” on the one hand, and Paul Christiano’s writing on prosaic AI alignment and IDA on the other. And the Concrete Problems in AI Safety paper appeared. Key writings on issues such as takeoff speeds (e.g. Superintelligence, Yudkowsky’s Intelligence Explosion Microeconomics, the Yudkowsky-Hanson FOOM debate, or some of Brian Tomasik’s posts) are even more dated. I didn’t get the impression that any one view was particularly dominant.
Already in summer 2017, I witnessed a lot of talk about how the “Bostrom/Yudkowsky model of AI risk” had been replaced by something else, including by staff at key organizations and at the Leaders Forum. Note that this must refer to developments that happened a year before more publicly visible signs such as Paul Christiano’s post on takeoff speeds from February 2018. Similarly, Daniel Dewey’s post on his reservations about some of MIRI’s research appeared in summer 2017, which I think is ample evidence of fundamental disagreements on AI risk among people at key organizations; and again, that post is surely based on epistemic trajectories dating back even further.
In late 2017 / early 2018, at an AI-strategy-focused event which I think we both attended, I don’t recall that short timelines, rapid takeoff, or ‘sudden emergence’ were particularly common views.
I know people who are skeptical about the value of ML PhDs for unrelated reasons, but I don’t recall anyone seriously suggesting there might not be enough time to finish a PhD before AGI appears. (I only recall a joke to the opposite effect—i.e. saying there will be time to finish a PhD—with which Demis Hassabis dodged a question on his AI timelines on a panel at EAGx Oxford 2016.) [Though we both know a senior researcher whose median timelines come close to that implication, and I don’t think their timelines became any longer over the last 3 years, again contra the trend you perceived.]
Most people I can think of who in 2017 had an at least minimally considered view on questions such as probability of doom, takeoff speed, polarity, timelines, and which AI safety agendas are promising still hold roughly the same view as far as I can tell. E.g. I recall one influential AI safety researcher who in summer 2017 gave what I thought were extremely short timelines, and in 2018 they told me their timelines had become even shorter. I also don’t think I have changed my views significantly—they do feel more nuanced, but my bottom line on e.g. timelines or the probability of different scenarios hasn’t changed significantly as far as I can remember.
My impression is that there hasn’t been so much a shift in views within individual people as an influx of a younger generation who tends to have an ML background and, roughly speaking, tends to agree more with Paul Christiano than with MIRI. Some of them are now somewhat prominent themselves (e.g. Rohin Shah, Adam Gleave, you), and so the distribution of views among the set of perceived “AI risk thought leaders” has changed. But arguably this is a largely sociological phenomenon (e.g. due to prominent ML successes there are just way more people with an ML background in general). [ETA: As Rohin notes, neither he, Paul, nor Adam had an ML background when they decided which kind of AI safety research to focus on—instead, they switched to ML because they thought it was the more promising approach. So the suggested sociological explanation fails in at least their cases.]
More broadly, my impression is that for years there have been intractable disagreements on several fundamental questions regarding AI risk, that there hasn’t been much progress on resolving them, that few people have changed their minds in major ways, and that sometimes people holding different views have mostly stopped talking to each other. E.g. I’ve shared an office for months with people who hold views I think are really off, but I’ve never talked to them about it; and more broadly, I think we both know that even within just FHI there is an arguably extreme spread of views on issues pertaining to AI risk and longtermism/macrostrategy more generally.
(NB I don’t think this is necessarily bad. When disagreements prove intractable, it might be best if different groups make different bets and pursue their agendas separately. It might also not be that unusual for cases without decisive uncontroversial evidence, e.g. I’m sure there are protracted and intractable disagreements between, say, Keynesian and neoclassical economists or proponents of different quantum gravity theories.)
At the other extreme, I’ve seen dozens of collective person-hours being invested into experimenting with social technologies (e.g. certain ways of “facilitating” conversations) that were supposed to help people with different views understand each other, and to transmit some of that understanding to an audience of spectators. (I thought these were poorly executed and largely failures, but other thoughtful people seemed to disagree and expressed an eagerness to invest much more time into similar activities.)
I do recall instances of what I thought constituted exaggerated epistemic deference, especially in 2016 and to some extent 2017. Some of them were, I think, quite bizarre, with people essentially engaging in a long exegesis of brief, cryptic remarks that someone they knew had relayed as something someone else had heard attributed to some presumed epistemic authority. Sometimes it wasn’t even clear who the supposed source of some information was; e.g. I recall a period where people around me were abuzz that “people at OpenAI had short timelines”, with both the identities of these people and the question of just how short their timelines were remaining unclear. Usually I think it would have been more productive for the participants (myself included) to take an online course in ML, to google for some relevant factual information, or to try to make their thoughts more precise by writing them down.
(Again, some amount of epistemic deference is of course healthy. And more specifically it does seem correct to give more weight to people who have more relevant expertise or experience.)
My experience matches Ben’s more than yours.

My impression is that there hasn’t been so much a shift in views within individual people as an influx of a younger generation who tends to have an ML background and, roughly speaking, tends to agree more with Paul Christiano than with MIRI. Some of them are now somewhat prominent themselves (e.g. Rohin Shah, Adam Gleave, you), and so the distribution of views among the set of perceived “AI risk thought leaders” has changed.
None of the people you named had an ML background. Adam and I have CS backgrounds (before we joined CHAI, I was a PhD student in programming languages, while Adam worked in distributed systems iirc). Ben is in international relations. If you were counting Paul, he did a CS theory PhD. I suspect all of us chose the “ML track” because we disagreed with MIRI’s approach and thought that the “ML track” would be more impactful.
(I make a point out of this because I sometimes hear “well if you started out liking math then you join MIRI and if you started out liking ML you join CHAI / OpenAI / DeepMind and that explains the disagreement” and I think that’s not true.)
I don’t recall anyone seriously suggesting there might not be enough time to finish a PhD before AGI appears.
I’ve heard this (might be a Bay Area vs. Europe thing).
None of the people you named had an ML background. Adam and I have CS backgrounds (before we joined CHAI, I was a PhD student in programming languages, while Adam worked in distributed systems iirc). Ben is in international relations. If you were counting Paul, he did a CS theory PhD. I suspect all of us chose the “ML track” because we disagreed with MIRI’s approach and thought that the “ML track” would be more impactful.
Thanks, this seems like an important point, and I’ll edit my comment accordingly. I think I had been aware of at least Paul’s and your backgrounds, but made a mistake by not thinking of this and not distinguishing between your prior backgrounds and what you’re doing now.
(Nitpick: While Ben is doing an international relations PhD now, I think his undergraduate degree was in physics and philosophy.)
I still have the impression there is a larger influx of people with ML backgrounds, but my above comment overstates that effect, and in particular it seems clearly false to suggest that Adam / Paul / you preferring ML-based approaches has a primarily sociological explanation (which my comment at least implicitly does).
(Ironically, I have long been skeptical of the value of MIRI’s agent foundations research, and more optimistic about the value of ML-based approaches to AI safety and Paul’s IDA agenda in particular, though I’m not particularly qualified to make such assessments (certainly less so than e.g. Adam and you), and my background is in pure maths rather than ML. That maybe could have tipped me off …)
This Robin Hanson quote is perhaps also evidence for a shift in views on AI risk, somewhat contra my above comment, though neutral on the “people changed their minds vs. new people have different views” and “when exactly did it happen?” questions:
Back when my ex-co-blogger Eliezer Yudkowsky and I discussed his AI risk concerns here on this blog (concerns that got much wider attention via Nick Bostrom’s book), those concerns were plausibly about a huge market failure. Just as there’s an obvious market failure in letting someone experiment with nuclear weapons in their home basement near a crowded city (without holding sufficient liability insurance), there’d be an obvious market failure from letting a small AI team experiment with software that might, in a weekend, explode to become a superintelligence that enslaved or destroyed the world. [...]
But when I read and talk to people today about AI risk, I mostly hear people worried about local failures to control local AIs, in a roughly competitive world full of many AI systems with reasonably strong property rights. [...]
(I expect many people worried about AI risk think that Hanson, in the above quote and elsewhere, misunderstands current concerns. But perceiving some change seems easier than correctly describing the target of the change, so arguably the quote is evidence for change even if you think it misunderstands current concerns.)
I think that instead of talking about potential failures in the way the EA community prioritized AI risk, it might be better to talk about something more concrete, e.g.
The views of the average EA
How much money was given to AI
How many EAs shifted their careers to be AI-focused as opposed to something else that deserved more EA attention
If we think there were mistakes in the concrete actions people have taken, e.g. mistaken funding decisions or mistaken career changes (I’m not sure that there were), we should look at the process that led to those decisions and address that process directly.
Targeting ‘the views of the average EA’ seems pretty hard. I do think it might be important, because it has downstream effects on things like recruitment, external perception, funding, etc. But then I think we need to have a story for how we affect the views of the average EA (as Ben mentions). My guess is that we don’t have a story like that, and that’s a big part of ‘what went wrong’—the movement is growing in a chaotic way that no individual is responsible for, and that can lead to collectively bad epistemics.
‘Encouraging EAs to defer less’ and ‘expressing more public uncertainty’ could be part of the story for helping the average EA have better views. It also seems possible to me that we want some kind of centralized official source for presenting EA beliefs that keeps up to date the best case for and against certain views (though this obviously has its own issues). Then we can be more sure that people have come to their views after being exposed to alternatives, and we can have something concrete to point to when we worry that there hasn’t been enough criticism.
I think that studying and explaining the evolution of views within the community would be an interesting and valuable project in its own right.
I second this. I think Halstead’s question is an excellent one and finding an answer to it is hugely important. Understanding what went wrong epistemically (or indeed if anything did in fact go wrong epistemically) could massively help us going forward.
I wonder how we get the ball rolling on this...?