I’m glad Ben shared this post!
I’m not sure how much I agree with the framework, or at least with the idea that we’re entering a third wave, but this seems like a useful tool/exercise.
Here’s one consideration that comes to mind as I think about whether we’re entering a third wave (written quickly, sorry in advance!).
I’ve got competing intuitions:
We tend to over-react to (or over-update on) changes that seem really huge but end up not affecting priorities or the status quo that much.
E.g. I think some events feel like they’ll have a big effect, but they’re actually just big in the news or on Twitter for a few weeks, and then everyone goes back to something pretty normal. Or relatedly, when something really bad happens and is covered by the news (e.g. an earthquake, or some form of violence): we might feel pressure to donate to a relevant charity, make a public statement, etc., when actually we should keep working on our mostly unrelated projects.
At the same time, I think we tend to under-react and are too slow to make important changes based on things happening in the world. It’s too easy to believe that everything is normal (while in reality, futures are wild). We’re probably attached to projects (don’t want to stare into the abyss) and probably dismiss some ideas/predictions as too weird without giving them enough consideration.
COVID is probably an important example here (people weren’t updating fast enough), and I can think of some other examples from my personal life.
My best guess (not resilient and pretty vague) is that we’re generally too slow to update on in-the-world changes (that aren’t about other people’s views or the like), and too quick to update on ~memes in our immediate surroundings or our information sources/networks. I tentatively think that (public) opinion does in fact change a lot, but those changes are generally slower, and that we should be cautious about thinking that opinion-like changes are big, since small/local changes can feel huge/permanent/global.
So: to the extent that the idea that we’re entering a third wave is based on the sense that AI safety concerns are going mainstream, I feel very unsure that we’re interpreting things correctly. We have decent (and not vibes-based) signals that AI safety is in fact going mainstream, but I’m still pretty unsure if things will go back to ~normal. Of course, other things have also changed; specific influential people seem to have gotten worried, it seems like governments are taking AI (existential) risk seriously, etc. — these seem less(?) likely to revert to normal (although I’m just guessing, again). I imagine that we can look at past case studies of this and get very rough ~base rates, potentially — I’d be very interested.
(I have some other concerns about using/believing this model, but just wanted to outline one for now.)
I’ll also share some notes/comments I added on a slightly earlier draft. I haven’t read the comments carefully, so at least some of this is probably redundant.
Some other possible “third waves” (very quick brainstorm)
Attempting to stay relevant: AI safety blows up; EA still has a lot of people who have been thinking about AI safety for a long time and feel like they should be contributing, but they don’t catch on to the fact that they’re now a ~100x smaller fraction of the field and no longer the biggest players. (It also seems possible that they’re the experts and suddenly have lots of work, but that doesn’t seem like a certain thing.)
EA grows: AI attention brings a lot of attention to EA somehow, and EA grows a bunch through unusual pathways (unusual for us), everything else is similar (maybe this is the 4th wave somehow — hinges on something that hasn’t happened). The main updates are about the size of the movement/network (what would EA look like if it had 20x more people?), and its composition (later-career folks, etc.)
“Effective AIS”: Little changes from the status quo from EA’s POV, except that AI safety is big outside of EA, but most of that work is ~ineffective for one reason or another. At the same time, there’s a fair amount of funding for “effective AI safety” work (possibly something similar to what happens with effective climate work).
I.e. a lot of stuff gets “AI safety” that’s not really AI safety (or is just not great). But big donors are interested in AI (existential) safety and there are people in ~EA-adjacent spaces who are attracted to EAxAIS because of competence and reasonableness of arguments; donors are excited about funding this kind of thing. We need to work on making work like this legible. We need a version of Longview/FP but for AI safety.
Alternatively: AI safety becomes super politicized and people don’t want to work with AI companies, so EAs are the only ones doing that.
Alternatively: AI safety (in the popular understanding) becomes very strongly about something like copyright issues/bias/unemployment (“we shouldn’t be distracted from the real problems today”)
Etc.
“Back to normal+puddles”: Attention on AI safety passes. Things are very similar except there’s a ~quiet and occasionally noisy archipelago of AI-safety-oriented communities/projects (think puddles after a storm).
Some people think that EA is “the AI safety thing” and confuse EA with that (like they still do with earning to give sometimes).
The “third wave” might be prompted by something that isn’t AI-related. Some possible scenarios:
Something potentially FTX-related leads to the EA brand becoming toxic.
The EA network sees a schism along something like GHD vs. non-GHD, “longtermism” vs not, “weird” vs not, etc.
Alternatively, there’s an overall fracturing into loosely-grouped and loosely-networked focus areas, like effective GHD, effective FAW, WAW, maybe pandemic preparedness, AI safety, AI governance, ~cause prioritization research, etc. Some organizations and groups focus on letting donors evaluate projects across a wide space given their priorities and philosophies (or giving career advice).
EA has just grown too big to be useful to coordinate around, and we’re seeing what looks like the beginnings of a ~healthy fracturing (which in reality might be past the point of no return); we’re shifting to a model where there are cause-specific communities that are friendly to each other, and some orgs work across them and keep an eye on them, etc.
Stuff that might happen that could change things fast
Big war
Big politicization moment of AI, or AI safety becomes very strongly about something like copyright issues/bias/unemployment
Really scary AI thing that makes people really freaked out
Something weird happens with labs (e.g. the government does something strange) and they become super uncooperative?
New pandemic
Significantly more bad press about EA
Huge endorsement of EA somehow / viral moment
~Research becomes automated
Etc.
I like the distinction between overreacting and underreacting as being about “in the world” vs. “memes”; another way of saying this is something like “object-level reality” vs. “social reality”.
If the longtermism wave is real, then it was pretty much about social reality, at least within EA, and it changed how money was spent and the things people said (as I understand it; I wasn’t really socially involved at the time).
So to the extent that this is about “what’s happening to EA”, I think there’s clearly a third wave here: people are running, and getting funded to run, AI-specific groups, and people are doing policy and advocacy in a way I’ve never seen before.
If this ends up being a flash in the pan, then maybe the way to see it is as something like a “trend” or “fad”, like maybe 2022 spending was.
Which maybe brings me to something like: we might want these waves to consistently be about either “what’s happening in EA” or “what’s happening in the world”, and they’re currently not.