I’ve been thinking about coup risks more lately so would actually be pretty keen to collaborate or give feedback on any early stuff. There isn’t much work on this (for example, none at RAND as far as I can tell).
I think EAs have frequently suffered from a lack of expertise, which causes pain in areas like politics. Almost every EA and AI safety person was way off on the magnitude of change a Trump win would create—gutting USAID easily dwarfs all of EA global health by orders of magnitude. Basically no one took this seriously as a possibility, or at least I do not know of anyone. And it’s not like you’d normally be incentivized to plan for abrupt major changes to a longstanding status quo in the first place.
Oversimplification of neglectedness has definitely been an unfortunate meme for a while. Sometimes things are too neglected to make progress, don’t make sense for your skillset, are neglected for a reason, or are just less impactful. To a lesser extent, I think there has also been some misuse/misunderstanding of counterfactual impact in places where Shapley-style credit assignment would be more appropriate. Or being overly optimistic that “our few-week fellowship can very likely change someone’s entrenched career path” when the participant hasn’t strongly shown that as their purpose for participating.
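To illustrate the counterfactual-vs-Shapley point, here is a minimal sketch with made-up numbers (a hypothetical two-actor project, not anyone’s actual impact model): when two actors are both needed for most of a project’s value, naive counterfactual credit lets each claim nearly the whole value, so the credits double-count; Shapley values split the credit so it sums to the total.

```python
from itertools import permutations

# Toy value function: value produced by each coalition of actors.
# Hypothetical numbers: A and B are each worth little alone, a lot together.
value = {
    frozenset(): 0,
    frozenset({"A"}): 10,
    frozenset({"B"}): 10,
    frozenset({"A", "B"}): 100,
}

players = ["A", "B"]

# Naive counterfactual credit: value with everyone minus value without you.
# Both A and B claim 90, so "credit" sums to 180 for a 100-value project.
counterfactual = {
    p: value[frozenset(players)] - value[frozenset(players) - {p}]
    for p in players
}

# Shapley value: average marginal contribution over all join orders.
orders = list(permutations(players))
shapley = {p: 0.0 for p in players}
for order in orders:
    coalition = set()
    for p in order:
        before = value[frozenset(coalition)]
        coalition.add(p)
        shapley[p] += (value[frozenset(coalition)] - before) / len(orders)

print(counterfactual)  # {'A': 90, 'B': 90} -- double-counts impact
print(shapley)         # {'A': 50.0, 'B': 50.0} -- sums to the total of 100
```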
Definitely agree we have a problem with deference/not figuring things out. It’s hard, and there’s a lot of imposter syndrome where people think they aren’t good enough to do this or even to try. I think sometimes people get early negative feedback and over-update, dropping projects before they’ve tested things long enough to see results. I would definitely like to see more rigorous impact evaluation in the space; at one point I wanted to start an independent org that did this. It seems surprisingly underprioritized. There’s a meme that EAs like to think and research and need to just do more things, but I think that’s a bit of a false dichotomy: on net, more research + iteration is valuable and amplifies your effectiveness by making sure you’re prioritizing the right things in the right ways.
Another way deference has negative effects is that established orgs, including frontier AI companies, act as whirlpools that suck up all the talent and offer more “legitimacy,” but I think they’re often not the highest-impact thing you could do. Often there is something that would be impactful but won’t happen if you don’t do it, or would happen worse, or much later. People also underestimate how much the org they work at will change how they think, what they think about, and what they want to do or are willing to give up. But finding alternatives can be tough: how many people really want to keep working as independent contractors with no benefits and no coworkers indefinitely? It’s strong adverse selection against impact. Sure, this level of competition might weed out some worse ideas, but it weeds out good ones too.
“Basically no one took this seriously as a possibility, or at least I do not know of anyone.”
I alluded to this over a year ago in this comment, which might count in your book as taking it seriously. But to be honest, where we are at Day 100 of this administration is not territory I expected us to be in until at least the second year.
I think these people do exist (those who appreciated the risks a second term presented), and I’ll count myself as one of them. I think we are just less visible because we push this concern a lot less in EA discourse than other topics, because 1) the people who hold these viewpoints and are willing to be vocal about them are a small minority of EA*, 2) espousing these views is perceived as a way to lose social capital, and 3) EA institutions have made decisions that have somewhat gatekept how much of this discourse can take place in official EA venues.
Note on #1
A lot of potential EAs—people who embrace EA principles and conclude that the way to do the most good is to work on democracy/political systems/the politics of great powers—interact with the community, are put off by the limited engagement and, sometimes, dismissiveness of the community towards this cause area, and then decide that rather than fight the uphill battle of moving the Overton window they will retreat back to other communities more aligned with their conclusions.
This characterized my own relationship with EA. Despite knowing about and resonating with EA since 2013/2014, I did not engage deeply with the community until 2022 because there seemed to be little to no overlap with people who wanted to change the political system and address the politics upstream of the policies EA spends so much time thinking about how to influence. I think this space is still small in EA but is garnering more interest, and I expect it will continue to, because I think we are at the beginning and not the end of this moment in history.
Note on #1 and #2
When I talk with EAs one-on-one, a substantial portion share my view that EA neglects the politics of the world’s superpower and the political system upstream of those politics. However, very few act on these beliefs or take much time to vocalize them. I think people underestimate how many others share this sentiment, which only makes it less likely that people speak out (which, of course, leads back to people underestimating the prevalence of the belief).
Note on #3
CEA once allowed me to speak on the topic of risks to the US system at an EAGxVirtual—kudos where it is due. However, I have inquired multiple times with CEA since 2022 about running such a session at an in-person EAG and have always been declined; I think this is a clear area for improvement. I’d also like to see networking meetups for people interested in this area at the EAGs themselves, instead of people resorting to personally organizing satellite events around them; recently there was some indication that CEA is open to this.
On the Forum, posts in and around this topic can, and sometimes do, get marked as community posts and thus lose visibility. This is not to say it happens all the time. There are posts that make it to the main page that others would want to see moved to community.