And we should also examine neglect not just on the headline number of dollars going into the space but on specific facets, like how much money in that space actually goes to top tier impact opportunities or how much investment is there in innovating the space/interventions.
I think, in comparison to the space, EA has a comparative advantage more in talent than in money. I think the Harris campaign got $2B or so of donations, but I get the impression that it could have used smarter and more empirically-minded people. That said, there is of course the challenge of actually getting those people listened to.
I have not seen a lot of evidence that EA skills are very transferable to the realm of politics. As counterexamples, look at the botched Altman ouster, or the fact that AI safety people ended up helping start an AI arms race: these seem to stem, at least in part, from poor political instincts. EA also draws disproportionately from STEM backgrounds, which are generally considered comparatively poor at people skills (accurately, in my experience).
I think combating authoritarianism is important, but EA would probably be better off identifying other people who are good at politics and sending support their way.
I think you probably need multiple kinds of skill and some level of cognitive-style diversity within a political campaign. You definitely need a lot of people with people skills, and I am sure that the first gut instincts of people with good social skills about what messaging will work are better than those of people with worse social skills. Those socially skilled people should undoubtedly be doing detailed messaging and networking for the campaign. But you also need people who are prepared to tell campaigns things they don’t want to hear, even when there is severe social pressure not to, including things about what data (rather than gut instinct) actually shows about public opinion and messaging. (Yes, it is possible to overrate such data, which will no doubt be misleading in various ways, but it is also possible to underrate it.) My guess is that “prepared to tell people really hard truths” is at least somewhat anticorrelated with people skills and somewhat correlated with STEM background. (There is of course a trade-off where the people most prepared to tell hard truths are probably less good at selling those truths than more socially agreeable people.) For what it’s worth, Matt Yglesias seems pretty similar to the median EA in personality, and I recall reading that Biden advisors did read his blog. Ezra Klein also seems like a genuinely politically influential figure who is fairly EA-ish. There is more than one way to contribute to a political movement.
I personally don’t think EA should be doing much to combat authoritarianism (other than ideally stopping its occasional minor contributions to it via the right wing of rationalism, and being clear-eyed about what the 2nd Trump admin might mean for things like “we want democratic countries to beat China”), because I don’t think it is particularly tractable or neglected. But I don’t think it is a skill issue, unless you’re talking about completely EA-run projects (and even then, you don’t necessarily have to put the median EA in charge; presumably some EAs have above-average social skills).
Actually I think this is the one thing that EAs could realistically do as their comparative advantage, considering who they are socially and ideologically adjacent to, if they are afraid of AGI being reached under an illiberal, anti-secular, and anti-cosmopolitan administration: to be blunt, press Karnofsky and Amodei to shut up about “entente” and “realism” and cut ties with Thiel-aligned national security state companies like Palantir.
I don’t think cutting ties with Palantir would move the date of AGI much, and I doubt it is the key point of leverage for whether the US becomes a soft dictatorship under Trump. As for the other stuff, people could certainly try, but I think it is unlikely to succeed, since it basically requires getting the people who run Anthropic to act against the very clear interests of Anthropic and of the people who run it. (And I doubt that Amodei, in particular, sees himself as accountable to the EA community in any way whatsoever.)
For what it’s worth, I also think this is complicated territory, that there is genuinely a risk of very bad outcomes from China winning an AI race too, and that the US might recover relatively quickly from its current disaster. I expect the US to remain somewhat less dictatorial than China even in the worst outcomes, though it is also true that even the democratic US has generally been a lot more keen to intervene, often but not always to bad effect, in other countries’ business.
Conditional on AGI happening under this administration, how much AGI companies have embedded themselves with the national security state is a crux for the future of the lightcone, and I don’t expect institutional inertia (the reasons why one would expect “the US might recover relatively quickly from its current disaster” and “the US to remain somewhat less dictatorial than China even in the worst outcomes”) to hold if an AGI dictatorship is a possibility for the powers that be to reach for.
“how much AGI companies have embedded themselves with the national security state is a crux for the future of the lightcone”
What’s the line of thought here?
It intensifies the AI arms race, thus shortening AGI timelines, and, after AGI, it increases the chances of the singleton being either unaligned or technically aligned but in the service of an AGI dictatorship or some other dystopian outcome.
I disagree, but this has me curious.
My impression from other writing I’ve seen of yours is that you don’t think that EAs are good at very many things. What do you think EAs are best at, and/or should be doing? Perhaps narrow GiveWell-style research on domains with lots of data?
Thinking about this a bit more—
My knee-jerk reaction is to feel attacked by this comment, on behalf of the EA community.
I assume that one thing that might be going on is a miscommunication. Perhaps you believe that I was assuming that EAs could quickly swoop in, spend a little time on things, and be far more correct than many experienced political experts and analysts.
I’m not sure if this helps, but the above really doesn’t align with what I’m thinking. More something like, “We could provide more sustained help through a variety of methods. People can be useful for many things, like direct volunteering, working in think tanks, being candidates, helping prioritization, etc. I don’t expect miracle results—I instead expect roughly the results of adding some pretty smart and hardworking people.”
The usefulness of smart people is highly dependent on the willingness of the powers-that-be to listen to them. I don’t think lack of raw intelligence had much of anything to do with the recent US electoral results. The initial candidate at the top of the ticket was not fit for a second term, and was forced out too late for a viable replacement to emerge. Instead, we got someone who had never polled well. I also don’t think intelligence was the limiting factor in the Democrats’ refusal to move toward the center on issues that were costing them votes in the swing states. Intellectually understanding that it is necessary to throw some of your most loyal supporters under the bus is one thing; committing to do it is something else; and actually getting it done is harder still. One could think of intelligence as a rate-limiting catalyst up to a certain point, but dumping even more catalyst in after that point doesn’t speed the reaction much.
I think @titotal’s critique largely holds if one models EAs as a group as exceptional in intelligence but roughly at population baseline for more critical and/or rate-limiting elements for political success (e.g., charisma, people savvy). I don’t think that would be an attack—most people are in fact broadly average, and average people would be expected to fail against Altman, etc. And if intelligence were mostly neutralized by the powers-that-be not listening to it, having a few hundred FTEs (i.e., ~10% of all EA FTEs?) with a roughly normal distribution of key attributes is relatively unlikely to be impactful.
Finally, I think this is a place where EA’s tendency toward being a monoculture hurts—for example, I think a movement that is very disproportionately educationally privileged, white, STEM-focused, and socially liberal will have a hard time understanding why (e.g.) so many Latino voters [most of whom share few of those characteristics] were going for Trump this cycle, and how to stop that.
On EAs in policy, I’d flag that:
- There’s a good number of people currently working in AI governance, bio governance, and animal law.
- Very arguably, said people have a decent list of accomplishments and positions of influence, given how recent such work is. See Biden’s executive orders on AI, or the UK AI Security Institute: https://www.aisi.gov.uk/
- People like Dustin Moskovitz and SBF were highly prominent donors to the Democratic Party.
I think the EA policy side might not be hugely popular here, but it seems decently reputable to me. Mistakes have been made, but I think a fair accounting of the wins and losses would include several wins.
I do agree that finding others who are doing good work and supporting them is one important way to help. I’d suspect that the most obvious EA work would look like prioritization for policy efforts. This has been done before, and there’s a great deal more that could be done here.
In fairness, SBF was also secretly a prominent Republican donor, right? Didn’t he basically suggest in the infamous interview with Kelsey Piper that he was essentially cynical about politics and just trying to gain influence with both parties to help advance FTX and Alameda’s interests?
He was a Republican donor, but from what I understand, not really a MAGA donor. My impression was that he was funding people on both sides who were generally favorable to his interests—but those interests did genuinely include issues like bio/AI safety.
I think it’s very reasonable to try to be bipartisan on these issues.
Fair point. I certainly don’t think it is established (or even more than 50% likely) that SBF was purely motivated by narrow personal gain to the exclusion of any real utilitarian convictions at all. But I do think he misrepresented his political convictions.
I think it’s clear he misrepresented his political convictions, especially to the public (as opposed to close friends and some EAs).
But I think there’s separately decent evidence that he was thinking of himself as ultimately advancing utilitarian goals.
Not that that makes it okay—it’s very possible to consider yourself to be advancing noble goals and then use that to justify really bad actions.
That’s my read of the evidence as well, but I haven’t examined it closely.