My impression of Eric Schmidt is that he is not a longtermist, and if anything has done lots to accelerate AI progress.
This seems no-true-Scotsmany. It seems to have become almost commonplace for organisations that started from a longtermist seed to have become competitors in the AI arms race, so if many people who are influenced by longtermist philosophy end up doing stuff that seems harmful, we should update towards ‘longtermism tends to be harmful in practice’ much more than towards ‘those people are not longtermists’.
It seems to have become almost commonplace for organisations that started from a longtermist seed to have become competitors in the AI arms race, so if many people who are influenced by longtermist philosophy end up doing stuff that seems harmful, we should update towards ‘longtermism tends to be harmful in practice’ much more than towards ‘those people are not longtermists’.
I agree with this, but “longtermists may do harmful stuff” doesn’t mean “this person doing harmful stuff is a longtermist”. My understanding is that Schmidt (1) has never espoused views along the lines of “positively influencing the long-term future is a key moral priority of our time”, and (2) seems to see AI/AGI kind of like the nuclear bomb—a strategically important and potentially dangerous technology that the US should develop before its competitors.
I think it’s fair for Davis to characterise Schmidt as a longtermist.
He’s recently been vocal about AI X-Risk (https://www.cnbc.com/amp/2023/05/24/ai-poses-existential-risk-former-google-ceo-eric-schmidt-says.html).
He funded Carrick Flynn’s campaign which was openly longtermist, via the Future Forward PAC alongside Moskovitz & SBF.
His philanthropic organisation Schmidt Futures has a future-focused outlook and funds various EA orgs.
And there are longtermists who are pro AI like Sam Altman, who want to use AI to capture the lightcone of future value.
Yeah, but so have lots of people; it doesn’t mean they’re all longtermists. Same thing with Sam Altman—I haven’t seen any indication that he’s longtermist, but would definitely be interested if you have any sources. This tweet seems to suggest that he does not consider himself a longtermist.
He funded Carrick Flynn’s campaign which was openly longtermist, via the Future Forward PAC alongside Moskovitz & SBF.
Do you have a source on Schmidt funding Carrick Flynn’s campaign? Jacobin links this Vox article which says he contributed to Future Forward, but it seems implied that it was to defeat Donald Trump. Though I actually don’t think this is a strong signal, as Carrick Flynn was mostly campaigning on pandemic prevention and that seems to make sense on neartermist views too.
His philanthropic organisation Schmidt Futures has a future-focused outlook and funds various EA orgs.
I know Schmidt Futures has “future” in its name, but as far as I can tell they’re not especially focused on the long-term future. They seem to just want to boost innovation through scientific research and talent growth, but so does, like, nearly every government. For example, their Our Mission page does not mention the word “future”.
His philanthropic organisation Schmidt Futures...funds various EA orgs
Can you give some examples? My impression was that the funding has been minimal at best; I’d be surprised if EA orgs receive say >10% of their funding, and it’s likely <1%.
Also, I don’t want to overstate this point, but I don’t think I’ve yet met a longtermist researcher who claims to have had an extended (or any) conversation with Schmidt. Given that there aren’t many longtermist researchers to begin with (<500 worldwide, defined rather broadly?), it’d be quite surprising for someone to claim to be a longtermist (or for others to claim that they are) if they’ve never even talked to someone doing research in the space.
To be fair, I think a few of Schmidt Futures people were looking around EA Global for things to fund in 2022. I can imagine why someone would think they’re a longtermist.
I agree there are probably a few longtermist and/or EA-affiliated people at Schmidt Futures, just as there are probably such people at Google, Meta, the World Bank, etc. This is a different claim from whether Schmidt Futures institutionally is longtermist, which is again a different claim from whether Eric Schmidt himself is.
My understanding is that Schmidt (1) has never espoused views along the lines of “positively influencing the long-term future is a key moral priority of our time”
I don’t think that’s so important a distinction. Prominent longtermists have declared the view that longtermism basically boils down to x-risk, which (again in their view) overwhelmingly boils down to AI risk. If, following their messaging, we get highly influential people doing harmful stuff in the name of AI risk, I think we should still update towards ‘longtermism tends to be harmful in practice’.
Not as much as if they were explicitly waving a longtermist banner, but the more we believe the longtermist movement has had any impact on society at all, the stronger this update should be.
The posts linked in support of “prominent longtermists have declared the view that longtermism basically boils down to x-risk” do not actually advocate this view. In fact, they argue that longtermism is unnecessary in order to justify worrying about x-risk, which is evidence for the proposition you’re arguing against, i.e. you cannot conclude someone is a longtermist because they’re worried about x-risk.
Are you claiming that if (they think and we agree that) longtermism is 80+% concerned with AI safety work, and AI safety work turns out to be bad, we shouldn’t update that longtermism is bad? The first claim seems to be exactly what they think (see the arithmetic sketch after the quotes below).
Scott:
Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?
I think yes, but pretty rarely, in ways that rarely affect real practice… Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes
You could argue that he means ‘socially promote good norms on the assumption that the singularity will lock in much of society’s then-standard morality’, but ‘shape them by trying to make AI human-compatible’ seems a much more plausible reading of the last sentence to me, given the context of longtermism.
Neel:
If you believe the key claims of “there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime” this is enough to justify the core action relevant points of EA
He identifies as not a longtermist (mea culpa), but presumably considers longtermism the source of these ‘core action relevant points of EA’, since they certainly didn’t come from the global poverty or animal welfare wings.
Also, at EAG London, Toby Ord estimated there were ‘less than 10’ people in the world working full time on general longtermism (as opposed to AI or biotech), whereas the number of people who’d consider themselves longtermist is surely in the thousands.
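To make that update argument concrete, here’s a minimal arithmetic sketch; every number below is invented purely for illustration and none is a real estimate of longtermism’s composition or value:

```python
# Minimal sketch of the mixture argument: if one cause area dominates a
# movement's activity, the overall verdict is dominated by that area.
# All numbers are invented assumptions, not real estimates.
share_ai_safety = 0.8   # assumed share of longtermist effort going to AI safety
share_other = 0.2       # assumed remainder (biosecurity, general longtermism, ...)

value_ai_safety = -1.0  # hypothetical: AI safety work turns out net-harmful
value_other = 1.0       # hypothetical: the rest turns out net-positive

overall = share_ai_safety * value_ai_safety + share_other * value_other
print(overall)  # -0.6: 'longtermism in practice' inherits the sign of its
                # dominant cause area, which is the point of the update argument
```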
I don’t know how we got to whether we should update about longtermism being “bad.” As far as I’m concerned, this is a conversation about whether Eric Schmidt counts as a longtermist by virtue of being focused on existential risk from AI.
It seems to me like you’re saying: “the vast majority of longtermists are focused on existential risks from AI; therefore, people like Eric Schmidt who are focused on existential risks from AI are accurately described as longtermists.”
When stated that simply, this is an obvious logical error (in the form of “most squares are rectangles, so this rectangle named Eric Schmidt must be a square”). I’m curious if I’m missing something about your argument.
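For what it’s worth, the gap between the two directions of inference can be made precise with Bayes’ rule. A minimal sketch with invented numbers (none of these are real estimates of any base rates):

```python
# Sketch of why 'most longtermists focus on AI x-risk' does not imply 'most
# people focused on AI x-risk are longtermists'. All numbers are invented.
p_lt = 0.001              # assumed base rate of longtermists in the population
p_ai_given_lt = 0.9       # assumed: most longtermists focus on AI x-risk
p_ai_given_not_lt = 0.01  # assumed: a small share of non-longtermists do too

p_ai = p_ai_given_lt * p_lt + p_ai_given_not_lt * (1 - p_lt)
p_lt_given_ai = p_ai_given_lt * p_lt / p_ai  # Bayes' rule
print(round(p_lt_given_ai, 3))  # 0.083: under these assumptions, over 90% of
                                # AI-x-risk-focused people are not longtermists
```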
This is a true claim in general, but seems quite an implausible claim for Schmidt specifically, who has been in tech and at Google for much longer than people in our parts have been around.
Mind if I re-frame this discussion? The relevant question here shouldn’t be a matter of beliefs (“is he a longtermist?”) but a matter of identity and identity strength. This isn’t to say beliefs aren’t important or that knowing his wouldn’t be informative, but identity (at least to some considerable degree) precedes and predicts beliefs and behavior.
But I also don’t want to overemphasize particular labels; there are enough discernible positions out there that labels alone aren’t very helpful. That goes especially for individuals with expertise or in positions of authority, who may be reluctant to carelessly endorse particular groups.
Accepting this, here’s some of what we could look into:
Amount of positive socialization with EAs and affiliates (Jason Matheny’s FLI history is notable; how long and involved was this position?)
Amount of out-group derogation—if he’s positioned against our out-group, this may indicate or induce sympathy. Mentioning X-risk seriously once did this, and may still to a degree.
Effect of role identities (Matheny apparently did malaria work before EA. Not sure what the tech-industry or Google-CEO identity entails: defensiveness, or maybe self-importance(?), “yeah, me quoting the Bhagavad Gita would sound good!”)
Identities are correlated; what are his political, religious and cultural identities?
I agree that identity and identity strength are important variables for collective guilt assignment.
That said, I think the case for Jason Matheny is substantially stronger than the case for Schmidt, which is what we were previously talking about upthread.