My understanding is that Schmidt (1) has never espoused views along the lines of “positively influencing the long-term future is a key moral priority of our time”
I don’t think that’s so important a distinction. Prominent longtermists have declared the view that longtermism basically boils down to x-risk, which (again in their view) overwhelmingly boils down to AI risk. If, following their messaging, we get highly influential people doing harmful stuff in the name of AI risk, I think we should still update towards ‘longtermism tends to be harmful in practice’.
Not as much as if they were explicitly waving a longtermist banner, but the more we believe the longtermist movement has had any impact on society at all, the stronger this update should be.
The posts linked in support of “prominent longtermists have declared the view that longtermism basically boils down to x-risk” do not actually advocate this view. In fact, they argue that longtermism is unnecessary to justify worrying about x-risk, which is evidence for the proposition you’re arguing against, i.e. that you cannot conclude someone is a longtermist because they’re worried about x-risk.
Are you claiming that if longtermism is 80+% concerned with AI safety work (which they think, and we agree), and AI safety work turns out to be bad, we shouldn’t update towards longtermism being bad? The first claim seems to be exactly what they think.
Scott:
Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?
I think yes, but pretty rarely, in ways that rarely affect real practice… Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes
You could argue that he means ‘socially promote good norms on the assumption that the singularity will lock in much of society’s then-standard morality’, but ‘shape them by trying to make AI human-compatible’ seems a much more plausible reading of the last sentence to me, given the broader context of longtermism.
Neel:
If you believe the key claims of “there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime” this is enough to justify the core action relevant points of EA
He identifies as not a longtermist (mea culpa), but presumably regards longtermism as the source of these ‘core action relevant points of EA’, since they certainly didn’t come from the global poverty or animal welfare wings.
Also, at EAG London, Toby Ord estimated there were ‘less than 10’ people in the world working full-time on general longtermism (as opposed to AI or biotech), whereas the number of people who’d consider themselves longtermists is surely in the thousands.
I don’t know how we got to whether we should update about longtermism being “bad.” As far as I’m concerned, this is a conversation about whether Eric Schmidt counts as a longtermist by virtue of being focused on existential risk from AI.
It seems to me like you’re saying: “the vast majority of longtermists are focused on existential risks from AI; therefore, people like Eric Schmidt who are focused on existential risks from AI are accurately described as longtermists.”
When stated that simply, this is an obvious logical error (in the form of “most squares are rectangles, so this rectangle named Eric Schmidt must be a square”). I’m curious if I’m missing something about your argument.