I’m not sure it is a full misreading, sadly. I don’t think it’s a fair characterization of Ord, Greaves and MacAskill (though I am kind of biased because of my pride in having been an Oxford philosophy DPhil). It would be easy to give a radical deliberative democracy spin on Will and Toby’s “long reflection” ideas in particular. But all the “pivotal act” stuff coming out of certain people in the Bay sure sounds like an attempt to temporarily seize control of the future without worrying too much about actual consent. Of course, the idea (or at least Yudkowsky’s original vision for “coherent extrapolated volition”) is that eventually the governing AIs will just implement what we all collectively want. And that could happen! But remember Lenin thought that the state would eventually “wither away” as Marx predicted, once the dictatorship of the proletariat had taken care of building industrial socialism...
Not to mention there are, shall we say, longtermism-adjacent rich people like Musk and Thiel who seem pretty plausibly power-seeking, even if they are not really proper longtermists (or at least, they are not EAs).
(Despite all this, I should say that I think the in-principle philosophical case for longtermism is very strong. Alas, ideas can be both correct and dangerous.)
These people both seem like clear longtermists to me—they have orientated their lives around trying to positively influence the long-term future of humanity. I struggle to see any reasonable criterion by which they do not count as longtermists which doesn’t also exclude almost everyone else we would normally think of as a longtermist. Even under a super parochial definition like ‘people Will supports’, it seems like Elon would still count!
In practice I think people’s exclusionist instincts here are more tribal / political than philosophically grounded.
When has Will supported Elon?
Will attempted to support Elon’s purchase of Twitter.
Meaning he tried to put him in touch with someone else who was interested in buying Twitter in case they wanted to buy it together?
(If that’s what you’re referring to, I think we understand “people Will supports” differently. And I can’t see how it’s relevant to whether or not Elon is a longtermist.)
I agree it’s not relevant—I think the real test is whether someone cares a lot about future people and tries to help them, which Elon satisfies.
Musk basically endorsed longtermism recently, too: https://twitter.com/elonmusk/status/1554335028313718784?lang=en
These seem like reasonable points.
I’m not familiar with this “pivotal act” stuff and I’m unsure how it relates to longtermism as an idea (if at all), but yes, that would certainly be an example of power-seeking behaviour.
Here’s the first hit on Google for ‘Yudkowsky pivotal act’: https://www.lesswrong.com/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious
And Yudkowsky has also tried to work out what looks like a template for how an AI could govern the whole world (though he gave up on the idea later): https://arbital.com/p/cev/
I also have the impression that Bostrom, in particular, is sympathetic to the idea that a single government should one day exist that takes control of all the really important stuff to ensure it is perfectly optimized: https://nickbostrom.com/fut/singleton
I’m not saying this stuff is unambiguously bad by the way: any political theorizing involves an interest in power, and it’s hard to tell whether benevolent AI governance in particular would be more or less dangerous than human governments (which have done lots of bad things! even the liberal democracies!). I’m just saying you can see why it would set off alarm bells. I get the impression Bostrom and Yudkowsky basically think that it’s okay to act in a fairly unilateralist way so long as the system you set up takes everyone’s interests into account, which has obvious dangers as a line of thought.
For what it’s worth, my impression is that Bostrom’s sympathies here are less about perfect optimization (e.g., CEV realization or hedonium tessellation) and more about existential security. (A world government singleton in theory ensures existential security because it is able to suppress bad actors, coordination disasters and collective action failures, i.e., suppress type-1, 2a and 2b threats in Bostrom’s “Vulnerable World Hypothesis”.)
Yeah, that’s probably fair actually. This might make the view more sympathetic but not necessarily less dangerous. Maybe more dangerous, because most people will laugh you out of the room if you say we need extreme measures to make sure we fill the galaxy with hedonium, but they will take ‘extreme measures are needed or we might all die’ rather more seriously.
How on earth are Thiel and Elon (especially Elon!) not longtermists???
Sorry, I downvoted this comment because it was asked with a strong rhetorical spin rather than with genuine curiosity, even though it’s an interesting question whether Musk and Thiel identify as longtermists.
I meant, I guess, that they are not followers of the theories of the academic longtermists and not part of the longtermist wing of organised EA. I agree that Musk at least is “longtermist” in the sense of concerned about extinction risk and the long-term future. Less sure of Thiel’s views on this stuff.
It is, at least, a very interesting use of language where you can build your whole career around existential risk mitigation (SpaceX) and climate change adaptation (Tesla, SolarCity), help to found OpenAI, help to fund the Future of Life Institute, publicly recommend Nick Bostrom’s work—and yet apparently you don’t qualify as a longtermist.
Fair enough, maybe Musk does count.
How about Thiel? He gave the keynote speech at proto-EA-Global back in 2013, and funded MIRI from its early days (not sure when he stopped, but the early support was definitely important). Clearly he’s deeply familiar with the academic AI risk arguments and has backed it up with money. He even had an early affiliation with organized EA! Again, not really sure how he doesn’t qualify as a longtermist.
I think in his case he has since denounced EA and AI safety, no?
Ah, so you only count as a longtermist if you think the principles are important and you agree with the practical approach of a small, narrow clique of people? Seems an overly restrictive definition to me.
I was trying to use a definition that matched “are these people our problem on this forum”, since it seemed the most contextually relevant.
For both of these people, you can be associated with lots of EA/longtermist/x-risk-reduction activity and still not identify as a longtermist (i.e. you don’t entirely buy the argument at the start of this post). Lots of the things you’ve listed here look good from several other perspectives, not just longtermism.
I’m pretty sure both of them would endorse the claim at the start of this post, and I would bet tons of money that Elon in particular would. His career looks really bizarre otherwise.