But all the “pivotal act” stuff coming out of certain people in the Bay sure sounds like an attempt to temporarily seize control of the future without worrying too much about actual consent.
I’m not familiar with this stuff and I’m unsure how it relates to longtermism as an idea (if at all), but yes, that would certainly be an example of power-seeking behaviour.
Here’s the first hit on Google for ‘Yudkowsky pivotal act’: https://www.lesswrong.com/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious

And Yudkowsky has also tried to work out what looks like a template for how an AI could govern the whole world (though he later gave up on the idea): https://arbital.com/p/cev/
I also have the impression that Bostrom in particular is sympathetic to the idea that a single government should one day exist that takes control of all the really important stuff to ensure it is perfectly optimized: https://nickbostrom.com/fut/singleton
I’m not saying this stuff is unambiguously bad by the way: any political theorizing involves an interest in power, and it’s hard to tell whether benevolent AI governance in particular would be more or less dangerous than human governments (which have done lots of bad things! even the liberal democracies!). I’m just saying you can see why it would set off alarm bells. I get the impression Bostrom and Yudkowsky basically think that it’s okay to act in a fairly unilateralist way so long as the system you set up takes everyone’s interests into account, which has obvious dangers as a line of thought.
For what it’s worth, my impression is that Bostrom’s sympathies here are less about perfect optimization (e.g., CEV realization or hedonium tessellation) and more about existential security. (A world government singleton in theory ensures existential security because it is able to suppress bad actors, coordination disasters and collective action failures, i.e., suppress type-1, 2a and 2b threats in Bostrom’s “Vulnerable World Hypothesis”.)
Yeah, that’s probably fair actually. This might make the view more sympathetic but not necessarily less dangerous. Maybe more dangerous, because most people will laugh you out of the room if you say we need extreme measures to make sure we fill the galaxy with hedonium, but they will take ‘extreme measures are needed or we might all die’ rather more seriously.
These seem like reasonable points.