I think there’s a ~20% chance of AI destroying the world.
I’d like to see more fleshed-out reasoning on where this number comes from. Is it based on an aggregate of expert views from people you trust? Or is there an actual gears-level mechanism for why ~80% of future worlds with AGI avoid doom? (Also, 20% is more than enough to be shouting “fucking stop[!]”...)
But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela.
Also would be good to see more justification for this! As per Dr. David Mathers’ comment below. (And also: “Find some other route to the glorious transhuman future[!]”)
That doesn’t mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much.
Good that you don’t support AI accelerationism, but I remain unconvinced by the reasoning for having carefully-tailored pauses. It seems far too risky to me.