I sometimes find the terminology of “no x-risk”, “going well” etc.
Agree on “going well” being under-defined. I was mostly using that for brevity, but it probably caused more confusion than it’s worth. A definition I might use is “preserves the probability of getting to the best possible futures”, or even better, increases that probability. Mainly because, from an EA perspective, if we’ve locked in a substantially suboptimal moral situation (even if people are still around), we’ve effectively lost most possible value, which I’d call x-risk.
The main point was fairly object-level: Will’s beliefs seem to imply either a near-1% likelihood of AGI in 100 years, or a near-99% likelihood of it “not reducing the probability of the best possible futures”, or some combination like a <10% likelihood of AGI in 100 years AND, even if we get it, a >90% likelihood of it not negatively influencing the probability of the best possible futures. Any of these sounds somewhat implausible to me, so I’m curious about the intuition behind whichever one Will believes.
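To make the arithmetic behind that explicit (illustrative numbers only, not figures Will has stated anywhere): if the overall chance of AGI costing us the best futures this century is pinned near 1%, the two factors have to multiply out to roughly that, e.g.

$$
P(\text{value loss from AGI}) = P(\text{AGI within 100 years}) \times P(\text{value loss} \mid \text{AGI}) \approx 0.10 \times 0.10 = 0.01,
$$

so the options are a very low first factor, a very low second factor, or both being modestly low, which is the trichotomy above.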
I think it’s a mistake to approach these questions with a 50-50 prior. Instead, we should consider the base rate for “events that are at least as transformative as the industrial revolution”.
Def agree. Things like this shouldn’t be approached with a 50-50 prior: throw me in another century & I think a <5% likelihood of AGI, the Industrial Revolution, etc. is very reasonable on priors. I just think that probability can shift relatively quickly in response to observations. For the Industrial Revolution, that might be once you’ve already had the agricultural revolution (so a smallish fraction of the population can grow enough food for everyone), engines are working well & relatively affordably, you’ve had large-scale political stability for a while such that you can interact peacefully with millions of other people, you have proto-capitalism where you can produce & sell things & reasonably expect to make money doing so, etc. At that point, from an inside view, “we can use machines & spare labor to produce a lot more stuff per person, and we can make lots of money off producing a lot of stuff, so people will start doing that more” feels like a reasonable position. Those observations would shift me from single digits or less to at least >20% on the Industrial Revolution happening in that century, probably more, but I’m discounting for hindsight bias. (I don’t know if this is a useful comparison; I’m just using it since you mentioned it, & it does seem similar in some ways: the base rate is low, but it did eventually happen.)
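As a toy illustration of how quickly a low prior can move under a few such observations (the likelihood ratios of 2 and the independence assumption are made up for the example, not anything from this thread): starting from 5% and seeing three observations, each twice as likely if a transition is coming than if it isn’t, gives

$$
\frac{P(H \mid E_{1..3})}{P(\neg H \mid E_{1..3})} = \frac{P(H)}{P(\neg H)} \prod_{i=1}^{3} \frac{P(E_i \mid H)}{P(E_i \mid \neg H)} \approx \frac{0.05}{0.95} \times 2^3 = \frac{8}{19},
$$

i.e. a posterior of 8/27 ≈ 30%. A handful of individually unremarkable-looking preconditions is enough to move from single digits to the >20% range.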
For AI, the analogous observations seem to be: having a plausible physical substrate, having better predictive models for what the brain does (connectionism & its refinements seem plausible & have been fairly successful over the last few decades despite being unpopular initially), starting to see how mechanisms that evolved over comparably long timescales work & duplicating some of them, reaching super-human performance on some tasks historically considered hard or as requiring great intelligence, having a physical substrate reaching scales that seem comparable to the brain, etc.
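On the last point, a rough order-of-magnitude sketch of the comparison (these are commonly cited ballpark figures, not numbers from this discussion, and estimates of the brain’s effective compute span several orders of magnitude):

$$
\underbrace{\sim 10^{14}\text{–}10^{15}}_{\text{synapses}} \times \underbrace{\sim 1\text{–}100 \,\text{signals/s each}}_{\text{firing-rate range}} \approx 10^{14}\text{–}10^{17} \,\text{ops/s},
$$

which is within a few orders of magnitude of large present-day compute clusters, hence “scales that seem comparable”.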
In any case, this is getting a bit far from my original thought, which was just which of those positions w.r.t. AGI Will believes, & some intuition for why.
I’d usually want to modify my definition of “well” to “preserves the probability of getting to the best possible futures AND doesn’t increase the probability of the worst possible futures”, but that’s a bit more verbose.