The conjunction
Technology is advancing rapidly and AI is transforming the world sector by sector.
would be quite surprising to me, since I strongly expect superintelligence within a couple of years after AGI, and I strongly expect a technological singularity at that time. So I do not believe that a story consistent with the rules can be plausible. (I also expect more unipolarity by 5 years after AGI, but even multipolar scenarios don’t give us a future as prosaic as the rules require.)
I also feel like this assumption kind of moves this from “oh, interesting exercise” to “hmm, the set of ground rules feels kind of actively inconsistent, and I guess I am not super excited about stories set in this world, since I expect it to actively communicate wrong assumptions”. Though I do generally like using fiction to explore things like this.
Yeah, it seems strange to be forced to adopt a scenario where the development of AGI doesn’t create some kind of surprising upset in terms of power.
I suppose a contest that included a singularity might seem too far out for most people. And maybe this is the best we can do in terms of persuading people to engage with these ideas. (There’s definitely a risk that people over-update on these kinds of scenarios, but it’s not obvious that this will be a huge problem.)
If you’re confident in very fast takeoff, I agree this seems problematic.
But otherwise, given the ambiguity about what “AGI” is, I think you can choose to consider “AGI” to be the AI technology that existed, say, 7 years before the technological singularity (and I personally expect that AI technology to be very powerful), so that, five years after AGI, you are writing about society 2 years before the singularity.
Even without a singularity, having no unexpected power upsets seems a bit implausible.
(Disagree if by implausible you mean < 5%, but I don’t want to get into it here.)
Even a slow takeoff! If there is recursive self-improvement at work at all, on any scale, you wouldn’t see anything like this. You’d see moderate-to-major disruptions in geopolitics, and many or all technology sectors being revolutionized simultaneously.
This scenario is “no takeoff at all”—advancement happening only at the speed of economic growth.
Sorry for the late reply.
You seem to have an unusual definition of slow takeoff. If I adopt the definition in this post (probably the most influential post by a proponent of slow / continuous takeoff), there’s supposed to be an 8-year doubling before a 2-year doubling. An 8-year doubling corresponds to an average of about 9% growth each year (roughly double the current amount). Let’s say that we actually reach 9% growth halfway through that doubling; then there are 4 years before the first 2-year doubling even starts. If you define AGI to be the AI technology that’s around at 9% growth (which, let’s recall, is double the current growth rate, so it’s quite powerful), then there are > 6 years left until the singularity (4 years from the rest of the 8-year doubling, plus 2 years from the first 2-year doubling, which in turn happens before the start of the first 0.5-year doubling, which in turn is before the singularity).
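Here is a minimal back-of-the-envelope sketch of that arithmetic in Python, assuming the doubling times from that post and the (arbitrary) choice above of placing “AGI” halfway through the 8-year doubling:

```python
# Back-of-the-envelope timeline under the post's "slow takeoff" doublings.
# Assumptions (taken from the comment above, not canonical): world output
# doublings of 8, 2, and 0.5 years, with "AGI" defined as the AI technology
# around when growth hits ~9%/year, placed halfway through the 8-year doubling.

growth_during_8yr_doubling = 2 ** (1 / 8) - 1  # ~0.09, i.e. about 9% per year
print(f"Annual growth during an 8-year doubling: {growth_during_8yr_doubling:.1%}")

years_left_in_8yr_doubling = 8 / 2  # "AGI" assumed to arrive at the halfway point
years_of_first_2yr_doubling = 2
years_until_sub_year_doublings = years_left_in_8yr_doubling + years_of_first_2yr_doubling
print(f"Years from this 'AGI' to the start of the 0.5-year doubling: {years_until_sub_year_doublings}")
# The 0.5-year doubling and any faster ones still lie ahead at that point,
# so the gap from this definition of AGI to the singularity exceeds 6 years.
```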
Presumably you just think slow takeoff of this form is completely implausible, but I’d summarize that as either “Czynski is very confident in fast / discontinuous takeoff” or “Czynski uses definitions that are different from the ones other people are using”.
Again, that would produce moderate-to-major disruptions in geopolitics. It is also pretty implausible that the first doubling with any recursive self-improvement at work would take eight years, because RSI implies more discontinuity than that; but that doesn’t matter here, as even that scenario would cause massive disruption.
Speaking as one partly responsible for that conjunction, I’d say the aim here was to target a scenario that is interesting (AGI) but not too interesting. (It’s called a singularity for a reason!) It’s arguably a bit conservative in terms of AGI’s transformative power, but rapid takeoff is not guaranteed (Metaculus currently gives ~20% probability to >60 months), nor is superintelligence axiomatically the same as a singularity. It is also in a conservative spirit of “varying one thing at a time” (rather than a claim of maximal probability) that we kept much of the rest of the world relatively similar to how it is now.
Part of our goal is to use this contest as a springboard for exploring a wider variety of scenarios and “ground assumptions”, and there I think we can try out some that are more radically transformative.
Metaculus currently gives ~20% probability to >60 months
I’d expect the bets there to be basically random. Prediction markets aren’t useful for predictions about far-off events: betting in them requires tying up your credit for that long, which is a big opportunity cost, so you should expect that only fools are betting here. I’d also expect it to be biased towards the fools who don’t expect AGI to be transformative, because the fools who do expect AGI to be transformative have even less incentive to bet: there’s not going to be any use for Metaculus points after a singularity. They become meaningless; past performance stops working as a predictor of future performance; the world will change too much, and so will the predictors.
If a singularity-expecter wants tachyons, they’re really going to want to get them before this closes. If they don’t sincerely want tachyons, if they’re driven by something else, then their answers wouldn’t be improved by the incentives of a prediction market.
I’d note that Metaculus is not a prediction market and there are no assets to “tie up.” Tachyons are not a currency you earn by betting. Nonetheless, as with any prediction system there are a number of incentives skewing one way or another. But for a question like this I’d say it’s a pretty good aggregator of what people who think about such issues (and have an excellent forecasting track record) think — there’s heavy overlap between the Metaculus and EA communities, and most of the top forecasters are pretty aware of the arguments.
I checked again and, yeah, that’s right, sorry about the misunderstanding.
I think the root of my confusion here is that most of my thinking about prediction platform designs is situated in the genre of designs where users can create questions without oversight, and in this genre I’m hoping to find something highly General and Robust. These sorts of designs always seem to collapse into being prediction markets.
So it comes as a surprise to me that just removing user-generated questions seems to prevent that collapse[1], and that the thing it becomes instead turns out to be pretty Robust. I just did not expect that.
[1] (If you took something like Metaculus and added arbitrary user-generated questions (I think that would allow unlimited point farming, but set that aside), it would enable trading points as assets: phony questions with user-controlled resolution criteria could be created just for transferring points between a pair of users, with equal and opposite transfers of currency out of band.)
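As a purely hypothetical illustration of that footnote (the platform, names, and all-or-nothing payout below are invented for the sketch; this is not how Metaculus actually works), the laundering step might look like:

```python
# Toy model of the footnote's point-laundering exploit on a hypothetical
# platform with unmoderated, creator-resolved questions.

points = {"alice": 1000, "bob": 1000}

def transfer_via_phony_question(seller: str, buyer: str, stake: int) -> None:
    """Seller 'sells' points to buyer via a question the seller created and
    will deliberately resolve in the buyer's favor. The matching cash payment
    happens off-platform and isn't modeled here."""
    # Both sides stake on opposite outcomes of the phony question.
    points[seller] -= stake
    points[buyer] -= stake
    # The creator (seller) resolves the question for the buyer's side,
    # so the whole pot goes to the buyer.
    points[buyer] += 2 * stake

transfer_via_phony_question(seller="alice", buyer="bob", stake=500)
print(points)  # {'alice': 500, 'bob': 1500} -- 500 points moved from alice to bob
```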
Correction: Metaculus’s currency is just called “points”; tachyons are something else. Aside from that, I have double-checked, and it definitely is a play-money prediction market (well, is it wrong to call it a prediction market if it’s not structured as an exchange, even if it has the same mechanics?) (Edit: I was missing the fact that, though there are assets, they are not staked when you make a prediction), and you do in fact earn points by winning bets.
and have an excellent forecasting track record
I’m concerned that the bettors here may be the types who have spent most of their points on questions that won’t close for decades. Metaculus has existed for less than one decade, so that demographic, if it’s a thing, actually wouldn’t have any track record.
Isn’t “Technology is advancing rapidly and AI is transforming the world sector by sector” perfectly consistent with a singularity? Perhaps it would be a rather large understatement, but still basically true.
Not really (but the quote is consistent with no singularity; see Rohin’s comment). I expect technological progress will be very slow soon after a singularity, because science is essentially solved and almost all technology is discovered during or immediately after the singularity. Additionally, the suggestions that there’s an ‘international power equilibrium’ and generally that the world is recognizable (e.g., with a prosaic global political power balance, and with AI merely ‘solving problems’ and ‘reshaping the economy’) rather than totally transformed are not what I expect years after a singularity.