1. intelligence peaks more closely to humans, and superintelligence doesn’t yield significant increases to growth.
Even if you only have a human-ish intelligence, most of the advantage of AI comes from its other features:
- You can process any type of data orders of magnitude faster than a human, and once you know how to do a task, you deterministically know how to do it.
- You can just double the number of GPUs and double the number of AIs. If you pair two AIs and make them interact at high speed, that’s much more powerful than anything human-ish.
These are two of the many features that make AI radically different and mean that it will shape the future.
2. superintelligence in one domain doesn’t yield superintelligence in others, leading to some, but limited growth, like most other technologies.
That’s very (very) unlikely given recent observations on transformers: you can take a model trained on text, plug it into images, train it a tiny bit more (compared with the initial training), and it works; add to that the fact that it does maths, and the fact that it’s becoming more and more sample efficient.
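To make that concrete, here is a minimal sketch, in the spirit of the “frozen pretrained transformer” experiments, of plugging a text-pretrained transformer into image inputs. It assumes PyTorch and Hugging Face `transformers` with GPT-2; the class name, patch size, and mean-pooling head are illustrative choices of mine, not something specific claimed above.

```python
# Minimal sketch (assumed setup: PyTorch + Hugging Face `transformers`, GPT-2 as the
# text-pretrained model). Freeze the pretrained blocks, swap the token embedding for a
# linear patch projection, and train only that projection plus a small head.
import torch
import torch.nn as nn
from transformers import GPT2Model


class FrozenTextTransformerForImages(nn.Module):  # hypothetical class for illustration
    def __init__(self, num_classes: int = 10, patch_size: int = 4, in_chans: int = 3):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2")
        for p in self.gpt2.parameters():           # keep the text-pretrained weights fixed
            p.requires_grad = False
        d_model = self.gpt2.config.n_embd          # 768 for "gpt2"
        # New, trainable input layer: flattened image patches -> transformer width.
        self.patch_embed = nn.Linear(patch_size * patch_size * in_chans, d_model)
        self.head = nn.Linear(d_model, num_classes)
        self.patch_size = patch_size

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, channels, H, W) -> sequence of flattened patches
        b, c, h, w = images.shape
        p = self.patch_size
        patches = images.unfold(2, p, p).unfold(3, p, p)        # (b, c, H/p, W/p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
        tokens = self.patch_embed(patches)                      # (b, seq, d_model)
        hidden = self.gpt2(inputs_embeds=tokens).last_hidden_state
        return self.head(hidden.mean(dim=1))                    # mean-pool, then classify


model = FrozenTextTransformerForImages(num_classes=10)
logits = model(torch.randn(2, 3, 32, 32))                       # e.g. CIFAR-10-sized input
print(logits.shape)                                             # torch.Size([2, 10])
```

Only the patch projection and the head are trained here, which is the “train a tiny bit more” part of the claim; the bulk of the parameters stay exactly as they were after text pretraining.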
3. we develop EMs (emulated minds), which radically change the world, including growth trajectories, before we develop superintelligence.
I think that’s the most plausible of the three claims, but I still think it’s only between 0.1% and 1% likely. Whereas we have a pretty clear path in mind for how to reach AIs powerful enough to change the world, we have no idea how to build EMs. Also, this doesn’t directionally change my argument, because no one in the EA community works on EMs. If you think that EMs are likely to change the world and that EAs should work on them, you should probably write about it and make the case for it. But I think it’s unlikely that EMs are a significant thing we should care about right now.
If you have other examples, I’m happy to consider them but I suspect you don’t have better examples than those.
Meta-point: I think you should be more inside-viewy when considering claims.
“Engineers can barely design a bike that will work on the first try, what possibly makes you think you can create an accurate theoretical model on a topic that is so much more complex?”
This class of argument, for instance, is pretty bad IMO. Uncertainty doesn’t prevent you from thinking about expected value (EV), and here I was mainly arguing along the lines that if you care about long-term EV, AI is very likely to be its first-order determinant. Uncertainty should make us willing to do some exploration, and I’m not arguing against that, but in other cause areas we’re doing much more than exploration. 5% of longtermists would be sufficient to do all types of exploration on many topics, even EMs.
I think the meta-point might be the crux of our disagreement.
I mostly agree with your inside view that other catastrophic risks struggle to be existential the way AI would, and I’m often a bit perplexed as to how quick people are to jump from ‘nearly everyone dies’ to ‘literally everyone dies’. Similarly I’m sympathetic to the point that it’s difficult to imagine particularly compelling scenarios where AI doesn’t radically alter the world in some way.
But we should be immensely uncertain about the assumptions we make, and I would argue that by far the most likely first-order determinant of future value is something our inside-view models didn’t predict. My issue is not with your reasoning, but with how much trust to place in our models in general. My critique is absolutely not that you shouldn’t have an inside view, but that a well-developed inside view is one of many tools we use to gather evidence. Over-reliance on a single type of evidence leads to worse decision-making.