AI is directly relevant to both long-term survival and long-term growth. When we create a superintelligence, there are three possibilities. Either:
- The superintelligence is misaligned and it kills us all
- The superintelligence is misaligned with our own objectives but is benign
- The superintelligence is aligned, and therefore can help us increase the growth rate of whatever we care about.
I think there are many more options than this, and every argument that follows banks entirely on your logical models being correct. Engineers can barely design a bike that will work on the first try, so what makes you think you can create an accurate theoretical model of a topic that is so much more complex?
I think you are massively overconfident, considering that your only source of evidence is abstract models with zero feedback loops. There is nothing wrong with creating such models, but be aware of just how difficult it is to get even something much simpler right.
It's great that you spent a year thinking about this, but many have spent decades and feel MUCH less confident about all of this than you.
I think that this comment is way too outside viewy.
Could you concretely mention one of the "many options" that would directionally change the conclusion of the post?
For example:
1. Intelligence peaks closer to human level, and superintelligence doesn't yield significant increases to growth.
2. Superintelligence in one domain doesn't yield superintelligence in others, leading to some, but limited, growth, like most other technologies.
3. We develop EMs (whole-brain emulations), which radically change the world, including growth trajectories, before we develop superintelligence.
1. Intelligence peaks closer to human level, and superintelligence doesn't yield significant increases to growth.
Even if you have a human-ish intelligence, most of the advantage of AI comes from its other features:
- You can process any type of data orders of magnitude faster than a human, and once you know how to do a task, you deterministically know how to do it.
- You can just double the number of GPUs and double the number of AIs. If you pair two AIs and make them interact at high speed, that's much more powerful than anything human-ish.
These are two of the many features that make AI radically different and mean that it will shape the future (the toy sketch below illustrates the scaling point).
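A toy back-of-the-envelope sketch of that scaling point, in Python; the GPU counts and per-copy throughput figures are made-up assumptions purely for illustration:

```python
# Toy illustration: copies of an AI scale roughly linearly with hardware,
# unlike human experts, who cannot be duplicated by buying more GPUs.
# All numbers below are made up for illustration.

GPUS_PER_COPY = 8             # assumed GPUs needed to serve one AI instance
TASKS_PER_COPY_PER_DAY = 500  # assumed throughput of a single instance

def fleet_throughput(total_gpus: int) -> int:
    """Tasks per day for a fleet of identical AI copies."""
    copies = total_gpus // GPUS_PER_COPY
    return copies * TASKS_PER_COPY_PER_DAY

for gpus in (1_000, 2_000, 4_000):
    print(f"{gpus} GPUs -> {gpus // GPUS_PER_COPY} copies, "
          f"{fleet_throughput(gpus)} tasks/day")

# Doubling the GPUs doubles the copies and the aggregate output,
# which is the sense in which AI labour scales unlike human labour.
```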
2. Superintelligence in one domain doesn't yield superintelligence in others, leading to some, but limited, growth, like most other technologies.
That's very (very) unlikely given recent observations on transformers, where you can take a model trained on text, plug it into images, train a tiny bit more (compared with the initial training), and it works; plus the fact that it does maths, and the fact that it's becoming more and more sample efficient.
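As a rough illustration of the kind of transfer being described, here is a minimal sketch in the spirit of "frozen pretrained transformer" experiments: a transformer pretrained on text is frozen, and only a small patch-embedding layer and classification head are trained on image inputs. The choice of `gpt2`, the 16×16 patch size, and the mean-pooling head are illustrative assumptions, not the specific setup referenced above.

```python
# Sketch: reuse a text-pretrained transformer for images by training only a
# thin patch-embedding layer and a classification head (backbone frozen).
# Assumes PyTorch and Hugging Face `transformers`; model and sizes are
# illustrative choices.
import torch
import torch.nn as nn
from transformers import GPT2Model

class FrozenTextTransformerForImages(nn.Module):
    def __init__(self, num_classes: int = 10, patch_dim: int = 16 * 16 * 3):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")
        for p in self.backbone.parameters():      # keep the text-pretrained
            p.requires_grad = False               # weights frozen
        hidden = self.backbone.config.n_embd
        self.patch_embed = nn.Linear(patch_dim, hidden)  # trained from scratch
        self.head = nn.Linear(hidden, num_classes)       # trained from scratch

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, patch_dim) flattened image patches
        x = self.patch_embed(patches)
        out = self.backbone(inputs_embeds=x).last_hidden_state
        return self.head(out.mean(dim=1))  # pool over patches, then classify

# Only patch_embed and head receive gradients, so the "extra training" is
# tiny compared with the original text pretraining.
model = FrozenTextTransformerForImages()
dummy = torch.randn(2, 196, 16 * 16 * 3)  # 2 images, 14x14 grid of patches
print(model(dummy).shape)                 # torch.Size([2, 10])
```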
3. We develop EMs, which radically change the world, including growth trajectories, before we develop superintelligence.
I think that's the most plausible of the three claims, but I still think it's only between 0.1% and 1% likely. Whereas we have a pretty clear path in mind for how to reach AIs that are powerful enough to change the world, we have no idea how to build EMs. Also, this doesn't change my argument directionally, because no one in the EA community works on EMs. If you think that EMs are likely to change the world and that EAs should work on them, you should probably write about it and make the case for it. But I think it's unlikely that EMs are a significant thing we should care about right now.
If you have other examples, I'm happy to consider them, but I suspect you don't have better examples than those.
Meta-point: I think that you should be more inside viewy when considering claims.
"Engineers can barely design a bike that will work on the first try, so what makes you think you can create an accurate theoretical model of a topic that is so much more complex?"
This class of argument is pretty bad, IMO. Uncertainty doesn't prevent you from thinking about the EV, and here I was mainly arguing along the lines that if you care about the long-term EV, AI is very likely to be the first-order determinant of it. Uncertainty should make us willing to do some exploration, and I'm not arguing against that, but in other cause areas we're doing much more than exploration. 5% of longtermists would be sufficient to do all kinds of exploration on many topics, even EMs.
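As a toy illustration of that EV point (all probabilities and impact figures below are made-up assumptions, not estimates from the post): even granting a large chance that the inside-view model is simply wrong, the expected value of AI work can still dominate, so uncertainty alone doesn't settle the prioritisation.

```python
# Toy expected-value comparison under model uncertainty.
# All probabilities and impact numbers are made up purely for illustration.

scenarios = {
    # p: probability the inside-view model is roughly right in this scenario;
    # ai_work / other_work: impact of each intervention, in arbitrary units.
    "model_roughly_right": {"p": 0.3, "ai_work": 100.0, "other_work": 1.0},
    "model_wrong":         {"p": 0.7, "ai_work": 0.0,   "other_work": 1.0},
}

def expected_value(key: str) -> float:
    """Probability-weighted impact of a given intervention across scenarios."""
    return sum(s["p"] * s[key] for s in scenarios.values())

print("EV of AI work:   ", expected_value("ai_work"))     # 30.0
print("EV of other work:", expected_value("other_work"))  # 1.0

# Even with a 70% chance that the inside-view model is simply wrong, AI work
# dominates in this toy setup -- the point being that uncertainty by itself
# doesn't tell you to deprioritise it.
```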
I think the meta-point might be the crux of our disagreement.
I mostly agree with your inside view that other catastrophic risks struggle to be existential the way AI would, and I'm often a bit perplexed as to how quick people are to jump from "nearly everyone dies" to "literally everyone dies". Similarly, I'm sympathetic to the point that it's difficult to imagine particularly compelling scenarios where AI doesn't radically alter the world in some way.
But we should be immensely uncertain about the assumptions we make, and I would argue that by far the most likely first-order determinant of future value is something our inside-view models didn't predict. My issue is not with your reasoning, but with how much trust to place in our models in general. My critique is absolutely not that you shouldn't have an inside view, but that a well-developed inside view is one of many tools we use to gather evidence. Over-reliance on a single type of evidence leads to worse decision making.