My current understanding is that he believes extinction or similar from AI is possible, at 5% probability, but that this is low enough that concerns about stable totalitarianism are slightly more important. Furthermore, he believes that AI alignment is a technical but solvable problem. More here.
I am far more pessimistic than him about extinction from misaligned AI systems, but I think it’s quite sensible to try to make money from AI even in worlds with a high probability of extinction, since the market signal your investment counterfactually provides moves the market far less than the realizable benefit of being richer at such a crucial time.
I am sympathetic to this position when it comes to your own money. Like, if regular AI safety people put a large fraction of their savings into NVIDIA stock, that is understandable to me.
But the situation with Aschenbrenner starting an AGI investment firm is different. He is directing not just his own money but the much larger capital of his investors into AGI companies. So the majority of the wealth gain will not end up in Aschenbrenner’s hands but will belong to the investors. This is different from a small-scale shareholder, who keeps all the gains (minus some tax) from his stock ownership.
But even if Aschenbrenner’s plan is to invest in the world-destroying technology in order to become richer later, when it matters, it would be nice for him to say so and to explain how he intends to use the money later. My guess, however, is that this is not what Aschenbrenner actually believes. He might just be in favour of accelerating these technologies.
If you are concerned about extinction and stable totalitarianism, ‘we should continue to develop AI, but the good guys will have it’ sounds like a very unimaginative and naïve solution.
+1.
(I feel slightly bad for pointing this out) It’s also, perhaps not too coincidentally, the sort of general belief that’s associated with giving Leopold more power, compared to many other possible beliefs one could have in this area.
What would the imaginative solution be?
Agreed. Getting a larger share of the pie (without breaking rules during peacetime) might be ‘unimaginative’ but it’s hardly naïve. It’s straightforward and has a good track record of allowing groups to shape the world disproportionately.
I’m a bit confused. I was just calling Aschenbrenner unimaginative, because I think trying to avoid stable totalitarianism while bringing about the very conditions he identifies as enabling stable totalitarianism lacks imagination. I think the onus is on him to be imaginative if he is taking what he identifies as extremely significant risks in order to reduce those risks. It is intellectually lazy to claim that your very risky project is inevitable (in many cases by literally extrapolating straight lines on charts and saying ‘this will happen’) and then work to bring it about as quickly and as urgently as possible.
Just to try and make this clear, by corollary, I would support an unimaginative solution that doesn’t involve taking these risks, such as by not building AGI. I think the burden for imagination is higher if you are taking more risks, because you could use that imagination to come up with a win-win solution.