As a first approximation, I assume humans will select AIs that benefit them, not AIs that maximally increase economic growth.
The problem here is that AI corporations are increasingly making decisions for us. See this chapter.
Corporations produce and market products to increase profit (including by replacing their fussy, expensive human parts with cheaper, faster machines that do good-enough work).
To do that they have to promise buyers some benefits, but they can also manage to sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.
I agree it makes sense to model corporations as maximising profit, to a first approximation. However, since humans ultimately want to be happy, not to increase gross world product, I assume people will tend to pay more for AIs which optimise for human welfare rather than economic growth. So I expect corporations developing AIs that optimise for something closer to human welfare to be more successful and profitable than ones developing AIs that maximally increase economic growth. That being said, if economic growth refers to the growth of the human economy (rather than also including the growth of the AI economy), I guess optimising for economic growth will lead to better outcomes for humans, because this has historically been the case.
There are a bunch of crucial considerations here. I'm afraid it would take too much time to unpack them.
Happy though to have had this chat!