I agree it makes sense to model corporations as maximising profit, to a first approximation. However, since humans ultimately want to be happy, not to increase gross world product, I assume people will tend to pay more for AIs which optimise for human welfare rather than economic growth. So I expect corporations developing AIs that optimise for something closer to human welfare to be more successful/profitable than ones developing AIs which maximally increase economic growth. That being said, if economic growth refers to the growth of the human economy (rather than the growth of the AI economy too), I guess optimising for economic growth will lead to better outcomes for humans, since this has historically been the case.
There are a bunch of crucial considerations here. I'm afraid it would take too much time to unpack them.
Happy to have had this chat, though!