I love the comparison to corporations! I’ve never heard that before and think it’s terrific.
Overall well-written and clever. Good formatting. Readable and skimmable. (This is one of those posts that genuinely earns its “41 minute read”.) Many reasons to give props to this.
My favorite quote:

“There are large and powerful systems doing things vastly beyond the ability of individual humans, and acting in a definitively goal-directed way. We have a vague understanding of their goals, and do not assume that they are coherent. Their goals are clearly not aligned with human goals, but they have enough overlap that many people are broadly in favor of their existence. They seek power. This all causes some problems, but problems within the power of humans and other organized human groups to keep under control, for some definition of ‘under control’.”
Some people in the EA community who are particularly terrified of AI risks might end up saying in response, “well, this scares me almost equally too!” In which case, maybe we can hope for a branch of EA cause areas to focus on all kinds of risks from “large and powerful systems” including mega corporations.
The book “The Corporation” by Joel Bakan argues that corporations are analogous to psychopaths. The book, along with an accompanying documentary and a set of interviews with economists, activists, CEOs, politicians, and intellectuals, presents many perspectives on corporations as psychopathic or as a danger to humanity, the planet, etc. The book was published in 2003, but the perspective goes back further, of course.
Fascinating! @Noah, have you seen this discussed in the EA community as well?
I am speaking from my own opinion and from face-to-face conversations I have had with EAs in different situations, but this seems broadly true:
Corporations are a necessary evil: much of the harm imposed on animals, for example, comes from monopolies, MNCs, and the like; tax evasion and exploitation of labor are carried out by the largest companies; and biorisk and AI risk come from corporations as well as governments. There is, however, no other way to conduct business, and corporations bring benefits such as technological development.
I wonder if that may be what AI becomes for a while, before escaping control. I personally do see corporate control (corporate governance, as well as government or international control over corporations) as a viable EA cause area, and I am willing to explore it if someone wants to do it with me.
Well, that’s not really true, right? Do you see some reason(s) why the corporate form of business entity is the best?
Apologies for the late reply. To use an analogy: anyone can make a better burger than McDonald’s, but hardly anyone can build an organization as successful. If you could, for example, organize international trade or steel mining while avoiding corporations, I would indeed be impressed. In the reality we live in, this is the only way to conduct business.
Oh, ok. How would you modify their structure or regulation, to protect the value you see in how they conduct business? What ideas seem right to you?
Not really.
Do you know of any such discussion?
Stuart Russell makes the comparison to corporations in his book Human Compatible.
I have a feeling that Yudkowsky made the comparison in a blog post, but I can’t quickly find it.
To be honest, I thought the argument related to corporations was the weakest one, and the post would have been strengthened by focusing on the other arguments.
It feels to me that there are two very strong disanalogies: a) the incentives of different employees are very often heavily at odds with each other, and this becomes more likely the more a corporation scales and the more levels of hierarchy it acquires; b) a corporation consists of humans who can, in most cases, prevent its activities from veering too far into the objectionable.
I’ll add a further point: it’s very difficult for large corporations to engage in massive conspiracies without something eventually leaking. An AI has no such problem coordinating with itself.