First of all, thanks to whoever is posting these transcripts. I almost definitely would never have watched the video!
One is the conceptual argument that states are the only legitimate political authorities that we have in this world, so they’re the only ones who should be doing this governance thing. …
Now all of those things are true.
I think this is considerably more controversial than you assume. While it has been a few years since I studied political philosophy, my understanding is that philosophers have largely given up on the classical problem of political authority—justifying why governments have a unique right to coerce people, and why people have an obligation to obey specifically because a government said so. All the attempted justifications are ultimately rather unsatisfying. It seems much more plausible that governments are justified if/when they pass good laws that protect people’s rights and improve welfare—i.e. the morality of the laws justifies the government, rather than the government justifying the morality of the laws. But this is obviously rather contingent, and doesn’t suggest that states are in any way the only legitimate source of political authority.
For more discussion of this, I recommend Michael Huemer’s excellent The Problem of Political Authority. There’s also a Stanford Encyclopedia of Philosophy article.
The authority of a company, for example, plausibly comes from something like their market power and the influence on public opinion. And you can argue about how legitimate that authority is
Here I think you are understating the potential legitimacy of the influence of a private company. Their justification comes not from market power, but from people freely choosing to buy their products, and the expertise they demonstrate in effectively meeting this demand. To give a mundane example, a major shipping company would be justified in providing major input into international port standardization rules by virtue of their expertise in shipping; expertise which had been implicitly endorsed by everyone who chose to hire them for shipping services.
Hi Jade. I disagree with you. I think you are making a straw man of “regulation” and ignoring what modern best-practice regulation actually looks like, whilst painting a rosy picture of industry-led governance practice.
Regulation doesn’t need to be a whole bunch of strict rules that limit corporate actors. It can (in theory) be a set of high-level ethical principles, set by society and by government, who then defer to experts with industry and policy backgrounds to set more granular rules.
These granular rules can be strict rules that limit certain actions, or can be ‘outcome-focused regulation’ that allows industry to do what it wants as long as it is able to demonstrate that it has taken suitable safety precautions, or can involve assigning legal responsibility to key senior industry actors to help align the incentives of those actors. (Good UK examples include the HFEA and the ONR.)
This is not to say that industry cannot or should not take a lead on governance issues, but governments can play a role of similar importance too.
For a related perspective, I’ve written (here for a general audience, here for an academic one) about using self-regulatory organizations, which I think could be a natural extension of this position depending on implementation.
Do people at GovAI generally agree with the message/messaging of the talk 2–3 years later?
The answer would be a nice data point for the “are we clueless to give advice on AI policy” debate/sentiments. And I am curious about how beneficial corporations/financiers can be for ~selfish reasons (cf. BlackRock on environmental sustainability and the coronavirus cure/vaccine).
Happy to give my view. Could you say something about what particular views or messages you’re curious about? (I don’t have time to reread the script atm)
Thank you for a speedy reply, Markus! Jade makes three major points (see the attached slide). I would appreciate your high-level impressions of these (if you lack time, even one-liners like “mostly agree” or “too much nuance to settle on a one-liner” would still be valuable).
If you’d take time to elaborate on any of these, I would prefer the last one. Specifically on:
What are the reasons why them preemptively engaging is likely to lead to prosocial regulation? [emphasis mine] Two reasons why. One: the rationale for a firm would be something like, “We should be doing the thing that governance will want us to do, so that they don’t then go in and put in regulation that is not good for us.” And if you assume that governance has that incentive structure to deliver on public goods, then firms, at the very least, will converge on the idea that they should be mitigating their externalities and delivering on prosocial outcomes in the same way that the state regulation probably would.
The more salient one in the case of AI is that public opinion actually plays a fairly large role in dictating what firms think are prosocial. [...]
Some brief thoughts (just my quick takes; my guess is that others might disagree, including at GovAI):
Overall, I think the situation is quite different compared to 2018, when I think the talk was recorded. AI governance / policy issues are much more prominent in the media, in politics, etc. The EU Commission has proposed some pretty comprehensive AI legislation. As such, there’s more pressure on companies as well as governments to take action. I think there’s also better understanding of what AI policy is sensible. All these things update me against 1 (insofar as we are still in the formative stages) and 2. They also update me in favour of thinking something like: governments will want to take a bunch of actions related to AI and so we should try to steer those actions in positive directions.
I think the AI policy / governance field is mature enough at this point that it’s not that helpful to think of an AI governance regime as one unitary thing. I much prefer thinking about specific areas of AI governance. Depending on the area, I’d likely have different views on 1–3. For example, it seems likely that companies are best placed to help develop standards that may be used to inform legislation further down the line. I wouldn’t expect companies to be best placed to figure out what the US should do wrt updates to antitrust regulation.
On 3, I think it’s true that companies have incentives in favour of acting prosocially and that we can boost these incentives. I’m not sure those incentives outweigh their other incentives, though. The view is not that e.g. Facebook, Amazon, and Google are all-things-considered going to act in the public interest. I also don’t think Jade-2018 held that view.