Yeah, this sounds right to me. At present I feel like a regulator would end up massively overrepresenting at least one of (a) the EA community and (b) large tech corporations with pretty obviously bad incentives.
Hmm, I don’t see what goes wrong if the regulator overrepresents EA. And overrepresenting the major labs is suboptimal, but I’d guess it’s still better than no regulation: it decreases multipolarity among labs and (insofar as the major labs are relatively safe and want to require others to be safe) improves safety directly.
A regulator overrepresenting EA seems bad to me (I’m not an EA) because:
1. I don’t agree with a lot of the EA community’s beliefs on this subject, so I’d expect an EA-dominated regulator to take actions I don’t approve of.
2. Dominance by a specific group makes the regulator’s legitimacy much harder to establish.
3. The EA community is pretty strongly intertwined with the big labs, so most of the concerns about lab overrepresentation carry over.
I don’t expect (1) to be particularly persuasive to you, but maybe (2) and (3) are. I find some of the points in “Ways I Expect AI Regulation To Increase X-Risk” relevant to the problems with overrepresenting the big labs. For instance, I think overrepresentation of the big labs would lead to open-source work being squashed, and I think open-source is currently beneficial and would remain beneficial on the margin for a while.
More generally, I don’t particularly like flattening specific disagreements on matters of fact (and therefore on the right actions) into “wants people to be safe” versus “doesn’t want people to be safe”. I expect that most people who disagree about the right course of action aren’t doing so out of some weird desire to see people harmed or replaced by AI (I’m certainly not), and it seems a pretty unfair dismissal.
OK.
Re “want to require others to be safe”: that was poorly worded. I meant that they want to require everyone to follow specific safety practices the major labs already follow, possibly to slow competitors as well as for safety reasons.
Cool, and apologies if that came across as a bit snarky (on rereading, it does to me). I think this was instance N+1 of this phrasing, and I’d gotten a bit annoyed by instances 1 through N, which you obviously bear no responsibility for! I’m happy to have pushed back on the phrasing, but I hope I didn’t cause offence.
A more principled version of (1) would be to appeal to moral uncertainty, or to the idea that a regulator should represent all the stakeholders, and I worry that an EA-dominated regulator would fail to do so.