Thanks for this! I appreciate the clear outline of arguments around an important question.
It sounds like this post might be centered on the question, “Should people who want to improve AGI governance work in the EU?”, rather than the question of the title, “Is the EU Relevant for AGI Governance?”. The focus on the former question seems right to me, since that’s the more decision-relevant question. So I wonder whether explicitly reframing (e.g., retitling) the post around the former question would make it clearer.
After all, a bunch of the arguments presented seem to address the value of EU AI governance work, rather than its relevance to AI. I’ll try to show that in a subcomment.
Here’s the in-the-weeds subcomment:

4. The EU market and political environment favor AGI safety
I agree this magnifies the importance of the EU, but only if we assume that the EU is relevant.
5. Direct influence from inside relevant AI labs is limited
This seems like an argument about relative tractability, rather than relevance.
6. Growing the political capital of AGI-concerned people
This point only seems like an argument for the EU’s relevance if we assume (a) that the EU is relevant, or (b) that political capital transfers well across policy communities. (And those seem like nontrivial assumptions, since (a) is the desired conclusion and (b) seems overstated at best.) The point is much more plausible on its own as an argument for EU AI governance work being valuable in the near term.
Personal fit [...] Low-regret career pathway [...] High personal career capital [...] Neglectedness
These all seem like arguments for EU AI governance work being in some way valuable; I don’t see how any of these are arguments for the EU being relevant to AI’s trajectory.
EU governance would slow down US research towards AGI more than it would Chinese research
This seems to me like an argument for the EU’s relevance to AI (and also for the value of working in EU AI governance—if the EU might take some actions that would be harmful for AI’s trajectory, then working in the EU and preventing these actions would be a way to positively contribute to AI governance).
Higher returns on EA investment in the US and China AI governance space than in the EU’s [...] The EA EU AI governance space is not mature enough for personal career progression and impact
These also strike me as arguments about career value rather than institutional relevance.
I think there is a bit of confusion on several fronts here and in your main comment.
On title/framing:
The post aims to be descriptive of factors relevant for decision-making, rather than prescribing a decision (ergo the absence of “should” and of judgment of the arguments). As mentioned in the intro of the post, I am hoping to inform or trigger a conversation/further research, rather than to directly presume that people should make a decision about it. I trust readers to derive the decision-relevant aspects themselves (is that what I am wrong about?).
Relevance vs value & importance:
I agree this magnifies the importance of the EU, but only if we assume that the EU is relevant.
I am not sure I understand. The way I see it, the value of EU governance work and the importance of the EU are a function of the relevance of the EU for AGI governance.
The concept of relevance:
I am confused by “relevance” in your comment:
This point only seems like an argument for the EU’s relevance if we assume (a) that the EU is relevant
[...]
These all seem like arguments for EU AI governance work being in some way valuable; I don’t see how any of these are arguments for the EU being relevant to AI’s trajectory.
Given your comment mentioning AI’s trajectory, is it possible you understood the post as being about AI’s trajectory rather than about the way we govern that trajectory? Also, to be sure we are on the same page: in the post, relevance is not a binary concept, and it is directly related to the actions leading up to AGI governance.
If you have time, please let me know what I misunderstood.
Hm, now I’m also a little confused. I agree with a bunch of the points you clarify (and I think they’re in line with how I had originally interpreted this post). Specifically, I think we’re on the same page about all of these things, among others:
This post aims to outline decision-relevant considerations rather than to make an overall prescription
The value of EU governance work and the importance of the EU are a function of [among other variables] the relevance of the EU for AGI governance
This post is about the way people govern the trajectory of AI
Relevance is not a binary concept
--
I’ll try to clarify my earlier point. I’m trying to draw a distinction between these two things:
(a) the relevance of EU policy to the trajectory of AI, and
(b) the value of pursuing (AI-related) work in EU policymaking.
I agree that arguments about (a) are, by extension, arguments about (b). For example, the Brussels Effect (as the post notes) is an argument for the relevance of EU AI policy to AI’s trajectory, and this makes it an argument for the value of EU AI policy work.
But I don’t think it necessarily goes the other way; an argument can bear on (b) without bearing on (a). For example, this post raises the point that EU policy matters for animal welfare. That option value may be a reason for someone to work in EU policy, but it tells us nothing about whether the EU’s AI policy will influence the global trajectory of AI. So it is an argument about (b), but not about (a).
More broadly, my understanding is that roughly all of this post’s arguments are arguments regarding (b), but not all of them are arguments regarding (a). So it may be clearer to frame the post as a collection of considerations about (b), rather than a collection of considerations about (a), even if the post doesn’t propose an overall judgement.
Or maybe I’ve misunderstood?
Thank you for taking the time to explain this so clearly; I understand now ✓. I will edit the title and link to this comment as a disclaimer on the framing.