I think there is a bit of confusion on several fronts here and in your main comment.
On title/framing:
The post aims to be descriptive of factors relevant to decision-making, rather than prescribing a decision (hence the absence of "should" and the absence of judgment of the arguments). As mentioned in the intro of the post, I am hoping to inform or trigger a conversation and further research, rather than to presume directly that people should make a decision about it. I trust readers to derive the decision-relevant aspects themselves (is that what I am wrong about?)
Relevance vs value & importance:
"I agree this magnifies the importance of the EU, but just if we assume that the EU is relevant."
I am not sure I understand. As I see it, the value of EU governance work and the importance of the EU are a function of the relevance of the EU for AGI governance.
The concept of relevance:
I am confused by "relevance" in your comment:
"This point only seems like an argument for the EU's relevance if we assume (a) that the EU is relevant [...] These all seem like arguments for EU AI governance work being in some way valuable; I don't see how any of these are arguments for the EU being relevant to AI's trajectory."
Given that your comment mentions AI's trajectory, is it possible you understood the post as being about AI's trajectory rather than about the way we govern it? Also, to be sure we are on the same page: in the post, relevance is not a binary concept, and it relates directly to the actions leading up to AGI governance.
If you have time, please let me know what I misunderstood.
Hm, now I’m also a little confused. I agree with a bunch of the points you clarify (and I think they’re in line with how I had originally interpreted this post). Specifically, I think we’re on the same page about all of these things, among others:
This post aims to outline decision-relevant considerations rather than to make an overall prescription
The value of EU governance work and the importance of the EU are a function of [among other variables] the relevance of the EU for AGI governance
This post is about the way people govern the trajectory of AI
Relevance is not a binary concept
--
I’ll try to clarify my earlier point. I’m trying to draw a distinction between these two things:
(a) the relevance of EU policy to the trajectory of AI, and
(b) the value of pursuing (AI-related) work in EU policymaking.
I agree that arguments about (a) are, by extension, arguments about (b). For example, the Brussels Effect (as the post notes) is an argument for the relevance of EU AI policy to AI’s trajectory, and this makes it an argument for the value of EU AI policy work.
But I don’t think it necessarily goes the other way: an argument can be about (b) without being about (a). For example, this post raises the point that EU policy matters for animal welfare. That option value may be a reason for someone to work in EU policy, but it tells us nothing about whether the EU’s AI policy will influence the global trajectory of AI. So it is an argument about (b), but not about (a).
More broadly, my understanding is that roughly all of this post’s arguments are arguments regarding (b), but not all of them are arguments regarding (a). So it may be clearer to frame the post as a collection of considerations about (b), rather than a collection of considerations about (a), even if the post doesn’t propose an overall judgement.
Or maybe I’ve misunderstood?
Thank you for taking the time to explain this so clearly; I understand now. I will edit the title and link to this comment to add a note on the framing.