Hm, now I’m also a little confused. I agree with a bunch of the points you clarify (and I think they’re in line with how I had originally interpreted this post). Specifically, I think we’re on the same page about all of these things, among others:
- This post aims to outline decision-relevant considerations rather than to make an overall prescription
- The value of EU governance work and the importance of the EU is a function of [among other variables] the relevance of the EU for AGI governance
- This post is about the way people govern the trajectory of AI
- Relevance is not a binary concept
--
I’ll try to clarify my earlier point. I’m trying to draw a distinction between these two things:
(a) the relevance of EU policy to the trajectory of AI, and
(b) the value of pursuing (AI-related) work in EU policymaking.
I agree that arguments about (a) are, by extension, arguments about (b). For example, the Brussels Effect (as the post notes) is an argument for the relevance of EU AI policy to AI’s trajectory, and this makes it an argument for the value of EU AI policy work.
But I don’t think it necessarily goes the other way; an argument can bear on (b) without bearing on (a). For example, this post raises the point that EU policy also matters for animal welfare. That option value may be a reason for someone to work in EU policy, but it tells us nothing about whether the EU’s AI policy will influence the global trajectory of AI. So it is an argument about (b), but not about (a).
More broadly, my understanding is that roughly all of this post’s arguments bear on (b), but not all of them bear on (a). So it might be clearer to frame the post as a collection of considerations about (b), rather than a collection of considerations about (a), even if the post doesn’t propose an overall judgement.
Or maybe I’ve misunderstood?
Thank you for taking the time to explain this so clearly; I understand now. I will edit the title and link to this comment to add a disclaimer about the framing.