Thank you for the quick reply! Totally understand the preference to focus on FLI’s work and areas of specialty. I’ve been a bit concerned about too much deference to a perceived consensus of experts on AI timelines among EAs, and have been trying to form my own inside view of these arguments. If anybody has thoughts on the questions above, I’d love to hear them!
Many academics and policymakers in the EU probably still don’t think much about the longer-term implications of AI and don’t think that AI progress can have such significant impact (negative or positive)
Right, this sounds like a very important viewpoint for FLI to bring to the table. Policymaking often seems biased towards short-term goals at the expense of bigger long-run trends.
Have you found enthusiasm for collaboration from people focused on bias, discrimination, fairness, and other alignment problems in currently deployed AI systems? That community seems like a natural ally for the longtermist AI safety community, and I’d be very interested to learn about any work on bridging the gap between the two agendas.
Hi aogara, we coordinate with other tech NGOs in Brussels and have also backed this statement by European Digital Rights (EDRi), which addresses many concerns around bias, discrimination, and fairness: https://edri.org/wp-content/uploads/2021/12/Political-statement-on-AI-Act.pdf
Despite some of the online polarisation, I personally think work on near-term and long-term AI safety concerns can go hand in hand, and I agree with you that we ought to bridge these two communities. Since we started our Brussels work in May last year, we have tried to engage all actors and, although many are not aware of long-term AI safety risks, I have generally found people to be receptive.