Hey, thanks for hosting this. A few questions about your timelines for AI progress:
How long do you expect business-as-usual progress to continue before we see superintelligent behavior from AI systems across a wide range of human domains?
Which technical areas will drive the most growth in AI over the next fifty years: compute, algorithmic improvements within the deep learning paradigm, or new paradigms that replace neural networks?
Which economic industries will see the greatest disruption by artificial intelligence over the next fifty years? Natural language processing, image recognition, and unsupervised learning by RL agents have all seen great progress under the deep learning paradigm of the last 20 years. Would you expect AI progress in these domains to outpace developments in other popular technologies such as virtual reality, efficient energy storage, or blockchain?
Where do your opinions differ most from those of the academics and policymakers around you?
Thank you, these are some really big questions! Most of them are beyond what we work on, so I’m happy to leave them to other people in this community and let their conclusions guide our own work. For example, the Centre for Long-Term Resilience published the Future Proof report, which cites a survey in which the median prediction of scientists is that general human-level intelligence will be reached around 35 years from now.
I’ll try to answer the last question about where our opinions might differ. Many academics and policymakers in the EU probably still don’t think much about the longer-term implications of AI and don’t think that AI progress can have such significant impact (negative or positive) as we do, or that it is reasonable to focus on it right now. That said, I don’t think there is necessarily a very big gap between us in practice. For example, many people who are interested in bias, discrimination, fairness, and other issues that are already prevalent can also be concerned about the more general-purpose AI systems that will become more widely available on the market in the future, as these systems can present even bigger challenges and have more significant consequences in terms of bias, discrimination, fairness, and so on. As the paper On the Opportunities and Risks of Foundation Models puts it: “Properties of the foundation model can lead to harm in downstream systems. As a result, these intrinsic biases can be measured directly within the foundation model, though the harm itself is only realized when the foundation model is adapted, and thereafter applied.”
Thank you for the quick reply! Totally understand the preference to focus on FLI’s work and areas of specialty. I’ve been a bit concerned about too much deference to a perceived consensus of experts on AI timelines among EAs, and have been trying to form my own inside view of these arguments. If anybody has thoughts on the questions above, I’d love to hear them!
“Many academics and policymakers in the EU probably still don’t think much about the longer-term implications of AI and don’t think that AI progress can have such significant impact (negative or positive)”
Right, this sounds like a very important viewpoint for FLI to bring to the table. Policymaking often seems biased towards short-term goals at the expense of bigger long-run trends.
Have you found enthusiasm for collaboration from people focused on bias, discrimination, fairness, and other alignment problems in currently deployed AI systems? That community seems like a natural ally for the longtermist AI safety community, and I’d be very interested to learn about any work on bridging the gap between the two agendas.
Hi aogara, we coordinate with other tech NGOs in Brussels and have also backed this statement by European Digital Rights (EDRi), which addresses many concerns around bias, discrimination, and fairness: https://edri.org/wp-content/uploads/2021/12/Political-statement-on-AI-Act.pdf
Despite some of the online polarisation, I personally think work on near-term and future AI safety concerns can go hand in hand, and I agree with you that we ought to bridge these two communities. Since we started our Brussels work in May last year, we have tried to engage all actors and, although many are not aware of long-term AI safety risks, I have generally found people to be receptive.