It’s probably hard to evaluate the expected value of AI safety because the field has been evolving extremely fast over the last year. A year ago we didn’t have DALL-E 2 or GPT-4, and if you had asked me the same question a year ago I would have told you:
“AI safety will solve itself because of backwards compatibility”
But I was wrong, or at least I see it differently now.
It’s maybe comparable with Covid: before the pandemic, people were advocating for measures to prevent or limit the impact of pandemics, but the expected value was very uncertain. Now that Covid has happened, we have concrete data on how many people died because of it and can say with more certainty what preventing something similar would be worth.
I hope it won’t take an “AI Covid” for people to start taking things seriously, but many very smart people think there are substantial risks from AI, and a lot of money is currently being spent to advance it further. ChatGPT is the fastest-growing product in history!
In comparison, the amount of money being spent on AI safety is, from my understanding, still limited. To extend the pandemic analogy: imagine that before Covid, CRISPR had been open source and the fastest-growing product on the planet, with everyone racing to make it more accessible and more powerful while, at least funding-wise, neglecting safety.
In that timeline, people would have the means to create powerful biological viruses; in our timeline, people might have the means to create powerful computer viruses.
To close, I think it’s hard to evaluate expected value if you haven’t seen the damage yet, but I hope we won’t need to see the damage, and it’s up to each person to make a judgement call on where to spend their time and resources. I wish it were as simple as looking at QALYs, sorting by the highest, and working on that, but especially in the high-risk areas there often seems to be very high uncertainty. Maybe people with a higher tolerance for uncertainty should focus on those areas, because personal fit matters: if you have a low tolerance for uncertainty, you might not stick with the field for long.
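To make that point concrete, here is a minimal sketch (with made-up, purely illustrative numbers) of why a simple “sort by expected QALYs” ranking breaks down when uncertainty is high: two causes can share the same point-estimate expected value while one of them has a far wider range of plausible outcomes.

```python
# Minimal sketch with purely illustrative, made-up numbers: two hypothetical
# causes with the same expected QALYs but very different uncertainty.
causes = {
    # (probability the intervention works, QALYs gained if it works)
    "well-understood cause": (0.50, 1_000),              # narrow, well-measured
    "speculative high-risk cause": (0.0005, 1_000_000),  # highly uncertain tail
}

for name, (p, qalys_if_success) in causes.items():
    expected = p * qalys_if_success
    print(f"{name}: expected QALYs = {expected:,.0f} "
          f"(p={p}, payoff={qalys_if_success:,})")

# Both print an expected value of 500 QALYs, so sorting by expected value
# alone cannot distinguish them; the decision hinges on how much weight you
# give to deep uncertainty and on your own fit for working under it.
```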