@aaron_mai @RachelM I agree that we should come up with a few ways to make the dangers / advantages of AI very clear to people, so we can communicate more effectively. You can make a much stronger point if you have a concrete, relatable scenario to point to as an example.
I’ll list a few I thought of at the end.
But the problem I see is that this space is evolving so quickly that things change all the time. Scenarios that seem plausible to me right now might look unlikely once we learn more about the possibilities and limitations. So even if some of the examples below become unlikely in the coming months, that doesn’t necessarily mean the risks / advantages of AI have also become more limited.
That also makes communication more difficult because if you use an “outdated” example, people might dismiss your point prematurely.
One other aspect is that we’re at human-level intelligence and are limited in our reasoning compared to a smarter-than-human AI; this quote puts it quite nicely:
> “There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards, and all of them will become obvious.”—Yudkowsky, Staring into the Singularity.
Two examples I can see becoming possible within the next few iterations of something like GPT-4:
- malware that causes very bad things to happen (you can read up on Stuxnet to see what humans were already capable of 15 years ago, or if you don’t like reading Wikipedia, there is a great podcast episode about it), for example:
  - detonate nuclear bombs
  - destroy the electrical grid
- get access to genetic engineering like CRISPR and then:
  - engineer a virus far worse than Covid
  - the virus doesn’t even have to be deadly; imagine it causes sterilization of humans
Both of the above seem very scary to me because they require a lot of intelligence up front, but their “deployment” then almost works by itself. Both scenarios also seem within reach: in the case of the computer virus, we humans have already done this ourselves in a more controlled way. And for the biological virus, we still don’t know with certainty that Covid didn’t come from a lab, so given how fast Covid spread, a similar virus with different “properties”, potentially showing no symptoms other than infertility, doesn’t seem too far-fetched, and it would be terrible.
Please delete this comment if you think this is an infohazard. I have seen other people mention that term, but honestly, I didn’t have to spend much time coming up with two scenarios I consider not-unlikely bad outcomes, so people much smarter and more experienced than me will certainly be able to come up with these and much worse. Not to mention an AI that will be much smarter than any human.