It seems like the wrong framing to talk about a “positive vision” for the transition to superintelligence, if that transition involves immense risks and is generally a bad idea. If you think the transition could be “on a par with the evolution of Homo sapiens, or of life itself” but compressed into years, then that surely involves immense risks (of very diverse kinds!).
From what I’ve heard you say elsewhere, I think you basically agree with this. But then, surely you must agree that the priority is to delay this process until we can make sure it’s safe and well-controlled. And if you are going to talk about positive visions, then I would say it’s really important that such visions come with an explicit disclaimer that they are talking about a future we should be actively trying to avoid. I’m afraid that otherwise these articles might give people the wrong idea.
Edit: to make my point clearer, I think a good analogy would be to think of yourself right before the development of nuclear power (including the nuclear bomb). Suppose other people are already talking about the risks, and it seems it’s likely to happen so maybe it’s worth thinking about how we can make a good future with nuclear. Ok. But given the risks (and that many people still aren’t aware of them), talking about a good nuclear future without flagging that the best course of action would be to delay developing this technology until we’re sure we can avoid catastrophe seems like a potential infohazard.
Firstly, that only holds if you think it isn’t inevitable and that it is possible to stop or slow down; if nuclear was going to be developed anyway, that changes the calculus. Even if that is the case, there’s also this weird quirk of human psychology where pointing out a positive vision of something often makes it easier for people to actually get it.
When it comes to convincing people, “don’t do this thing” often works a lot worse than “could you do this specific thing instead?”. The same holds for specific therapeutic techniques like the perfect day exercise, and from a predictive processing perspective this is because you’re anchoring your expectations on something better, which makes it easier to visualise outcomes you can act towards. You have an easier time seeing which actions you actually have to take.
Finally, this is probably not the underlying reasoning for why Will is putting forward something like a positive vision; that is more likely about the estimated value of improving the future versus reducing existential risk (see the following post).