Ok, thanks for clarifying! FWIW, everything I said was meant to be specifically about AGI takeover because of misalignment (i.e. excluding misuse), so it does seem we disagree significantly about the probability of that scenario (and about the effect of using less conjunctive models). But probably doesn’t make sense to get into that discussion too much since my actual cruxes are mostly on the object level (i.e. to convince me of low AI x-risk, I’d find specific arguments about what’s going to happen and why much more persuasive than survey-based models).
The above comment is irrational and poorly formed. It reflects a weak grasp of basic probability theory, and it conflates two distinct categories of risk: the risk of artificial general intelligence (AGI) takeover and the risk of AGI misuse.

Takeover risk arises from misalignment: an AGI might be built so that it does not act in ways beneficial to humanity, and a sufficiently capable system that is not properly controlled could cause great harm. Misuse risk, by contrast, arises from how people use AGI: for example, to build powerful weapons or to manipulate others at scale. These are different failure modes and should not be run together.

Finally, the comment leans on survey-based models when discussing the probability of AGI takeover. Surveys of opinion are not a valid way to calculate probabilities; probabilities should be based on evidence.
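As a side note on the "conjunctive models" point raised above: here is a minimal sketch, in Python, of why model structure matters so much to the resulting estimate. A conjunctive model multiplies in a probability for every stage it posits, so adding stages mechanically drives the final number down, while a less conjunctive model telling the same story in fewer stages yields a higher one. All probabilities below are hypothetical placeholders, not estimates from either commenter.

```python
import math

def conjunctive_estimate(stage_probs):
    """P(outcome) under a model where every posited stage must occur."""
    return math.prod(stage_probs)

# A six-stage story: even fairly high per-stage probabilities compound away.
# (All numbers are illustrative placeholders.)
six_stage = conjunctive_estimate([0.7] * 6)   # 0.7**6 ≈ 0.118

# A less conjunctive model compresses the same story into two stages.
two_stage = conjunctive_estimate([0.7] * 2)   # 0.7**2 = 0.49

print(f"six-stage conjunctive model: {six_stage:.3f}")
print(f"two-stage model:             {two_stage:.3f}")
```

The point is purely structural: with identical per-stage numbers, the final estimate depends on how many conjuncts the model demands, which is presumably why the choice between more and less conjunctive models is itself a locus of disagreement in the exchange above.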