I’m not sure we actually disagree about the facts on the ground, but I don’t fully agree with the specifics of what you’re saying (if that makes sense). In a general sense I agree the risk of ‘AI is invented and then something bad happens because of that’ is substantially higher than 1.6%. For the specific scenario the Future Fund are interested in for the contest, however, I think the scenario is too narrow to say with confidence what an examination of the structural uncertainty would show. I could think of ways in which a more disjunctive structural model could even plausibly diminish the risk of the specific Future Fund catastrophe scenario; for example, in models where some of the microdynamics make it easier to misuse AI deliberately. That wouldn’t necessarily change the overall risk of some AI Catastrophe befalling us, but it is a relevant distinction with respect to the Future Fund question, which asks about a specific kind of Catastrophe.
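To make the conjunctive/disjunctive point concrete, here is a minimal toy sketch. The stage and route probabilities are hypothetical placeholders I’ve chosen for illustration, not numbers from the essay or the Future Fund prompt; the only point is how the structural choice changes what the headline figure refers to.

```python
import math

# Toy illustration of conjunctive vs. disjunctive structural models.
# All probabilities below are hypothetical placeholders, not figures
# taken from the essay or the Future Fund question.

# Conjunctive model: the specific catastrophe scenario requires every
# stage to occur, so the headline risk is the product of the stage
# probabilities.
stage_probabilities = [0.8, 0.5, 0.4, 0.4, 0.25]
p_specific_scenario = math.prod(stage_probabilities)  # 0.016, i.e. 1.6%

# Disjunctive model: some broad 'AI Catastrophe' occurs if any one of
# several routes does, so the risk is 1 minus the chance that every
# route fails. Note that one route here (deliberate misuse) falls
# outside the specific scenario above.
route_probabilities = {
    "misaligned takeover": 0.016,
    "deliberate misuse": 0.05,
    "other structural failure": 0.03,
}
p_any_catastrophe = 1 - math.prod(1 - p for p in route_probabilities.values())

print(f"Specific scenario (conjunctive):  {p_specific_scenario:.3f}")
print(f"Any AI Catastrophe (disjunctive): {p_any_catastrophe:.3f}")
```

The broad figure in the second model is higher, but if some of the added routes (such as deliberate misuse) absorb probability mass that would otherwise flow through the takeover path, the figure for the specific scenario need not rise and could even fall, which is the distinction I’m drawing above.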
Also, you’re right that the second and third quotes you give are too strong; they should read something like ‘...the actual risk of AI Catastrophe of this particular kind...’. This essay says nothing about AI Catastrophe broadly defined, just the specific kind of catastrophe the Future Fund are interested in. I’ll change that, as it is undesirable imprecision.
Ok, thanks for clarifying! FWIW, everything I said was meant to be specifically about AGI takeover because of misalignment (i.e. excluding misuse), so it does seem we disagree significantly about the probability of that scenario (and about the effect of using less conjunctive models). But it probably doesn’t make sense to get into that discussion too much, since my actual cruxes are mostly on the object level (i.e. to convince me of low AI x-risk, I’d find specific arguments about what’s going to happen and why much more persuasive than survey-based models).
The above comment is irrational and poorly formed. It shows a lack of understanding of basic probability theory, and it conflates different types of risk: specifically, the risks of artificial general intelligence (AGI) takeover and the risks of misuse of AGI. These are very different risks and should not be conflated. AGI takeover risks arise from the possibility that AGI may be misaligned with human values; that is, AGI may be designed in such a way that it does not act in ways that are beneficial to humanity, and could cause great harm if it is not properly controlled. Misuse risks, on the other hand, arise from the possibility that AGI may be used in ways that are harmful to humanity, for example to create powerful weapons or to manipulate people. The comment also suggests that the probability of AGI takeover is low because it is based on survey-based models; however, this is not a valid way to calculate probabilities. Probabilities should be based on evidence.