Thanks, this is really interesting; in hindsight I should have included something like this when describing the SDO mechanism, because it illustrates it really nicely. Just to follow up on a comment I made somewhere else: the concept of a ‘conjunctive model’ is something I’ve not seen before, and it implies a sort of ontology of models that I haven’t encountered in the literature. A reasonable definition of a model is that it is supposed to reflect an underlying reality, and doing so will sometimes involve multiplying probabilities (where every step is required) and sometimes involve adding, or otherwise disjunctively combining, probabilities from different sources (where any one pathway suffices).
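To make the multiply-versus-combine distinction concrete, here is a minimal sketch in Python. The step and pathway probabilities are invented purely for illustration and aren't taken from any published model:

```python
# Conjunctive structure: every step must occur, so probabilities multiply.
# All numbers below are made up for illustration.
conjunctive_steps = [0.8, 0.7, 0.6, 0.5]  # P(step i occurs)
p_conjunctive = 1.0
for p in conjunctive_steps:
    p_conjunctive *= p
print(f"Conjunctive (all steps required): {p_conjunctive:.3f}")  # 0.168

# Disjunctive structure: any one of several independent pathways suffices.
# Naive addition over-counts the overlap, so combine via the complement:
# P(at least one) = 1 - prod(1 - p_i).
disjunctive_paths = [0.10, 0.15, 0.05]  # P(pathway i alone is sufficient)
p_none = 1.0
for p in disjunctive_paths:
    p_none *= (1.0 - p)
p_disjunctive = 1.0 - p_none
print(f"Disjunctive (any pathway suffices): {p_disjunctive:.3f}")  # ~0.273
```

The arithmetic itself is neutral; which combination rule applies is a claim about the structure of the underlying reality, not a stylistic choice of the modeller.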
I’m not an expert in AI Risk so I don’t have much of a horse in this race, but I do note that if the one published model of AI Risk is highly ‘conjunctive’ (i.e. it describes a reality where many things need to occur in order for AI Catastrophe to occur), then the correct response from the ‘disjunctive’ side is to publish their own model, not to argue that conjunctive models are inherently biased. In a sense ‘bias’ is the wrong term to use here, because the disjunctive side’s case is that the conjunctive model accurately describes a reality which is not our own.
(I’m not suggesting you don’t know this, just that your comment assumes a bit of background knowledge from the reader which I thought could potentially be misinterpreted!)