This is more of a meta-consideration around shared cultural background and norms. Could it just be a case of allowing yourselves to update toward more scary-sounding probabilities? You have all the information already. This video from Rob Miles (“There’s No Rule That Says We’ll Make It”) [transcript copied from YouTube] made me think along these lines. Aside from background cultural considerations around human exceptionalism (inspired by religion) and optimism favouring good endings (Hollywood; perhaps also history to date?), I think there is also an inherent conservatism born of prestigious mega-philanthropy, whereby a doom-laden outlook just doesn’t fit in.
Optimism seems to tilt one in favour of conjunctive reasoning, and pessimism favours disjunctive reasoning. Are you factoring both in?
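For concreteness, here is a toy illustration of the gap between the two framings (all numbers purely hypothetical, not anyone’s actual estimates). Suppose catastrophe is modelled conjunctively, as requiring five independent things to each go wrong with probability 0.3, versus disjunctively, as following from any one of five independent failure modes each with probability 0.3:

\[
P_{\text{conj}}(\text{doom}) = 0.3^{5} \approx 0.002,
\qquad
P_{\text{disj}}(\text{doom}) = 1 - (1 - 0.3)^{5} \approx 0.83 .
\]

The same per-step numbers yield answers two orders of magnitude apart, so which framing one reaches for can matter more than the individual probabilities themselves.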
This is a pretty deep and important point. There may be psychological and cultural biases that make it hard to shift the expected likelihoods of worst-case AI scenarios much higher than they already are, which might bias the essay contest against such arguments winning, even when they make a logically compelling case that catastrophe is more likely.
Maybe one way to reframe this is to treat the prediction “P(misalignment x-risk | AGI)” as also being conditional on us muddling along at the current level of AI alignment effort, without significant increases in funding, talent, insights, or breakthroughs. In other words, the probability of very bad things happening, given that AGI happens, but also given the status-quo level of effort on AI safety.
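One way to make that explicit (my notation, not the contest’s): let \(E\) stand for the level of alignment effort, and read the headline quantity as conditioning on the status quo rather than averaging over possible future levels of effort:

\[
P(\text{x-risk} \mid \text{AGI},\, E = \text{status quo})
\quad\text{rather than}\quad
P(\text{x-risk} \mid \text{AGI}) = \sum_{e} P(\text{x-risk} \mid \text{AGI},\, E = e)\, P(E = e \mid \text{AGI}).
\]

Conditioning on the status quo removes any implicit optimism about future increases in safety effort, which may make a higher number easier to state and defend.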