Perhaps the more substantive disagreement is what fraction of the work is in which category. I see most but not all ongoing technical work as being in the first category, and I think you see almost all ongoing technical work as being in the second category. (I think you agreed that "publishing an analysis about what happens if a cosmic ray flips a bit" goes in the first category.)
Ya, I think this is the crux. Also, considerations like the cosmic-ray bit flip tend to force a lot of things into the second category when they otherwise wouldn't have been, although I'm not specifically worried about cosmic-ray bit flips, since they seem sufficiently unlikely and easy to avoid.
(Luke says "AI-related", but my impression is that he mostly works on AGI governance, not technical safety, and the link is definitely about governance, not technical work. I would not be at all surprised if proposed governance-related projects were much more heavily weighted towards the second category, and am only saying that technical safety research is mostly first-category.)
(Fair.)
The "cluelessness" intuition gets its force from having a strong and compelling upside story weighed against a strong and compelling downside story, I think.
This is actually what I'm thinking is happening, though (not like the firefighter example), but we aren't really talking much about the specifics. There might indeed be specific cases where I'd agree we shouldn't be clueless if we worked through them, but I think there are important potential tradeoffs between incidental and agential s-risks, between s-risks and other existential risks, and even between the same kinds of s-risks, etc. There is also a ton of uncertainty in the expected harm from these risks, so much that it's inappropriate to use a single distribution (without sensitivity analysis over "reasonable" distributions, and with that sensitivity analysis, things look ambiguous), similar to this example. We're talking about "sweetening" one side or the other, but that's totally swamped by our uncertainty.
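To make the sensitivity-analysis point concrete, here is a toy sketch (the two-risk setup, probabilities, and harm magnitudes are all hypothetical, not drawn from this discussion): an intervention mitigates one risk but may aggravate another, and the sign of the net expected value flips across a small set of priors a reasonable person might hold.

```python
# Toy sensitivity analysis: net expected value of an intervention that
# reduces one risk but may increase another, under several "reasonable"
# priors. All numbers are made up for illustration.

def net_expected_value(p_reduce, harm_reduced, p_increase, harm_increased):
    """Expected benefit from the mitigated risk minus expected harm
    from the aggravated one."""
    return p_reduce * harm_reduced - p_increase * harm_increased

# A small grid of priors a reasonable person might hold (hypothetical).
reasonable_priors = [
    dict(p_reduce=0.10, harm_reduced=100, p_increase=0.01, harm_increased=500),
    dict(p_reduce=0.05, harm_reduced=100, p_increase=0.02, harm_increased=800),
    dict(p_reduce=0.20, harm_reduced=50,  p_increase=0.01, harm_increased=300),
]

evs = [net_expected_value(**p) for p in reasonable_priors]
# The sign flips across priors (positive, negative, positive), so the
# intervention's value is ambiguous: a small "sweetening" of either side
# is swamped by the spread across reasonable distributions.
print(evs)
```

The point is not the particular numbers but that no single distribution over outcomes is privileged, and the conclusion is not robust to the choice among reasonable ones.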
If the first-order effect of a project is "directly mitigating an important known s-risk", and the second-order effects of the same project are "I dunno, it's a complicated world, anything could happen", then I say we should absolutely do that project.
What I have in mind is more symmetric in upsides and downsides (or at least, I'm interested in hearing why people think it isn't in practice), and I don't really distinguish between effects by order*. My post points out potential reasons that I actually think could dominate. The standard I'm aiming for is "Could a reasonable person disagree?", and I default to believing a reasonable person could disagree when I point out such tradeoffs, until we actually carefully work through them in detail and it turns out it's pretty unreasonable to disagree.
*Although, thinking more about it now, I suppose longer chains are more fragile and more likely to have unaccounted-for effects going in the opposite direction, so maybe we ought to give them less weight, and maybe this solves the issue if we did it formally? I think ignoring higher-order effects is formally irrational under vNM rationality or stochastic dominance, although it's maybe fine in practice, if what we're actually doing is an approximation of giving them far less weight via a skeptical prior, and they then just get dominated completely by more direct effects.
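One crude way to sketch the "skeptical prior" idea in the footnote (the discount scheme and all magnitudes are my own illustrative assumptions, not a formal decision theory): discount an order-n effect geometrically in n, so that even nominally large higher-order effects get dominated by the direct one.

```python
# Sketch: down-weighting effects by causal "order" with a skeptical
# discount factor. The geometric discount is a hypothetical stand-in
# for a proper skeptical prior over long causal chains.

def weighted_effect(effects_by_order, skepticism=0.1):
    """Sum effects, discounting the order-n effect by skepticism**(n-1),
    so longer chains count for geometrically less."""
    return sum(e * skepticism ** (n - 1)
               for n, e in enumerate(effects_by_order, start=1))

# Direct effect +10; a large but speculative second-order effect -50;
# an even more speculative third-order effect +200 (all made up).
total = weighted_effect([10, -50, 200])
# With skepticism=0.1 this is roughly 10 - 5 + 2: the direct effect
# dominates, rather than being swamped by the higher-order terms.
print(total)
```

Under this scheme, higher-order effects are never ignored outright (which would be the formally irrational move), but in practice they are dominated by the direct effect unless they are implausibly large.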
I don't really distinguish between effects by order*
I agree that direct and indirect effects of an action are fundamentally equally important (in this kind of outcome-focused context) and I hadn't intended to imply otherwise.