I agree with many particular points in this post and the apparent thesis[1], but I also think most people[2] should focus on short timelines (contrary to the apparent implication of the post). The reasons why are:
Short timelines have more leverage. This isn’t just because of more neglectedness now, but also because: (1) it’s easier to target approaches towards shorter timelines where less has changed, (2) short timelines are riskier (and I think riskier worlds are more leveraged for most interventions, though this is sensitive to my views on risk and on which interventions are most leveraged), and (3) it’s easier to operate in near mode when targeting short timelines, and I expect this has a bunch of benefits (mostly from a psychological / cognitive-bias perspective).
I put sufficiently high probability on short timelines: maybe 25% in <2.5 years to full AI R&D automation and 50% in <5. I don’t think deference to other experts shifts me towards longer timelines by much.[3] I think there are good arguments for this view, though I certainly agree there isn’t consensus and the arguments aren’t that clear-cut or legible.
I expect work explicitly focused on short timelines (across most areas) to transfer pretty well and generally not cause that much downside in longer timelines. I think the transfer in the other direction tends to look less good in practice. (To be clear, I think work focused on short timelines shouldn’t neglect thinking about downsides in longer timelines, I just think this is usually not that big of a deal.)
The counterargument I’m most sympathetic to is that (1) a high fraction of the work should be focused on “better futures” and (2) for better futures work, the leverage is higher in longer timelines. (I don’t currently agree with either of (1) or (2), but I’m very uncertain.)
Assuming the thesis is “our probability distribution should span a wide range (including Daniel’s distribution as an example of a wide range) and we should take this into account in our decision making.” ↩︎
Or at least most of the quality-weighted labor supply. ↩︎
I might have a small difference between these stated probabilities and my full all-things-considered view, including deferring to others. To avoid deference cascades, I usually state probabilities somewhat closer to my non-deference view. (It’s hard to fully disentangle deference because my views are based on talking to a wide range of different people.) Post-deference, my distribution is a bit wider with a correspondingly longer median. But I don’t think this makes much difference either way, and deference also pulls up my probability on very short timelines. ↩︎