I feel confused about whether there’s actually a disagreement here. Seems possible that we’re just talking past each other.
I agree that Bletchley Park wasn’t mostly focused on cracking Enigma.
I don’t know enough about Bletchley’s history to have an independent view about whether it was underfunded or not. I’ll follow your view that it was well supported.
It does seem like Turing’s work on Enigma wasn’t highly prioritized when he started working on it (“...because no one else was doing anything about it and I could have it to myself”), and this work turned out to be very impactful. I feel confident claiming that Bletchley wasn’t prioritizing Enigma highly enough before Turing decided to work on it. (Curious whether you disagree about this.)
On the present-day stuff:
My claim is that circa 2010 AI alignment work was being (dramatically) underfunded by institutions, not that it wasn’t being funded at all.
It wouldn’t surprise me if 20 years from now the consensus view was “Oh man, we totally should have been putting more effort towards figuring out what safe geoengineering looks like back in 2019.”
I believe Drexler had a hard time getting support to work on nanotech (I believe he’s currently working mostly on AI alignment), but I don’t know the full story there. (I’m holding up Drexler as someone who is qualified and aligned with EA goals.)
I thought this was a really good comment – well written and well structured.
I feel confident claiming that Bletchley wasn’t prioritizing Enigma highly enough before Turing decided to work on it.
This is obvious in hindsight, but to make that claim you need to show that they could have predicted, in advance, that the expected value was high, which does seem to be the whole game.
I think we can drop the Bletchley Park discussion. On the present-day stuff, I think the key point is that future-focused interventions raise a very different set of questions than present-day non-quantifiable interventions, and you’re plausibly correct that they are underfunded, but I was trying to focus on the present-day non-quantifiable interventions.
I think we can drop the Bletchley Park discussion.
Okay, I take it that you agree with my view.
… future-focused interventions have a very different set of questions than present-day non-quantifiable interventions
How are you separating out “future-focused interventions” from “present-day non-quantifiable interventions”?
Plausibly geoengineering safety will be very relevant in 15-30 years. Assuming that’s true, would you categorize geoengineering safety research as future-focused or present-day non-quantifiable?
I think my example of corruption reduction captures most of the types of interventions that people have suggested are useful but hard to quantify; other examples would be happiness-focused work, or pushing for systemic change of various sorts.
Tech risks involving GCRs that are a decade or more away are much more future-focused in the sense that different arguments apply, as I said in the original post.