I’m probably a bit less “aligned ASI is literally all that matters for making the future go well” pilled than you, but it’s definitely a big part of it.
Sure, but the vibe I get from this post is that Will believes in that a lot less than me, and the reasons he cares about those things don’t primarily route through the totalizing view of ASI’s future impact. Again, I could be wrong or confused about Will’s beliefs here, but I have a hard time squaring the way this post is written with the idea that he intended to communicate that people should work on those things because they’re the best ways to marginally improve our odds of getting an aligned ASI. Part of this is the list of things he chose, part of it is the framing of them as being distinct cause areas from “AI safety”—from my perspective, many of those areas already have at least a few people working on them under the label of “AI safety”/”AI x-risk reduction”.
Like, Lightcone has previously worked, and continues to work, on “AI for better reasoning, decision-making and coordination”. I can’t claim to speak for the entire org, but when I’m doing that kind of work, I’m not trying to move the needle on how good the world ends up being conditional on us making it through, but on how likely we are to make it through at all. I don’t have that much probability mass on “we lose >10% but less than 99.99% of value in the lightcone”[1].
Edit: a brief discussion with Drake Thomas convinced me that 99.99% is probably a pretty crazy bound to have; let’s say 90%. Squeezing out that extra 10% involves work that you’d probably describe as “macrostrategy”, but that’s a pretty broad label.
I haven’t considered the numbers here very carefully.