Interesting post.

I think it did a good job of explaining why the metacrisis might be relevant from an EA standpoint (the dialogue format was a great choice!). I made a similar (but different) argument—that Less Wrong should be paying attention to the sensemaking space—back in 2021[1] and it may still be helpful for readers who want to get a better sense of the scene[2].
Unfortunately, I’m with Amina. Short AI timelines are looking increasingly likely and culture change tends to take a long time, so the argument for prioritising this isn’t looking as promising as it previously did[3]. It’s a shame that some of these conversations didn’t start happening a decade or two earlier: they could have helped prepare the (intellectual) soil and provided motivation for working on generally useful infrastructure back when it made sense to be doing that.
Another worry I have about the metacrisis framing is that, by default, it seems to imply that we should think of all these threats as being on par, when that increasingly seems not to be the case.
> Amina: This all feels impenetrable to me. They use a bunch of jargon and a lot of it sounds floaty, vague or mental.
>
> Diego: Yea I get that too. It reminds me of how I felt the first time I encountered EA.
I felt this response leaned a bit populist. I think it’s pretty clear that conversation in the sensemaking space is much less precise than in EA/rationality on average. The flip side of the coin is that the sensemaking space is open to ideas that would be less likely to resonate in EA/rationality. Whether this trade-off is worth it comes down to factors like how valuable these ideas tend to be, how much it matters to avoid incorrectly adopting confused beliefs versus incorrectly rejecting fruitful ideas, and the purpose of the movement.
FWIW, I was a lot more positive on the sensemaking space back in the day; now I’m a lot more uncertain. I think there are a lot of fruitful ideas there, but I’m not convinced that the scene has the tools it needs to identify which ideas are or aren’t fruitful.
[1] Though certainly not as well as you are doing here!

[2] Or at least how it was back in 2021; I haven’t really followed it in a while.

[3] Your counter-arguments make reasonable points, but they aren’t strong enough (in my opinion) to outweigh the arguments you’ve put them up against.