I’m not clear on what relevance this holds for EA or any of its cause areas, which is why I’ve tagged this as “Personal Blog”.
The material seems like it might be a better fit for LessWrong, unless you plan to add more detail on ways that this “strategy” might be applied to open questions or difficult trade-offs within a cause area (or something community-related, of course).
I would have been much more interested in this post if it had included explicit links to EA. That could mean including EA-relevant examples, explicitly referencing existing EA ‘literature’, or positioning this strategy as a solution to a known problem in the EA community.
I don’t think the level of abstraction was necessarily the problem; the problem was that it didn’t seem especially relevant to EA.
It’s true that this is pretty abstract (as abstract as fundamental epistemology posts), but because of that I’d expect it to be a relevant perspective for most strategies one might build, whether for AI safety, global governance, poverty reduction, or climate change. It’s lacking the examples and explicit connections, though, that would make this salient. In a future post I’ve got queued on AI safety strategy, I already link to this one, and in general abstract articles like this provide a nice base to build from toward specifics. I’ll definitely think about, and possibly experiment with, putting the more abstract and conceptual posts on LessWrong.
If you plan on future posts which will apply elements of this writing, that’s a handy thing to note in the initial post!
You could also see what I’m advocating here as “write posts that bring the base and specifics together”; I think that will make material like this easier to understand for people who run across it when it first gets posted.
If you’re working on posts that rely on a collection of concepts/definitions, you could also consider using Shortform posts to lay out the “pieces” before you assemble them in a post. None of this is mandatory, of course; I just want to lay out what possibilities exist given the Forum’s current features.
I think I like the idea of more abstract posts being on the EA Forum, especially if the main intended eventual use is straightforward EA causes. Arguably, a whole lot of the interesting work to be done is kind of abstract.
This specific post seems to be somewhat related to global stability, from what I can tell?
I’m not sure what the ideal split is between this and LessWrong. I imagine that as time goes on we could do a better cluster analysis.