I'm not clear on what relevance this holds for EA or any of its cause areas, which is why I've tagged this as "Personal Blog".
The material seems like it might be a better fit for LessWrong, unless you plan to add more detail on ways that this "strategy" might be applied to open questions or difficult trade-offs within a cause area (or something community-related, of course).
I would have been much more interested in this post if it had included explicit links to EA. That could mean including EA-relevant examples, explicitly referencing existing EA "literature", or positioning this strategy as a solution to a known problem in EA.
I don't think the level of abstraction was necessarily the problem; the problem was that it didn't seem especially relevant to EA.
It's true that this is pretty abstract (as abstract as fundamental epistemology posts), but because of that I'd expect it to be a relevant perspective for most strategies one might build, whether for AI safety, global governance, poverty reduction, or climate change. It's lacking the examples and explicit connections, though, that would make this salient. In a future post I've got queued on AI safety strategy, I already link to this one, and in general abstract articles like this provide a nice base to build from toward specifics. I'll definitely think about, and possibly experiment with, putting the more abstract and conceptual posts on LessWrong.
If you plan on future posts which will apply elements of this writing, that's a handy thing to note in the initial post!
You could also see what I'm advocating here as "write posts that bring the base and specifics together"; I think that will make material like this easier to understand for people who run across it when it first gets posted.
If you're working on posts that rely on a collection of concepts/definitions, you could also consider using Shortform posts to lay out the "pieces" before you assemble them in a post. None of this is mandatory, of course; I just want to lay out what possibilities exist given the Forum's current features.
I think I like the idea of more abstract posts being on the EA Forum, especially if the main intended eventual use is for straightforward EA causes. Arguably, a whole lot of the interesting work to be done is kind of abstract.
This specific post seems to be somewhat related to global stability, from what I can tell?
I'm not sure what the ideal split is between this and LessWrong. I imagine that as time goes on we could do a better cluster analysis.