My sense is that there’s a lot of causal/top-down planning in EA.
My quick thought here is that EA currently has a very strong “evaluative” function (i.e. it’s strong at assessing the pros and cons of existing ideas) and a weak “generative” function (i.e. it’s weak at coming up with new ideas).
I’m bullish on increasing EA generativity from the present margin.
Just saw this AnnaSalamon comment on LessWrong about generativity & trustworthiness. Excerpt:

To be clear, I still think hypothesis-generating thinkers are valuable even when unreliable, and I still think that honest and non-manipulative thinkers should not be “ruled out” as hypothesis-sources for having some mistaken hypotheses (and should be “ruled in” for having even one correct-important-and-novel hypothesis). I just care more about the caveats here than I used to.
Your link to Anna Salamon’s comment goes to the Wikipedia page for sealioning :)
Argh!
Fixed, thanks.