I believe I could do this. My background is just writing, argument, and community-building, I guess.
An idea that was floated recently was an interactive site that asks the user a few questions about themselves and their worldview, then tailors an introduction to them.
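To make the idea concrete, here is a minimal sketch of what that tailoring step could look like. Everything in it is hypothetical: the question fields, the worldview labels, and the intro copy are placeholders I made up, not anything an existing site uses.

```typescript
// Sketch of the "tailored introduction" idea: ask a few questions,
// then pick an EA intro framed around the reader's stated worldview.
// Labels and copy below are illustrative placeholders only.

type Worldview = "religious" | "secular-humanist" | "data-driven" | "unsure";

interface Answers {
  worldview: Worldview;
  caresMostAbout: "people-nearby" | "global-poverty" | "animals" | "future-generations";
}

// Each intro leads with a frame the reader already accepts, then points
// toward the shared EA core (evidence, cause comparison, doing good effectively).
const intros: Record<Worldview, string> = {
  "religious":
    "Many traditions already ask us to help the poor; EA asks how to do that as well as we can.",
  "secular-humanist":
    "If human welfare matters, it's worth comparing causes by how much welfare they actually produce.",
  "data-driven":
    "Charity cost-effectiveness varies enormously between interventions; EA takes that seriously.",
  "unsure":
    "EA starts from a simple question: of the good we could do, which options do the most?",
};

function tailoredIntro(answers: Answers): string {
  const lead = intros[answers.worldview];
  const cause = answers.caresMostAbout.replace(/-/g, " ");
  return `${lead} You mentioned caring about ${cause} — here's how EAs approach that cause.`;
}

// Example: a reader who identifies as data-driven and cares about global poverty.
console.log(tailoredIntro({ worldview: "data-driven", caresMostAbout: "global-poverty" }));
```

The real work, of course, would be in writing intro copy that genuinely meets each worldview where it is, not in the routing logic.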
I’m not sure how strong the need actually is, though. I get the impression that EA is such a simple concept (reasoned, evidence-based moral dialogue; earnest consequentialist optimization of our shared values) that most misunderstandings of what EA is are a result of deliberate misunderstanding, and having better explanations won’t actually help much. It’s as if people don’t want to believe that EA is what it claims to be.
It’s been a long time since I was outside of the rationality community, but I definitely remember having some sort of negative feeling about the suggestion that I could be better at foundational capacities like reasoning, or, in EA’s case, knowing right from wrong.
I guess a solution there is to convince the reader that rationality/practical ethics isn’t just a tool for showing off to others (which is zero-sum, and so we wouldn’t collectively benefit from improvements in the state of the art), and that being trained in it would make their life better in some way. I don’t think LW actually developed the ability to sell itself as self-help (I think it just became a very good analytic philosophy school). I think that’s where the work needs to be done.
What bad things will happen to you if you reject one of the VNM expected utility axioms or tell yourself pleasant lies? What choking cloud of regret will descend around you if you aren’t doing good effectively?
Please make sure to enter this contest before the deadline!
Oh thank you, I might. Initially I Had Criticisms, but as with the FLI worldbuilding contest, my criticisms turned into outlines of solutions and now I have ideas.