“I think that it’s potentially very bad that young EAs don’t practice skeptical independent thinking as much (if this is indeed true).”
I agree that this is potentially very bad, but also perhaps difficult to avoid as EA professionalises, because you start needing more background and technical knowledge to weigh in on ongoing debates. Analogous to what happened in science.
On the other hand, we’re literally interested in the whole future, about which we currently know almost nothing. So there must be space for new ideas. I guess the problem is that, while “skeptical thinking” about received wisdom is hard, it’s still easier than generative thinking (i.e. coming up with new questions). The problem with EA futurism is not so much that we believe a lot of incorrect statements, but that we haven’t yet thought of most of the relevant concepts. So it may be particularly valuable for people who’ve thought about longtermism a bunch to make public even tentative or wacky ideas, in order to provide more surface area for others to cultivate skeptical thinking and advance the state of our knowledge. (As Buck has in fact done: http://shlegeris.com/2018/10/23/weirdest).
Example 1: a while back there was a post on why animal welfare is an important longtermist priority, and iirc Rob Wiblin replied saying something like “But we’ll have uploaded by then so it won’t be a big deal.” I don’t think that this argument has been made explicit much in the EA context, which makes it both ripe for skeptical independent thinking and much less visible as a hypothesis that it’s possible to disagree with.
Example 2: there’s just not very much discussion in EA about what actual utopias might look like. Maybe that’s because, to utilitarians, it’s just hedonium. Or because we’re punting it to the long reflection. But this seems like a very important topic to think about! I’m hoping that if this discussion gets kickstarted, there’ll be a lot of room for people to disagree and come up with novel ideas. Related: a bunch of claims I’ve made about utopia. https://forum.effectivealtruism.org/posts/4jeGFjgCujpyDi6dv/characterising-utopia
I’m reminded of Robin Hanson’s advice to young EAs: “Study the future. … Go actually generate scenarios, explore them, tell us what you found. What are the things that could go wrong there? What are the opportunities? What are the uncertainties? … The world needs more futurists.”
See also: https://forum.effectivealtruism.org/posts/Jpmbz5gHJK9CA4aXA/what-are-the-key-ongoing-debates-in-ea