Almost all of our written output takes as a strong assumption that economic growth and technological advancement are good things.
For what it’s worth, I think this conclusion is extremely non-obvious and I’m somewhat disheartened when I see people taking it for granted. Most people are prone to optimism bias.
Why are we so cautious about raising these issues?
There may be a sampling bias here. People at Stanford EA talk about these issues, and I read about them online all the time. I haven’t interacted much with CEA/Oxford people but my impression is you guys are a lot less willing to acknowledge that anything might be harmful, and less willing to discuss weird ideas.
I don’t want to interpret that post on flow-through effects as representing anything other than Holden’s personal opinion, but it does strike me as pretty naive (in the mathematical sense of “you only thought of the most obvious conclusion and didn’t go into any depth on this”). GiveWell’s lack of (public) reasoning on flow-through effects is a large part of why I don’t follow its charity recommendations.
The post on differential progress is a step in the right direction, and I’m generally more confident that Nick Beckstead is thinking correctly about flow-through effects than I am about anyone else at GiveWell.
EDIT: To Holden’s credit, he does discuss how global catastrophic risks could make technological/economic progress harmful, so it’s not like he hasn’t thought about this at all.
“People at Stanford EA talk about these issues, and I read about them online all the time.”
I’ve visited virtually every EA chapter and I think Stanford is the single most extreme one in this regard.
As for GiveWell, its published statements on this matter basically just say it assumes growth is good: http://blog.givewell.org/2013/04/04/deep-value-judgments-and-worldview-characteristics/
With a little more detail: http://blog.givewell.org/2013/05/15/flow-through-effects/
But recently there was this cool post: http://blog.givewell.org/2015/09/30/differential-technological-development-some-early-thinking/
The level of confidence in ‘broad empowerment’ as a force for good has always been my biggest disagreement with GiveWell.