So here’s a list of claims, each with a cartoon response representing my impression of a typical EA/PS view on things (insert caveats here):
Some important parts of “developed world” culture are too pessimistic. It would be very valuable to blast a message of definite optimism, viz. “The human condition can be radically improved! We have done it in the past, and we can do it again. Here are some ideas we should try...”
PS: Strongly agree. The cultural norms that support and enable progress are more fragile than you think.
EA: Agree. But, as an altruist, I tend to focus on preventing bad stuff rather than making good stuff happen (not sure why...).
Broadly, “progress” comes about when we develop and use our capabilities to improve the human condition, and the condition of other moral patients (~sentient beings).
PS: Agree, this gloss seems basically fine for now.
EA: Agree, but we really need to improve on this gloss.
Progress comes in different kinds: technological, scientific, ethical, and improvements in global coordination. At different times in history, different kinds will be more valuable. Balancing these capabilities matters: during some periods, increasing capabilities in one area (or a subfield of one area) may be disvaluable (cf. the Vulnerable World Hypothesis).
EA & PS: Seems right. Maybe we disagree on where the current margins are?
Let’s try not to destroy ourselves! The future could be wonderful!
EA & PS: Yeah, duh. But also eek—we recognise the dangers ahead.
Markets and governments are quite functional, which means there’s much more low-hanging fruit in pursuing the interests of those whom these systems aren’t built to serve (e.g. future generations, animals).
PS: Hmm, take a closer look. There are a lot of trillion-dollar bills lying around, even in areas where an optimistic EMH would say that markets and governments ought to do well.
EA: So I used to be really into the EMH. These days, I’m not so sure...
Broadly promoting industrial literacy is really important.

PS: Strongly agree.

EA: I haven’t thought about this much. My quick thought is that I’m happy to see some people working on this. I doubt it’s the best option for many of the people we speak to, but it could be a good option for some.

We can make useful predictions about the effects of new technologies.

PS (David Deutsch): I might grudgingly accept an extremely weak formulation of this claim. At least on Fridays. And only if you don’t try to explicitly assign probabilities.

You might be missing a crucial consideration!

PS: What’s that? Oh, I see. Yeah. Well… I’m all for thinking hard about things, and acting on the assumption that I’m probably wrong about almost everything. In the end, I guess I’m crossing my fingers and hoping we can learn by trial and error, without getting ourselves killed. Is there another option?

EA: I think about what I believe. Then I think about whether it’s an information hazard, and discuss that possibility with a lot of my friends. Then I say it in a way that makes a lot of sense to people who are a lot like me, but I don’t think much about other audiences.
On Max Daniel’s thread, I left some general comments, a longer list of questions to which PS/EA might give different answers, and links to some of the discussions that shaped my perspective on this.
How do you give advice?
I think it’s more like: