Apologies, but like so many posts here, this is massive overthinking that obscures a simple reality: there is some limit to the scale of powers human beings can successfully manage, because human beings are inherently limited, like every other species produced by evolution.
It’s understandable that intelligent people will try to calculate exactly where those human limits lie, but so long as we are racing toward those limits as fast as we possibly can, sooner or later we will reach them, which makes such calculations essentially meaningless.
It sounds like you are worried about the limits of human technological development. Is the worry that eventually our technology will reach a limit, but we cannot know where that limit will be?
I’m not worried about the limits of technical development so much. There seems to be near limitless knowledge still available to be discovered.
I’m worried about the limits of the human beings receiving that technical development.
I’m trying to think holistically, and consider all elements of the system, which includes human beings and our limitations. I’m worried about the mismatch between exponential knowledge development and incremental maturity development.
Thanks for the question. What, if anything, are you worried about?
Thanks for your question. I’m worried about the limits of human beings too! When I’m not doing global priorities research, most of my work is about bounded rationality, which just means I want to know what rationality demands of limited agents like us. Here’s a quick piece about my research approach: https://www.dthorstad.com/research.
What I’m worried about in this post is the relationship between two claims: (1) Existential risk is very high, and (2) It’s very important to do what we can to reduce existential risk. You might think that (1) supports (2). In this post, I argue that (1) could tell against (2).
Thanks for the introduction to your work and these concepts. They’re new to me so that’s the beginning of an education.
Thanks also for the quick summary of this page, which I admittedly have not read carefully, as it’s pretty much over my head. If you’re willing to continue the translation into man-in-the-street language, what does “(1) could tell against (2)” mean?
Are you saying that (2) does not necessarily follow from (1)? Or?
Thanks! Sorry for the confusing hedge-y claim: academics tend to be a bit guarded sometimes when we should just say things straight out.
I definitely mean that (2) doesn’t follow from (1). But I mean a lot more than that. I mean that if (1) is true, then it’s much harder for (2) to be true. It’s not impossible for (1) and (2) to be true together. But (1) actually makes it much harder, not easier for (2) to be true.
No worries, the problem is mine. There is a LOT that I don’t know about these issues.
So if it’s true that existential risk is very high, that makes it much harder to reduce existential risk? Or...
If it’s true that existential risk is very high, it’s not important to do what we can to reduce it?
Or, something else?
Yeah, the second: if it’s true that existential risk is high, it’s less important to do what we can to reduce it.
It can still be important, but not as important.