money compounds over time and because risk increases with time... However, these are instrumental considerations for discounting the future.
Here, by contrast, we are considering whether the intrinsic value of people itself should be discounted.
Is this really true?
No, I mean seriously, is this true? I’m dumb and not a philosopher.
I don’t intrinsically discount the value of people or morally relevant entities. In fact, it would take me time to even come up with reasons why anyone would discount anyone else, whether they are far away in space or time, or alien to us. Like, this literally seems like the definition of evil?
So this seems to make me really incompetent at coming up with an answer to this post.
Now, using knowledge of x-risk acquired from YouTube cartoons: there are kinds of x-risk we can’t prevent. For example, being in an alien zoo, living in a simulation, or sitting in a “false vacuum” all create forms of x-risk we can’t prevent or even know about.
Now, given these x-risks, the reason we might discount the future is instrumental, and at least some of these reasons pretty much follow economic arguments: if we think there is a 0.0001% annual chance of vacuum decay, or of some catastrophe we can’t prevent or even understand, this immediately bounds the value of the long-term future.
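To make that concrete, here is a minimal sketch (my own construction, not anything from the post) of the economic argument: a constant unavoidable annual catastrophe probability acts like an exponential discount rate and caps the expected length of the future. The 1e-6 probability is just the illustrative 0.0001% figure above.

```python
# Minimal sketch: a small unavoidable annual catastrophe probability
# bounds the expected value of the long-term future.
# The 1e-6 figure (0.0001%) is illustrative, not an actual estimate.

p = 1e-6  # assumed annual probability of an unpreventable catastrophe

# Survival to year t has probability (1 - p)**t, so each future year is
# effectively discounted at rate p. The expected number of future years
# is the geometric series sum of (1 - p)**t for t = 1, 2, ...:
expected_years = (1 - p) / p  # roughly one million years, not infinity

print(f"Expected future duration: {expected_years:,.0f} years")
```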
Now, note that if we parameterize this low percentage (0.0001% or something), it’s likely we can set up some model where the current program of longtermism or x-risk reduction, or even much larger and more powerful versions of it, is fully justified for pretty reasonable ranges of this percentage.
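As a hedged illustration of that claim (the model, the future_value helper, and every number here are my own assumptions, not the post’s), here is one toy parameterization: across several orders of magnitude of the unavoidable background rate, a one-percentage-point cut in preventable extinction risk is still worth an enormous number of expected year-units.

```python
# Toy model: value of reducing preventable x-risk when an unavoidable
# background annual risk p bounds the future. All numbers illustrative.

def future_value(p_background: float, value_per_year: float = 1.0) -> float:
    """Expected total value of the future when each year carries an
    unavoidable catastrophe probability p_background (geometric survival)."""
    return value_per_year * (1 - p_background) / p_background

for p in (1e-8, 1e-6, 1e-4):
    v = future_value(p)
    # Cutting preventable extinction risk by one percentage point buys
    # roughly 1% of the total expected future in this toy model.
    gain = 0.01 * v
    print(f"p = {p:.0e}: expected future ~ {v:,.0f} year-units, "
          f"1% risk reduction worth ~ {gain:,.0f} year-units")
```

Even at the pessimistic end (p = 1e-4), the gain per percentage point is on the order of a hundred year-units, and it grows a hundredfold for every hundredfold drop in the background rate, which is why pretty reasonable ranges of this percentage can still justify large x-risk programs in a model like this.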
Hi Charles,
I’m happy to respond, but am reluctant to do so here, since the original post stated that “We don’t want other users discussing other people’s answers, so we will moderate away those comments.”