Thank you, Will, excellent questions. And thanks for drawing out all of the implications here. Yeah I’m a super duper bullet biter. Age hasn’t dulled my moral senses like it has yours! xP
2. But maybe you think it’s just you who has your values and everyone else would converge on something subtly different—different enough to result in the loss of essentially all value. Then the 1-in-1-million would no longer seem so pessimistic.
Yes, I take (2) on the 1 vs 2 horn. I think I’m the only person who has my exact values. Maybe there’s someone else in the world, but not more than a handful at most. This is because I think our descendants will have to make razor-thin choices in computational space about what matters and how much, and these choices will amount to Power Laws of Value.
But if so, then suppose I’m Galactic Emperor and about to turn everything into X, best by my lights… do you really take a 99.9% chance of extinction, and a 0.1% chance of stuff optimised by you, instead?
I generally like your values quite a bit, but you’ve just admitted that you’re highly scope insensitive. So even if we each valued the same matter equally, depending on the empirical facts it looks like I should value my own judgment potentially nonillions of times as much as yours, on scope sensitivity grounds alone!
3. And if so, do you think that Tyler-now has different values than Tyler-2026? Or are you worried that he might have slightly different values, such that you should be trying to bind yourself to the mast in various ways?
Yup, I am worried about this and I am not doing much about it. I’m worried that the best thing that I could do would simply be to go into cryopreservation right now and hope that my brain is uploaded as a logically omniscient emulation with its values fully locked in and extrapolated. But I’m not super excited about making that sacrifice. Any tips on ways to tie myself to the mast?
What’s the probability you have that: i. People in general just converge on what’s right?
It would be something like: P(people converge on my exact tastes without me forcing them to) + [P(kind of moral or theistic realism I don’t understand)*P(the initial conditions are such that this convergence happens)*P(it happens quickly enough before other values are locked in)*P(people are very motivated by these values)]. To hazard an off-the-cuff guess, maybe 10^-8 + 10^-4*0.2*0.3*0.4, or about 2.4*10^-6.
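Spelling out that arithmetic in a minimal sketch (same off-the-cuff numbers as above; the variable names are just mine, for readability, not a considered model):

```python
# Back-of-the-envelope estimate of P(people converge on what's right),
# plugging in the illustrative guesses from the paragraph above.
p_spontaneous = 1e-8        # people converge on my exact tastes without me forcing them to
p_realism = 1e-4            # a kind of moral or theistic realism I don't understand holds
p_initial_conditions = 0.2  # the initial conditions are such that convergence happens
p_fast_enough = 0.3         # it happens before other values are locked in
p_motivating = 0.4          # people are very motivated by these values

p_converge = p_spontaneous + p_realism * p_initial_conditions * p_fast_enough * p_motivating
print(p_converge)  # roughly 2.4 * 10^-6
```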
ii. People don’t converge, but a significant enough fraction converge with you that you and others end up with more than 1milllionth of resources?
I should be more humble about this. Maybe it turns out there just aren’t that many free parameters on moral value once you’re a certain kind of hedonistic consequentialist who knows the empirical facts, and those people kind of converge on the same things. Suppose that scenario gets 1/30 odds vs my “it could be anything” modal view. Then suppose 1/20 of elites become that kind of hedonistic consequentialist upon deliberation. Then it looks like we control 1/600th of the resources. I’m just making these numbers up, but hopefully they illustrate that this is a useful push that makes me a bit less pessimistic.
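To make the toy arithmetic explicit (again, entirely made-up numbers, purely for illustration):

```python
# Toy Fermi estimate of the resource share controlled by people with roughly my values,
# using the made-up numbers above.
p_convergence_story = 1 / 30  # odds the "hedonistic consequentialists converge" story is right
elite_fraction = 1 / 20       # fraction of elites who become that kind of consequentialist
expected_share = p_convergence_story * elite_fraction
print(expected_share)  # 1/600, about 0.0017
```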
iii. You are able to get most of what you want via trade with others?
Maybe 1/20 that we do get to a suitably ideal kind of trade. I believe what I want is a pretty rivalrous good, i.e. stars, so at the advent of ideal trade I still won’t get very much of what I want. But it’s worth thinking about whether I could get most of what I want in other ways, such as by trading with digital slave-owners to make their slaves extremely happy, in a relatively non-rivalrous way.
I don’t have a clear view on this and think further reflection on this could change my views a lot.
Thanks! I appreciate you clarifying this and being so upfront about it. Views along these lines are what I always expect subjectivists to have, and they never do, and then I feel confused.
Thank you! IMO the best argument for subjectivists not having these views would be thinking that (1) humans generally value reasoning processes, (2) there are not that many different reasoning processes you could adopt or as a matter of biological or social fact we all value roughly the same reasoning processes, and (3) these processes have clear and determinate implications. Or, in short, Kant was right: if we reason from the standpoint of “reason”, which is some well-defined and unified thing that we all care about, we all end up in the same place. But I reject all of these premises.
The other argument is that our values are only determinate over Earthly things we are familiar with in our ancestral environment, and among Earthly things we empirically all kinda care about the same things. (I discuss this a bit here.)