“Any disagreement about longtermist prioritization should presuppose longtermism”

First, you’re adding the assumption that the framing must be longtermist; second, even conditional on longtermism you don’t need to be utilitarian, so the supposition that you need a model of what we do with the cosmic endowment would still be unjustified.
You’re not going to be prioritizing between extinction risk and long-term trajectory changes based on tractability if you don’t care about the far future. And for any moral theory you can ask “why do you think this will be a good outcome?”; as long as you don’t value life intrinsically, you’ll have to state some empirical hypotheses about the far future.
There is a huge range of “far future” that different views will prioritize differently, and not all of them need to care about the cosmic endowment at all: people can care about the coming 2-3 centuries based on low but nonzero discount rates, for example, without caring much about the longer-term future.
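As a rough illustration of the discount-rate point (the 0.5% rate and the specific horizons below are arbitrary numbers chosen for the example, not anything stated in this exchange): with an annual discount rate ρ, welfare t years out gets weight (1+ρ)^(-t), which keeps meaningful weight on the next few centuries while making cosmic-endowment timescales negligible.

```latex
\[
  w(t) = (1+\rho)^{-t}, \qquad
  \rho = 0.5\% \;\Rightarrow\; w(300) \approx 0.22, \quad w(10^{4}) \approx 2 \times 10^{-22}.
\]
```

So on such a view, the year-300 future still gets roughly a fifth of the weight of the present, while anything on the scale of the cosmic endowment is effectively discounted to zero.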
I don’t understand why that matters. Whatever discount rate you have, if you’re prioritizing between extinction risk and trajectory change, you will have some parameters that encode what you expect to happen over the next N years. It doesn’t matter how long that time horizon is. I think you’re not considering whether your claims have any bearing on the actual matter at hand.
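Schematically (the notation below is introduced only to state the point, it is not from the thread): whatever discount rate ρ and horizon N you pick, comparing a marginal reduction in extinction probability against a marginal trajectory change looks roughly like this.

```latex
\[
  \Delta V_{\mathrm{extinction}} \approx \Delta p_{\mathrm{survival}} \sum_{t=0}^{N} \frac{v_t}{(1+\rho)^{t}},
  \qquad
  \Delta V_{\mathrm{trajectory}} \approx \sum_{t=0}^{N} \frac{\Delta v_t}{(1+\rho)^{t}}.
\]
```

Both expressions depend on the same v_t terms (how much value you expect at time t, conditional on survival), so either comparison commits you to empirical hypotheses about that stretch of the future, whatever the discount rate and horizon happen to be.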
It would probably be most useful for you to try to articulate a view that avoids the dilemma I mentioned in the first comment of this thread.
“we can make powerful AI agents that determine what happens in the lightcone”
I think that you should articulate a view that explains why you think AI alignment of superintelligent systems is tractable, so that I can understand how you think it’s tractable to allow such systems to be built. That seems like a pretty fundamental disconnect, and it leaves me unable to follow your (in my view, facile and unconsidered) argument about the tractability of doing something that seems deeply unlikely to happen.