There is a huge range of “far future” that different views will prioritize differently, and not all need to care about the cosmic endowment at all—people can care about the coming 2-3 centuries based on low but nonzero discount rates, for example, but not care about the longer term future very much.
I don’t understand why that matters. Whatever discount rate you have, if you’re prioritizing between extinction risk and trajectory change you will have some parameters that tell you something about what is going to happen over N years. It doesn’t matter how long this time horizon is. I think you’re not thinking about whether your claims have bearing on the actual matter at hand.
It would probably be most useful for you to try to articulate a view that avoids the dilemma I mentioned in the first comment of this thread.
we can make powerful AI agents that determine what happens in the lightcone
I think that you should articulate a view that explains why you think AI alignment of superintelligent systems is tractable, so that I can understand how you think it’s tractable to allow such systems to be built. That seems like a pretty fundamental disconnect that makes me not understand your (in my view, facile and unconsidered) argument about the tractability of doing something that seems deeply unlikely to happen.