Thanks for sharing your thoughts. I’ll respond in turn to what I think are the two main parts of it, since as you said this post seems to be a combination of suffering-focused ethics and complex cluelessness.
On Suffering-focused Ethics: To be honest, I’ve never seen the intuitive pull of suffering-focused theories, especially since my read of your paragraphs here seems to tend towards a lexical view where the amount of suffering is the only thing that matters for moral consideration.[1]
Such a moral view doesn’t really make sense to me, to be honest, so I’m not particularly concerned by it, though of course everyone has different moral intuitions so YMMV.[2] Even if you’re convinced of SFE, though, the question remains how best to reduce suffering, which runs into the cluelessness considerations you point out.
On complex cluelessness: On this side, I think you’re right about a lot of things, but that’s a good thing not a bad one!
I think you’re right to question the ‘time of perils’ assumption, and you should extend that scepticism to any intervention which claims to have “lasting, positive effects over millennia”, since we can’t get feedback on the millennia-long impact of our interventions.
You are right that radical uncertainty is humbling, and it can be frustrating, but it is also the state that everyone is in, and there’s no use beating yourself up over that shared default.
You can only decide how to steer humanity toward a better future with the knowledge and tools that you have now. It could be something very small, and doesn’t have to involve you spending hundreds of hours trying to solve the problems of cluelessness.
I’d argue that reckoning with radical uncertainty should point towards moral humility and pluralism, but I would say that, since that’s the perspective in my wheelhouse! I also hinted at such considerations in my last post about a Gradient-Descent approach to doing good, which might be a more cluelessness-friendly attitude to take.
You seem to be asking, for example, “will lowering existential risk increase the expected amount of future suffering?” instead of “will lowering existential risk increase the total amount of preferences satisfied/non-frustrated?”
To clarify, this sentence specifically referred to lexical suffering views, not all forms of SFE that are weaker in their formulation.