Thanks for sharing your thoughts. I'll respond in turn to what I think are the two main parts of your post, since, as you said, it is a combination of suffering-focused ethics and complex cluelessness.
On Suffering-focused Ethics: To be honest, I've never felt the intuitive pull of suffering-focused theories, especially since my read of your paragraphs here is that they tend towards a lexical view where the amount of suffering is the only thing that matters for moral consideration.[1]
Such a moral view doesn't really make sense to me, so I'm not particularly concerned by it, though of course everyone has different moral intuitions, so YMMV.[2] Even if you're convinced of SFE, though, the question remains how best to reduce suffering, which runs into the cluelessness considerations you point out.
On complex cluelessness: On this side, I think you're right about a lot of things, and that's a good thing, not a bad one!
I think you're right about the "time of perils" assumption, but you really should increase your scepticism of any intervention that claims to have "lasting, positive effects over millennia", since we can't get feedback on the millennia-long impact of our interventions.
You are right that radical uncertainty is humbling, and it can be frustrating, but it is also the state everyone is in, and there's no use beating yourself up over a default we all share.
You can only decide how to steer humanity toward a better future with the knowledge and tools you have now. That could be something very small, and it doesn't have to involve spending hundreds of hours trying to solve the problems of cluelessness.
I'd argue that reckoning with radical uncertainty should point towards moral humility and pluralism, but I would say that, since that's the perspective in my wheelhouse! I also hinted at such considerations in my last post about a Gradient-Descent approach to doing good, which might be a more cluelessness-friendly attitude to take.
[1] You seem to be asking, e.g., "will lowering existential risk increase the expected amount of future suffering?" instead of, for example, "will lowering existential risk increase the total amount of preferences satisfied/non-frustrated?"
[2] To clarify, this sentence referred specifically to lexical suffering views, not to forms of SFE that are less strong in their formulation.