Great points, thanks! I think the well-being enhancements you describe definitely fit this post’s definition of mind enhancement and could in many ways also affect ‘Benevolence, Intelligence, Power’ (especially ‘Power’). In that regard, most of the post’s considerations would apply equally to well-being enhancements.
However, the aspects I list mostly focus on the instrumental implications of mind enhancements, i.e. how they could increase or decrease the effective-altruist impact of certain actors or of society. Since the enhancements you describe could be seen as having a direct impact on QoL/QALYs, other considerations would also become important.
E.g. in some cases there could be trade-offs, with certain well-being enhancements improving subjective quality of life while decreasing ‘Benevolence, Intelligence, Power’. In such cases, expected desirability would depend a lot on your background assumptions about the world, e.g. regarding existential risk or long-termism, which could make it much harder to draw definitive conclusions there.
Definitely a very interesting sub-area and probably also very neglected and worthy of thorough EA examination! :)