You seem to indicate that one who is “maximizing” for some value, such as the well-being of moral patients across spacetime, would lead to, or tend to lead to, poor mental health. I can understand how one might think this of “naïve maximization”, where one depletes oneself by giving of one’s effort, time, and resources at a rate that either causes burnout or leaves one barely able to function. But this is like suggesting that if you want to get the most out of a car, you should drive it as frequently and relentlessly as possible, without giving the vehicle needed upkeep and repairs.
But one who does not incorporate one’s own needs, including mental health needs, into one’s determination of how to maximize for a value is not operating optimally as a maximizer. I will note that others have indicated that when they view the satisfaction of their own needs or desires as primarily instrumental, rather than terminal, goals, this somewhat diminishes them. In my personal experience, I strive to “maximize”: I want to live my life in a way that is best calculated toward reducing suffering and increasing the flourishing of conscious beings. But I recognize that taking care of my health is part of how to do so.
I would be curious whether other “maximizers” would say that they are capable of integrating their own health into their decisions such that they can maintain adequate health.
I hold the same view towards “non-naïve” maximization being suboptimal for some people. Further clarification is in my other comment.
I have concerns about the idea that a healthy-seeming maximizer can prove the point that maximization is safe. In mental health, we often come across “ticking time bomb” scenarios, which I’m using here as a sort of Pascal’s mugging (except that there is plenty of knowledge and evidence that this mugging does in fact take place, and not uncommonly). What if someone merely appears to be healthy, and this appearance of health is concealing, and contributing to, a serious emotional breakdown later in their life, potentially decades on? This process isn’t a mysterious thing that comes without obvious signs, but what is obvious to mental health professionals may not be obvious to EAs.
I don’t reject the possibility that healthy maximizers can exist. (Potentially there is common ground where a rationalist would describe a plausible strategy as maximization, and I, as a mental health advocate, would say it’s not, and our disagreement in terminology is actually consistent with both our frameworks.) If EA continues to endorse maximizing, how about we at least do it in a way that doesn’t directly align with known risks of ticking time bombs?