Assuming the parts about sentience work, someone who is both rational and altruistic (a rational altruist, as you say) might still have normative reasons not to run these trainings.
Some (non-exhaustive) reasons I can think of, based around or compatible with an expected value framework, include the following. (Some of these assume that at least some suffering results from running the trainings, e.g. through suffering subroutines that may result in incidental s-risks, or that there are forgone opportunities to reduce suffering or increase happiness elsewhere.)
- Asymmetry ideas in population ethics (e.g., ‘making people happy, not making happy people’).
- A position of diminishing returns (or zero returns past some point) on the value of happiness.
- Ideas objecting to intrapersonal or interpersonal tradeoffs that create happiness at the price of creating suffering (in this case you might have multiple expected (dis)values to work with).
- Value lexicality, which says that some bads are always worse than some or any amount of goods (some conceive of this as listing and comparing expected (dis)values as vectors; see the sketch below).
- Different forms of negative utilitarianism. (It is worth emphasizing that the expected (dis)value of happiness and suffering depends both on our subjective valuation of the experiences we imagine occurring and on the subjective probability that those experiences actually occur.)
These could be motivated by the thought that the disvalue of suffering and the value of happiness are orthogonal and don’t ‘cancel each other out’. I think Magnus Vinding’s book on suffering-focused ethics (SFE) presents these insights more clearly than I can, so checking it out could be useful if you’d like to learn more.
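To make the lexicality and expected value points a bit more concrete, here is a minimal sketch in Python. All probabilities and (dis)values are made-up numbers, and the lexical rule shown is just one possible formalization; it only illustrates how a summed expected value and a lexical comparison of expected (dis)value vectors can rank the same two options differently:

```python
# Illustrative sketch only: the probabilities and (dis)values are invented.
# Each option is summarized by an expected (dis)value pair:
#   E[suffering] = P(suffering occurs) * disvalue of that suffering
#   E[happiness] = P(happiness occurs) * value of that happiness

def expected_pair(p_suffering, disvalue, p_happiness, value):
    """Expected (dis)value pair: subjective valuation times subjective probability."""
    return (p_suffering * disvalue, p_happiness * value)

# Hypothetical options: run the trainings vs. some alternative use of resources.
run_trainings = expected_pair(p_suffering=0.2, disvalue=100, p_happiness=0.9, value=50)
alternative = expected_pair(p_suffering=0.01, disvalue=100, p_happiness=0.5, value=50)

def summed_ev(pair):
    """Classical aggregation: happiness and suffering trade off on one scale."""
    e_suffering, e_happiness = pair
    return e_happiness - e_suffering

def lexical_key(pair):
    """Lexical rule: minimize expected suffering first; happiness only breaks ties."""
    e_suffering, e_happiness = pair
    return (e_suffering, -e_happiness)  # lower key is better

print(summed_ev(run_trainings), summed_ev(alternative))    # 25.0 vs 24.0: summing favors running
print(min([run_trainings, alternative], key=lexical_key))  # the lexical rule favors the alternative
```

The point of the sketch is just that once expected suffering and expected happiness are kept on separate axes, the choice of comparison rule (summing versus lexical priority) can flip which option looks best, even given the same underlying probabilities and valuations.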