Mati_Roy
Is there a name for a moral framework where someone cares more about the moral harm they directly cause than other moral harm?
I feel like a consequentialist would care about the harm itself whether or not it was caused by them.
And a deontologist wouldn’t act in a certain way now, even if doing so meant they would act that way less overall in the future.
Here’s an example (it’s just a toy example; let’s not argue whether it’s true or not).
A consequentialist might eat meat if they can use the saved resources to make 10 other people vegan.
A deontologist wouldn’t eat honey even if they knew that abstaining would lead them to crack in the future and start eating meat.
If you care much more about the harm caused by you, you might act differently from both of them. You wouldn’t eat meat to make 10 other people vegan, but you might eat honey to avoid cracking later and starting to eat meat.
A deontologist is like someone adopting that framework, but with an empty individualist approach. A consequentialist is like someone adopting that framework, but with an open individualist approach.
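To make the contrast concrete, here is a minimal sketch in Python. The harm numbers and the 100x weight on self-caused harm are assumptions I'm making up for illustration; they aren't implied by the toy example above.

```python
# Toy comparison of decision rules (all numbers are illustrative assumptions).
# Option A: eat meat and use the saved resources to make 10 other people vegan.
# Option B: stay vegan yourself and convert nobody.

def consequentialist_harm(own_harm, others_harm):
    # Cares about all harm equally, regardless of who causes it.
    return own_harm + others_harm

def direct_harm_weighted(own_harm, others_harm, own_weight=100):
    # The proposed framework: harm you directly cause counts much more.
    return own_weight * own_harm + others_harm

# Assumed harms: eating meat yourself = 1 unit; each non-converted person = 1 unit.
option_a = {"own": 1, "others": 0}   # you eat meat, the 10 others go vegan
option_b = {"own": 0, "others": 10}  # you stay vegan, the 10 others keep eating meat

for name, rule in [("consequentialist", consequentialist_harm),
                   ("direct-harm-weighted", direct_harm_weighted)]:
    a = rule(option_a["own"], option_a["others"])
    b = rule(option_b["own"], option_b["others"])
    preferred = "A" if a < b else "B"
    print(f"{name}: harm(A)={a}, harm(B)={b} -> prefers option {preferred}")

# The consequentialist rule prefers A (1 < 10), while the direct-harm-weighted
# rule prefers B (100 > 10), matching the divergence described above.
```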
I wonder if most self-labelled deontologists would actually prefer this framework I’m proposing.
ETA: I’m not sure how well “directly caused” can be cashed out. Does anyone have a model for that?
x-post: https://www.facebook.com/groups/2189993411234830/ (post currently pending)
I wish people cross-posting between LessWrong and the EA Forum encouraged users to comment on only one of them, to centralize comments. And to increase the probability that people follow this suggestion, I would put the full post on one of the two and just a link to it on the other (since reading a post takes a long time anyway, compared to the time it takes to click a link).
Policy suggestion for countries with government-funded health insurance or healthcare: People using death-with-dignity can receive part of the money that is saved by the government if applicable.
Which could be used to pay for cryonics among other things.
EA isn’t (supposed to be) dogmatic, and hence doesn’t have clearly defined values.
I agree.
I think this is a big reason why people have chosen to focus on behavior and community involvement.
Community involvement is just instrumental to the goals of EA movement building. I think the outcomes we want to measure are things like career and donations. We also want to measure things that are instrumental to this, but I think we should keep those separate.
Related: my comment on “How have you become more (or less) engaged with EA in the last year?”
I think it would be good to differentiate things that are instrumental to doing EA and things that are doing EA.
Ex.: Attending events and reading books are instrumental. Working and donating money are directly EA.
I would count those separately. Engagement in the community is just instrumental to the goal of EA movement building. If we entangle both in our discussions, we might end up with people attending a bunch of events and reading a lot online, but without ever producing value (for example).
Although maybe it does produce value in itself, because they can do movement building themselves and become better voters for example. And focusing a lot on engagement might turn EA into a robust superorganism-like entity. If that’s the argument, then that’s fine I guess.
Somewhat related: The community’s conception of value drifting is sometimes too narrow.
What are your egoistic preferences? (ex.: hedonism peak, hedonism intensity times length, learning, life extension, relationships, etc.)
(why) do you focus on near-term animal welfare and poverty alleviation?
yeah, ‘shift’ or ‘change’ work better for neutral terms. other suggestion: ‘change in revealed preferences’
I see, thanks!
Ok yeah, my explanations didn’t make the connection clear. I’ll elaborate.
I have the impression “drift” has the connotation of an uncontrolled, and therefore undesirable, change. It has a negative connotation. People don’t want to value drift. If you call a rational surface-level value update “value drift”, it could confuse people and make them less likely to make those updates.
If you use “value drift” only to refer to EA-value drift, it also sneaks in the implication that other value changes are not “drifts”. Language shapes our thoughts, so this usage could modify one’s model of the world in such a way that they become more EA than their own values warrant.
I should have been more careful about attributing certain intentions to you in my previous comment, though. But I think some EAs do have this intention. And I think using the word that way has this consequence whether or not that’s the intent.
This seems reasonable to me. I do use the shortcut myself in various contexts. But I think using it on someone when you know it’s because they have different values is rude.
I use value drift to refer to fundamental values. If your surface level values change because you introspected more, I wouldn’t call it a drift. Drift has a connotation of not being in control. Maybe I would rather call it value enlightenment.
I think another term would better fit your description. Maybe “executive failure”.
I don’t see it as a micro death
Me neither. Nor do I see it as a value drift though.
If they have the same values, but just became worse at fulfilling them, then it’s more something like “epistemic drift”; although I would probably discourage using that term.
On the other hand, if they started caring more about homeless people intrinsically for some reason, then it would be a value drift. But they wouldn’t be “less effective”; they would, presumably, be just as effective, but at a different goal.
Other thoughts:
It seems epistemically dangerous to discourage such value enlightenment, as it might prevent us from becoming more enlightened ourselves.
It seems pretty adversarial to manipulate people into not becoming more value-enlightened, and allowing this at a norm level seems net negative from most people’s point of view.
But maybe people want to act more altruistically and trustingly in a society where others also espouse those values. In which case, surface-level values could change in a good way for almost everyone without any fundamental value drift. That’s also a useful phenomenon to study, so it’s probably fine to also call this ‘value drift’.
Thanks!
I agree with your clarifications.
levels of engagement with the EA community reduces drop-out rates
“drop-out” meaning 0 engagement, right? so the claim has the form “the more you do X, the less likely you are to stop doing X completely”. it’s not clear to me to what extent it’s causal, but yeah, still seems useful info!
I think most of the other 9 areas you mention seem like they already receive substantial non-EA attention
oh, that’s plausible!
The post Reducing long-term risks from malevolent actors is arguably one example of EAs considering efforts that would have that sort of scope and difficulty and that would potentially, in effect, increase altruism
Good point! In my post, I was mostly thinking at the individual level. Looking at the population level and over a longer time horizon, I should probably add other possible interventions such as:
Incentives to have children (political, economic, social)
Immigration policies
Economic system
Genetic engineering
Dating dynamics
Cultural evolution
Thanks.
I think “negative value drift” is still too idiosyncratic; it doesn’t say negative for whom. For the value holder, any value drift generally has negative consequences.
I (also) think it’s a step in the right direction to explicitly state that a post isn’t trying to define value drift, but just to provide empirical info. Hopefully my post will have provided that definition, and people will now be able to build on this.
if by “something good” you mean “something altruistic”, then yes I agree. it’s good for someone when others become altruistic towards them.
The community’s conception of value drifting is sometimes too narrow
It’s a convergent instrumental goal to preserve one’s values. If you change your goals / values, you will generally achieve / fulfill them less.
Value-drifting someone else might be positive for you, at least if you only consider the first-order consequences, but it generally seems pretty unvirtuous and uncooperative to me. A world where value-drifting people is socially acceptable is probably worse than a world where it’s not.
Awesome! Documented on Moral economics—Cause Prioritisation Wiki