I’m part of the target audience, I think, but this post isn’t very helpful to me. Mistrust of arguments which tell me to calm down may be a part of it, but it seems like you’re looking for reasons to excuse caring about things other than effective altruism, rather than weighing the evidence for what works better for getting EA results.
Your “two considerations”,
If you view EA as a possible goal to have, there is nothing contradictory about having other goals in addition.
Even if EA becomes your only goal, it does not mean that you should necessarily spend the majority of your time thinking about it, or change your life in drastic ways. (More on this below.)
, look like a two-tiered defence against EA pressures rather than convergence on a single right answer on how to consider your goals. Maybe you mean that some people are ‘partial EAs’ and others are ‘full EAs (who are far from highly productive EAs in willpower-space)’, but it isn’t very clear.
Now, on ‘partial EAs’: If you agree that effective altruism = good (if you don’t, adjust your EA accordingly, IMO), then agency attached to something with different goals is bad compared to agency towards EA. Even if those goals can’t be changed right now, they would still be worse, just like death is bad even if we can’t change it (yet (except maybe with cryonics)). If you are a ‘partial EA’ who feels guilty about not being a ‘full EA’, this seems like an accurate weighing of the relative moral values, only wrong if the guilt makes you weaker rather than stronger. Your explanation doesn’t look like a begrudging acceptance of the circumstances, it looks almost like saying ‘partial EAs’ and ‘full EAs’ are morally equivalent.
Concerning ‘full EAs who are far from being very effective EAs in willpower-space’, this triggers many alarm bells in my mind, warning of the risk of it turning into an excuse to merely try. You reduce highly effective altruists’ productivity to a personality trait (and ‘skills’ which in context sound unlearnable), which doesn’t match 80,000 Hours’ conclusion that people can’t estimate well how good they are at things, or how much they’ll enjoy things, before they’ve tried them.
Your statement on compartmentalisation (and Ben Kuhn’s original post) both just seem to assume that because denying yourself social contact on the grounds that you could be making money instead is bad, compartmentalisation must therefore be good. But the reasoning for this compartmentalisation—it causes happiness, which causes productivity—isn’t (necessarily) compartmentalised, so why compartmentalise at all? Your choice isn’t just between a delicious candy bar and deworming someone; it’s between a delicious candy bar which empowers you to work to deworm two people, and deworming one person. This choice isn’t removed when you use the compartmentalisation heuristic, it’s just hidden. You’re “freeing your mind from the moral dilemma”, but that is exactly what evading cognitive dissonance is.
I don’t have a good answer. I still have an ugh field around making actual decisions and a whole bunch of stress, but this doesn’t sound like it should convince anyone.
Thanks for replying. (Note: I’m making this up as I go along; I’m forgoing self-consistency for accuracy.)
Merely trying isn’t the same as pretending to try. It isn’t on the same axis as emotionally caring; it’s the (lack of) agency towards achieving a goal. Someone who is so emotionally affected by EA that they give up is definitely someone who ‘merely tried’ to affect the world, because you can’t just give up if you care in an agentic sense.
What we want is for people to be emotionally healthy—not caring too much or too little, and with control over how affected they are—but with high agency. Telling people they don’t need to be like highly agentic EA people affects both, and to me at least it isn’t obvious if you meant that people should still try their hardest to be highly agentic but merely not beat themselves up over falling short.
Whose “right” are we talking about, here? If it’s “right” according to effective altruism, that is obviously false: someone who discovers they like murdering is wrong by EA standards (as well as those of the general population). “Careful reflection” also isn’t enough for humans to converge on an answer for themselves. If it was, tens of thousands of philosophers should have managed to map out morality, and we wouldn’t need the likes of MIRI.
Why should (some) people who are partial EAs not be pushed to become full EAs? Or why should (some) full EAs not be pushed to become partial EAs? Do you expect people to just happen to have the morality which has highest utility[1] by this standard? I suppose there is the trivial solution where people should always have the morality they have, but in that case we can’t judge people who like murdering.
People’s goals can be changed and/or people can be wrong about their goals, depending on what you consider proper “goals”. I’m sufficiently confident that I’m either misunderstanding you or that you’re wrong about your morality that I can point out that the best way to achieve “minimise suffering, without caring about death” is to kill things as painlessly as possible (and by extension, to kill everything everywhere). I would expect people who believe they are suffering-minimisers to be objectively wrong.
Just because there is no objective morality, that doesn’t mean people can’t be wrong about their own morality. We can observe that people can be convinced to become more altruistic, which contradicts your model: if they were true partial EAs, they would refuse because anything other than what they believe is worse. I don’t expect warring ideological states to be made up of people who all happened to be born with the right moral priors at the right time to oppose one another; their environment is much more likely to play a deciding role in what they believe. And environments can be changed, for example by telling people that they’re wrong and you’re right.
Regarding your second confusion, about how “good” works in a framework of moral anti-realism: basically, in that case, every agent has its own morality, where doing good is “good” and doing bad is bad. What’s good according to the cat is bad according to the mouse. Humans are sort of like agents and we’re all sort of similar, so our moralities tend to always be sort of the same. So much so that I can say many things are good according to humanity, and have it make a decent amount of sense. In common speech, we drop the “according to [x]”. Also note that agents can judge each other just as they can judge objects. We can say that effective altruism is good and murder is bad, so we can say that an agent becoming more likely to do effective altruism is good, and an agent becoming less likely to commit murder is good.
That isn’t trivial. If 1 out of X miserable people manages to find a way to make things work, eventually they could be more productive than Y people who chose to give up on levelling up and to be ‘regular’ EAs instead, with Y greater than X, and in that case we should advise people to keep trying even if they’re depressed and miserable. But more importantly, it’s a false choice: it should be possible for people to be less miserable but still continue trying, and you could give advice on how to do that, if you know how. Signing up for a CFAR workshop might help, or showing some sort of clear evidence that happiness increases productivity. Compared to most LessWrong posts, this is very light on evidence.
This looks like you’re contradicting yourself, so I’m not sure if I understand you correctly. But if you mean the first two sentences, do you have a source for that, or could you otherwise explain why you believe it? It doesn’t seem obvious to me, and if it’s true I need to change my mind.
[1] This may include their personal happiness, EA productivity, right not to have their minds overwritten, etc.