Finding it hard to retain my belief in altruism

Hey everybody,
Recently, I've found it hard to retain my belief in altruism. I'm really hoping that one of you has something to say that might turn me back, because I don't want to lose my belief in this wonderful thing.
Ever since I was very young, I've found a utilitarian style of thinking very natural. I've always wanted to maximize happiness. I also decided that other people's happiness ought to matter, and because I think very logically, I tried to roughly estimate how much more valuable my own happiness is than a stranger's. This was the question that I thought of:
If you had to die to save x random Americans (or whatever nationality you happen to be) your age, what would the minimum number be?
I've asked this question of maybe 200 people over the years. It doesn't come up all the time, but every now and then I find it an interesting topic. About 80-90% of people give a number between 1 and 10, and the rest give some very high number such as 10,000,000. My own number was somewhere between 3 and 15 (a big range, I know, but the question is kind of hard to really decide on anyway).
This definitely made me more passionate about altruism. If you really believe that your life is worth about a dozen lives, then you should dedicate your life to helping others, because there are so many ways to save or help far, far more people than that and still live a good life.
I'm only 20 right now, and although this was one of my core beliefs, it was for sure one of those 'easier said than done' beliefs, and I knew that. I was always very worried that I would reject this belief later on in life in favor of selfishness. That seems to be what's happening now.
Recently, I have really put this belief to the test. In short, I never went to college and am a self-taught programmer, a pretty successful one. After reading the 80,000 Hours career guide, I realized that working in the field of artificial intelligence would be much, much more beneficial than working as a web developer and donating 20-30k a year. So, I started studying for the SATs and applying to colleges.
This period went on for about two months. During it, I was on an around-the-world backpacking trip, which I paused to do this work. I was staying in a hostel, and many people would come up to me and ask why I was studying. I used this as an opportunity to have a discussion with them about effective altruism.
While having these discussions, I would ask the question mentioned above about dying to save x number of people. I realized, though, that the answers I was now getting were much higher than before. This probably had to do with the fact that I was prefacing the question with a discussion of effective altruism instead of discussing it afterwards. Something as small as that was radically changing the answers to this question, one that underpinned a core belief of mine about altruism.
So, I dug deeper. This is where I had a truly depressing realization. Upon talking with people, it now seems to me that people don't intrinsically value the happiness of a stranger. That is, they'll do something because they follow their heart (as do I), but ultimately they're doing it to not feel bad, to feel good, or to help a loved one. Even though they very often answered the question with something between 1 and 10 before, the question is very flawed because it's too arbitrary. People were being more optimistic than realistic with their answers, I think.
Because of the nature of belief, we find it very easy to believe what everybody else thinks and very hard to believe something that practically no one else thinks. The beliefs of others support ours, and when that support is gone, it's easy to find our own belief crumbling. After realizing that other people didn't intrinsically value the happiness of someone they didn't know, I questioned my own passion for helping strangers. Now, I have a hard time thinking of why I should intrinsically value the happiness of strangers, and so my logical belief in altruism has mostly gone away.
In my heart, I still care about helping others. I've always looked at effective altruism as being reached from two different paths. One, the logic that the approximate ratio of how important your life is versus others' is not super high, and that through effective altruism you can help a number of people far greater than whatever your ratio is (this is what I was talking about above). Two, that your heart wants to help people and effective altruism is a great way to do that.

I view the first as a much stronger belief. Following your heart more often leads to selfishness than to selflessness toward a stranger. This is why so many people don't donate effectively and why so many people choose careers that make them feel good about helping the world but don't actually help very much: these people aren't being altruistic out of logic, but out of emotion. Because the nature of emotion is selfish, they don't really have much desire to maximize their help for people; caring in itself is enough to make them feel good, even if it helps 1% as much as they otherwise could. The heart will help strangers, but it will rarely sacrifice a lot for strangers unless the decision is impulsive.
Even though my logical belief in altruism (stemming from no longer intrinsically valuing the happiness of a stranger) is gone, my heart will always want to help those who really need it through effective altruism. I don't think that's good enough, though, and I really hope somebody can re-convince me to believe in altruism logically instead of just emotionally. If this doesn't happen, I'll still donate 10-20% of my income to charity, but I won't want to make the big ten-year sacrifice of going to college and studying for a PhD in machine learning in order to finally work in artificial intelligence. I would actually enjoy working in artificial intelligence, but I would hate the ten years of studying involved. This is really bad, I think, because I could be helping far, far more people on that path, even though it would make me less happy. When I logically believed in altruism I was willing to do this, but now I just don't care enough.

Hopefully you were able to follow that. I'm sorry if the reasoning is a bit messy; it was hard to explain in writing!