You seem to be working under the assumption that we have either emotional or logical motivations for doing something. I think that this is mistaken: logic is a tool for achieving our motivations, and all of our motivations ultimately ground in emotional reasons. In fact, it has been my experience that focusing too much on trying to find “logical” motivations for our actions may lead to paralysis, since absent an emotional motive, logic doesn’t provide any persuasive reason to do one thing over another.
You said that people act altruistically because “ultimately they’re doing it to not feel bad, to feel good, or to help a loved one”. I interpret this to mean that these are all reasons which you think are coming from the heart. But can you think of any reason for doing anything which does *not* ultimately ground in something like these reasons?
I don’t know you, so I don’t want to suggest that I know how your mind works… but reading what you’ve written, I can’t help getting the feeling that the thought of doing something motivated by emotion rather than logic makes you feel bad, and that the reason you don’t want to act on emotional motivations is that you have an emotional aversion to doing so. In my experience, it’s very common for people to have an emotional aversion to what they think emotional reasoning is, which leads them to convince themselves that they are making their decisions based on logic rather than emotion. If someone has a strong (emotional) conviction that logic is good and emotion is bad, then they will be strongly motivated to try to ground all of their actions in logical reasoning, while remaining unmotivated to notice why they are so invested in logical reasoning in the first place. I used to do something like this myself, which is how I became convinced of the inadequacy of logical reasoning for resolving conflicts like these: I tried and failed for a rather long time before switching tactics.
The upside of this is that you don’t really need to find a logical reason for acting altruistically. Yes, many people who are driven by emotion end up acting selfishly rather than altruistically. But since everyone is ultimately driven by emotions, as long as you believe that there are people who act altruistically, it follows that it’s possible to act altruistically while being motivated emotionally.
What I would suggest is to embrace the fact that everything is driven by emotion, and then to try to find a solution which satisfies all of your emotional needs. You say that studying to get a PhD in machine learning would make you feel bad, and that not doing it also feels bad. I don’t think that either of these feelings is going to just go away: if you simply chose to do a machine learning PhD, or simply chose not to, the conflict would keep bothering you and you’d feel unhappy either way. I’d recommend figuring out the reasons why you would hate the machine learning path, as well as the conditions under which you feel bad about not doing enough altruistic work, and then looking for a solution which would satisfy all of your emotional needs. (CFAR’s workshops teach exactly this kind of thing.)
I should also mention that I was recently in a somewhat similar situation to yours: I felt that the right thing to do would be to work on AI stuff, but I also didn’t want to. Eventually I came to the conclusion that the reason I didn’t want to was that a part of my mind was convinced that the kind of AI work I could do wouldn’t actually be as impactful as other things I could be doing, and this judgment has mostly held up under logical analysis. This is not to say that doing the ML PhD would genuinely be a bad idea for you as well, but I do think it would be worth examining exactly why you wouldn’t want to do the studies. Maybe your emotions are actually trying to tell you something important? (In my experience, they usually are, though of course it’s also possible for them to be mistaken.)
One particular question I would ask: you say you would enjoy working in AI, but you wouldn’t enjoy learning the things you need to know in order to work in AI. This might make sense in a field where you are required to study something that’s entirely unrelated to what’s useful for your job. But particularly once you get around to doing your graduate studies, much of what you learn will be directly relevant to your work. If you think that you would hate being in an environment where you get to spend most of your time learning about AI, why do you think that you would enjoy a research job, which also requires you to spend a lot of time learning about AI?