Thanks for the question, Denise. Probably x-risk reduction, although on some days I’d say farmed animal welfare. Farmed animal welfare charities seem to significantly help multiple animals per dollar spent. It is my understanding that global health charities (and other charities that help currently living humans) spend hundreds or even thousands of dollars to help one human to a similar degree. I think that human individuals are more important than other animals, but not thousands of times more important.
Sometimes when I lift my eyes from spreadsheets and see the beauty and richness of life, I really don’t want all of this to end. And then I also think about how digital minds could have even richer and better experiences: they could be designed for extreme happiness in the widest sense of the word. And if only a tiny fraction of the world’s resources could be devoted to creating such digital minds, there could be bazillions of them thriving for billions of years. I’m not sure we can do much to increase this possibility, other than perhaps spreading the idea a little (it’s sometimes called hedonium or utilitronium). So I was thinking of switching my career to x-risk reduction if I could find a way to be at least a tiny bit useful there.
But then at my last meditation retreat, I had a powerful 15-minute experience of feeling nothing but love for all beings. And then it was so clear that helping farmed animals is the most important cause, or at least the cause I personally should continue working on, since I have five years of experience in it. It was partly because we don’t know whether the future we are trying to save with x-risk reduction will contain more happiness than suffering. I don’t trust my reasoning on this topic; my opinion might flip many times if I were to think about it deeply. But I know that I can help millions of animals now.
I don’t know what I’ll choose to do yet.
It would be amazing if you kept working on farmed animals. Your work on it so far has been extremely helpful and partially led to the creation of some cost-effective charities. The field is also extremely talent-constrained, and I want to cry whenever I hear “I was into animals but now I want to work on AI” at EA conferences. I know you can still change your mind, but I just want to say that, counterfactually, it seems to me that you are much more needed on the farmed animal side than you will ever be in x-risk reduction.
Hi Ula. I just want to let you know that I used to work on animal welfare and moved on to work on AI. But I didn’t stop doing animal welfare, because I work on AI & animals.
Beautiful answer, indeed!
I’d also strongly recommend working for farm animals: the long-term stuff is so uncertain when it comes to determining the net impact.
I second the recommendation for Saulius to continue working on farmed animal welfare. But I disagree with the view that uncertainty alone can undermine the whole case for longtermism.
Thank you, that was a beautiful response. I’m glad I asked!
I’ve had the same experience: sometimes my personal experiences and emotions affect how I view different causes. I do think it’s good to feel the impacts occasionally, though overall it leads me to rely more strictly on spreadsheets.
Hmm, I think I ultimately rely only on my emotions. I’ve always been a proponent of “Do The Math, Then Burn The Math and Go With Your Gut”. When it comes to personal cause prioritization, the question is basically “what do I want to do with my life?” No spreadsheet will tell me the answer to that; it’s all emotions. I use spreadsheets to inform my emotions because if I didn’t, a part of me would be unhappy and would nag me to do it.
This is getting very off-topic, but I’m now thinking that maybe all decisions are like that. Maybe my only goal in life is to be happy in the moment. I do altruistic things because when I don’t do them for a while, a part of me nags me about it and that makes me less happy. I don’t eat candy constantly because I’d be unhappy in the moment before eating it (or buying it), since it might ruin my health. I think that 2+2=4 because it feels good and 2+2=3 feels bad. If you disagree with some of that (and there are probably good reasons to disagree; I partly just made it up), then you might disagree with what I said in the parent comment (the one starting with “Hmm”) for the same reason.[1]
[EDIT, Feb 17th: I expressed this in a confusing way. Most of what I meant is that I try to drop the “shoulds”, which is what many therapists recommend. I use spreadsheets for prioritizing causes, but I do it because I want to, not because I should. I probably felt I needed to say this because, in my confusion, I misinterpreted what Denise said in a weird way. The question of how much to trust feelings vs. spreadsheets does make sense. There is something else I said in this comment that I still believe, but I won’t get into it because it’s off-topic.]
I wonder if you’re Goodharting yourself (as in Goodhart’s law) or oversimplifying. Your emotions reflect what you care about and serve to motivate you to act on it. They’re one particular way your goals (and your impressions of how satisfied or frustrated those goals are or will be) get aggregated, but you shouldn’t forget that there are separately valuable goals underneath.
I wouldn’t say someone can’t be selfless just because they want to help others and helping others satisfies this desire or makes them happy. And I definitely wouldn’t say their only goal is to be happy in the moment. They have a goal to help others, and they feel good or bad depending on how much they think they’re helping others.
EDIT: Also, it could be that things might feel more right/less wrong without feeling emotionally/hedonically/affectively better. I’m not sure all of my judgements have an affective component, or one that lines up with how preferable something is.
Maybe part of the brain just wants to be happy, and other parts of the brain condition rewards of happiness on alignment with various other goals like helping others or using spreadsheets.
That was beautifully put, Saulius.