[note this is a second response focusing on the arguments as I see them in the post, a lot of which I think are wrong. This is not as important as my first response on the tone/emotion, so please read that first]
Trying my best to understand the argument in this post, you seem to have become a “dinoman extremist” (as Nick Cammarata would phrase it), where the only moral imperative you accept is “You should care about whatever you care about,” and you reject any other moral imperative or reasoning as invalid. You seem to contradict this by saying “There are lots of equally valid goals to choose from. Infinitely many, in fact.” This might be true descriptively, but you aren’t making a descriptive claim; you’re clearly making a normative one. And if all moral goals are valid, then not caring about what you care about is also a valid thing to pursue, as is “saving the world”, even if you don’t want to.
Perhaps a better reading of what you’re saying is that people’s life goals and moral values shouldn’t be set externally but should arise internally. Again, I think this is just what descriptively happens in every case? You seem to think, practically, that new EAs simply delete their existing moral values and Ctrl+C/Ctrl+V some community consensus over them, but that just doesn’t seem empirically true. Not to say there are no problems with deferral in the community, but mostly there’s a lot of intra-EA disagreement about what to value: non-human animals, future people, digital sentience, direct work, improving systems and science, and so on. In my own case, encountering EA ideas led me to reflect personally and deeply on what I cared about and whether I was actually living up to those values. So EA definitely doesn’t seem incompatible with what you say here.
But going back to your ‘all values are valid’ claim, you should accept where that goes. If I value causing animals pain and harm, so that I want to work with factory farms to make sure animals penned in cages suffer the most exquisite and heightened forms of suffering, cenobite-style, do you really think you have no grounds to say that such a value is worse than caring for other humans regardless of their geographical location, or caring for one’s own family and friends? I could pick even worse examples than this, and maybe you really are happy to accept everything ‘anything goes’ implies, but I don’t think you are, and that’s basically a dispositive result of trying to take that moral rule seriously.
This links into the bit where you say “Choose values that sound exciting because life’s short, time’s short, and none of it matters in the end anyway.” Instead of arguing for choosing values such as individual flourishing, you’re just shrugging your shoulders and saying that the shortness of life makes it meaningless? Honestly, and I’m sorry if this comes off as rude, it reminds me of a smart-ish teenager who’s just discovered nihilism for the first time. The older I get, the more I think all of it matters. Just because the heat death of the universe seems inevitable doesn’t make anything we do now less valuable. No matter what happens in the future or has happened in the past, the good I can do now will always have happened, and will always have meaning.
Finally, you cast a ton of aspersions on EA in this piece, which reading between the lines seems really to be about AI-safety maximalism? But when you write “Doing things because they sound nice and pretty and someone else says they’re morally good” as if that were a description of what’s actually going on, again, that just doesn’t match up with the world I live in, or the EAs I know or am aware of. It really seems like you’re not interested in accuracy so much as getting pain off your chest. In particular, the anger comes across when you use a bunch of nasty phrases, like referring to EAs as “cucks”, “RLHF-ed chatbots”, part of a “culty clique”, and so on. Perhaps this is understandable as a reaction against, or rejection of, something you experienced as a damaging, totalising philosophy, but boy did it leave a really bad impression.
To be very clear about the above point: you leaving or not liking EA doesn’t make me think less of you, but being needlessly cruel to others in such a careless way does.
To be frank, I don’t think your empirical points are accurate, and I think your philosophical points are confused. I also think you have a lot of personal healing to do before you can view these issues clearly, and to that end I hope you find a path in life that allows you to heal and flourish without harming yourself or others.