[note this is a second response focusing on the arguments as I see them in the post, a lot of which I think are wrong. This is not as important as my first response on the tone/emotion, so please read that first]
Trying my best to understand the argument in this post, you seem to have become a "dinoman extremist" (as Nick Cammarata would phrase it), where the only moral imperative you accept is "You should care about whatever you care about", and you reject any other moral imperative or reasoning as valid. You seem to contradict this by saying "There are lots of equally valid goals to choose from. Infinitely many, in fact." That might be true descriptively, but you aren't making a descriptive claim; you're clearly making a normative one. And if all moral goals are valid, then not caring about what you care about is also a valid thing to pursue, as is "saving the world", even if you don't want to.
Perhaps a better reading of what you're saying is that people's life goals and moral values shouldn't be set externally, but should come from within. Again, I think that is just what descriptively happens in every case? You seem to think that, in practice, new EAs simply delete their existing moral values and Ctrl+C/Ctrl+V in some community consensus, but that just doesn't seem empirically true. Not to say there are no problems with deferral in the community, but mostly there's a lot of intra-EA disagreement about what to value: non-human animals, future people, digital sentience, direct work, improving systems and science, and so on. In my own case, meeting EA ideas led me to reflect personally and deeply on what I cared about and whether I was actually living up to those values. So EA definitely doesn't seem incompatible with what you say here.
But going back to your "all values are valid" claim, you should accept where it leads. If I value causing animals pain and harm, such that I want to work with factory farms to make sure animals penned in cages suffer the most exquisite and heightened form of suffering, cenobite-style, do you really think you have no grounds to say that such a value is worse than caring for other humans regardless of their geographical location, or caring for your own family and friends? I could pick even worse examples than this, and maybe you really are happy to accept everything "anything goes" implies, but I don't think you are, and that's basically the decisive result of trying to take that moral rule seriously.
This links into the bit where you say "Choose values that sound exciting because life's short, time's short, and none of it matters in the end anyway." Instead of arguing for choosing values such as individual flourishing, you're just shrugging your shoulders and saying that because life is short, nothing is meaningful? Honestly, and I'm sorry if this comes off as rude, it reminds me of a smart-ish teenager who's just discovered nihilism for the first time. The older I get, the more I think all of it matters. Just because the heat death of the universe seems inevitable doesn't make anything we do now less valuable. No matter what happens in the future or has happened in the past, the good I can do now will always have happened, and will always have meaning.
Finally, you cast a lot of aspersions on EA in this piece, which, reading between the lines, seem to be about AI-safety maximalism? But when you write "Doing things because they sound nice and pretty and someone else says they're morally good" as if that were a description of what's going on, again, that just doesn't match the world I live in, or the EAs I know or am aware of. It really seems like you're less interested in accuracy than in getting pain off your chest. In particular, the anger comes across when you use a bunch of nasty phrases, like referring to EAs as "cucks", "RLHF-ed chatbots", part of a "culty clique", and so on. Perhaps this is understandable as a reaction against, or rejection of, something you experienced as a damaging, totalising philosophy, but boy did it leave a really bad impression.
To be very clear about the above point: you leaving or not liking EA doesn't make me think less of you, but being needlessly cruel to others in such a careless way does.
To be frank, I don't think your empirical points are accurate, and your philosophical points are confused. I think you have a lot of personal healing to do before you can view these issues accurately, and to that end I hope you find a path in life that allows you to heal and flourish without harming yourself or others.