I do think that EA is made of human beings, and injections of positivity can help humans avoid burning out or getting so stressed that the quality of our reasoning and arguments declines.
Hm, this phrasing makes it sound sort of like I think the only reason for EAs to treat each other with kindness, try to make EA a nice place to be, etc., is to prevent “burnout” (which sounds like it’s about keeping EAs productive) and protect the quality of our epistemics.
So, to be clear: I also endorse EAs being nice because niceness is good in its own right. EAs deserve happiness too, and I just plain endorse human beings flourishing and having good lives.
(I think the underlying generators/attitudes/perspectives behind this are good by scope-insensitive consequentialist lights (given how humans actually work in real life), but I don’t think every local act of kindness needs to be justified by explicitly consequentialist reasoning.)