One thing I definitely believe, and have commented on before[1], is that median EAs (i.e., EAs without an unusual amount of influence) are over-optimising for the image of EA as a whole, which sometimes conflicts with actually trying to do effective altruism. Let the PR people and the intellectual leaders of EA handle that; people outside those roles should focus on saying what we sincerely believe to be true.
FWIW, I'm directly updating on this (and on the slew of aggressively bad-faith criticism from detractors following this event).
I'll stop trying to think about how we should optimise for and manage PR, and default to honesty and accurate representation (as opposed to strategic presentation of our positions to make them more appealing/easier to accept).
(This is not to imply that I ever condoned lying, but I have thought it may be better to, e.g., change which parts of EA messaging we highlight based on what people seem most receptive to rather than our real cruxes: for instance, justifying existential risk mitigation because 8 billion people dying is bad, instead of via inaccessible longtermist arguments.)