I think this is a good question, but probably a separate one from the one originally asked. From a utilitarian/consequentialist POV (which most EAs seem to use for most prioritization), you probably would care enough about the impact on the funding base to flex towards “ugly” rhetoric if “ugly” rhetoric were the most effective way of attracting more money to 100x causes, but this doesn’t appear to be the world we actually live in.
But on the original point, I don’t think much of the influence-undermining stuff that keeps coming up on here (some of which concerns “ugly” comments and some of which doesn’t, but all of which seems to get the blanket “we shouldn’t care about optics” defence) really has anything to do with 100x better returns. If EA were getting widely panned because of the uncoolness of shrimp welfare, then shrimp welfare advocates could argue that, based on certain welfare assumptions, the impact of pursuing what they’re doing is 100x better than doing something with “better optics”. I’m not sure such highly multiplicative returns are easily applied to promoting ‘edgy’ politicos, short-circuiting processes in ways that appear to create conflicts of interest, or funding marginal projects which look like conspicuous consumption to altruistically-inclined outsiders.
if “ugly” rhetoric was the most effective way of attracting more money to 100x causes, but this doesn’t appear to be the world we actually live in.
It seems as plausible to me that we live in this world as the world you suggest. It seems far easier to try to bend towards Elon or Thiel bucks than unnamed philanthropists I haven’t heard of.
I don’t think much of the influence-undermining stuff that keeps coming up on here (some of which concerns “ugly” comments and some of which doesn’t, but all of which seems to get the blanket “we shouldn’t care about optics” defence)
Seems untrue. FTX and the sexual harassment allegations in the Time article seem like the two biggest reputational factors, and neither of those got the “we shouldn’t care about optics” defence.
It seems as plausible to me that we live in this world as the world you suggest. It seems far easier to try to bend towards Elon or Thiel bucks than unnamed philanthropists I haven’t heard of.
Even if Elon’s and Thiel’s philanthropic priorities were driven mainly by whether people associated with an organization offended enough people (which seems unlikely, looking at what they actually spend most of their money on, which incidentally isn’t EA, despite Elon at least being very aware of, and somewhat aligned on, AI safety), it seems unlikely that their willingness and ability to fund it exceeds everybody else’s.
Seems untrue. FTX and the sexual harassment allegations in the Time article seem like the two biggest reputational factors, and neither of those got the “we shouldn’t care about optics” defence.
I’d agree those were bigger overall than the more regular drama on here I was referring to, but they were also actions people generally didn’t try to defend or dismiss at all. Whereas the stuff that comes up here about Person X saying offensive things, or Organization Y having alleged conflicts of interest, or Grant Z looking frivolous, frequently does get dismissed on the basis that optics shouldn’t be a consideration.