We generally believe that the top 1% of philanthropic opportunities are often 100x better than the median. This seems to imply that a marginally better decision has big impacts. Isn’t this likely to be true on a macro scale also (suggesting truth-seeking is very valuable)? Likewise, many influence-seeking orgs make trivial mistakes that vastly reduce their positive impact.
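A minimal back-of-the-envelope illustration of why that spread makes the marginal decision so consequential (the $1M figure is hypothetical, chosen only to show the scale): if the median opportunity produces $v$ units of good per dollar and a top-1% opportunity produces $100v$, then

\[
\underbrace{\$1\text{M} \times 100v}_{\text{top-1\% grant}} \;-\; \underbrace{\$1\text{M} \times v}_{\text{median grant}} \;=\; 99 \times (\$1\text{M} \times v),
\]

so redirecting a single $1M grant to a top-1% opportunity adds roughly as much value as donating an extra $99M at median effectiveness.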
Frankly, the uneven distribution of philanthropic opportunities also suggests that influence-seeking can be very valuable if EA wants to diversify its funding base away from Moskovitz/Tuna. There are a lot of people who could be appealed to who are giving away a lot of money to causes that may be 100x worse than the causes they could be giving to: it’s hard to impute comparably high value to “not worrying about PR” or to high decoupling, which is pretty orthogonal to truth-seeking anyway.
Evidence that something might be exceptionally high value can be a good reason to pursue it despite its unpopularity, to find out whether it is true. But even when the controversies on here concern the award of grants (which they usually don’t), the grants in question tend not to be the ones the funders thought were 100x better than the alternatives.
So I’m not sure whether I agree or not, but this seems surprisingly like arguing that EA should bend its values to whoever is willing to give resources. I see you argue in favour of flexing away from “ugly” views. Would you argue in favour of flexing towards them if there were a donor in that direction?
I think this is a good question, but probably a separate one from the one originally asked. From a utilitarian/consequentialist POV (which most EAs seem to use for most prioritization) you probably would care enough about the impact on the funding base to flex towards “ugly” rhetoric if “ugly” rhetoric was the most effective way of attracting more money to 100x causes, but this doesn’t appear to be the world we actually live in.
But on the original point, I don’t think much of the influence-undermining stuff that keeps coming up on here (some of which concerns “ugly” comments and some of which doesn’t, but all of which seems to get the blanket “we shouldn’t care about optics” defence) really has anything to do with 100x better returns. If EA was getting widely panned because of the uncoolness of shrimp welfare, then shrimp welfare advocates could argue that, based on certain welfare assumptions, the impact of pursuing what they’re doing is 100x better than doing something with “better optics”. I’m not sure such highly multiplicative returns are easily applied to promoting ‘edgy’ politicos, short-circuiting processes in a way which appears to create conflicts of interest, or funding marginal projects which look like conspicuous consumption to altruistically-inclined outsiders.
if “ugly” rhetoric was the most effective way of attracting more money to 100x causes, but this doesn’t appear to be the world we actually live in.
It seems as plausible to me that we live in this world as the world you suggest. Seems far easier to try and bend towards Elon or Thiel bucks than unnamed philanthropists I haven’t heard of.
I don’t think much of the influence-undermining stuff that keeps coming up on here (some of which concerns “ugly” comments and some of which doesn’t, but all of which seems to get the blanket “we shouldn’t care about optics” defence)
Seems untrue. FTX and the Time article sexual harassment stuff seem like the two biggest reputational factors, and neither of those got the “we shouldn’t care about optics” defence.
It seems as plausible to me that we live in this world as the world you suggest. Seems far easier to try and bend towards Elon or Thiel bucks than unnamed philanthropists I haven’t heard of.
Even if Elon’s and Thiel’s philanthropic priorities were driven mainly by whether people associated with the organization offended enough people (which seems unlikely, looking at what they do spend most of their money on, which incidentally isn’t EA, despite Elon at least being very aware of, and somewhat aligned on, AI safety), it seems unlikely that their willingness and ability to fund it exceeds everybody else’s.
Seems untrue. FTX and the Time article sexual harassment stuff seem like the two biggest reputational factors, and neither of those got the “we shouldn’t care about optics” defence.
I’d agree those were bigger overall than the more regular drama on here I was referring to, but they’re also actions people generally didn’t try to defend or dismiss at all. Whereas the stuff that comes up here about Person X saying offensive stuff, or Organization Y having alleged conflicts of interest, or Grant Z looking frivolous, frequently does get dismissed on the basis that optics shouldn’t be a consideration.
Most truths have ~0 effect on any action plausibly within EA’s purview. This could be because knowing that X is true and Y is not true (as opposed to uncertainty or even error regarding X or Y) just doesn’t change any important decision. It can also be because the important action that a truth would influence/enable is outside of EA’s competency for some reason. E.g., if no one with enough money will throw it at a campaign for Joe Smith, finding out that he would be the candidate for President who would usher in the Age of Aquarius actually isn’t valuable.
As relevant to the scientific racism discussion, I don’t see the existence or non-existence of the alleged genetic differences in IQ distributions by racial group as relevant to any action that EA might plausibly take. If some being told us the answers to these disputes tomorrow (in a way that no one could plausibly controvert), I don’t think the course of EA would be different in any meaningful way.
More broadly, I’d note that we can (ordinarily) find a truth later if we did not expend the resources (time, money, reputation, etc.) to find it today. The benefit of EA devoting resources to finding truth X will generally be that truth X was discovered sooner, and that we got to start using it to improve our decisions sooner. That’s not small potatoes, but it generally isn’t appropriate to weigh the entire value of the candidate truth for all time when deciding how many resources (if any) to throw at it. Moreover, it’s probably cheaper to produce scientific truth Z twenty years in the future than it is now. In contrast, global-health work is probably most cost-effective in the here and now, because in a wealthier world the low-hanging fruit will be plucked by other actors anyway.
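A minimal sketch of that delay argument in symbols (the constant annual value $v$, the discount rate $r$, and the assumption that an outside actor would find the same truth $d$ years from now are hypothetical simplifications): the counterfactual benefit of EA finding the truth today is roughly

\[
\int_0^{d} v\, e^{-rt}\, dt \;=\; \frac{v}{r}\bigl(1 - e^{-rd}\bigr) \;\approx\; v d \quad \text{for small } rd,
\]

which is only the slice of value between now and the outside discovery, not the full stream $v/r$ that the truth is worth for all time.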
What I currently take from this is that you think if we start some work which seems unpopular or controversial, we should stop because we can discover it later?
If not, how much work should we do before we decide it’s not worth the reputation cost to discuss it carefully?
No, I think that extends beyond what I’m saying. I am not proposing a categorical rule here.
However, the usual considerations of neglectedness and counterfactual analysis certainly apply. If someone outside of EA is likely to do the work at some future time, then the cost of an “error” is the utility loss caused by the delay between when we would have done it and when it was done by the non-EA. If developments outside EA convince us to change our minds, the utility loss is measured between now and the time we change our minds. I’ve seen at least one comment suggesting “HBD” is in the same ballpark as AI safety . . . but we likely only get one shot at the AGI revolution for the rest of human history. Even if one assumes p(doom) = 0, the effects of messing up AGI are much more likely to be permanent or extremely costly to reverse/mitigate.
From a longtermist perspective, [1] I would assume that “we are delayed by 20-50 years in unlocking whatever benefit accepting scientific racism would bring” is a flash in the pan over a timespan of millions of years. In fact, those costs may be minimal, as I don’t think there would be a whole lot for EA to do even if it came to accept this conclusion. (I should emphasize that this is definitely not implying that scientific racism is true or that accepting it as true would unlock benefits.)
[1] I do not identify as a longtermist, but I think it’s even harder to come up with a theory of impact for scientific racism on neartermist grounds.