Yeah, Cari and Dustin especially are a large part of what made a lot of this ecosystem possible in the first place, they seem like sincere people, and ultimately it’s “their money” and their choice to do with their money what they genuinely believe can achieve the most good.
I generally agree, but with reservations.
I think that Cari and Dustin’s funding has obviously created a lot of value. Maybe even ~60% of all EA value, by a Shapley attribution.
I personally don’t feel like I know much about what Cari and Dustin actually believe, other than that they funded Open Philanthropy (OP). They both seem to have been fairly private.
At this point, I’m hesitant to trust any authority figure that much. “Billionaires in tech, focused on AI safety issues” currently has a disappointingly mixed track record.
‘ultimately it’s “their money” and their choice to do with their money what they genuinely believe can achieve the most good.’ → Sure, but this is also true of many[1] big donors. I think that all big donors should probably get more evaluation and criticism. I’m not sure how many specific actions I would take differently after knowing that “it’s their choice to do with their money what they genuinely believe can achieve the most good.”
“genuinely believe can achieve the most good” → Small point, but I’m sure some of this is political. “Achieve the most good” often means “makes the funder look better, arguably so that they can do more good later on.” Some funders pay for local art museums as a way of gaining favor; EA funders sometimes pay for causes with particularly good optics for similar reasons. My guess is that EA funders generally do this for ultimately altruistic reasons, but would admit that this set of incentives is pretty gnarly.
[1] Edited, after someone flagged. I had a specific reference class in mind, “all” is inaccurate.
Thinking about this more:
I think there’s a frame behind this, something like:
“There are good guys and bad guys. We need to continue to make it clear that the good guys are good, and should be reluctant to draw attention to their downsides.”
In contrast, I try to think about it more like,
“There’s a bunch of humans, doing human things with human motivations. Some wind up producing more value than others. There’s expected value to be had in understanding that value, and in understanding the positives/negatives of the most important human institutions. Often more attention should be spent on trying to find mistakes being made than on highlighting things going well.”
Thanks for elucidating your thoughts more here.
I agree this would be a (very) bad epistemic move. One thing I want to avoid is disincentivizing broadly good moves because their costs are more obvious/sharp to us. There are, of course, genuinely good reasons to criticize mostly good but flawed decisions (the people behind them are more amenable to criticism, so criticism of them is more useful, and their decisions are more consequential). And of course there are alternative framings where critical feedback is more clearly a gift, which I would want us to move more towards.
That said, all of this is hard to navigate well in practice.
Agreed! Ideally, “getting a lot of attention and criticism, but people are generally favorable” should be looked at far more favorably than “just not getting attention.” I think VCs get this, but many people online don’t.