I agree with Lukas that “influence seeking”, although important, doesn’t quite resonate with me as a positive virtue on the other side of “truth seeking”.
That said, I’ll comment on the topic anyway :D
To do the most good we can, we need both truth and influence. Both are critically important; without one, the other can become useless. If we could maximize both, that might be best, but obviously we can’t, so at some point there will be tradeoffs.
I think this tradeoff might not actually be as critical as some seem to think (but I won’t get into that here). I don’t think it’s that hard to sacrifice a little truth right on the margins of what is socially acceptable in order to avoid a lot of influence loss.
I don’t think most “weird” ideas and trains of thought actually carry much risk of reputational loss. As much as people have said there is “reputational risk” in working on invertebrate welfare and wild animal welfare, I struggle to see any actual evidence of influence lost through these. Influence may (I’m unsure) have been lost through the longtermism focus, but there may also have been influence gains there through attracting a wider range of people to EA, as well as donors who are interested in longtermism. I don’t think we lose much truth-seeking ability by just avoiding a few relatively unimportant topics which carry a high risk of influence loss (see the Hanania/Bostrom scandals).
I also think more reputational loss and loss of influence may have actually come through what I see as unrelated mistakes (the close association with SBF, the Abbey purchase and its handling, etc.), which are at best peripherally related to truth seeking.
Truth seeking seems to me worthless without influence. For example, we could design the perfect AI alignment strategy, but if we have no influence to implement it, then what have we achieved? Or we could figure out that protecting digital mind welfare is the most important thing we could do to reduce suffering, but if we have zero influence and no one is prepared to fund the work needed to fix the problem, then what’s the point?
And if we are extremely influential without truth, then the influence is meaningless: we will just revert to societal norms and do no good on the margins. As a halfway example, if we decided to optimize mostly for influence, we might completely ditch all (or most) longtermist work for a while in the wake of SBF, which could be dangerous.
Those aren’t the best examples, but I hope the idea comes through.
So you think the right tradeoff is like 90% truthseeking? Is that fair?
That might technically be correct, but putting it that way seems to minimize the importance of influence.
I would frame it more as: we might only need to trade off 10 percent of truth seeking to have minimal impact on influence.
How do you think we could maintain that equilibrium?
To me it seems that once topics move beyond discussion it’s very expensive to get them back. It doesn’t feel like we’ll have an honest community conversation about the Bostrom/Time article stuff for a while yet. Sometimes it feels like we still aren’t capable of having an honest community conversation about Leverage.