I think (at least) somebody at Open Philanthropy needs to start thinking about how to respond to a growing tendency, whether sincere or strategic, to portray it as a shadowy, cabal-like entity influencing the world in an 'evil' or 'sinister' way, similar to how many right-wingers across the world believe that George Soros is contributing to the decline of Western Civilization through his political philanthropy.
Last time there was an explicitly hostile media campaign against EA, the reaction was to do nothing, and the result is that Émile P. Torres has a large media presence,[1] has launched the term TESCREAL to some success, and EA-critical thought has become a lot more public and harsh in certain left-ish academic circles. In many think pieces responding to WWOTF, FTX, or SBF, they are extensively cited as a primary EA critic, for example.
I think the 'ignore it' strategy was a mistake, and I'm afraid the same mistake might happen again, with potentially worse consequences.
Do people realise that they're going to release a documentary sometime soon?
You say this as if there were ways to respond which would have prevented this. I'm not sure these exist, and in general I think 'ignore it' is a really really solid heuristic in an era where conflict drives clicks.
I think responding in a way that is calm, boring, and factual will help. It's not going to get Émile to publicly recant anything. The goal is just for people who find Émile's stuff to see that there's another side to the story. They aren't going to publicly say 'yo Émile, I think there might be another side to the story'. But fewer of them will signal boost their writings on the theory that 'EAs have nothing to say in their own defense, therefore they are guilty'. Also, I think people often interpret silence as a contemptuous response, and that can be enraging in itself.
Maybe it would be useful to discuss concrete examples of engagement and think about what's been helpful/harmful.
Offhand, I would guess that the holiday fundraiser that Émile and Nathan Young ran (for GiveDirectly, I think it was?) was positive.
I think this post was probably positive (I read it around a year ago, my recollections are a bit vague). But I guess that post itself could be an argument that even attempting to engage with Émile in good faith is potentially dangerous.
Perhaps the right strategy is something like: assume good faith, except with specific critics who have a known history of bad faith. And consider that your comparative advantage may lie elsewhere, unless others would describe you as unusually good at being charitable.
What makes you think this? I would guess it was pretty negative, by legitimizing Torres, and with most of the donations funging heavily against other EA causes.
I would guess any legitimization of Émile by Nathan was symmetrical with a legitimization of Nathan by Émile. However, I didn't get the sense that either was legitimizing the other, so much as both were legitimizing GiveDirectly. It seems valuable to legitimize GiveDirectly, especially among the 'left-ish academic circles' reading Émile who might otherwise believe that Émile is against all EA causes/organizations. (And among 'left-ish academics' who might otherwise believe that Nathan scorns 'near-termist' causes.)
There's a lot of cause prioritization disagreement within EA, but it doesn't usually get vicious, in part because EAs have 'skin in the game' with regard to using their time & money to make the world a better place. One hypothesis is that if we can get Émile's audience to feel some genuine curiosity about how to make their holiday giving effective, they'll wonder why some people are longtermists. I think it's absolutely fine to disagree with longtermism, but I also think that longtermists are generally thoughtful and well-intentioned, and it's worth understanding why they give to the causes they do.
Do you have specific reasons to believe this? It's a possibility, but I could just as easily see most donations coming from non-EAs, or from EAs who consider GiveDirectly a top pick anyway. Even if EA donors didn't consider GiveDirectly a top pick on its own, they might have considered 'GiveDirectly plus better relations with Émile with no extra cost' to be a top pick, and I feel hesitant to judge this more harshly than I would judge any other EA cause prioritization.
BTW, a mental model here is:
https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty
...it is striking by how often these shifts in opinion appear, upon closer inspection, to be triggered by Torres experiencing a feeling of rejection, such as being denied a job, not being invited to a podcast, or having a book collaboration terminated. Torres's subsequent "realization" that these people and communities, once held in such high esteem, were in fact profoundly evil or dangerous routinely comes after those personal setbacks, as a post hoc rationalization.
If Émile is motivated to attack EA because they feel rejected by it, it's conceivable to me that their motivation for aggression would decrease if a super kind and understanding therapist-type person listened to them really well privately and helped them feel heard & understood. The fundraiser thing makes me think this could work if done well, although the Helen Pluckrose thing from Mark's post makes me think it's risky. But if it's private, especially from a person who's not particularly well-known, I assume it wouldn't run the specific risk of legitimization.
[edited to fix pronouns, sorry!!]
FYI, Émile's pronouns are they/them.
[Edit: I really don't like that this comment got downvoted and disagree voted...]
I agree it's a solid heuristic, but heuristics aren't foolproof, and it's important to be able to realise where they're not working.
I remembered your tweet about choosing intellectual opponents wisely because I think it might be useful to show where we disagree on this:
1 - Choosing opponents is sometimes not up to you. As an analogy, it only takes one party throwing punches for you to be in a physical fight. When debates start to have significant social and political consequences, it's worth considering that letting hostile ideas spread unchallenged may work out badly in the future.
2 - I'm not sure it's clear that 'the silent majority can often already see their mistakes' in this case. I don't think this is a minority view of EA. I think a lot of people are sympathetic to Torres' point of view, and a significant part of that is (in my opinion) because there wasn't a lot of pushback when they started making these claims in major outlets.
On my first comment: I agree that not much could have been done to stop Émile turning against EA,[1] but I absolutely don't think it was inevitable that they would have had such a wide impact. They made the Bulletin of the Atomic Scientists! They're partnered with Timnit, who has large influence and sympathy in the AI space! People who could have been potential allies in a coalition basically think our movement is evil.[2] They get sympathetically cited in academic criticisms of EA.
Was some pushback going to happen? Yes, but I don't think inevitably at this scale. I do think more could have been done to actually push back on their claims that went over the line in terms of hostility and accuracy, and I think that could have led to a better climate at this critical juncture for AI discussions and policy, where we need to build coalitions with communities who don't fully agree with us.
My concern is that this new wave of criticism and attacks on OpenPhil might not simply fade away but could instead cement an anti-EA narrative that puts the movement and the causes we care about at risk. So, looking at what happened in the Torres case and at the recent negative depictions of Open Philanthropy, 'ignore it' doesn't look like a good strategy.
Though they were part of EA space for a while, so there's probably some 'common knowledge' that some people might have that paints a
I think the whole thread this tweet is a part of is worth reading
I don't think the hostility between the near-term-harm people and the AI x-riskers would have been prevented by more attacks rebutting Émile Torres.
The real problem is that the near-term AI harm people perceive AI x-riskers as ignoring their concerns and actively making the near-term harms worse.
Unfortunately, I think this sentiment is at least partly accurate. When Timnit got pushed out of Google for pointing out near-term harms of AI, there was almost no support from the x-risk crowd (I can't find any big-name EAs on this list, for example). This probably contributed to her current anti-EA stance.
As for real-world harms, well, we can just note that OpenAI was started by an x-risker and has kickstarted an AI race, causing a myriad of real-world harms such as scams, art plagiarism, data theft, etc.
The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI.
I don't want to fully re-litigate this history, as I'm more concerned about the possibility of Open Philanthropy being blindsided by a political attack in the future (it might be low-probability, but you'd think OpenPhil would be open to being concerned about low-chance, high-impact threats to itself).
Agreed. It predated Émile's public anti-EA turn for sure. But it was never inevitable. Indeed, supporting Timnit during her firing from Google may have been a super low-cost way to show solidarity. It might have meant that Émile and Timnit wouldn't have become allies who have strong ideological influence over a large part of the AI research space.
I'd like to think so too, but this is a bridge that needs to be built from both ends imo, as I wouldn't recommend a unilateral action unless I really trusted the other parties involved.
There seems to have been some momentum towards more collaboration after the AI Safety Summit though. I hope the Bletchley Declaration can be an inflection point for more of this.
What do you see as the risk of building a bridge if it's not reciprocated?
"The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI."
What would this look like? I feel like, if all you do is say nice things, that is usually a good idea, but it won't move the dial much (and it is also potentially lying, depending on the context and your own opinions; we can't just assume all concerns about short-term harm, let alone proposed solutions, are well thought out). But if you're advocating spending actual EA money and labour on this, surely you'd first need to make a case that stuff 'dealing with the short-term harms of AI' is not just good (plausible), but also better than spending the money on other EA stuff. I feel like a hidden crux here might be that you, personally, don't believe in AI x-risk*, so you think it's an improvement if AI-related money is spent on short-term stuff, whether or not that is better than spending it on animal welfare or global health and development, or for that matter on anti-racist/feminist/socialist stuff not to do with AI. But obviously, people who do buy that AI x-risk is comparable to or better than standard near-term EA stuff or biorisk as a cause area can't take that line.
*I am also fairly skeptical it is a good use of EA money and effort, for what it's worth, though I've ended up working on it anyway.
This seems a little zero-sum, which is not how successful social movements tend to operate. I'll freely confess that I am on the 'near-term risk' team, but that doesn't mean the two groups can't work together.
A simplified example: say 30% of a council are concerned about near-term harms, and 30% are concerned about x-risk, and each group wants policies passed that address its own concerns. If the two spend all their time shitting on each other for having misplaced priorities, neither of them will get what they want. But if they work together, they have a majority, and can pass a combined bill that addresses both near-term harm and AI x-risk, benefiting both.
Unfortunately, the best time to do this bridge building and alliance making was several years ago, and the distrust is already rather entrenched. But I genuinely think that working to mend those bridges will make both groups better off in the long run.
You haven't actually addressed the main question of the previous comment: what would this bridge building look like? Your council example does not match the current reality very well.
It feels like you also sidestep other things in the comment, and it is unclear what your position is. Should we spend EA money (or other resources) on 'short-term harms'? If yes, is the main reason that funding the marginal AI ethics research is better than the marginal bed-net and the marginal AI x-risk research? Or would the main reason for spending money on 'short-term harms' be that we buy sympathy with the group of people concerned about 'short-term harms', so we can later pass regulations together with them that reduce both 'short-term harm' and AI x-risk?
https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different (It's been a while since I read this, so I'm not sure it is what you are looking for, but Gideon Futerman had some ideas for what 'bridge building' might look like.)
I just read most of the article. It was not that satisfying in this context. Most of it is arguments that we should work together (which I don't disagree with). And I imagine it will be quite hard to convince most AI x-risk people that "whether AI is closer to a stupid 'stochastic parrot' or on the 'verge-of-superintelligence' doesn't really matter". If we were to adopt Gideon's desired framing, it looks like we would need to make sacrifices in epistemics. Related:
The relevant question isn't "are the important harms to be prioritised the existential harms or the non-existential ones?", "will AI be agents or not?", nor "will AI be stochastic parrots or superintelligence?" Rather, the relevant question is whether we think that power-accumulation and concentration in and through AI systems, at different scales of capability, is extremely risky.
Some of Gideon's suggestions, such as protest or compute governance, are already being pursued. Not sure if that counts as bridge building though, because these might be good ideas anyway.
For the record, I'm very willing to be corrected and to amend my Quick Take (and my beliefs on this in general) if 'ignore it' isn't an accurate summary of what was done. Perhaps there was internal action taken within academic spaces/EA organisations that I'm not aware of? I still think the net effect of EA actions was, in any case, closer to 'ignore it', but the literal strong claim may be incorrect.
Edit: I actually think these considerations should apply to many of the comments in this sub-thread, not just my own. There's a lot to disagree about, but I don't think any comment in this chain is worthy of downvotes? (Especially strong ones.)
A meta-thought on this take, given the discussion it's generated.
Currently this is at net 10 upvotes from 20 total votes at the time of writing, but it is ahead 8 to 6 on agree/disagree votes. Based on Forum voting norms, I don't think this is particularly deserving of downvotes given the suggested criteria? Especially strong ones? Disagree-votes: go ahead, be my guest! Comments pointing out where I've gone wrong: I actively encourage you to do so!
I put this in a Quick Take, not a top-level post, so it's not as visible on the Forum front page (and the point of a QT is for exploratory, draft-stage, rough thoughts like this). I led off by saying 'I think': I'm just voicing my concerns about the atmosphere surrounding OpenPhil and its perception. It's written in good faith, albeit with a concerned tone. I don't think it violates what the EA Forum should be about.[1]
I know these kinds of comments are annoying, but still, I wanted to point out that this vote distribution feels a bit unfair, or at least unexplained, to me. Sure, silent downvoting is a signal, but it's a crude and noisy signal, and I don't really have much to update on here.
If you downvoted but don't want to get involved in a public discussion about it, feel free to send me a DM with feedback instead. We don't have to get into a discussion about the merits (if you don't want to!), I'm just confused by the vote distribution.
Again, especially in Quick Takes
The harsh criticism of EA has only been a good thing, forcing us to have higher standards and rigour. We don't want an echo chamber.
I would see it as a thoroughly good thing if Open Philanthropy were to combat the portrayal of itself as a shadowy cabal (like in the recent Politico piece), for example by:
Having more democratic buy-in with the public
e.g. having a bigger public presence in the media, and relying on a more diverse pool of funding (i.e. less billionaire funding)
Engaging in less political lobbying
Being more transparent about the network of organisations around them
e.g. from the Politico article: "... said Open Philanthropy's use of Horizon ... suggests an attempt to mask the program's ties to Open Philanthropy, the effective altruism movement or leading AI firms"
I am not convinced that 'having a bigger public presence in media' is a reliable way to get democratic buy-in. (There is also some 'damned if you do, damned if you don't' dynamic going on here: if OP was constantly engaging in media interactions, they'd probably be accused of 'unduly influencing the discourse/the media landscape'.) Could you describe what a more democratic OP would look like?
You mention 'less billionaire funding'. OP was built on the idea of giving away Dustin's and Cari's money in the most effective way. OP is not fundraising, it is grantmaking! So how could it, as you put it, 'rely on a more diverse pool of funding'? (Also: https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money) I also suspect we would see the same dynamic as above: if OP did actively try to secure additional money in the form of government grants, they'd be maligned for absorbing public resources in spite of their own wealth.
I think a blanket condemnation of political lobbying, or the suggestion to 'do less' of it, is not helpful. Advocating for better policies (in animal welfare, GHD, pandemic preparedness, etc.) is in my view one of the most impactful things you can do. I fear we are throwing the baby out with the bathwater here.