I generally believe they are all valuable (in expectation anyway).
Your second statement is basically right, though my personal view is they impose costs on the movement/EA brand and not just us personally. Digital minds work, for example, primes the idea that our AI safety concerns are focused on consciousness-driven catalysts (“Terminator scenarios”), when in reality that is just one of a wide variety of ways AI can result in catastrophe.
[What I thought I was affirming here seems to be universally misunderstood by readers, so I’m taking it back.]
I hope to see everything funded by a more diverse group of actors, so that their dollar and non-dollar costs are more distributed. Per my other comment, I believe you (Oliver) want that too.
I would be very surprised if digital minds work of all things would end up PR-costly in relevant ways. Indeed, my sense is many of the “weird” things that you made a call to defund form the heart of the intellectual community that is responsible for the vast majority of impact of this ecosystem, and I expect will continue to be the attractor for both funding and talent to many of the world’s most important priorities.
An EA community that does not consider whether the minds we aim to control have moral value seems to me like one that has pretty seriously lost its path. Not doing so because some people will walk away with a very shallow understanding of “consciousness” does not seem to me like a good reason to not do that work.
I think you absolutely have a right to care about and value your personal reputation, but I do not think your judgement of what would hurt the “movement/EA brand” is remotely accurate here.
I think something worth noting is that most (all?) the negative PR on EA over the last year has focused on areas that will continue to be funded. The areas that were cut for the most part have, to my knowledge, not drawn negative media coverage (maybe Wytham Abbey is an exception — not sure if the sale was related to this though).
Of course, these areas could still be PR risks, and the other areas could be worth funding in spite of the PR risks.
Edit: For disagree voters, I’m curious why you disagree? A quick Google of the negative coverage of OpenPhil or EA suggests it all concerns areas that OpenPhil has not pulled out of, at least to my knowledge. I’m not arguing that they shouldn’t have made this determination, but I’d be interested in counter-examples if you disagree (negative media coverage of EA work in an area they are no longer granting in). I’m sure there are some, but my read is that most of the negative media is covering work they are still doing. I see some minor offhand remarks about digital sentience, but negative media is overwhelmingly focused on AI x-risk work or FTX/billionaire philanthropy.
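Yes, there will definitely still be a lot of negative attention. I come from the “less is less” school of PR.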
And of course a lot less positive attention. Indeed a very substantial fraction of the current non-OP funding (like Vitalik’s and Jed’s and Jaan’s funding) is downstream of these “weirder” things. “Less is less” requires there to be less, but my sense is that your actions substantially decrease the total support that EA and adjacent efforts will receive both reputationally and financially.
Can you say more about that? You think our prior actions caused additional funding from Vitalik, Jed, and Jaan?
We’re still going to be funding a lot of weird things. I just think we got to a place where the capital felt ~infinite and we assumed all the other inputs were ~infinite too. AI safety feels like it deserves more of those resources from us, specifically, in this period of time. I sincerely hope it doesn’t always feel that way.
I don’t know the full list of sub-areas, so I cannot speak with confidence, but the ones that I have seen defunded so far seem to me like the kind of things that attracted Jed, Vitalik and Jaan. I expect their absence will atrophy the degree to which the world’s most ethical and smartest people want to be involved with things.
To be more concrete, I think funding organizations like the Future of Humanity Institute, LessWrong, MIRI, Joe Carlsmith, as well as open investigation of extremely neglected questions like wild animal suffering, invertebrate suffering, decision theory, mind uploads, and macrostrategy research (like the extremely influential Eternity in Six Hours paper) played major roles in people like Jaan, Jed and Vitalik directing resources towards things I consider extremely important, and Open Phil has at points in the past successfully supported those programs, to great positive effect.
In as much as the other resources you are hoping for are things like:
more serious consideration from the world’s smartest people,
or more people respecting the integrity and honesty of the people working on AI Safety,
or the ability for people to successfully coordinate around the safe development of extremely powerful AI systems,
I highly doubt that the changes you made will achieve this, and am indeed reasonably confident they will harm them (in as much as your aim is to just have fewer people get annoyed at you, my guess is you still have chosen a path of showing vulnerability to reputational threats that will overall increase the unpleasantness of your life, but I have less of a strong take here).
In general, I think people value intellectual integrity a lot, and value standing up for one’s values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever that may lead, which over the course of one’s intellectual career practically always means many places that are socially shunned or taboo or reputationally costly in the way that seems to me to be at the core of these changes.
Also, to be clear, my current (admittedly very limited) sense of your implementation is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas. Lightcone Infrastructure obviously works on AI Safety, but apparently not in a way that would allow Open Phil to grant to us (despite what seem to me to be undeniably large effects on thousands of people working on AI Safety, in myriad ways).
Based on the email I have received, and things I’ve picked up through the grapevine, you did not implement something that is best described as “reducing the number of core priority areas” but instead is better described as “blacklisting various specific methods, associations or conclusions that people might arrive at in the pursuit of the same aims as you have”. That is what makes me much more concerned about the negative effects here.
The people I know who are working on digital minds are clearly doing so because of their models of how AI will play out, and this is their best guess at the best way to make the outcomes of that better. I do not know what they will find in their investigation, but it sure seems directly relevant to specific technical and strategic choices we will need to make, especially when pursuing projects like AI Control as opposed to AI Safety.
AI risk is too complicated of a domain to enshrine what conclusions people are allowed to reach. Of course, we still need to have standards, but IMO those standards should be measured in intellectual consistency, accurate predictions, and a track record of improving our ability to take advantage of new opportunities, not in distant second-order effects on the reputation of one specific actor, and their specific models about political capital and political priorities.
It is the case that we are reducing surface area. You have a low opinion of our integrity, but I don’t think we have a history of lying as you seem to be implying here. I’m trying to pick my battles more, since I feel we picked too many. In pulling back, we focused on the places somewhere in the intersection of low conviction + highest pain potential (again, beyond “reputational risks”, which narrows the mind too much on what is going on here).
>> In general, I think people value intellectual integrity a lot, and value standing up for one’s values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever that may lead, which over the course of one’s intellectual career practically always means many places that are socially shunned or taboo or reputationally costly in the way that seems to me to be at the core of these changes.
I agree with the way this is written spiritually, and not with the way it is practiced. I wrote more about this here. If the rationality community wants carte blanche in how they spend money, they should align with funders who sincerely believe more in the specific implementation of this ideology (esp. vis a vis decoupling). Over time, it seemed to become a kind of purity test to me, inviting the most fringe of opinion holders into the fold so long as they had at least one true+contrarian view; I am not pure enough to follow where you want to go, and prefer to focus on the true+contrarian views that I believe are most important.
My sense is that such alignment is achievable and will result in a more coherent and robust rationality community, which does not need to be inextricably linked to all the other work that OP and EA does.
I find the idea that Jaan/Vitalik/Jed would not be engaged in these initiatives if not for OP pretty counterintuitive (and perhaps more importantly, that a different world could have created a much larger coalition), but don’t really have a good way of resolving that disconnect further. Evidently, our intuitions often lead to different conclusions.
And to get a little meta, it seems worth pointing out that you could be taking this whole episode as an empirical update about how attractive these ideas and actions are to constituents you might care about and instead your conclusion is “no, it is the constituents who are wrong!”
>> Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time.
Indeed, if I have the time is precisely the problem. I can’t know everyone in this community, and I’ve disagreed with the specific outcomes on too many occasions to trust by default. We started by trying to take a scalpel to the problem, and I could not tie initial impressions at grant time to those outcomes well enough to feel that was a good solution. Empirically, I don’t sufficiently trust OP’s judgement either.
There is no objective “view from EA” that I’m standing against as much as people portray it that way here; just a complex jumble of opinions and path dependence and personalities with all kinds of flaws.
>> Also, to be clear, my current (admittedly very limited sense) of your implementation, is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas.
So with that in mind this is the statement that felt like an accusation of lying (not an accusation of a history of lying), and I think we have arrived at the reconciliation that doesn’t involve lying: broad strokes were pragmatically needed in order to sufficiently reduce the priority areas that were causing issues. I can’t know all our grantees, and my estimation is I can’t divorce myself from responsibility for them, reputationally or otherwise.
After much introspection, I came to the conclusion that I prefer to leave potential value on the table than persist in that situation. I don’t want to be responsible for that community anymore, even if it seems to have positive EV.
(Just want to say, I really appreciate you sharing your thoughts and being so candid, Dustin. I find it very interesting and insightful to learn more about your perspective.)
>> this is the statement that felt like an accusation of lying (not an accusation of a history of lying), and I think we have arrived at the reconciliation that doesn’t involve lying: broad strokes were pragmatically needed in order to sufficiently reduce the priority areas that were causing issues. I can’t know all our grantees, and my estimation is I can’t divorce myself from responsibility for them, reputationally or otherwise.
I do think the top-level post could have done a better job at communicating the more blacklist-like nature of this new policy, but I greatly appreciate you clarifying that more in this thread (and also would have not described what’s going on in the top-level post as “lying”).
Your summary here also seems reasonable, based on my current understanding, though of course the exact nature of the “broad strokes” is important to be clear about.
Of course, there is lots of stuff we continue to disagree on, and I will again reiterate my willingness to write back and forth with you, or talk with you, about these issues as much as you are interested, but don’t want to make you feel like you are stuck in a conversation that realistically we are not going to make that much progress on in this specific context.
I definitely think some update of that type is appropriate, our discussion just didn’t go that direction (and bringing it up felt a little too meta, since it takes the conclusion of the argument we are having as a given, which in my experience is a hard thing to discuss at the same time as the object level).
I expect in a different context where your conclusions here aren’t the very thing we are debating, I will concede the cost of you being importantly alienated by some of the work I am in favor of.
Though to be clear, I think an important belief of mine, which I am confident the vast majority of readers here will disagree with, is that the aggregate portfolio of Open Phil and Good Ventures is quite bad for the world (especially now, given the updated portfolio).
As such, it’s unclear to me what I should feel about a change where some of the things I’ve done are less appealing to you. You are clearly smart and care a lot about the same things as I care about, but I also genuinely think you are causing pretty huge harm for the world. I don’t want to alienate you or others, and I would really like to maintain good trade relationships in as much as that is possible, since we clearly have identified very similar crucial levers in the world, and I do not want to spend our resources in negative-sum conflict.
I still think hearing that the kind of integrity I try to champion and care about did fail to resonate with you, and failed to compel you to take better actions in the world, is crucial evidence that I care a lot about. You clearly are smart and thoughtful about these topics and I care a lot about the effect of my actions on people like you.
(This comment overall isn’t obviously immediately relevant, and probably isn’t worth responding to, but I felt bad having my previous comment up without giving this important piece of context on my beliefs)
>> The aggregate portfolio of Open Phil and Good Ventures is quite bad for the world… I also genuinely think you are causing pretty huge harm for the world.
Can you elaborate on this? Your previous comments explain why you think OP’s portfolio is suboptimal, but not why you think it is actively harmful. It sounds like you may have written about this elsewhere.
>> This comment overall isn’t obviously immediately relevant
My experience of reading this thread is that it feels like I am missing essential context. Many of the comments seem to be responding to arguments made in previous, perhaps private, conversations. Your view that OP is harmful might not be immediately relevant here, but I think it would help me understand where you are coming from. My prior (which is in line with your prediction that the vast majority of readers would disagree with your comment) is that OP is very good.
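He recently made this comment on LessWrong, which expresses some of his views on the harm that OP causes.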
>> You have a low opinion of our integrity, but I don’t think we have a history of lying as you seem to be implying here.
Sorry, maybe I missed something, where did I imply you have a history of lying? I don’t currently believe that Open Phil or you have a history of lying. I think we have disagreements on dimensions of integrity beyond that, but I think we both care deeply about not lying.
>> If the rationality community wants carte blanche in how they spend money, they should align with funders who sincerely believe more in the specific implementation of this ideology (esp. vis a vis decoupling)
I don’t really know what you mean by this. I don’t want carte blanche in how I spend money. I just want to be evaluated on my impact on actual AI risk, which is a priority we both share. You don’t have to approve of everything I do, and indeed I think allowing people to choose the means by which they achieve a long-term goal is one of the biggest reasons for historical EA philanthropic success (as well as a lot of the best parts of Silicon Valley).
A complete blacklist of a whole community seems extreme, and rare, even for non-EA philanthropists. Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time. Don’t blacklist work associated with a community on the basis of a disagreement about its optimal structure. You absolutely do not have to be part of a rationality community to fund it, and if you are right about its issues, that will be reflected in its lack of impact.
>> Over time, it seemed to become a kind of purity test to me, inviting the most fringe of opinion holders into the fold so long as they had at least one true+contrarian view; I am not pure enough to follow where you want to go, and prefer to focus on the true+contrarian views that I believe are most important.
I don’t really think this is a good characterization of the rationality community. It is true that the rationality community engages in heavy decoupling, where we don’t completely dismiss people on one topic because they have some socially shunned opinions on another topic, but that seems very importantly different from inviting everyone who fits that description “into the fold”. The rationality community has a very specific epistemology and is overall, all things considered, extremely selective in who it assigns lasting respect to.
You might still object to that, but I am not really sure what you mean by the “inviting into the fold” here. I am worried you have walked away with some very skewed opinions through some unfortunate tribal dynamics, though I might also be misunderstanding you.
>> I find the idea that Jaan/Vitalik/Jed would not be engaged in these initiatives if not for OP pretty counterintuitive (and perhaps more importantly, that a different world could have created a much larger coalition)
As an example, I think OP was in a position to substantially reduce the fallout from FTX, both by a better follow-up response, and by having done more things in advance to prevent things like FTX.
And indeed as far as I can tell the people who had the biggest positive effect on the reputation of the ecosystem in the context of FTX are the ones most negatively impacted by these changes to the funding landscape.
It doesn’t seem very hard to imagine different ways that OP grantmaking could have substantially changed whether FTX happened in the first place, or at least the follow-up response to it.
I feel like an underlying issue here is something like “you feel like you have to personally defend or engage with everything that OP funds”.
You of course know better what costs you are incurring, but my sense is that you can just give money to things you think are good for the world, and this will overall result in more political capital, and respect, than the world where you limit yourselves to only the things you can externally justify or expend other resources on defending. The world can handle billionaires spending billions of dollars on yachts and luxury expenses in a way that doesn’t generally influence their other resources much, which I think suggests the world can handle billionaires not explaining or defending all of their giving-decisions.
My guess is there are lots of things at play here that I don’t know about or understand, and I do not want to contribute to the degree to which you feel like every philanthropic choice you make comes with social costs and reduces your non-financial capital.
I don’t want to drag you into a detailed discussion, though know that I am deeply grateful for some of your past work and choices and donations, and if you did ever want to go into enough detail to make headway on these disagreements, I would be happy to do so.
>> Over time, it seemed to become a kind of purity test to me, inviting the most fringe of opinion holders into the fold so long as they had at least one true+contrarian view; I am not pure enough to follow where you want to go, and prefer to focus on the true+contrarian views that I believe are most important.
You and I disagree on this, but it feels important to say we disagree on this. To me, LessWrong has about as much edgelordism as the Forum (not much).
Who are these “fringe opinion holders” brought “into the fold”? To the extent that this is maybe a comment about Hanania, it seems pretty unfair to blame that on rationality. Manifest is not a rationalist event significantly more than it is an EA one (if anything most of the money was from OP, not SFF, right?).
To the extent this is a load-bearing part of your decision-making, it just seems not true: rationalism isn’t getting more fringe, and rationality seems to have about as much edginess as EA.
My view is that rationalists are the force that actively makes room for it (via decoupling norms), even in “guest” spaces. There is another post on the forum from last week that seems like a frankly stark example.
I cannot control what the EA community chooses for itself norm-wise, but I can control whether I fuel it.
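Anyone know what post Dustin was referring to? EDIT: as per a DM, probably this one.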
I didn’t mean to argue against the Digital Minds work here, and I explicitly see digital minds as within the circle of moral concern. However, I believe that a different funder within the EA community would still mitigate the costs I’m talking about tactically. By bringing up the topic there, I only meant to say this isn’t all about personal/selfish ends from my POV (*not* that I think it nets out to being bad to do).
It is not a lot of money to fund at this stage as I understand it, and I hope to see it funded by someone who will also engage with the intellectual and comms work. For GV, I feel more than fully occupied with AI Safety.
Appreciate you engaging thoughtfully with these questions!
I’m slightly confused about this specific point—it seems like you’re saying that work on digital minds (for example) might impose PR costs on the whole movement, and that you hope another funder might have the capacity to fund this while also paying a lot of attention to the public perception.
But my guess is that other funders might actually be less cautious about the PR of the whole movement, and less invested in comms that don’t blow back on (for example) AI safety.
Like, personally I am in favour of funder diversity but it seems like one of the main things you lose as things get more decentralised is the ability to limit the support that goes to things that might blow back on the movement. To my taste at least, one of the big costs of FTX was the rapid flow of funding into things that looked (and imo were) pretty bad in a way that has indirectly made EA and OP look bad. Similarly, even if OP doesn’t fund things like Lighthaven for maybe-optics-ish reasons, it still gets described in news articles as an EA venue.
Basically, I think better PR seems good, and more funding diversity seems good, but I don’t expect the movement is actually going to get both?
(I do buy that the PR cost will be more diffused across funders though, and that seems good, and in particular I can see a case for preserving GV as something that both is and seems reasonable and sane, I just don’t expect this to be true of the whole movement)
First, I feel like you are conflating two issues here. You start and finish by talking about PR, but in the middle you argue for the importance of the work itself. I think it’s important to separate these two issues to avoid confusion, so I’ll just discuss the PR angle.
I disagree and think there’s a smallish but significant risk of PR badness here. In my experience, even my highly educated friends who aren’t into EA find it very strange that money is invested into researching the welfare of future AI minds at all, and they often flat out disagree that money should be spent on that. That indicates to me (weakly, from anecdata) that there is at least some PR risk here.
I also think there are pretty straightforward framings like “millions poured into welfare of robot minds which don’t even exist yet” which could certainly be bad for PR. If I were anti-EA I could write a pretty good hit piece about rich people in Silicon Valley prioritizing their digital mind AI hobby horse ahead of millions of real minds that are suffering right now.
What are your grounds for thinking that this has an almost insignificant chance of being “PR costly”?
I also didn’t like this comment because it seemed unnecessarily arrogant, and also dismissive of the many working in areas not defunded, who I hope you would consider at least part of the heart of the wonderful EA intellectual ecosystem.
>> “defund form the heart of the intellectual community that is responsible for the vast majority of impact of this ecosystem”
That said I probably do agree with this...
>> “An EA community that does not consider whether the minds we aim to control have moral value seems to me like one that has pretty seriously lost its path”
But don’t want to conflate that with the PR risk....
>> I also didn’t like this comment because it seemed unnecessarily arrogant, and also dismissive of global health and animal welfare people, who I hope you would consider at least part of the heart of the wonderful EA intellectual ecosystem.
For what it’s worth, as a minor point, the animal welfare issues I think are most important, and the interventions I suspect are the most cost-effective right now (e.g. shrimp stunning), are basically only fundable because of EA being weird in the past and willing to explore strange ideas. I think some of this does entail genuine PR risk in certain ways, but I don’t think we would have gotten most of the most valuable progress that EA has made for animal welfare if we paid attention to PR between 2010 and 2021, and the animal welfare space would be much worse off. That doesn’t mean PR shouldn’t be a consideration now, but as a historical matter, I think it is correct that impact in the animal space has largely been driven by comfort with weird ideas. I think the new funding environment is likely a lot worse for making meaningful progress on the most important animal welfare issues.
The “non-weird” animal welfare ideas that are funded right now (corporate chicken campaigns and alternative proteins?) were not EA innovations and were already being pursued by non-EA animal groups when EA funding entered the space. If these are the best interventions OpenPhil can fund due to PR concerns, animals are a lot worse off.
I personally would rather more animal and global health groups distanced themselves from EA if there were PR risks, than have EA distance itself from PR risks. It seems like groups could just make determinations about the right strategies for their own work with regard to PR, instead of there being top-down enforcement of a singular PR strategy, which I think is likely what this change will mostly cause. E.g. I think that the EA-side origins of wild animal welfare work are highly risky from a PR angle, but the most effective implementation of them, WAI, both would not have occurred without that PR-risky work (extremely confident), and is now exceedingly normal / does not pose a PR risk to EA at all (fairly confident) nor does EA pose one to it (somewhat confident). It just reads as a normal wild animal scientific research group to basically any non-EA who engages with it.
Thanks for the reply! I wasn’t actually aware that animal welfare has run into major PR issues. I didn’t think the public took much interest in wild animal or shrimp welfare. I probably missed it but would be interested to see the articles / hit pieces.
I don’t think how “weird” something is necessarily correlates to PR risk. It’s definitely a factor but there are others too. For example, buying Wytham Abbey wasn’t weird, but appeared to many in the public to be at least inconsistent with EA values.
I agree that I make two separate points. I think evaluating digital sentience seems pretty important from a “try to be a moral person” perspective, and separately, I think it’s just a very reasonable and straightforward question to ask that I expect smart people to be interested in and where smart people will understand why someone might want to do research on this question. Like, sure, you can frame everything in some horribly distorting way, and find some insult that’s vaguely associated with that framing, but I don’t think that’s very predictive of actual reputational risk.
>> and also dismissive of global health and animal welfare people, who I hope you would consider at least part of the heart of the wonderful EA intellectual ecosystem.
Most of the sub-cause areas that I know about that have been defunded are animal welfare priorities. Things like insect suffering and wild animal welfare are two of the sub-cause areas that are getting defunded, both of which I consider to be among the more important animal welfare priorities (due to their extreme neglectedness). I am not being dismissive of either global health or animal welfare people; they are being affected by this just as much (I know less about global health, and my sense is the impact of these changes is less bad there, but I still expect a huge negative chilling effect on people trying to think carefully about the issues around global health).
Specifically with digital minds, I still disagree that it’s a super unlikely area to be a PR risk. To me it seems easier than other areas to take aim at, the few people I’ve talked to about it find it more objectionable than other EA stuff I’ve talked about, and there seems to me to be some prior here, as it could be associated with other longtermist EA work that has already taken PR hits.
Thanks for the clarification about the defunded areas. I just assumed it was only longtermist areas being defunded; my bad, I got that wrong. I have corrected my reply.
Would be good to see an actual list of the defunded areas...
>> Your second statement is basically right, though my personal view is they impose costs on the movement/EA brand and not just us personally. … I hope to see everything funded by a more diverse group of actors, so that their dollar and non-dollar costs are more distributed.
Do you think that these “PR” costs would be mitigated if there were more large (perhaps more obscure) donors? Also, do you think that “weird” stuff like artificial sentience should be funded at all or just not by Good Ventures?
Yes, I’m explicitly pro-funding by others. Framing the costs as “PR” limits the way people think about mitigating costs. It’s not just “lower risk” but more shared responsibility and energy to engage with decision making, persuading, defending, etc.
@Dustin Moskovitz I think some of the confusion is resulting from this:
>> Your second statement is basically right, though my personal view is they impose costs on the movement/EA brand and not just us personally. Digital minds work, for example, primes the idea that our AI safety concerns are focused on consciousness-driven catalysts (“Terminator scenarios”), when in reality that is just one of a wide variety of ways AI can result in catastrophe.
In my reading of the thread, you first said “yeah, basically I think a lot of these funding changes are based on reputational risk to me and to the broader EA movement.”
Then, people started challenging things like “how much should reputational risk to the EA movement matter and what really are the second-order effects of things like digital minds research.”
Then, I was expecting you to just say something like “yeah, we probably disagree on the importance of reputation and second-order effects.”
But instead, it feels (to me) like you kind of backtracked and said “no actually, it’s not really about reputation. It’s more about limited capacity– we have finite energy, attention, stress, etc. Also shared responsibility.”
It’s plausible that I’m misunderstanding something, but it felt (at least to me) like your earlier message made it seem like PR/reputation was the central factor and your later messages made it seem like it’s more about limited capacity/energy. These feel like two pretty different rationales, so it might be helpful for you to clarify which one is more influential (or present a clearer synthesis of the two rationales).
(Also, I don’t think you necessarily owe the EAF an explanation– it’s your money etc etc.)
>> In my reading of the thread, you first said “yeah, basically I think a lot of these funding changes are based on reputational risk to me and to the broader EA movement.”
I agree people are paraphrasing me like this. Let’s go back to the quote I affirmed: “Separately, my guess is one of the key dimensions on which Dustin/Cari have strong opinions here are things that affect Dustin and Cari’s public reputation in an adverse way, or are generally “weird” in a way that might impose more costs on Dustin and Cari.”
I read the part after “or” as extending the frame beyond reputation risks, and I was pleased to see that and chose to engage with it. The example in my comment is not about reputation. Later comments from Oliver seem to imply he really did mean just PR risk so I was wrong to affirm this.
If you look at my comments here and in my post, I’ve elaborated on other issues quite a few times and people keep ignoring those comments and projecting “PR risk” on to everything. I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I’m going to stop now. [Sorry I got frustrated; everyone is trying their best to do the most good here] I would appreciate if people did not paraphrase me from these comments and instead used actual quotes.
I want to echo the other replies here, and thank you for how much you’ve already engaged on this post, although I can see why you want to stop now.
I did in fact round off what you were saying as being about PR risk yesterday, and I commented as such, and you replied to correct that, and I found that really helpful—I’m guessing a lot of others did too. I suppose if I had already understood, I wouldn’t have commented.
I’m not detailing specific decisions for the same reason I want to invest in fewer focus areas: additional information is used as additional attack surface area. The attitude in EA communities is “give an inch, fight a mile”. So I’ll choose to be less legible instead.
At the risk of overstepping or stating the obvious:
It seems to me like there’s been less legibility lately, and I think that means that a lot more confusion brews under the surface. So more stuff boils up when there is actually an outlet.
That’s definitely not your responsibility, and it’s particularly awkward if you end up taking the brunt of it by actually stepping forward to engage. But from my perspective, you engaging here has been good in most regards, with the notable exception that it might have left you more wary to engage in future.
>> I read the part after “or” as extending the frame beyond reputation risks, and I was pleased to see that and chose to engage with it.
Ah, gotcha. This makes sense– thanks for the clarification.
>> If you look at my comments here and in my post, I’ve elaborated on other issues quite a few times and people keep ignoring those comments and projecting “PR risk” on to everything
I’ve looked over the comments here a few times, and I suspect you might think you’re coming off more clearly than you actually are. It’s plausible to me that since you have all the context of your decision-making, you don’t see when you’re saying things that would genuinely confuse others.
For example, even in the statement you affirmed, I see how, if one is paying attention to the “or”, one could see you technically only/primarily endorsing the non-PR part of the phrase.
But in general, I think it’s pretty reasonable and expected that people ended up focusing on the PR part.
More broadly, I think some of your statements have been kind of short and able to be interpreted in many ways. EG, I don’t get a clear sense of what you mean by this:
>> It’s not just “lower risk” but more shared responsibility and energy to engage with decision making, persuading, defending, etc.
I think it’s reasonable for you to stop engaging here. Communication is hard and costly, misinterpretations are common and drain energy, etc. Just noting that– from my POV– this is less of a case of “people were interpreting you uncharitably” and more of a case of “it was/is genuinely kind of hard to tell what you believe, and I suspect people are mostly engaging in good faith here.”
>> I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I’m going to stop now.
Sorry to hear that. Several people I’ve spoken to about this offline also feel that you are being open and agreeable, and the discussion reads from the outside as fairly civil, so except perhaps for the potential heat of this exchange with Ollie, I’d say most people get it and are happy you participated, particularly given that you didn’t need to. For myself, the bulk of my concern is with how I perceive OP to have handled this given their place in the EA community, rather than my personal and irrelevant partial disagreement with your personal funding decisions.
>> I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I’m going to stop now.
Noooo, sorry you feel that way. T_T I think you sharing your thinking here is really helpful for the broader EA and good-doer field, and I think it’s an unfortunate pattern that online communications quickly feels (or even is) somewhat exhausting and combative.
Just an idea, maybe you would have a much better time doing an interview with e.g. Spencer Greenberg on his Clearer Thinking podcast, or Robert Wiblin on the 80,000 Hours podcast? I feel like they are pretty good interviewers who can ask good questions that make for accurate and informative interviews.
To be clear, I definitely didn’t just mean PR risks! (Or I meant them in a quite broad way that includes lots of the other things you talk about.) I tried to be quite mindful of that in, for example, my latest comment.
I generally believe they are all valuable (in expectation anyway).
Your second statement is basically right, though my personal view is they impose costs on the movement/EA brand and not just us personally. Digital minds work, for example, primes the idea that our AI safety concerns are focused on consciousness-driven catalysts (“Terminator scenarios”), when in reality that is just one of a wide variety of ways AI can result in catastrophe.[What I thought I was affirming here seems to be universally misunderstood by readers, so I’m taking it back.]
I hope to see everything funded by a more diverse group of actors, so that their dollar and non-dollar costs are more distributed. Per my other comment, I believe you (Oliver) want that too.
I would be very surprised if digital minds work of all things would end up PR-costly in relevant ways. Indeed, my sense is many of the “weird” things that you made a call to defund form the heart of the intellectual community that is responsible for the vast majority of impact of this ecosystem, and I expect will continue to be the attractor for both funding and talent to many of the world’s most important priorities.
An EA community that does not consider whether the minds we aim to control have moral value seems to me like one that has pretty seriously lost its path. Not doing so because some people will walk away with a very shallow understanding of “consciousness” does not seem to me like a good reason to not do that work.
I think you absolutely have a right to care and value your personal reputation, but I do not think your judgement of what would hurt the “movement/EA brand” is remotely accurate here.
I think something worth noting is that most (all?) the negative PR on EA over the last year has focused on areas that will continue to be funded. The areas that were cut for the most part have, to my knowledge, not drawn negative media coverage (maybe Wytham Abbey is an exception — not sure if the sale was related to this though).
Of course, these areas could still be PR risks, and the other areas could be worth funding in spite of the PR risks.
Edit: For disagree voters, I’m curious why you disagree? A quick Google of the negative coverage of OpenPhil or EA all appear to be areas that OpenPhil has not pulled out of, at least to my knowledge. I’m not arguing that they shouldn’t have made this determination, but I’d be interested in counter-examples if you disagree (negative media coverage of EA work in an area they are not granting in any more). I’m sure there are some, but my read is that most the negative media is covering work they are still doing. I see some minor off hand remarks about digital sentience, but negative media is overwhelmingly focused on AI x-risk work or FTX/billionaire philanthropy.
Yes, there will definitely still be a lot of negative attention. I come from the “less is less” school of PR.
And of course a lot less positive attention. Indeed a very substantial fraction of the current non-OP funding (like Vitalik’s and Jed’s and Jaan’s funding) is downstream of these “weirder” things. “Less is less” requires there to be less, but my sense is that your actions substantially decrease the total support that EA and adjacent efforts will receive both reputationally and financially.
Can you say more about that? You think our prior actions caused additional funding from Vitalik, Jed, and Jaan?
We’re still going to be funding a lot of weird things. I just think we got to a place where the capital felt ~infinite and we assumed all the other inputs were ~infinite too. AI safety feels like it deserves more of those resources from us, specifically, in this period of time. I sincerely hope it doesn’t always feel that way.
I don’t know the full list of sub-areas, so I cannot speak with confidence, but the ones that I have seen defunded so far seem to me like the kind of things that attracted Jed, Vitalik and Jaan. I expect their absence will atrophy the degree to which the world’s most ethical and smartest people want to be involved with things.
To be more concrete, I think funding organizations like the Future of Humanity Institute, LessWrong, MIRI, Joe Carlsmith, as well as open investigation of extremely neglected questions like wild animal suffering, invertebrate suffering, decision theory, mind uploads, and macrostrategy research (like the extremely influential Eternity in Six Hours paper) played major roles in people like Jaan, Jed and Vitalik directing resources towards things I consider extremely important, and Open Phil has at points in the past successfully supported those programs, to great positive effect.
In as much as the other resources you are hoping for are things like:
more serious consideration from the world’s smartest people,
or more people respecting the integrity and honesty of the people working on AI Safety,
or the ability for people to successfully coordinate around the safe development of extremely powerful AI systems,
I highly doubt that the changes you made will achieve this, and am indeed reasonably confident they will harm them (in as much as your aim is to just have fewer people get annoyed at you, my guess is you still have chosen a path of showing vulnerability to reputational threats that will overall increase the unpleasantness of your life, but I have less of a strong take here).
In general, I think people value intellectual integrity a lot, and value standing up for one’s values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever that may lead, which over the course of one’s intellectual career practically always means many places that are socially shunned or taboo or reputationally costly in the way that seems to me to be at the core of these changes.
Also, to be clear, my current (admittedly very limited sense) of your implementation, is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas. Lightcone Infrastructure obviously works on AI Safety, but apparently not in a way that would allow Open Phil to grant to us (despite what seems to me undeniably large effects on thousands of people working on AI Safety, in myriad ways).
Based on the email I have received, and things I’ve picked up through the grapevine, you did not implement something that is best described as “reducing the number of core priority areas” but instead is better described as “blacklisting various specific methods, associations or conclusions that people might arrive at in the pursuit of the same aims as you have”. That is what makes me much more concerned about the negative effects here.
The people I know who are working on digital minds are clearly doing so because of their models of how AI will play out, and this is their best guess at the best way to make the outcomes of that better. I do not know what they will find in their investigation, but it sure seems directly relevant to specific technical and strategic choices we will need to make, especially when pursuing projects like AI Control as opposed to AI Safety.
AI risk is too complicated of a domain to enshrine what conclusions people are allowed to reach. Of course, we still need to have standards, but IMO those standards should be measured in intellectual consistency, accurate predictions, and a track record of improving our ability to take advantage of new opportunities, not in distant second-order effects on the reputation of one specific actor, and their specific models about political capital and political priorities.
It is the case that we are reducing surface area. You have a low opinion of our integrity, but I don’t think we have a history of lying as you seem to be implying here. I’m trying to pick my battles more, since I feel we picked too many. In pulling back, we focused on the places somewhere in the intersection of low conviction + highest pain potential (again, beyond “reputational risks”, which narrows the mind too much on what is going on here).
>> In general, I think people value intellectual integrity a lot, and value standing up for one’s values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever that may lead, which over the course of one’s intellectual career practically always means many places that are socially shunned or taboo or reputationally costly in the way that seems to me to be at the core of these changes.
I agree with the way this is written spiritually, and not with the way it is practiced. I wrote more about this here. If the rationality community wants carte blanche in how they spend money, they should align with funders who sincerely believe more in the specific implementation of this ideology (esp. vis a vis decoupling). Over time, it seemed to become a kind of purity test to me, inviting the most fringe of opinion holders into the fold so long as they had at least one true+contrarian view; I am not pure enough to follow where you want to go, and prefer to focus on the true+contrarian views that I believe are most important.
My sense is that such alignment is achievable and will result in a more coherent and robust rationality community, which does not need to be inextricably linked to all the other work that OP and EA does.
I find the idea that Jaan/Vitalik/Jed would not be engaged in these initiatives if not for OP pretty counterintuitive (and perhaps more importantly, that a different world could have created a much larger coalition), but don’t really have a good way of resolving that disconnect farther. Evidently, our intuitions often lead to different conclusions.
And to get a little meta, it seems worth pointing out that you could be taking this whole episode as an empirical update about how attractive these ideas and actions are to constituents you might care about and instead your conclusion is “no, it is the constituents who are wrong!”
>> Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time.
Indeed, if I have the time is precisely the problem. I can’t know everyone in this community, and I’ve disagreed with the specific outcomes on too many occasions to trust by default. We started by trying to take a scalpel to the problem, and I could not tie initial impressions at grant time to those outcomes well enough to feel that was a good solution. Empirically, I don’t sufficiently trust OPs judgement either.
There is no objective “view from EA” that I’m standing against as much as people portray it that way here; just a complex jumble of opinions and path dependence and personalities with all kinds of flaws.
>> Also, to be clear, my current (admittedly very limited sense) of your implementation, is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas.
So with that in mind this is the statement that felt like an accusation of lying (not an accusation of a history of lying), and I think we have arrived at the reconciliation that doesn’t involve lying: broad strokes were pragmatically needed in order to sufficiently reduce the priority areas that were causing issues. I can’t know all our grantees, and my estimation is I can’t divorce myself from responsibility for them, reputationally or otherwise.
After much introspection, I came to the conclusion that I prefer to leave potential value on the table than persist in that situation. I don’t want to be responsible for that community anymore, even if it seems to have positive EV.
(Just want to say, I really appreciate you sharing your thoughts and being so candid, Dustin. I find it very interesting and insightful to learn more about your perspective.)
I do think the top-level post could have done a better job at communicating the more blacklist nature of this new policy, but I greatly appreciate you clarifying that more in this thread (and also would have not described what’s going on in the top-level post as “lying”).
Your summary here also seems reasonable, based on my current understanding, though of course the exact nature of the “broad strokes” is important to be clear about.
Of course, there is lots of stuff we continue to disagree on, and I will again reiterate my willingness to write back and forth with you, or talk with you, about these issues as much as you are interested, but don’t want to make you feel like you are stuck in a conversation that realistically we are not going to make that much progress on in this specific context.
I definitely think some update of that type is appropriate, our discussion just didn’t go that direction (and bringing it up felt a little too meta, since it takes the conclusion of the argument we are having as a given, which in my experience is a hard thing to discuss at the same time as the object level).
I expect in a different context where your conclusions here aren’t the very thing we are debating, I will concede the cost of you being importantly alienated by some of the work I am in favor of.
Though to be clear, I think an important belief of mine, which I am confident the vast majority of readers here will disagree with me, is that the aggregate portfolio of Open Phil and Good Ventures is quite bad for the world (especially now, given the updated portfolio).
As such, it’s unclear to me what I should feel about a change where some of the things I’ve done are less appealing to you. You are clearly smart and care a lot about the same things as I care about, but I also genuinely think you are causing pretty huge harm for the world. I don’t want to alienate you or others, and I would really like to maintain good trade relationships in as much as that is possible, since we we clearly have identified very similar crucial levers in the world, and I do not want to spend our resources in negative-sum conflict.
I still think hearing that the kind of integrity I try to champion and care about did fail to resonate with you, and failed to compel you to take better actions in the world, is crucial evidence that I care a lot about. You clearly are smart and thoughtful about these topics and I care a lot about the effect of my actions on people like you.
(This comment overall isn’t obviously immediately relevant, and probably isn’t worth responding to, but I felt bad having my previous comment up without giving this important piece of context on my beliefs)
Can you elaborate on this? Your previous comments explain why you think OP’s portfolio is suboptimal, but not why you think it is actively harmful. It sounds like you may have written about this elsewhere.
My experience of reading this thread is that it feels like I am missing essential context. Many of the comments seem to be responding to arguments made in previous, perhaps private, conversations. Your view that OP is harmful might not be immediately relevant here, but I think it would help me understand where you are coming from. My prior (which is in line with your prediction that the vast majority of readers would disagree with your comment) is that OP is very good.
He recently made this comment on LessWrong, which expresses some of his views on the harm that OP causes.
Sorry, maybe I missed something, where did I imply you have a history of lying? I don’t currently believe that Open Phil or you have a history of lying. I think we have disagreements on dimensions of integrity beyond that, but I think we both care deeply about not lying.
I don’t really know what you mean by this. I don’t want carte blanche in how I spend money. I just want to be evaluated on my impact on actual AI risk, which is a priority we both share. You don’t have to approve of everything I do, and indeed think allowing people to choose their means by which to achieve a long-term goal, is one of the biggest reasons for historical EA philanthropic success (as well as a lot of the best parts of Silicon Valley).
A complete blacklist of a whole community seems extreme, and rare, even for non-EA philanthropists. Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time. Don’t blacklist work associated with a community on the basis of a disagreement about its optimal structure. You absolutely do not have to be part of a rationality community to fund it, and if you are right about its issues, that will be reflected in its lack of impact.
I don’t really think this is a good characterization of the rationality community. It is true that the rationality community engages in heavy decoupling, where we don’t completely dismiss people on one topic, because they have some socially shunned opinions on another topic, but that seems very importantly different than inviting everyone who fits that description “into the fold”. The rationality community has a very specific epistemology and is overall, all things considered, extremely selective in who it assigns lasting respect to.
You might still object to that, but I am not really sure what you mean by the “inviting into the fold” here. I am worried you have walked away with some very skewed opinions though some unfortunate tribal dynamics, though I might also be misunderstanding you.
As an example, I think OP was in a position to substantially reduce the fallout from FTX, both by a better follow-up response, and by having done more things in advance to prevent things like FTX.
And indeed as far as I can tell the people who had the biggest positive effect on the reputation of the ecosystem in the context of FTX are the ones most negatively impacted by these changes to the funding landscape.
It doesn’t seem very hard to imagine different ways that OP grantmaking could have substantially changed whether FTX happened in the first place, or at least the follow-up response to it.
I feel like an underlying issue here is something like “you feel like you have to personally defend or engage with everything that OP funds”.
You of course know better what costs you are incurring, but my sense is that you can just give money to things you think are good for the world, and this will overall result in more political capital, and respect, than the world where you limit yourselves to only the things you can externally justify or expend other resources on defending. The world can handle billionaires spending billions of dollars on yachts and luxury expenses in a way that doesn’t generally influence their other resources much, which I think suggests the world can handle billionaires not explaining or defending all of their giving-decisions.
My guess is there are lots of things at play here that I don’t know about or understand, and I do not want to contribute to the degree to which you feel like every philanthropic choice you make comes with social costs and reduces your non-financial capital.
I don’t want to drag you into a detailed discussion, though know that I am deeply grateful for some of your past work and choices and donations, and if you did ever want to go into enough detail to make headway on these disagreements, I would be happy to do so.
You and I disagree on this, but it feels important to say we disagree on this. To me LessWrong has about a similar amount of edgelordism to the forum (not much).
Who are these “fringe opinion holders” brought “into the fold”? To the extent that this is a comment about Hanania, it seems pretty unfair to blame that on rationality. Manifest is not a rationalist event significantly more than it is an EA one (if anything, most of the money was from OP, not SFF, right?). To the extent this is a load-bearing part of your decision-making, it just seems not true: rationalism isn’t getting more fringe, and rationality seems to have about as much edginess as EA.
My view is that rationalists are the force that actively makes room for it (via decoupling norms), even in “guest” spaces. There is another post on the forum from last week that seems like a frankly stark example.
I cannot control what the EA community chooses for itself norm-wise, but I can control whether I fuel it.
Anyone know what post Dustin was referring to? EDIT: as per a DM, probably this one.
I didn’t mean to argue against the Digital Minds work here, and I explicitly see digital minds as within the circle of moral concern. However, I believe that a different funder within the EA community would still mitigate the costs I’m talking about tactically. By bringing up the topic there, I only meant to say this isn’t all about personal/selfish ends from my POV (*not* that I think it nets out to being bad to do).
It is not a lot of money to fund at this stage, as I understand it, and I hope to see it funded by someone who will also engage with the intellectual and comms work. For GV, I feel more than fully occupied with AI Safety.
Appreciate you engaging thoughtfully with these questions!
I’m slightly confused about this specific point—it seems like you’re saying that work on digital minds (for example) might impose PR costs on the whole movement, and that you hope another funder might have the capacity to fund this while also paying a lot of attention to the public perception.
But my guess is that other funders might actually be less cautious about the PR of the whole movement, and less invested in comms that don’t blow back on (for example) AI safety.
Like, personally I am in favour of funder diversity but it seems like one of the main things you lose as things get more decentralised is the ability to limit the support that goes to things that might blow back on the movement. To my taste at least, one of the big costs of FTX was the rapid flow of funding into things that looked (and imo were) pretty bad in a way that has indirectly made EA and OP look bad. Similarly, even if OP doesn’t fund things like Lighthaven for maybe-optics-ish reasons, it still gets described in news articles as an EA venue.
Basically, I think better PR seems good, and more funding diversity seems good, but I don’t expect the movement is actually going to get both?
(I do buy that the PR cost will be more diffused across funders though, and that seems good, and in particular I can see a case for preserving GV as something that both is and seems reasonable and sane, I just don’t expect this to be true of the whole movement)
“PR risk” is an unnecessarily narrow mental frame for why we’re focusing.
Risky things are risky in multiple ways. Diffusing across funders mitigates some of them, some of the time.
AND there are other bandwidth issues: energy, attention, stress, political influence. Those are more finite than capital.
First, I feel like you are conflating two issues here. You start and finish by talking about PR, but in the middle you argue for the importance of the work itself. I think it’s important to separate these two issues to avoid confusion, so I’ll just discuss the PR angle.
I disagree and think there’s a smallish but significant risk of PR badness here. From my experience talking to even my highly educated friends who aren’t into EA, they find it very strange that money is invested into researching the welfare of future AI minds at all and often flat out disagree that money should be spent on that. That indicates to me (weakly from anecdata) that there is at least some PR risk here.
I also think there are pretty straightforward framings like “millions poured into welfare of robot minds which don’t even exist yet” which could certainly be bad for PR. If I were anti-EA, I could write a pretty good hit piece about rich people in Silicon Valley prioritizing their digital mind AI hobby horse ahead of millions of real minds that are suffering right now.
What are your grounds for thinking that this has an almost insignificant chance of being “PR costly”?
I also didn’t like this comment because it seemed unnecessarily arrogant, and also dismissive of the many people working in areas not defunded, whom I hope you would consider at least part of the heart of the wonderful EA intellectual ecosystem.
“defund form the heart of the intellectual community that is responsible for the vast majority of impact of this ecosystem,”
That said I probably do agree with this...
“An EA community that does not consider whether the minds we aim to control have moral value seems to me like one that has pretty seriously lost its path”
But don’t want to conflate that with the PR risk....
For what it’s worth, as a minor point, the animal welfare issues I think are most important, and the interventions I suspect are the most cost-effective right now (e.g. shrimp stunning), are basically only fundable because of EA being weird in the past and willing to explore strange ideas. I think some of this does entail genuine PR risk in certain ways, but I don’t think we would have gotten most of the most valuable progress that EA has made for animal welfare if we paid attention to PR between 2010 and 2021, and the animal welfare space would be much worse off. That doesn’t mean PR shouldn’t be a consideration now, but as a historical matter, I think it is correct that impact in the animal space has largely been driven by comfort with weird ideas. I think the new funding environment is likely a lot worse for making meaningful progress on the most important animal welfare issues.
The “non-weird” animal welfare ideas that are funded right now (corporate chicken campaigns and alternative proteins?) were not EA innovations and were already being pursued by non-EA animal groups when EA funding entered the space. If these are the best interventions OpenPhil can fund due to PR concerns, animals are a lot worse off.
I personally would rather more animal and global health groups distanced themselves from EA if there were PR risks, than have EA distance itself from PR risks. It seems like groups could just make determinations about the right strategies for their own work with regard to PR, instead of there being top-down enforcement of a singular PR strategy, which I think is likely what this change will mostly cause. E.g. I think that the EA-side origins of wild animal welfare work are highly risky from a PR angle, but the most effective implementation of them, WAI, both would not have occurred without that PR-risky work (extremely confident), and is now exceedingly normal / does not pose a PR risk to EA at all (fairly confident), nor does EA pose one to it (somewhat confident). It reads as just a normal wild animal scientific research group to basically any non-EA who engages with it.
Thanks for the reply! I wasn’t actually aware that animal welfare has run into major PR issues. I didn’t think the public took much interest in wild animal or shrimp welfare. I probably missed it, but would be interested to see the articles / hit pieces.
I don’t think how “weird” something is necessarily correlates with PR risk. It’s definitely a factor, but there are others too. For example, buying Wytham Abbey wasn’t weird, but it appeared to many in the public to be at least inconsistent with EA values.
I don’t think these areas have run into PR issues historically, but they are perceived as PR risks.
I agree that I make two separate points. I think evaluating digital sentience seems pretty important from a “try to be a moral person” perspective, and separately, I think it’s just a very reasonable and straightforward question to ask that I expect smart people to be interested in and where smart people will understand why someone might want to do research on this question. Like, sure, you can frame everything in some horribly distorting way, and find some insult that’s vaguely associated with that framing, but I don’t think that’s very predictive of actual reputational risk.
Most of the sub-cause areas that I know about that have been defunded are animal welfare priorities. Insect suffering and wild animal welfare are two of the sub-cause areas getting defunded, both of which I consider to be among the more important animal welfare priorities (due to their extreme neglectedness). I am not being dismissive of either global health or animal welfare people; they are being affected by this just as much (I know less about global health, and my sense is the impact of these changes is less bad there, but I still expect a huge negative chilling effect on people trying to think carefully about the issues around global health).
Specifically with digital minds, I still disagree that it’s a super unlikely area to be a PR risk. To me it seems easier than other areas to take aim at; the few people I’ve talked to about it find it more objectionable than other EA stuff I’ve discussed, and there seems to me some prior risk given that it could be associated with other longtermist EA work that has already taken PR hits.
Thanks for the clarification about the defunded areas. I just assumed it was only longtermist areas being defunded; my bad, I got that wrong. I have corrected my reply.
Would be good to see an actual list of the defunded areas...
Do you think that these “PR” costs would be mitigated if there were more large (perhaps more obscure) donors? Also, do you think that “weird” stuff like artificial sentience should be funded at all or just not by Good Ventures?
[edit: see this other comment by Dustin]
Yes, I’m explicitly pro-funding by others. Framing the costs as “PR” limits the way people think about mitigating costs. It’s not just “lower risk” but more shared responsibility and energy to engage with decision making, persuading, defending, etc.
@Dustin Moskovitz I think some of the confusion is resulting from this:
In my reading of the thread, you first said “yeah, basically I think a lot of these funding changes are based on reputational risk to me and to the broader EA movement.”
Then, people started challenging things like “how much should reputational risk to the EA movement matter and what really are the second-order effects of things like digital minds research.”
Then, I was expecting you to just say something like “yeah, we probably disagree on the importance of reputation and second-order effects.”
But instead, it feels (to me) like you kind of backtracked and said “no actually, it’s not really about reputation. It’s more about limited capacity– we have finite energy, attention, stress, etc. Also shared responsibility.”
It’s plausible that I’m misunderstanding something, but it felt (at least to me) like your earlier message made it seem like PR/reputation was the central factor and your later messages made it seem like it’s more about limited capacity/energy. These feel like two pretty different rationales, so it might be helpful for you to clarify which one is more influential (or present a clearer synthesis of the two rationales).
(Also, I don’t think you necessarily owe the EAF an explanation– it’s your money etc etc.)
>> In my reading of the thread, you first said “yeah, basically I think a lot of these funding changes are based on reputational risk to me and to the broader EA movement.”
I agree people are paraphrasing me like this. Let’s go back to the quote I affirmed: “Separately, my guess is one of the key dimensions on which Dustin/Cari have strong opinions here are things that affect Dustin and Cari’s public reputation in an adverse way, or are generally “weird” in a way that might impose more costs on Dustin and Cari.”
I read the part after “or” as extending the frame beyond reputation risks, and I was pleased to see that and chose to engage with it. The example in my comment is not about reputation. Later comments from Oliver seem to imply he really did mean just PR risk so I was wrong to affirm this.
If you look at my comments here and in my post, I’ve elaborated on other issues quite a few times and people keep ignoring those comments and projecting “PR risk” on to everything.
I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I’m going to stop now. [Sorry I got frustrated; everyone is trying their best to do the most good here.] I would appreciate if people did not paraphrase me from these comments and instead used actual quotes.

I want to echo the other replies here, and thank you for how much you’ve already engaged on this post, although I can see why you want to stop now.
I did in fact round off what you were saying as being about PR risk yesterday, and I commented as such, and you replied to correct that, and I found that really helpful—I’m guessing a lot of others did too. I suppose if I had already understood, I wouldn’t have commented.
At the risk of overstepping or stating the obvious:
It seems to me like there’s been less legibility lately, and I think that means that a lot more confusion brews under the surface. So more stuff boils up when there is actually an outlet.
That’s definitely not your responsibility, and it’s particularly awkward if you end up taking the brunt of it by actually stepping forward to engage. But from my perspective, you engaging here has been good in most regards, with the notable exception that it might have left you more wary to engage in future.
Ah, gotcha. This makes sense– thanks for the clarification.
I’ve looked over the comments here a few times, and I suspect you might think you’re coming off more clearly than you actually are. It’s plausible to me that since you have all the context of your decision-making, you don’t see when you’re saying things that would genuinely confuse others.
For example, even in the statement you affirmed, I see how, if one is paying attention to the “or”, one could see you technically only/primarily endorsing the non-PR part of the phrase.
But in general, I think it’s pretty reasonable and expected that people ended up focusing on the PR part.
More broadly, I think some of your statements have been kind of short and open to many interpretations. E.g., I don’t get a clear sense of what you mean by this:
I think it’s reasonable for you to stop engaging here. Communication is hard and costly, misinterpretations are common and drain energy, etc. Just noting that– from my POV– this is less of a case of “people were interpreting you uncharitably” and more of a case of “it was/is genuinely kind of hard to tell what you believe, and I suspect people are mostly engaging in good faith here.”
Sorry to hear that. Several people I’ve spoken to about this offline also feel that you are being open and agreeable, and the discussion reads from the outside as fairly civil, so except perhaps for the potential heat of this exchange with Ollie, I’d say most people get it and are happy you participated, particularly given that you didn’t need to. For myself, the bulk of my concern is with how I perceive OP to have handled this given their place in the EA community, rather than my personal (and irrelevant) partial disagreement with your funding decisions.
[edited to add “partial” in the last sentence]
Noooo, sorry you feel that way. T_T I think you sharing your thinking here is really helpful for the broader EA and good-doer field, and I think it’s an unfortunate pattern that online communications quickly feels (or even is) somewhat exhausting and combative.
Just an idea, maybe you would have a much better time doing an interview with e.g. Spencer Greenberg on his Clearer Thinking podcast, or Robert Wiblin on the 80,000 Hours podcast? I feel like they are pretty good interviewers who can ask good questions that make for accurate and informative interviews.
To be clear, I definitely didn’t just mean PR risks! (Or I meant them in a way that was intended to be quite broad and includes lots of the other things you talk about.) I tried to be quite mindful of that in, for example, my latest comment.
Can you give an example of a non-PR risk that you had in mind?