It’s also noteworthy that Dustin/GV/OP is both by far the largest EA donor and the one EA folks most often single out to express frustration & disappointment about. I get why that is but you gotta laugh
I’ve never seen anyone express frustration or disappointment about Dustin, except for Habryka. However, Habryka seems to be frustrated with most people who fund anything / do anything that’s not associated with Lightcone and its affiliates, so I don’t know if he should count as expressing frustration at Dustin in particular.
Are you including things like criticizing OP for departing grantees too quickly and for not departing grantees quickly enough, or do you have something else in mind?
I view Dustin and OP as quite separate, especially before Holden’s departure, so that might also explain our different experience.
There is a huge amount of work I am deeply grateful for that as far as I can tell is not “associated with Lightcone and its affiliates”. Some examples:
The historical impact of the Future of Humanity Institute
The historical impact of MIRI
Gwern’s writing and work
Joe Carlsmith’s writing
Basically all of Holden’s intellectual output (even if I disagree with his leadership of OP and EA in a bunch of very important ways)
Basically all of Owen Cotton-Barratt’s contributions (bar the somewhat obvious fuckups that came out last year, though I think they don’t outweigh his positive contributions)
John Wentworth’s contributions
Ryan Greenblatt’s and Buck Shlegeris’s contributions
Paul Christiano’s and Mark Xu’s research (I have disagreements with Paul on EA leadership and governance things, but I think his research overall has been great)
Rohin Shah’s many great contributions over the years
More broadly, the DeepMind safety team
Zach Stein-Perlman’s work on carefully analyzing lab policies and commitments
There are also many others that I am surely forgetting. There is an enormous number of extremely talented, moral, and smart people involved in the extended rationality/EA/AI-x-risk ecosystem, and I am deeply grateful to many of them. It is rare for my relationship to someone to be purely positive and completely devoid of grievance, as I think is normal for relationships, but there are many people for whom my assessment of the good they have done vastly outweighs the grievances I have.
I can confirm that Oliver dislikes us especially, and that other people dislike us as well.
(“Disliking” feels a bit shallow, though I think it’s a fair gloss. I have huge respect for a great number of people at Open Philanthropy, as well as for many of the things you’ve done, in addition to many hard feelings and grievances.
It does sadly seem to me that we are in a world where getting many things right, but some things wrong, can still easily flip the sign of the impact of one’s actions and make it possible to cause large amounts of harm, which is my feeling with regard to OP.
I feel confused about what exact emotional and social relationship that should lead me to have with OP. I have similar feelings about e.g. many people at Anthropic. In many respects they seem so close to having an enormous positive impact, they think carefully through many important considerations, and they try pretty hard to create an environment for good thinking, but in expected-impact terms they are so far from where I wish they were.)
Apologies, again, for putting words in your mouth. I was using a little gallows humor to try to break the tension. It didn’t work.
Lol, it did totally work and I quite liked your comment and laughed about it. I just wanted to clarify since, IDK, it does all still feel a bit high-stakes and being clear seems valuable, but I think your comment was great and it did indeed ease a bunch of tension in me.
Ok great. Well I just want to re-emphasize the distinction between “OP” and the people who work at OP. It’s not a homogeneous blob of opinions, and AFAIK we didn’t fire anybody related to this, so a lot of the individuals who work there definitely agree with you / want to keep working with you on things and disagree with me.
Based on your read of their feelings and beliefs, which I sincerely trust is superior to my own (I don’t work out of the office or anything like that), there is empirically a chilling effect from my decisions. All I can say is that wasn’t what I was aiming for, and I’ll try to mitigate it if I can.
Thanks, I appreciate that. I might message you at random points in the coming months/years with chilling effects I notice (inasmuch as that won’t exacerbate them), and maybe ideas to mitigate them.
I won’t expect any response or much engagement; I am already very grateful for the bandwidth you’ve given me here, as one of the people in the world with the highest opportunity cost.
Sometimes I wish we had a laughing emoji here...
I’m pleasantly surprised at the DeepMind alignment team being the only industry team called out! I’m curious what you think we’re doing right?
You don’t appear to be majorly used for safety-washing
You don’t appear to be under the same kind of crazy NDAs as I’ve seen from OpenAI and Anthropic
You don’t seem to have made major capabilities advances
You generally seem to take the hard part of the problem more seriously, and you don’t seem institutionally committed, in the way Anthropic and OpenAI seem to me to be, to only looking at approaches that are compatible with scaling as quickly as possible (this isn’t making a statement about what Google, or DeepMind at large, is doing; it’s just saying that the safety team in particular seems not to be committed this way)
To be clear, I am concerned many of these things will get worse with the DeepMind/Brain merger, and a lot of my data points are from before then, but I think the track record overall is still quite good.
I’d say from a grantee POV, which is I guess a large fraction of the highly engaged EA community (e.g. commenters on a post like this), Dustin/GV/OP have mostly appeared as an aggregate blob: “where the money comes from”. And I’ve heard so much frustration & disappointment about OP over the years! (Along with a lot of praise of course.) That said, I get the spirit of your comment, I wouldn’t want to overstate how negative people are about Dustin or OP.
And for the record I’ve spent considerable energy criticizing OP myself, though not quite so far as “frustration” or “disappointment”.
As the author of the comment linked for “criticizing OP for departing grantees too quickly,” I’d note that (in my pre-Forum days) I expressed concern that GiveWell was ending the Standout Charity designation too abruptly. So I don’t see my post here expressing potential concern about OP transitioning out of these subareas as evidence of singling out OP for criticism.