There are 8.1 billion people on the planet and afaict 8,099,999,999 of them donate less to my favorite causes & orgs than @Dustin Moskovitz. That was true before this update and it will remain true after it. Like everyone else I have elaborate views on how GV/OP should spend money/be structured etc, but let the record also show that I appreciate the hell out of Dustin & Cari. We got so lucky 🥲
It's also noteworthy that Dustin/GV/OP is both by far the largest EA donor and the one EA folks most often single out to express frustration & disappointment about. I get why that is, but you gotta laugh.
I've never seen anyone express frustration or disappointment about Dustin, except for Habryka. However, Habryka seems to be frustrated with most people who fund anything / do anything that's not associated with Lightcone and its affiliates, so I don't know if he should count as expressing frustration at Dustin in particular.
Are you including things like criticizing OP for departing grantees too quickly and for not departing grantees quickly enough, or do you have something else in mind?
I view Dustin and OP as quite separate, especially before Holden's departure, so that might also explain our different experiences.
There is a huge amount of work I am deeply grateful for that, as far as I can tell, is not "associated with Lightcone and its affiliates". Some examples:
The historical impact of the Future of Humanity Institute
The historical impact of MIRI
Gwern's writing and work
Joe Carlsmith's writing
Basically all of Holden's intellectual output (even if I disagree with his leadership of OP and EA in a bunch of very important ways)
Basically all of Owen Cotton-Barratt's contributions (bar the somewhat obvious fuckups that came out last year, though I don't think those outweigh his positive contributions)
John Wentworth's contributions
Ryan Greenblatt's and Buck Shlegeris's contributions
Paul Christiano's and Mark Xu's research (I have disagreements with Paul on EA leadership and governance things, but I think his research overall has been great)
Rohin Shah's many great contributions over the years
More broadly, the DeepMind safety team
Zach Stein-Perlman's work on carefully analyzing lab policies and commitments
There are also many others that I am surely forgetting. There is an enormous number of extremely talented, moral, and smart people involved in the extended rationality/EA/AI-x-risk ecosystem, and I am deeply grateful to many of them. It is rare that my relationship to someone is purely positive and completely devoid of grievance, as I think is normal for relationships, but there are many people for whom my assessment of their good vastly outshines the grievances I have.
I can confirm that Oliver dislikes us especially, and that other people dislike us as well.
("Disliking" feels a bit shallow, though I think a fair gloss. I have huge respect for a huge number of people at Open Philanthropy, as well as many of the things you've done, in addition to many hard feelings and grievances.
Sadly, it seems to me that we are in a world where getting many things right but some things wrong can still easily flip the sign of the impact of one's actions and make it possible to cause large amounts of harm, which is my feeling with regard to OP.
I feel confused about what emotional and social relationship that should lead me to have with OP. I have similar feelings about, e.g., many people at Anthropic. In many respects they seem so close to having an enormous positive impact: they think carefully through many important considerations and try pretty hard to create an environment for good thinking, but in expected-impact space they are so far from where I wish they were.)
Apologies, again, for putting words in your mouth. I was using a little gallows humor to try to break the tension. It didn't work.
Lol, it totally did work, and I quite liked your comment and laughed about it. I just wanted to clarify since, IDK, it does all still feel a bit high-stakes and being clear seems valuable. But I think your comment was great and did indeed ease a bunch of tension in me.
Ok great. Well, I just want to re-emphasize the distinction between "OP" and the people who work at OP. It's not a homogeneous blob of opinions, and AFAIK we didn't fire anybody related to this, so a lot of the individuals who work there definitely agree with you/want to keep working with you on things and disagree with me.
Based on your read of their feelings and beliefs, which I sincerely trust is superior to my own (I don't work out of the office or anything like that), there is empirically a chilling effect from my decisions. All I can say is that wasn't what I was aiming for, and I'll try to mitigate it if I can.
Thanks, I appreciate that. I might message you at random points in the coming months/years with chilling effects I notice (insofar as that won't exacerbate them), and maybe ideas to mitigate them.
I won't expect any response or much engagement; I am already very grateful for the bandwidth you've given me here, as you are one of the people in the world with the highest opportunity cost.
Sometimes I wish we had a laughing emoji here...
I'm pleasantly surprised at the DeepMind alignment team being the only industry team called out! I'm curious what you think we're doing right?
You don't appear to be majorly used for safety-washing
You don't appear to be under the same amount of crazy NDAs as I've seen from OpenAI and Anthropic
You don't seem to have made major capabilities advances
You generally seem to take the hard part of the problem more seriously, and you don't seem institutionally committed, the way Anthropic and OpenAI seem to me to be, to only looking at approaches that are compatible with scaling as quickly as possible (this isn't a statement about what Google, or DeepMind at large, is doing; it's just that the safety team in particular doesn't seem committed this way)
To be clear, I am concerned many of these things will get worse with the DeepMind/Brain merger, and a lot of my data points are from before then, but I think the track record overall is still quite good.
I'd say from a grantee POV, which I guess covers a large fraction of the highly engaged EA community (e.g. commenters on a post like this), Dustin/GV/OP have mostly appeared as an aggregate blob: "where the money comes from". And I've heard so much frustration & disappointment about OP over the years! (Along with a lot of praise, of course.) That said, I get the spirit of your comment; I wouldn't want to overstate how negative people are about Dustin or OP.
And for the record, I've spent considerable energy criticizing OP myself, though not quite so far as "frustration" or "disappointment".
As the author of the comment linked for "criticizing OP for departing grantees too quickly," I'd note that (in my pre-Forum days) I expressed concern that GiveWell was ending the Standout Charity designation too abruptly. So I don't see my post here expressing potential concern about OP transitioning out of these subareas as evidence of singling out OP for criticism.