Project lead of LessWrong 2.0, often helping the EA Forum team with various issues. If something is broken on the site, there's a good chance it's my fault (sorry!).
Habryka
Less than a year ago Deepmind and Google Brain were two separate organizations (both making cutting-edge contributions to AI development). My guess is that if you broke off Deepmind from Google, you would now just pretty quickly get competition between Deepmind and Google Brain (and more broadly make coordinating on slowing things down a more multilateral problem).
But more concretely, anti-trust action makes all kinds of coordination harder. After an anti-trust action that destroyed billions of dollars in economic value, the ability to get people in the same room and even consider coordinating goes down a lot, since coordinating itself might invite further anti-trust action.
Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn't read this post as x-risk motivated (though I admit I was confused about what its primary motivation was).
Yeah, that's a decent link. I do think this comment is more about whether anti-recommendations for organizations should be held to a similar standard. My comment also included some criticisms of Sean personally, which I think do also make sense to treat separately, though after my experiences with SBF I definitely intend to also try to debias my statements about individuals on this dimension in particular.
Hmm, I agree that there was some aggression here, but I felt like Sean was the person who first brought up direct criticism of a specific person, and a very harsh one at that (harsher than mine, I think).
Like, Sean's comment basically said "I think it was directly Bostrom's fault that FHI died a slow painful death, and this could have been avoided with the injection of just a bit of competence in the relevant domain". My comment is more specific, but I don't really see it as harsher. I also have a prior against going into critiques of individual people, but that's what Sean did in this context (of course Bostrom's judgement is relevant, but I think in that case so is Sean's).
Pushback (in the form of arguments) is totally reasonable! It seems very normal that if someone is arguing for some collective path of action using non-shared assumptions, there is pushback.
The thing that feels weirder is to invoke social censure, or to insist on pushback when someone is talking about their own beliefs and not clearly advocating for some collective path of action. I really don't think it's common for people to push back when someone is expressing some personal belief of theirs that only affects their own actions.
In this case, I think it's somewhat ambiguous whether I was arguing for a collective path of action, or just explaining my private beliefs. By making a public comment I at least asserted some claim of relevance to others, but I also didn't explicitly say that I was trying to get anyone else to change their behavior.
And in either case, invoking social censure on the basis of someone expressing a belief of theirs without also giving a comprehensive argument for that belief seems rare (not unheard of, since there are many places in the world where uniform ideologies are enforced, though I don’t think EA has historically been such a place, nor wants to be such a place).
This also roughly matches my impression. I do think I would prefer the EA community to either go towards more centralized governance or less centralized governance in the relevant way, but I agree that given how things are, the EA Forum team has less leeway with moderation than the LW team.
I think this might be one of the LTFF writeups Oli mentions (apologies if wrong), and seems like a good place to start
Yep, that’s the one I was thinking about. I’ve changed my mind on some of the things in that section in the (many) years since I wrote it, but it still seems like a decent starting point.
When people make claims, we expect there to be some justification proportional to the claims made.
To be clear, I also absolutely do not hold myself to this standard. I feel totally fine casually mentioning controversial and important beliefs of mine whenever it seems relevant, without an obligation to fully back up the claim, and I encourage others to do the same. Indeed, I am pretty confused about what norm you are referring to here, since I also can't think of this norm in almost any context I am in.
If someone mentions they believe in god, I don't expect that this means they are ready or want to have a conversation about theology with me right then and there. When someone says they vote libertarian in the US general election, I totally don't expect to have a conversation with them about macroeconomic principles right then either. People express large, broad claims all the time without wanting to go into all the details.
This thread doesn’t feel great for this, though CSER is an organization for which I do really wish more people shared their assessments. Also happy to have a call if your curiosity extends that far, and you would be welcome to write up the things that I say in that call publicly (though of course that’s a lot of work and I don’t think you have any obligation to do so).
Thanks Sean. I think this is a good comment, and it makes me understand your perspective better.
I do think we obviously have large worldview differences here, that seem maybe worth exploring at some point, but this comment (as well as some private conversations sparked by these comments with others at FHI) made me feel more sympathetic to the perspective of “there is some history-rewriting happening that seems scary, where the university gets portrayed as this kind of boogeyman, and while it does seem the university did some unreasonable-seeming things, I think a lack of empathy and practicality in relation to that university had a lot of bad effects on both the university and FHI, and we should be very wary of remembering the story of FHI as a purely one-sided one”.
You made hostile claims that weren’t following on from prior discussion,[1] and in my view nasty and personal insinuations as well, and didn’t have anything to back it up.
This seems relatively straightforwardly false. Inasmuch as Sean is making claims about the right strategy for FHI to follow, and claiming that the errors at FHI were straightforwardly Bostrom's fault and attributable to 'garden variety incompetence', the degree of historical success of the strategies that Sean seems to be advocating for is of course relevant in assessing whether that's accurate. And CSER and Leverhulme seem like the obvious case studies available here.
We can quibble over the exact degree of relevance of the points I brought up, but the logical connection here seems straightforward.
didn’t have anything to back it up.
Separately, I see no way you could know whether I have anything to back up my criticism. I have written about my thoughts on CSER in the past, and I did not intend to write up all the thoughts and evidence I have in this thread.
If you want, we can have a call for an hour, or you can investigate this question yourself and come to your own conclusion; then you can make a judgement about whether I have anything to back up my opinion. But as I have said upthread, I don't consider myself obligated to extensively document the evidence for all of my opinions and judgements before I feel comfortable expressing them.
Yeah, I agree this is a real dynamic. It doesn't sound unreasonable for me to have a standard link that I link to when I criticize people on here, which makes it salient that I am aspiring to be less asymmetric in the information I share (I do think the norms are already pretty different over on LW, where if anything criticism is a bit less scrutinized than praise, so it's not like this is a totally alien set of norms).
I don’t understand. I do not consider myself to be under the obligation that all negative takes I share about an organization must be accompanied by a full case for why I think those are justified.
Similar to how it would IMO be crazy to require that all positive comments about an organization be accompanied by full justifications for one's judgement.
I have written about my feelings about CSER and Leverhulme some in the past (one of my old LTFF writeups for example includes a bunch of more detailed models I have of CSER). I have definitely not written up most of my thoughts, as they would span many dozens of pages.
But to my mind high integrity actors don’t make the claims you’ve made in both of these comments without bringing examples or evidence.
I think holding criticism to a higher standard than praise is one of the most common low-level violations of integrity that people engage in on an ongoing basis. I absolutely do not consider it part of my concept of integrity that negative claims about people must be accompanied by a comprehensive argument and extensive evidence of their veracity.
Indeed, the honor culture from which I guess that instinct comes is one of the things I am culturally most opposed to, so inasmuch as you have a concept of integrity here, it doesn't seem to have that much overlap with mine (which is fine, words are hard, we can disambiguate in the future).
Thanks, that’s useful context, and I definitely have been less close to things than you have (and I have much less reason to distrust your take here than Sean’s).
I do think that, given the results of Sean's strategy at CSER and Leverhulme (institutions that I think have overall caused more harm than good and that I wish didn't exist), my best guess would be that as I dig into this more, I would find that what Sean thought were obvious choices were things that would ultimately have had bad long-term consequences. I also wouldn't be surprised if Sean's takes were ultimately responsible for a good chunk of the associated pressure and attacks on people's intellectual integrity (though I don't know, and would be interested in takes from people who have been responsible for core FHI intellectual contributions about the tradeoffs Sean was advocating for).
My guess is Sean did probably get some things right, but I do think the track record here speaks quite badly to Sean’s allocation of responsibility by my lights.
At least from where I am standing, I do think a big issue with FHI's relationship to the university was that I repeatedly saw FHI get bullied by the university in a way that felt somewhat obviously crazy to me, and at least my current (low-confidence) read of the situation is that Sean, instead of reacting to that appropriately, seemed to push for making FHI the kind of institution that wouldn't fight back against that in a reasonable way (by e.g. threatening to just leave before it got completely smothered by the university, or drawing lines in the sand which would have caused FHI to shut down sooner instead of losing its coherence over many years of pain).
But again, I don't have a ton of context here; I am mostly reasoning from the online comments of Sean's that I've seen and the de-facto fate that befell FHI, and I would update a good amount on reports by people who were actually there, especially in the later years.
I am very glad that Bostrom chose the path he chose with FHI in contrast to the path you seem to have chosen with CSER and Leverhulme, so I am inclined to not trust your take here too much (though your closeness and direct experience are of course important and relevant here and make up for some of my a-priori skepticism). Where FHI successfully continued to produce great intellectual work, I have seen other organizations in similar positions easily fall prey to the demands and pressures and become a hollow shell of political correctness and vapid ideas.
It seems clear to me that Bostrom’s choices ended up much better than the choices of other people in similar situations. It seems like he didn’t compromise on the integrity of the institution he was building, and that seems to have been crucial to FHI’s success (and I expect will be crucial to Bostrom’s continued intellectual legacy).
I am confident he nevertheless made many mistakes in his relationship with the university, as this kind of conflict tends to force one to commit many visible errors, but the fate of FHI seems much better than the path that e.g. CSER and Leverhulme have been going down. Indeed, I can't think of a single institution in a similar situation that maintained its independence and integrity for as long as FHI did, so I am inclined to read this more as a story of success than a story of failure (though it might be that the success here is downstream of people other than Bostrom, like yourself and Carrick, which I can't rule out, though my guess is Bostrom's overall choices were likely substantially responsible for the positive outcome here).
Of course, there probably did exist some way to get the best of all worlds; we are talking about a very high-dimensional problem space, and Bostrom was obviously far from perfect at all the skills required to run FHI. But given FHI's enormous success in maintaining its independence and continuing to produce important ideas, I would be very surprised if the answer turns out to be "garden variety incompetence".
At the very least I feel like you should recognize that maintaining your independence under this kind of pressure is extremely hard, and that you yourself seem to have made difficult tradeoffs in the opposite direction (which I think were a mistake, but like, we can have that conversation instead of just chalking things up to “incompetence”).
Thank you Will! This is very much the kind of reflection and updates that I was hoping to see from you and other leaders in EA for a while.
I do hope that the momentum for translating these reflections into changes within the EA community is not completely gone given the ~1.5 years that have passed since the FTX collapse, but something like this feels like a solid component of a post-FTX response.
I disagree with a bunch of object-level takes you express here, but your reflections seem genuine and productive, and I feel like I and others can engage with them in good faith. I am grateful for that.
Yeah, I think just buying Twitter to steer the narrative seems quite bad. But like, I have spent a large fraction of my career thinking about mechanism design for discussion and social media platforms, and so my relationship to Twitter is, I think, a pretty healthy "I think I see lots of ways in which you could make this platform much more sanity-promoting", in a way that isn't about just spreading my memes and ideologies.
Will has somewhat less of that background, and I think would have less justified confidence in his ability to actually make the platform better from a general sanity perspective, though it still seems pretty plausible to me that he saw or sees genuine ways to make the platform better for humanity.
I am the author of the linked post that DPiepgrass was commenting on: https://www.lesswrong.com/posts/HCAyiuZe9wz8tG6EF/my-tentative-best-guess-on-how-eas-and-rationalists
This link’s hypothesis is about people just trying to fit in―but SBF seemed not to try to fit in to his peer group! He engaged in a series of reckless and fraudulent behaviors that none of his peers seemed to want.
(Author of the post) My model is that Sam had some initial tendencies for reckless behavior and bullet-biting, and those were then greatly exacerbated via evaporative cooling dynamics at FTX.
It sounds like SBF drove away everyone who couldn’t stand his methods until only people who tolerated him were left. That’s a pretty different way of making an organization go insane.
Relatedly, this kind of evaporative cooling is exactly the dynamic I was trying to point to in my post. Quotes:
People who don’t want to live up to the demanding standard leave, which causes evaporative cooling and this raises the standards for the people who remain. Frequently this also causes the group to lose critical mass.
[...]
My current best model of what happened at an individual psychological level was many people being attracted to FTX/Alameda because of the potential resources, then many rounds of evaporative cooling as anyone who was not extremely hardcore according to the group standard was kicked out, with there being a constant sense of insecurity for everyone involved that came from the frequent purges of people who seemed to not be on board with the group standard.
FWIW, I totally don't consider "donating" a necessary component of taking effective altruistic action. Most charities seem much less effective than the most effective for-profit organizations, and most of the good in the world seems to be achieved by for-profit companies.
I don’t have a particularly strong take on Bryan Johnson, but using “donations” as a proxy seems pretty bad to me.