Formerly Executive Director at BERI; now Secretary and board member. Current board member at SecureBio and FAR.AI, where I'm also the Treasurer.
sawyer🔸
Thanks for pointing this out. Reading it made me tear up a bit. Some secret heroes here!
I'm so sorry you had to go through this. Every part of this demonstrates your strength and bravery in the face of a hostile and uncaring system (and several uncaring individuals), and I have boundless respect for you.
I'm ashamed that some of my deeply-held values are being claimed and represented by an organization as flawed and irresponsible as CEA. I don't really know what to do about this, and everyone here is still posting on the forum CEA runs, so clearly this conflict has no obvious solution.
Even separate from the value to the community of having your traumatic experiences shared publicly, this is the best piece of writing I've read on sexual harassment in the EA community. You didn't have to write it, and I'm sure it was difficult: on top of the emotional toll of reviewing all of this over and over again, it's just genuinely a lot of work to write something this good.
I will be thinking about this post for a long time.
Thanks for writing and posting this! I've had these sorts of feelings floating around in my head for a while, but this is the best term I've heard for it.
Some personal thoughts about working at Tarbell
Having not read the article, this threw me and I had to go check. But unfortunately they do seem to be calling the timeline models themselves "bad".
I focused on one section alone: their "timelines forecast" code and accompanying methodology section. Not to mince words, I think it's pretty bad.
I think this is the single most underrated post on the EA Forum.
Thanks for writing this! I have a more philosophical counter that I'd love for you to respond to.
The idea of haggling doesn't sit well with me or my idea of what a good society should be like. It feels competitive, uncooperative, and zero-sum, when I want to live in a society where people are honest and cooperative. Specifically, it seems to encourage deceptive pricing and reward people who are willing to be manipulative and stretch the truth.
In other words, haggling gives me bad vibes.
When you think about haggling/negotiating in an altruistic context, do you have a framing that is more positive than this? Put another way: other than saving money for the good guys (us) and costing money for the bad guys (some business), why is all of this "good"?
Ah! Yes, that's a good point and I misinterpreted. That's part of what I meant by "historical accident," but now I think that it was confusing to say "accident" and I should have said something like "historical activities."
I agree that they're worth calling out somehow, I just think "lab" is a misleading way of doing so given their current activities. I've made some admittedly-clunky suggestions in other threads here.
I completely agree that OpenAI and DeepMind started out as labs and are no longer so.
I agree that those companies are worth distinguishing. I just think calling them "labs" is a confusing way to do so. If the purpose was only to distinguish them from other AI companies, you could call them "AI bananas" and it would be just as useful. But "AI bananas" is unhelpful and confusing. I think "AI labs" is the same (to a lesser but still important degree).
I think this is a useful distinction, thanks for raising it. I support terms like "frontier AI company," "company making frontier AI," and "company making foundation models," all of which help distinguish OpenAI from Palantir. Also it seems pretty likely that within a few years, most companies will be AI companies!? So we'll need new terms. I just don't want that term to be "lab."
Another thing you might be alluding to is that "lab" is less problematic when talking to people within the AI safety community, and more problematic the further out you go. I think that, within a community, the terms of art sort of lose their generic connotations over time, as community members build a dense web of new connotations specific to that meaning. I regret to admit that I'm at the point where the word "lab" without any qualifiers at all makes me think of OpenAI!
But code-switching is hard, and if we use these terms internally, we'll also use them externally. Also, external people read things that were more intended for internal people, so the language leaks out.
Interesting point! I'd be OK with people calling them "evil mad scientist labs," but I still think the generic "lab" has more of a positive, harmless connotation than this negative one.
I'd also be more sympathetic to calling them "labs" if (1) we had actual regulations around them or (2) they were government projects. Biosafety and nuclear weapons labs have a healthy reputation for being dangerous and unfriendly, in a way "computer labs" do not. Also, private companies may have biosafety containment labs on premises, and the people working within them are lab workers/scientists, but we call the companies pharmaceutical companies (or "Big Pharma"), not "frontier medicine labs."
Also also if any startup tried to make a nuclear weapons lab they would be shut down immediately and all the founders would be arrested. [citation needed]
Stop calling them labs
From everything I've seen, GWWC has totally transformed under your leadership. And I think this transformation has been one of the best things that's happened in EA during that time. I'm so thankful for everything you've done for this important organization.
Yep! Something like this is probably unavoidable, and it's what all of my examples below do (BERI, ACE, and MIRI).
I think this dynamic is generally overstated, at least in the existential risk space that I work in. I've personally asked all of our medium and large funders for permission, and the vast majority of them have given permission. Most of the funding comes from Open Philanthropy and SFF, both of which publicly announce all of their grants; when recipients decide not to list those funders, it's not because the funders don't want them to. There are many examples of organizations with high funding transparency, including BERI (which I run), ACE, and MIRI (transparency page and top contributors page).
This is one of the best examples of concise, engaging EA writing I've ever seen. Thank you for making an important point in a really fun and creative way!