I confirm that this is Chloe, who contacted me through our standard communication channels to say she was posting a comment today.
Ben Pace
Sharing Information About Nonlinear
Closing Notes on Nonlinear Investigation
Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.
I mean, sometimes you have reason to make titles into a simple demand, but I wish there were a less weaksauce justification than “because our standards here are no better than anywhere else”.
New EA Cause Area: Run Blackwell’s Bookstore
To be clear, I think this instance is a fairly okay request to make as a post title, but I don’t want the reasoning to imply that anyone can do this for whatever reason they like.
The answer to many of your questions is no, I have little prior professional experience with this sort of investigation! (I had also never run an office before Lightcone Office, never run a web forum before LessWrong, and never run a conference before EAGxOxford 2016.)
My general attitude to doing new projects that I think should be done, when nobody else is doing them, is captured in this quote by Eliezer Yudkowsky that I think about often:
But if there’s one thing I’ve learned in life, it’s that the important things are accomplished not by those best suited to do them, or by those who ought to be responsible for doing them, but by whoever actually shows up.
I’ll quote Emerson Spartz on this one:
People are so irrationally intimidated by lawyers that some legal firms make all their money by sending out thousands of scary form letters demanding payment for bullshit transgressions. My company was threatened with thousands of frivolous lawsuits but only actually sued once.
Threats are cheap.
It is indeed high stakes! But in my opinion they have opted in to this sort of accusation being openly stated. Many hundreds or even thousands of people have given their lives and efforts to causes and projects led by high-status people in EA, often on the grounds that the community is “high trust” and the people are well-intentioned. Once you are taking those resources — for instance, having a woman you talked to once at an EAG fly out to live with you and work for you while traveling nomadically, paying her next to nothing, or doing the same via a short hiring process — then as soon as someone else in that group sees credible evidence that “Wait, these people look to me like they have taken advantage of this resource, really hurt some people, and intimidated them into covering it up”, it behooves them to say so loud and clear!
Perhaps you do not believe this is true of EA circles; being an older person in these circles generally correlates (quite wisely) with being more hesitant to give your full life to whatever a person currently in EA leadership randomly thinks you should do. Nonetheless, I think two younger people here have been chewed up and spat out, and I think it’ll happen again, because people are resting on this inaccurately high level of trust. So I’ll say it as loud as you like :)
I think one of the things Rob has that is very hard to replace is his audience. Overall I continue to be shocked by the level of engagement Rob Miles’ YouTube videos get. Averaging over 100k views per video! I mostly disbelieve that it would be plausible to hire someone who can (a) understand technical AI alignment well, and (b) reliably create YouTube videos that get over 100k views, for less than something like an order of magnitude higher cost.
I am mostly confused about how Rob gets 100k+ views on each video. My mainline hypothesis is that Rob has successfully built his own audience through his years of making videos, including on channels like Computerphile, and that those viewers have followed him to his own channel.
Building an audience like this takes many years and often does not pay off. Once you have a massive audience that cares about the kind of content you produce, that is not quickly replaceable. I expect that finding someone other than Rob to do this would either take the person 3–10 years to build an audience of this size, or require paying a successful YouTube content creator to substantially change the videos they are making, in a way that risks losing their audience, and would thus require a lot of money to cover the risk (I’m imagining $300k–$1M per year for the first few years).
Another person to think of here is Tim Urban, who writes Wait But Why. That blog has, I think, produced zero major writeups in the last year, but he has a massive audience who know him and are very excited to read his content in detail, which is valuable and not easily replaceable. If it were possible to pay Tim Urban to write a piece on a technical topic of your choice, it would be exceedingly widely read in detail, and would be worth a lot of money even if he didn’t publish anything else for a whole year.
I don’t necessarily disagree with you, but FWIW I think Sam Bankman-Fried and Alameda would have been honestly described as “a notable but relatively minor actor in the space” during the many years when they were building their resource base, hiring, and raising funds, during which time people knew of multiple serious accusations about him/them. I am here trying to execute an algorithm that catches bad actors before they become too powerful. I think Emerson is very ambitious and would like a powerful role in EA/x-risk/etc.
Yeah, well, I haven’t thought about this case much, so maybe there’s some good counterargument, but I think of personal attacks as “this person’s hair looks ugly” or “this person isn’t fun at parties”, not “this person is not strong in an area of the job that I think is key”. Professional criticism seems quite different from personal attacks, and I hold different norms around how appropriate it is to bring up in public contexts.
Sure, it’s a challenge to someone to be professionally criticized, and can easily be unpleasant, but it’s not irrelevant or off-topic and can easily be quite valuable and important.
Hey John,
For some (most?) of these opinions, there isn’t any social pressure not to air them. Indeed, as several people have already noted, some of these topics are already the subject of extensive public debate by people who like EA.
First: Many positions in the public discourse are still strongly silenced. To borrow an idea from Scott Alexander, the measure of how silenced something is is not how many people talk publicly about it, but the ratio of people who talk publicly about it to people who believe it. If a lot of people in a form say they believe something but are afraid to talk about it, that’s a straightforward sign that they do feel silenced. I think you should indeed update towards believing that when someone makes an argument for negative utilitarianism, or human enhancement, or abortion, or mental health (to borrow some of your examples), several people are feeling grateful that the person is stepping out, and are watching with worry to see whether the person gets attacked/dismissed/laughed at. I’m pretty sure I have personally seen people lose points socially for almost every single example you listed, to varying degrees.
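As a rough formalization of that ratio (my own sketch of the idea, not Scott’s exact formulation):

$$\text{openness}(X) = \frac{\#\{\text{people who publicly say they believe } X\}}{\#\{\text{people who believe } X\}}$$

A position can have many visible advocates and still be heavily silenced if its believers vastly outnumber them, i.e. if openness(X) is far below 1.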
Second: Even for social and political movements, it’s crucial to know what people actually believe but don’t want to say publicly. The conservative right in the US over the last few decades would probably have liked to know that many people felt silenced about how much they liked gay marriage, given the very sudden swing in public opinion on that topic; they could then have chosen not to build major political infrastructure around the belief that their constituents would stand by that policy position. More recently, I think the progressive left in many countries in Europe, Australia, and the US would appreciate knowing when people are secretly more supportive of right-wing policies, as there has been (IIRC) a series of elections and votes where the polls predicted a strong left-wing victory and the result was a slight right-wing victory.
Third: I think the public evidence of the quality of character of the people working on important EA projects is very strong and not easily overcome. You explain that it’s important to you that folks at your org saw it and felt worried that EA contains lots of bad people, or people who believe unsavoury things, or something like that. I guess my sense here is that there is a lot of strong, public evidence about the quality of the people working on EA problems, about the insights that many public figures in the community have, and about the integrity of many of the individuals and organisations.
You can see how Holden Karnofsky went around being brutally honest yet rigorous in his analysis of charities in the global health space.
You can see how Toby Ord and many others have committed to giving a substantial portion of their lifetime resources to altruistic causes instead of personal ones.
You can see how Eliezer Yudkowsky and Nick Bostrom spent several decades of their lives attempting to lay out a coherent philosophy and argument that allowed people to identify a key under-explored problem for humanity.
You can read the writings of Scott Alexander and see how carefully he thinks about ethics, morality and community.
You can listen to Julia Galef’s podcast and read her public writings, and see how carefully she thinks about complex and controversial topics and the level of charity she gives to people on both sides of a debate.
You can read the extensive writing of The Unit Of Caring by Kelsey Piper and see how much she cares about both people and principles, and how she will spend a great deal of her time trying to help people figure out their personal and ethical problems.
I could keep listing examples, but I hope the above gets my point across.
I am interested in being part of a network of people who build trust through costly (yet worthwhile) acts of ethics, integrity, and work on important problems, and I do not think the above public form is a risk to the connections of that network.
Fourth: It’s true that many social movements have been able to muster a lot of people and political power behind solving important problems, and that this required them to care a lot about PR and hold very tight constraints on what they can be publicly associated with (and thus what they’re allowed to say publicly). I think, however, that these social movements are not capable of making scientific and conceptual progress on difficult high-level questions like cause prioritisation and the discovery of crucial considerations.
They’re very inflexible; by this I don’t merely mean that they’re hard to control and can take on negative affect (e.g. new atheism is often considered aggressive or unkind), but that they often cannot course-correct or change their minds (e.g. environmentalism on nuclear energy, I think), in a way that entirely prohibits intellectual progress. Like, I don’t think you can get ‘environmentalism, but for cause prioritisation’ or ‘feminism, but for crucial considerations’. I think the thing we actually want here is something much closer to ‘science’, or ‘an intellectual movement’. And I think your points are much less applicable to a healthy scientific community.
I hope this helps to communicate where I’m coming from.
Confirmed, this is Chloe.
without inviting or even permitting the accused party to share their side of the story in advance
You may have missed the section where I had a three-hour call with them and summarized what they told me? It’s not everything we’d want, but I think this sentence is inaccurate.
I suspect Ben does in fact have some understanding of the political dimension of his decision to share this post
Of course I do! I thought about it a bunch and came to the conclusion that it’s best to share serious and credible accusations early and fast.
To confirm: I had a quickly written bit about the Glassdoor reviews. It was added without much care because it wasn’t that cruxy for me about the whole situation; it was just a red flag suggesting further investigation was worth it, which someone else suggested I add for completeness. The reviews I included were from after the time that Emerson’s LinkedIn says he was CEO, and I’m glad that Spencer corrected me.
If I’m remembering the other one correctly, there was also a claim that I included not because it was itself obviously unethical, but because it seemed to indicate a really invasive social environment; when I think information has been suppressed, I have strong heuristics in favour of sharing worrying information even if it isn’t proven or necessarily bad. Anyway, Spencer said he was confident in a very different narrative of events, so I edited the comment to be more minor.
In general I think Spencer’s feedback on this and other points improved the post (though he also had some inaccurate information).
I thought a bit about essays that were key to my becoming more competent and able to take action in the world to improve it, and that connected to what I cared about. I’ll list some, and the ways they helped me. (I filled out the rest of the feedback form too.)
---
Feeling Moral by Eliezer Yudkowsky. Showed me an example where my deontological intuitions were untrustworthy and where simple math was actually effective.
Purchase Fuzzies and Utilons Separately by Eliezer. Showed me where attempts to do good can get very confused, and how simply looking at outcomes can avoid a lot of the problems that come from reasoning by association or by what’s ‘considered a good idea’.
Ends Don’t Justify Means (Among Humans) by Eliezer. Helped me understand a very clear constraint on naive utilitarian reasoning, which avoided worlds where I would naively trust the math in all situations.
Dive In by Nate Soares. Helped point my flailing attempts to improve and do better in a direction where I would actually get feedback. Only by actually repeatedly delivering a product, even if you change your mind ten times a day about what you should be doing and whether it’s valuable, can you build up real empirical data about what you can accomplish and what’s valuable. Encouraged me to follow through on projects a whole lot more.
Beyond the Reach of God by Eliezer. This helped ground me; it helped me point at what it’s like to have false hope and false trust, and to recognise them more clearly in myself. I think it’s accurate to say that looking directly and with precision at the current state of the world involves trusting the world a lot less than most people do, and a lot less than establishment narratives would suggest (Steven Pinker’s “everything is getting better and will continue to get better” isn’t the right way to conceptualise our position in history; there’s much more risk involved than that). A lot of important improvements in my ability to improve the world have come from realising I had unfounded trust in people or institutions, and that unless I took responsibility for things myself, I couldn’t trust that they would work out well by default. This essay was one of the first places where I clearly conceptualised what false hope feels like.
Money: The Unit of Caring by Eliezer. Similar things to the Fuzzies and Utilons post, but a bit more practical. And Kelsey named her whole Tumblr after this, which I guess is a fair endorsement.
Desperation by Nate. This does similar things to Beyond the Reach of God, but in a more hopeful way (although it’s called ‘Desperation’, so how hopeful can it be?). It helped me conceptualise what it looks like to actually try to do something difficult that people don’t understand or think looks funny, and to notice whether or not it was something I had been doing. It also helped me notice (more cynically) that a lot of people weren’t doing things that looked like this, and to stop trying to emulate those kinds of people so much.
Scope Insensitivity by Eliezer. Similar things to Feeling Moral, but a bit simpler / more concrete and tries to be actionable.
---
Some that I came up with that you already included:
On Caring
500 Million, But Not A Single One More
It’s odd that you didn’t include Scott Alexander’s classic on Efficient Charity or Eliezer’s Scope Insensitivity, although Nate’s “On Caring” is maybe sufficient to get the point about scope and triage across.
For those who don’t know Zvi’s series: it has come out weekly, with case numbers, graphs, and analysis of that week’s news. Here are a few:
The latest: Covid 9/24: Until Morale Improves
Plus some general analysis, like Seemingly Popular Covid-19 Model is Obvious Nonsense, and Covid-19: My Current Model, which was a major factor in my choosing to stop cleaning all my packages and groceries, to stop putting takeout food in the oven for 15 minutes, and to feel safe outdoors.
His 9/10 update on Vitamin D also caused me to make sure my family started taking Vitamin D, which is important because one of them has contracted the virus.
In your experience, what are the main reasons good people choose not to do AI alignment research after getting close to the field (at any org)? And on the other side, what are the main things that actually make the difference for them positively deciding to do AI alignment research?
Brief update: I am still in the process of reading this. At this point I have given the post itself a once-over and begun to read it more slowly (looking through the appendices as they’re linked).
I think any and all primary sources that Kat provides are good (such as the page of records of transactions). I am also grateful that they have not deanonymized Alice and Chloe.
I plan to compare the things that this post says directly against specific claims in mine, and acknowledge anything where I was factually inaccurate. I also plan to do a pass where I figure out which claims of mine this post responds to and which it doesn’t, and I want to reflect on the new info that’s been entered into evidence and how it relates to the overall picture.
It probably goes without saying that I (and everyone reading) want to believe true things and not false things about this situation. If I made inaccurate statements I would like to know that and correct them.
As I wrote in my follow-up post, I am not intending to continue spearheading an investigation into Nonlinear. However, this post makes some accusations of wrongdoing on my part, which I intend to respond to, and for that it is of course relevant whether the things I said are actually true.
I hope to write a response sometime this week, but I am not committing to any deadlines.
Not sure if it’s worth mentioning, but I hope that people reading this are aware of what Kat writes at the bottom of the appendices:
Many of the things that are quotes next to my name are not things I said and not things that I would endorse, and I believe the same is true of many sentences in quotation marks attributed to Alice/Chloe.