Project lead of LessWrong 2.0, often helping the EA Forum with various issues with the forum. If something is broken on the site, there’s a good chance it’s my fault (Sorry!).
Habryka
My current model is that actually very few people who went to DC and did “AI Policy work” chose a career that was well-suited to proposing policies that help with existential risk from AI. In general, people tried to choose more of a path of “try to be helpful to the US government” and “become influential in the AI-adjacent parts of the US government”, but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk. Mostly just people whose job it is to “become influential in the US government so that later they can steer the AI existential risk conversation in a better way”.
I find this very sad and consider it one of our worst mistakes, though I am also not confident in that model, and am curious whether people have alternative models.
(I gave it a small-downvote) I currently think that representation of the person in question is pretty inaccurate. I have various problems with them; one of the primary ones is that they threatened an EA community institution with a libel lawsuit, which, as you might have picked up, I am not a huge fan of. But your comment seemed to me more likely to mislead (and to somewhat miasmically propagate a narrative I consider untrustworthy), and that specific request for privacy still strikes me as illegitimate (as I have commented on the relevant posts).
The OP is not titled “An incomplete list of activities that EA orgs should think about before doing”; it is “An incomplete list of activities that EA orgs Probably Shouldn’t Do”. I agree that most of the things listed in the OP seem reasonable to think about and take into account in a risk analysis, but I doubt the OP is actually contributing to people doing much more of that.
I would love a post that would go into more detail on “when each of these seems appropriate to me”, which seems much more helpful to me.
I think it depends a lot on what you mean by “a post like this”. Like, I do think I would just really like more investigation and more airing of suspicions around, and yeah, that includes people’s concerns with Lightcone.
Some of these seem fine to me as norms, some of them seem bad. Some concrete cases:
Live with coworkers, especially when there is a power differential and especially when there is a direct report relationship
Many startups start from someone’s living room. LessWrong was built in the Event Horizon living room. This was great, I don’t think it hurt anyone, and it also helped the organization survive through the pandemic, which I think was quite good.
Retain someone as a full-time contractor or grant recipient for the long term, especially when it might not adhere to legal guidelines
I don’t understand this. Lightcone engages in many long-term contracting relationships. Seems totally fine to me. Also, many people prefer to be grant recipients instead of employees; those come with totally different relationship dynamics.
Offer employer-provided housing for more than a predefined and very short period of time, thereby making an employee’s housing dependent on their continued employment and allowing an employer access to an employee’s personal living space
I also find this kind of dicey, though at least in Lightcone’s case I think it’s definitely worth it, and I know of many other cases where it seems likely worth it. We own a large event venue, and we are currently offering one employee free housing in exchange for being on-call for things that happen in the night. This seems like a fair trade to me and very standard (one of the sections of our hotel is indeed explicitly zoned as a “care-taker unit” for this exact purpose).
Date the partner of their funder/grantee, especially when substantial conflict-of-interest mechanisms are not active
This seems really quite a lot too micromanagey to me. I agree that there should be COI mechanisms in place, but this seems like it’s really trying to enforce norms on parts of people’s lives that really are their business.
This continues to feel quite a bit too micromanagey to me. Mostly these are the complaints that seemed significant to Ben (which also roughly aligned with my assessment).
The post was already like 100+ hours of effort to write. I don’t think “more contextualizing” is a good use of at least our time (though if other people want to do this kind of job and would do more of that, then that seems maybe fine to me).
Like, again, I think if some people want to update that all weirdness is bad, then that’s up to them. It is not my job, and indeed would be a violation of what I consider cooperative behavior, to filter evidence so that the situation here only supports my (or Ben’s) position about how organizations should operate.
Yep, not clear what to do about that. Seems kind of sad, and I’ve strong-downvoted the relevant comment. I don’t think it’s mine or Ben’s job to micromanage people’s models of how organizations should operate.
I might be confused here, but it sure seemed easy to hand over money, but hard to verify that the insurance would actually kick in in the relevant situation, and wouldn’t end up being voided for some random reason.
Using “preoccupied” feels a bit strawmanny here. People using this situation as a way to enforce general conservatism in a naive way was one of the top concerns that kept coming up when I talked to Ben about the post and investigation.
The post has a lot of details that should allow people to make a more detailed model than “weird is bad”, but I don’t think it would be better for it to take a stronger stance on the causes of the problems that it’s providing evidence for, since getting the facts out is IMO more important.
It would seem low-integrity by my standards to decline to pursue this case because I would be worried that people would misunderstand the facts in a way that would cause inconvenient political movements for me. It seems like a lot of people have a justified interest in knowing what happened here, and I don’t want to optimize hard against that, just because they will predictably learn a different lesson than I have. The right thing to do is to argue in favor of my position after the facts are out, not to withhold information like this.
Also, I think the key components of this story are IMO mostly about the threats of retaliation and associated information control, which I think mostly comes across to readers (at least based on the comments I’ve seen so far), and also really doesn’t seem like it has much to do with general weirdness. If anything this kind of information control is more common in the broader world, and things like libel suits are more frequent.
Just for the record, I think there are totally contexts that could justify that threat. I would be surprised if one of those had occurred here, but I can totally imagine scenarios where the behavior in the screenshot is totally appropriate (or at the very least really not that bad, given the circumstances).
My current model is that we have enough money to defend against a defamation lawsuit like this. The costs are high, but we also aren’t a super small organization (we have a budget of like $3M-$4M a year), so I think we could absorb it if it happened, and my guess is we could fundraise additionally if the costs somehow ballooned above that.
I looked a bit into liability insurance but it seemed like a large pain, and not worth it given that we are probably capable of self-insuring.
Some of the things in this category seem bad, some of them good. I think if people walk away with a generic conventionality bias from reading this, then that seems like a mistake and a recipe for witch hunts. Most of the dynamics explored in this post feel predictable and the result of specific aspects of the context and environment that was set up, and I would feel pretty happy putting my predictions on the record on what kind of error modes we will see from different initial conditions, and for different existing organizations.
E.g. I will put it on the record that I don’t think Open Phil has really any of the dynamics that this post is talking about, despite me also having many disagreements with how Open Phil operates.
I agree there are some circumstances under which libel suits are justified, but the net-effect on the availability of libel suits strikes me as extremely negative for communities like ours, and I think it’s very reasonable to have very strong norms against threatening or going through with these kinds of suits. Just because an option is legally available, doesn’t mean that a community has to be fine with that option being pursued.
That is the whole point and function of defamation law: to promote especially high standards of research, accuracy, and care when making severe negative comments. This helps promote better epistemics, when reputations are on the line.
This, in particular, strikes me as completely unsupported. The law does not strike me as particularly well-calibrated about what promotes good communal epistemics, and I do not see how preventing negative evidence from being spread, which is usually the most undersupplied type of evidence already, helps “promote better epistemics”. Naively, the prior should be that when you suppress information, you worsen the accuracy of people’s models of the world.
As a concrete illustration of this, libel law in the U.S. and the U.K. functions very differently. It seems to me that U.S. law has much better effects on public discourse, by making libel suits substantially harder to actually bring. It is also very hard to sue someone in a foreign court for libel (e.g. it is very hard for a U.S. citizen to sue a German citizen).
This means we can’t have a norm that generically permits libel suits, since U.K. libel suits follow a very different standard than U.S. ones, and we have to decide for ourselves where our standards for this kind of information control lie.
IMO, both U.S. and U.K. libel suits should be very strongly discouraged, since I know of dozens of cases where organizations and individuals have successfully used them to prevent highly important information from being propagated, and approximately no case where they did something good (instead, organizations that frequently have to deal with libel suits mostly just leverage loopholes in libel law that give them approximate immunity, even when making very strong and false accusations, usually with the clarity of the arguments and the transparency of the evidence taking a large hit).
Note that I downvoted their responses (intentionally separating that from agree/disagree) because I saw them as attempts to enforce a bad norm, and some of them as a form of intimidation. I endorse downvoting them (and also think other people should do that).
I don’t know, but I know that negotiating confidentiality is a major part that often takes a lot of calendar time. They might have emails or text chats from people that they would like to share, but they first need to get permission to share, or at least provide adequate warning. This can definitely take a few days in my experience.
That you’re struggling with the basics is what leads me to say that LTFF doesn’t “have it together”.
Just FWIW, this feels kind of unfair, given that if our grant volume hadn’t increased by like 5x over the past 1-2 years (and especially the last 8 months), we would probably be totally rocking it in terms of “the basics”.
Like, yeah, the funding ecosystem is still recovering from a major shock, and it feels kind of unfair to judge the LTFF’s performance on the basis of such an unprecedented context. My guess is things will settle into some healthy rhythm again when there is a new fund chair and the funding ecosystem settles into more of an equilibrium, and the basics will be better covered again.
It is very difficult for people to change their minds later, and most people assume that if you’re on trial, you must be guilty, which is why judges remind juries about “innocent before proven guilty”.
This is one of the foundations of our legal system, something we learned over thousands of years of trying to get better at justice. You’re just assuming I’m guilty and saying that justifies not giving me a chance to present my evidence.
Trials are public as well. Indeed, our justice system generally doesn’t have secret courts, so I am not sure what argument you are trying to make here. In as much as you want to draw an analogy to the legal system, you now also have the ability to present your evidence, in front of an audience of your peers. The “jury” has not decided anything (and neither have I, at least regarding the dynamics and accusations listed in the post).
I am not assuming you are guilty. My current best guess is that you have done some pretty bad things, but yeah, I haven’t heard your side very much, and I will update and signal boost your evidence if you provide me with evidence that the core points of this post are wrong.
Also, if we post another comment thread a week later, who will see it? EAF/LW don’t have sufficient ways to resurface old but important content.
You could make a new top-level post. I expect it would get plenty of engagement. The EAF/LW seems really quite capable of resurfacing various types of community conflict, and making it the center of attention for a long time.
You also took my email strategically out of context to fit the Emerson-is-a-horned-CEO-villain narrative. Here’s the full one:
I shared the part that seemed relevant to me, since sharing the whole email seemed excessive. I don’t think the rest of the email changes the context of the libel suit threat you made, though readers can decide that for themselves.
I don’t understand how people would be at greater risk of retaliation if the post was delayed by a week?
It is a lot easier to explain to your employer or your friends or your colleagues what is happening if you can just link them to a public post, if someone is trying to pressure you. That week in which the person you are scared of has access to the post, but the public does not, is a quite vulnerable week, in my experience.
I also want to make sure people realise that there’s a huge difference between “I will stalk you / call your family / get you fired” and “I will sue you” in terms of what counts as threats/intimidation/retaliation, so I don’t think Emerson’s email is a particularly strong confirmation that the “large and substantial threat of retaliation” is real.
Threatening a libel lawsuit with the intensity that Emerson did strikes me as above “calling my family” in terms of what counts as threats/intimidation/retaliation, especially if you are someone who does not have the means for a legal defense (which would be true of Ben’s sources for this post). Libel suits are really costly, and a quite major escalation.
If Ben is worried about losing 40 hours of productive time by responding to Nonlinear’s evidence in private, he doesn’t have to. He could just allow them to put together their side of the story, ready for publishing when he publishes his own post.
Emerson’s email says explicitly that if the post is published as is, that he would pursue a libel suit. This seems to rule out the option of just delaying and letting them prepare their response, and indeed demands that the original post gets changed.
(Copying over the same response I posted over on LW)
I don’t have all the context of Ben’s investigation here, but as someone who has done investigations like this in the past, here are some thoughts on why I don’t feel super sympathetic to requests to delay publication:
In this case, it seems to me that there is a large and substantial threat of retaliation. My guess is Ben’s sources were worried about Emerson hiring stalkers, calling their family, trying to get them fired from their job, or threatening legal action. Having things be out in the public can provide a defense because it is much easier to ask for help if the conflict happens in the open.
As a concrete example, Emerson has just sent me an email saying:
Given the irreversible damage that would occur by publishing, it simply is inexcusable to not give us a bit of time to correct the libelous falsehoods in this document, and if published as is we intend to pursue legal action for libel against Ben Pace personally and Lightcone for the maximum damages permitted by law. The legal case is unambiguous and publishing it now would both be unethical and gross negligence, causing irreversible damage.
For the record, the threat of libel suit and use of statements like “maximum damages permitted by law” seem to me to be attempts at intimidation. Also, as someone who has looked quite a lot into libel law (having been threatened with libel suits many times over the years), describing the legal case as “unambiguous” seems inaccurate and a further attempt at intimidation.
My guess is Ben’s sources have also received dozens of calls (as have I received many in the last few hours), and I wouldn’t be surprised to hear that Emerson called up my board, or would otherwise try to find some other piece of leverage against Lightcone, Ben, or Ben’s sources if he had more time.
While I am not that worried about Emerson, I think many other people are in a much more vulnerable position and I can really resonate with not wanting to give someone an opportunity to gather their forces (and in that case I think it’s reasonable to force the conflict out in the open, which is far from an ideal arena, but does provide protection against many types of threats and adversarial action).
Separately, the time investment for things like this is really quite enormous and I have found it extremely hard to do work of this type in parallel to other kinds of work, especially towards the end of a project like this, when the information is ready for sharing, and lots of people have strong opinions and try to pressure you in various ways. Delaying by “just a week” probably translates into roughly 40 hours of productive time lost, even if there isn’t much to do, because it’s so hard to focus on other things. That’s just a lot of additional time, and so it’s not actually a very cheap ask.
Lastly, I have also found that the standard way that abuse in the extended EA community has been successfully prevented from being discovered is by forcing everyone who wants to publicize or share any information about it to jump through a large number of hoops. Calls for “just wait a week” and “just run your posts by the party you are criticizing” might sound reasonable in isolation, but very quickly multiply the cost of any information sharing, and have huge chilling effects that prevent the publishing of most information and accusations. Asking people to just keep doing more due diligence is easy, and it successfully keeps most people away from doing investigations like this.
As I have written about before, I myself ended up being intimidated by this for the case of FTX and chose not to share my concerns about FTX more widely, which I continue to consider one of the worst mistakes of my career.
My current guess is that if it is indeed the case that Emerson and Kat have clear proof that a lot of the information in this post is false, then I think they should share that information publicly. Maybe on their own blog, or maybe here on LessWrong or on the EA Forum. It is also the case that rumors about people having had very bad experiences working with Nonlinear are already circulating around the community and this is already having a large effect on Nonlinear, and as such, being able to have clear false accusations to respond against should help them clear their name, if they are indeed false.
I agree that this kind of post can be costly, and I don’t want to ignore the potential costs of false accusations, but at least to me it seems like I want an equilibrium of substantially more information sharing, and to put more trust in people’s ability to update their models of what is going on, and less paternalistic “people are incapable of updating if we present proof that the accusations are false”, especially given what happened with FTX and the costs we have observed from failing to share observations like this.
A final point that feels a bit harder to communicate is that in my experience, some people are just really good at manipulation, throwing you off-balance, and distorting your view of reality, and this is a strong reason to not commit to run everything by the people you are sharing information on. A common theme that I remember hearing from people who had concerns about SBF is that people intended to warn other people, or share information, then they talked to SBF, and somehow during that conversation he disarmed them, without really responding to the essence of their concerns. This can take the form of threats and intimidation, or the form of just being really charismatic and making you forget what your concerns were, or more deeply ripping away your grounding and making you think that your concerns aren’t real, and that actually everyone is doing the thing that seems wrong to you, and you are going to out yourself as naive and gullible by sharing your perspective.
[Edit: The closest post we have to setting norms on when to share information with orgs you are criticizing is Jeff Kauffman’s post on the matter. While I don’t fully agree with the reasoning within it, in there he says:
Sometimes orgs will respond with requests for changes, or try to engage you in private back-and forth. While you’re welcome to make edits in response to what you learn from them, you don’t have an obligation to: it’s fine to just say “I’m planning to publish this as-is, and I’d be happy to discuss your concerns publicly in the comments.”
[EDIT: I’m not advocating this for cases where you’re worried that the org will retaliate or otherwise behave badly if you give them advance warning, or for cases where you’ve had a bad experience with an org and don’t want any further interaction. For example, I expect Curzi didn’t give Leverage an opportunity to prepare a response to My Experience with Leverage Research, and that’s fine.]
This case seems to me to be fairly clearly covered by the second paragraph, and also, Nonlinear’s response to “I am happy to discuss your concerns publicly in the comments” was to respond with “I will sue you if you publish these concerns”, to which IMO the reasonable response is to just go ahead and publish before things escalate further. Separately, my sense is Ben’s sources really didn’t want any further interaction and really preferred having this over with, which I resonate with, and is also explicitly covered by Jeff’s post.
So in as much as you are trying to enforce some kind of existing norm that demands running posts like this by the org, I don’t think that norm currently has widespread buy-in, as the most popular and widely-quoted post on the topic does not demand that standard (I separately think the post is still slightly too much in favor of running posts by the organizations they are criticizing, but that’s for a different debate).]
Looking forward to how it plays out! LessWrong made the intentional decision to not do it, because I thought posts are too large and have too many claims and agreement/disagreement didn’t really have much natural grounding any more, but we’ll see how it goes. I am glad to have two similar forums so we can see experiments like this play out.