Hi, I am a physicist, effective altruist, and AI safety student/researcher/organiser.
I think all your specific points are correct, and I also think you totally miss the point of the post.
You say you thought about these things a lot. Maybe lots of core EAs have thought about these things a lot. But what core EAs have considered or not is completely opaque to us. Not so much because of secrecy, but because opaqueness is the natural state of things. So lots of non-core EAs are frustrated about lots of things. We don't know how our community is run or why.
On top of that, there are actually consequences for complaining or disagreeing too much. Funders like to give money to people who think like them. Again, this requires no explanation. This is just the natural state of things.
So as non-core EAs, we notice things that seem wrong, and we're afraid to speak up against them, and it sucks. That's what this post is about.
And of course it's naive and shallow and not adding much for anyone who has already thought about this for years. For the authors this is the start of the conversation, because they, and most of the rest of us, were not invited to all the previous conversations like this.
I don't agree with everything in the post. Lots of the suggestions seem nonsensical in the ways you point out. But I agree with the notion of "can we please talk about this". Even just to acknowledge that some of these problems do exist.
It's much easier to build support for a solution if there is common knowledge that the problem exists. When I started organising in the field of AI Safety, I was focusing on solving problems that weren't on the map for most people. This caused lots of misunderstandings, which made it harder to get funded.
I’ll generically note that if you want to make a project happen, coming up with the idea for the project is usually a tiny fraction of the effort required. Also, very few projects have been made to happen by someone having the idea for them and writing it up, and then some other person stumbling across that writeup and deciding to do the project.
This is correct. But projects have happened because there was widespread knowledge of a specific problem, and someone else then decided to design their own project to solve that problem.
This is why it is valuable to have an open conversation to create a shared understanding of which problems EA should currently focus on. This includes discussions about cause prioritisation, but also discussions about meta/community issues.
In that spirit, I want to point out that it seems to me that core EAs have no understanding of what things look like (what information is available, etc.) from a non-core EA perspective, and vice versa.
The fact that talking to you can affect funding decisions is bad.
You don’t seem to understand how important funding decisions are to community members, which is baffling given your role. Or you do understand and that’s why this information is not public, which is deceptive, and also very bad.
Reporting interpersonal conflicts almost always makes everyone involved look bad, at least a bit. I don't feel safe talking to someone who is also an evaluator. This is really basic!
The Nonlinear team should have gotten their replies up sooner, even if in pieces. In the court of public opinion, time/speed matters. Muzzling up and taking ~3 months to release their side of the story comes across as too polished and buttoned up.
Strong disagree.
A) Sure, all else equal, speed would have been better. But take the hypothesis that NL is mostly innocent as true for a moment: getting such a post written about you must be absolutely terrible. If it were me, I'd probably not be in good shape to write anything in response very quickly.
B) Taking their time to write one long, thorough rebuttal is probably better for everyone involved than several rushed responses. I think this reduces the total time that I and every other concerned observer will spend on this drama.
[Epistemic status: I’m writing this in the spirit of blurting things out. I think I’m pointing to something real, but I may be wrong about all the details.]
lack of social incentive to blurt things out when you’re worried you might be wrong;
lack of social incentive to build up your own inside-view model (especially one that disagrees with all the popular views among elite EAs);
You are correct that there is an incentive problem here. But the problem is not just a lack of incentive; there is an actual incentive to fall in line.
Because funding is very centralised in EA, there are strong incentives to agree with the people who control the money. The funders are obviously smarter than selecting only "yes"-sayers, but they are also humans with emotions and limited time. There are types of ideas, projects, and criticism that don't appeal to them. This is not meant as criticism of individuals but as criticism of the structure, because given the structure I don't see how things could be otherwise.
This shapes the community in two major ways.
People who don't fit the mould of what the funders like don't get funded.
People are self-censoring in order to fit what they think the mould is.
I think the only way out of this is to have less centralised funding. Some steps that may help:
Close the EA Funds. Specifically, don’t collect decentralised funding into centralised funds.
Encourage more people to earn-to-give and encourage all earning-to-givers to make their own funding decisions.
Maybe set up some infrastructure to help funders find projects? Maybe EA Funds could be replaced by some type of EA GoFundMe platform? I'm not sure what would be the best solution. But if I were to build something like this, I would start by talking to earning-to-givers about what would appeal to them.
Ironically, the FTX fund actually got this right: their regranting program was explicitly designed to decentralise funding decisions.
I misunderstood the order of events, which does change the story in important ways. The way OpenPhil handled this is not ideal for encouraging other funders, but there were no broken promises.
I apologise and I will try to be more careful in the future.
One reason I was too quick on this is that I am concerned about the dynamics that come with having a single overwhelmingly dominant donor in AI Safety (and other EA cause areas), which I don't think is healthy for the field. But this situation is not OpenPhil's fault.
Below is the story from someone who was involved. They have asked to stay anonymous; please respect this.
The short version of the story is: (1) we applied to OP for funding, (2) in late 2022/early 2023 we were in active discussions with them, (3) at some point, we received 200k USD via the SFF speculator grants, (4) then OP got back confirming that they would fund us with the amount for the "lower end" budget scenario minus those 200k.
My rough sense is similar to what e.g. Oli describes in the comments. It's roughly understandable to me that they didn't want to give the full amount they would have been willing to fund without other funding coming in. At the same time, it continues to feel pretty off to me that they let the SFF speculator grant replace their funding 1:1, without even talking to SFF at all, since this means that OP got to spend a counterfactual 200k on other things they liked, but SFF did not get to spend additional funding on things they consider high priority.
One thing I regret on my end, in retrospect, is not pushing harder on this, including clarifying to OP that the SFF funding we received was partially uncoined, i.e. it wasn’t restricted to funding only the specific program that OP gave us (coined) funding for. But, importantly, I don’t think I made that sufficiently clear to OP and I can’t claim to know what they would have done if I had pushed for that more confidently.
I wrote this in response to Ben’s post
Thanks for writing this post.
I'd heard enough bad stuff about Nonlinear from before that I was seriously concerned about them. But I did not know what to do, especially since part of their bad reputation is about attacking critics, and I don't feel well positioned to take that fight.
I’m happy some of these accusations are now out in the open. If it’s all wrong and Nonlinear is blame free, then this is their chance to clear their reputation.
I can’t say that I will withhold judgment until more evidence comes in, since I already made a preliminary judgment even before this post. But I can promise to be open to changing my mind.
I have now read the above post, some of the comments, and very little of the appendix.
Nonlinear seems to have more evidence on their side than I had expected. I had the impression that the whole situation was very informal, with practically nothing written down. Now it looks like Nonlinear actually has documentation on their side, although I have not actually looked at it. I might do this at some point, but mostly I'm hoping that other impartial observers will do this work for me, and I can just read their summaries in a couple of weeks or so.

This is to say, I'm still keeping my mind open. But given that this superficially looks better than expected, I am updating in favour of Nonlinear's version. I.e., I went from expecting Nonlinear to be in the wrong to being much more unsure.
Some thoughts:
I expect that if I ever get a clear view of what happened here, it will look like one side was mostly telling the truth and the other side was either delusional or straight up lying. But I don't yet know who is telling the truth and who isn't. The reason I think this is that it doesn't look like just escalating misunderstandings. From what I can see, [at least one side is delusional or lying], but also [only one side being delusional or lying] is enough to explain the evidence. I still have enough faith in EA that I think most EAs are mostly sane and honest, which means that [only one side being delusional or lying] is more likely on priors.
I think the important thing now is to find the truth. Any discussions of what lessons to learn should wait.
I expect all of these public accusations to be very painful, and I want to express sympathy for everyone involved. Someone is being falsely accused of terrible things, and I think I can imagine how much this hurts.
I still think it is probably good that this issue gets public scrutiny. Whoever is right, some of these allegations are serious enough that I don't think they should be overlooked, and our community doesn't have some type of closed-doors court to appeal to. If this was not brought out, whoever is in the wrong would probably continue to hurt more people. But I also have a lot of uncertainty on this point.
Even though I expect it to be mostly one-sided (which I could be wrong about, by the way), I also expect that anyone involved in this who is trying to be honest will make mistakes, and will sometimes express things in a somewhat hyperbolic way. That's what happens when you are hurt.
Thanks for posting this. I have lots of thoughts about lots of things that will take longer to think through, so I'll start with one of the easier questions.
Regarding peer review, you suggest:
EAs should place a greater value on scientific rigour
We should use blogposts, Google Docs, and similar works as accessible ways of opening discussions and providing preliminary thoughts, but rely on peer-reviewed research when making important decisions, creating educational materials, and communicating to the public
When citing a blogpost, we should be clear about its scope, be careful to not overstate its claims, and not cite it as if it is comparable to a piece of peer-reviewed research
Have you had any interaction with the academic peer review system? Have you seen some of the stuff that passes through peer review? I'm in favour of scientific rigour, but I don't think peer review solves that. In reality, my impression is that academia relies as much on name recognition and informal consensus mechanisms as we (the blogpost community) do. The only reason academia has higher standards (in some fields) is that these fields are older and have developed a consensus around which methods are good enough and which are not.
I think peer review has the potential to be good. My impression is that it really does work in math, and that this has a lot to do with reviewers receiving recognition there. But in many other fields it mainly serves to slow down research publication, and to gatekeep papers that are not written in the right format or are not sufficiently interesting.
I did a PhD in theoretical physics, and I was not impressed by the peer review responses I got on my paper. They were almost always very shallow comments, which is not surprising given that, at least in that corner of physics, peer review was unpaid and unrecognised work.
EA institutions should commission peer-reviewed research far more often, and be very cautious of basing decisions on shallow-dives by non-experts
For important questions, commission a person/team with relevant expertise to do a study and subject it to peer review
For the most important/central questions, commission a structured expert elicitation
Can one just do that? Isn't it very hard to find a person who has the right expertise, and whom you can verify has the right expertise?
EA clearly doesn't know how to handle power dynamics, and until we figure this out, we should avoid creating concentrations of power as much as possible. I say this in full knowledge that avoiding concentration of power is not without cost.
Some examples of broken power dynamics:
Owen Cotton-Barratt's severe mistakes seem to be largely downstream of not understanding power dynamics.
I don't know the reason behind the CEA community health team's lack of understanding of the need to be fully separate from funding considerations, but my best guess is that a lack of understanding of power dynamics is involved.
People not being able to trust that it's ok to post criticism of EA under their own names seems like a breakdown of power relations. For the record, I think the worry is well founded. "It's not good for your career to criticise powerful people" is the default outcome if you don't put in effort to mitigate this, and I don't see such effort.
I have had several interactions with, and observations of, people who are better connected within EA than me, which have left me baffled by how little understanding they have of the experience of being a less connected EA. This keeps happening, but I'm no longer surprised when it does.
A handful of lower-level community organisers have told me in private that their impression of CEA is that they are incompetent and/or unprofessional, but also that they have not spoken up about this because CEA is their sole source of funding.
What to do:
Don't default to trusting CEA, 80k, and other central orgs. Most of their power comes from your trust. Treat the word of high-status people the same as the word of any other EA.
Don't donate to EA Funds. We can't democratise billionaire money, for lots of reasons. But we can avoid centralising the money that starts out decentralised. Instead, decide for yourself where to donate, or donate to your local or national EA group, or join the donation lottery, or delegate your decision to someone you trust personally (not based on community standing).
I'm not accusing specific people of specific things. My current best model is that everything we see is what naturally happens when power is centralised. This is not about specific people; this is systemic. For example, it's not the fault of the central orgs that too many people defer to them too much; that's on the rest of us.
I’m also not saying that no specific person is blameworthy. I’m just not getting into that discussion at all.
I want to add this regarding confidentiality. This is a quote from an email from Julia Wise:
People do ask me for my impressions of people they’re considering funding, so if someone does want to give their unvarnished thoughts/complaints to someone at CEA, it’s hard for that to stay separate from what funders hear.
This is from May 2021, so this may have changed.
Registering predictions:
1) You will hear about 10-50 EA projects looking for funding over the next 2 months (80%).
2) >70% of these projects will not be registered tax-deductible charities (but might be able to get fiscal sponsorship). (80%)
Becoming a registered charity is a lot of work. It would be interesting for someone to look into when it is and isn't worth the time investment.
I recently had a conversation with a local EA community builder. Like many local community builders, they got their funding from CEA. They told me that their continued funding was conditioned on scoring high on the metric of how many people they directed towards longtermist career paths.
If this is in fact how CEA operates, then I think this is bad, because of the reasons described in this post. Even though I'm in AI Safety, I value EA being about more than X-risk prevention.
For me personally, the core of Effective Altruism is “it’s not about you”. Everything else follows from there.
This is very much in contrast to other cultures of altruism I have encountered, which focus very much on the mental state of the giver. When you stop questioning whether you are pure and have the right motives, etc., and just focus on results, that's when you get EA.
But also, don't be 100% altruistic. Some of your efforts should be about you. If you only take care of yourself for instrumental reasons, you will systematically underinvest in yourself. So be just genuinely egoistic with some parts of your effort, where "be egoistic" just means "do whatever you want".
Isn’t that the opposite of what Nate said?
Nate says that he had some evidence that Sam was good in some way (good intent) and some evidence that Sam was bad in some ways (bad means). The correct conclusion in this case (probably?) is that Sam was part good, part bad. But Nate mistakenly thought of this as some chance Sam was totally good and some chance totally bad.
I'm not saying that what you (Duncan) point to isn't a real mistake that some people make. But I don't see it in this case.
From the linked report:
We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage.
Here’s a story I recently heard from someone I trust:
An AI Safety project got their grant application approved by OpenPhil, but still had more room for funding. After OpenPhil promised them a grant, but before it was paid out, this same project also got a promise of funding from the Survival and Flourishing Fund (SFF). When OpenPhil found out about this, they reduced the amount of money they would pay to this project by the exact amount that the project was promised by SFF, rendering the SFF grant meaningless.
I don’t think this is ok behaviour, and definitely not what you do to get more funders involved.
Is there some context I'm missing here? Or has there been some misunderstanding? Or is this as bad as it looks?
I'm not going to name either the source or the project publicly (they can name themselves if they want to), since I don't want to get anyone else into trouble, or risk their chances of getting OpenPhil funding. I also want to make clear that I'm writing this on my own initiative.
There is probably some more delicate way I could have handled this, but anything more complicated than writing this comment would probably have ended up with me not taking action at all, and I think this sort of thing is worth calling out.
Edit: I’ve partly misunderstood what happened. See comment below for clarification. My apologies.
For what it’s worth, I think it was good that Thomas brought this up so that we could respond.
How does the conflictedness compare to the conflictedness (if any) you would feel if you were a business performing services for Meta?
To me, selling services to a bad actor feels significantly more immoral than receiving their donation, since selling a service to them is much more directly helpful to them.
(This is not a comment on how bad Meta is. I do not have an informed opinion on this.)
What type of funding opportunities related to AI Safety would OpenPhil want to see more of?
Anything else you can tell me about the funding situation with regards to AI Safety? I'm very confused about why more people and projects don't get funded. Is it because there is not enough money, or is there some bottleneck related to evaluation and/or trust?
There are benefits to having this discussion in public, regardless of how responsive OpenPhil staff are.
By posting this publicly, I already found out that they did the same to Neel Nanda. Neel thought that in his case this was "extremely reasonable". I'm not sure why, and I've just asked some follow-up questions.
I gather from your response that you think 45% is a good response record, but that depends on how you look at it. In the reference class of major grantmakers it's not bad, and I don't think OpenPhil is doing something wrong by not responding to more emails; they have other important work to do. But I also have other important work to do. I'm also not doing anything wrong by not spending extra time figuring out which staff member to contact and sending a private email which, according to your data, has a 55% chance of ending up ignored.
Why does the founder, Remmelt Ellen, keep posting things described as “content-free stream of consciousness”, “the entire scientific community would probably consider this writing to be crankery”, or so obviously flawed it gets −46 karma? This seems like a concern especially given the philosophical/conceptual focus of AISC projects, and the historical difficulty in choosing useful AI alignment directions without empirical grounding.
I see your concern.
Remmelt and I have different beliefs about AI risk, which is why the last AISC was split into two streams. Each of us is allowed to independently accept projects into our own stream.
Remmelt believes that AGI alignment is impossible, i.e. there is no way to make AGI safe. Exactly why Remmelt believes this is complicated, and something I myself am still trying to understand; however, this is actually not very important for AISC.
The consequence of this for AISC is that Remmelt is only interested in projects that aim to stop AI progress.
I still think that alignment is probably technically possible, but I'm not sure. I also believe that even if alignment is possible, we need more time to solve it. I therefore see projects that aim to stop or slow down AI progress as good, as long as there are no overly large adverse side-effects, and I'm happy to have Remmelt and the projects in his stream as part of AISC. Not to mention that Remmelt and I work really well together, despite our different beliefs.
If you check our website, you'll also notice that most of the projects are in my stream. I've been accepting any project as long as there is a reasonable plan, there is a theory of change under some reasonable and self-consistent assumptions, and the downside risk is not too large.
I've bounced around a lot in AI safety, trying out different ideas, and started more research projects than I finished, which has given me a wide view of different perspectives. I've updated many times in many directions, which has left me with wide uncertainty as to which perspective is correct. This is reflected in which projects I accept to AISC. I believe in a "let's try everything" approach.
At this point, someone might think: if AISC is not filtering the projects more than just "seems worth a try", then how does AISC make sure not to waste participants' time on bad projects?
Our participants are adults, and we treat them as such. We do our best to present what AISC is, and what to expect, and then let people decide for themselves if it seems like something worth their time.
We also require research leads to do the same. I.e. the project plan has to provide enough information for potential participants to judge if this is something they want to join.
I believe there is a significant chance that the solution to alignment is something no one has thought of yet. I also believe that the only way to do intellectual exploration is to let people follow their own ideas, and avoid top-down curation.
The only thing I filter hard for in my stream is that the research lead actually needs to have a theory of change. They need to have actually thought about AI risk, and about why their plan could make a difference. I have had this conversation with every research lead in my stream.
We had one person in the last AISC who said that they regretted joining, because they could have learned more from spending that time on other things. I take that feedback seriously. But on the other hand, I regularly meet alumni who tell me how useful AISC was for them, which convinces me that AISC is clearly very net positive.
However, if we were not understaffed (due to being underfunded), we could do more to support the research leads in making better projects.
Reading this post is very uncomfortable, in an uncanny-valley sort of way. A lot of what is said is true and needs to be said, but the overall feeling of the post is off.
I think most of the problem comes from blurring the line between how EA functions in practice for people who are close to the money, and how it functions for the rest of us.
Like, sure, EA is a do-ocracy, and I can do whatever I want, and no one is stopping me. But also, every local community organiser I talk to says that CEA is controlling and that their funding comes with lots of strings attached. Which I guess is ok, since it's their money. No one is stopping anyone from getting their own funding and doing their own thing.
Except for the fact that 80k (and other thought leaders? I'm not sure who works where) have told the community for years that funding is solved and no one else should worry about giving to EA, which has stifled all alternative funding in the community.