Hi, I am a physicist, effective altruist, and AI safety student/researcher/organiser.
Resume – Linda Linsefors (Google Docs)
Linda Linsefors
I’ve asked for more information and will share what I find, as long as I have permission to do so.
Given the order of events, and the fact that you did not have use for more money, this does indeed seem reasonable. Thanks for the clarification.
There are benefits to having this discussion in public, regardless of how responsive OpenPhil staff are.
By posting this publicly I already found out that they did the same to Neel Nanda. Neel thought that in his case this was “extremely reasonable”. I’m not sure why, and I’ve just asked some follow-up questions.
I get from your response that you think 45% is a good response record, but that depends on how you look at it. In the reference class of major grantmakers it’s not bad, and I don’t think OpenPhil is doing something wrong by not responding to more emails. They have other important work to do. But I also have other important work to do. I’m also not doing anything wrong by not spending extra time figuring out who on their staff to contact and sending a private email which, according to your data, has a 55% chance of ending up ignored.
Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn’t know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X − y, which I thought was extremely reasonable.
Thanks for sharing.
What did the other grantmaker (the one who gave you y) think of this?
Were they aware of your OpenPhil grant when they offered you funding?
Did OpenPhil roll back your grant because you did not have use for more than X, or for some other reason?
I have a feature removal suggestion.
Can the notification menu please go back to being like LW?
The LW version (which EA Forum used to have too) is more compact, which gives a better overview. I also prefer when karma and notifications are separate. I don’t want to see karma updates in my notification dropdown.
From the linked report:
We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage.
Here’s a story I recently heard from someone I trust:
An AI Safety project got their grant application approved by OpenPhil, but still had more room for funding. After OpenPhil promised them a grant, but before it was paid out, this same project also got a promise of funding from the Survival and Flourishing Fund (SFF). When OpenPhil found out about this, they rolled back the amount of money they would pay to this project, by the exact amount that this project was promised by SFF, rendering the SFF grant meaningless.
I don’t think this is ok behaviour, and definitely not what you do to get more funders involved.
Is there some context I’m missing here? Or has there been some misunderstanding? Or is this as bad as it looks?
I’m not going to name either the source or the project publicly (they can name themselves if they want to), since I don’t want to get anyone else into trouble, or risk their chances of getting OpenPhil funding. I also want to make clear that I’m writing this on my own initiative.
There is probably some more delicate way I could have handled this, but anything more complicated than writing this comment would probably have ended up with me not taking action at all, and I think this sort of thing is worth calling out.
Edit: I’ve partly misunderstood what happened. See comment below for clarification. My apologies.
Here are the other career coaching options on the list, in case you want to connect with our colleagues.
That’s awesome!
I will add you to the list right away!
I do think AISF is a real improvement to the field. My apologies for not making this clear enough.
The 80,000 Hours syllabus = “Go read a bunch of textbooks”. This is probably not ideal for a “getting started” guide.
You mean MIRI’s syllabus?
I don’t remember what 80k’s one looked like back in the day, but the one that is up now is not just “Go read a bunch of textbooks”.
I personally used CHAI’s one and found it very useful.
Also, sometimes you should go read a bunch of textbooks. Textbooks are great.
Week 0: Even though it is a theory course, it would likely be useful to have some basic understanding of machine learning, although this would vary depending on the exact content of the course. It might or might not make sense to run a week 0 depending on most people’s backgrounds.
I would recommend having a week 0 with some ML and RL basics.
I did a day 0 ML and RL speed run at the start of two of my AI Safety workshops at the EA Hotel in 2019. Were you there for that? It might have been recorded, but I have no idea where it might have ended up. Although obviously some things have happened since then.

Week 1 & 2: I’d assume that the participants have at least a basic understanding of inner vs outer alignment, deceptive alignment, instrumental convergence, orthogonality thesis, why we’re concerned about powerful optimisers, value lock-in, recursive self-improvement, slow vs. fast take-off, superintelligence, transformative AI, wireheading, though I could quite easily create a document that defines all of these terms.
Seems very worth creating. Depending on people’s backgrounds, some will have an understanding of these concepts without knowing the terminology. A document explaining each term, with a “read more” link to some useful post, would be great. Both for people to know if they have the prerequisites, and to help anyone who almost has the prerequisites find that one blog post they (them specifically) should read to be able to follow the course.
I was surprised to read this:
In 2020, the going advice for how to learn about AI Safety for the first time was:
Read everything on the alignment forum. [...]
Speak to AI safety researchers. [...]
MIRI, CHAI and 80k have all had public reading guides since at least 2017, when I started studying AI Safety. So it seems like at least part of the problem was that these were not well known enough? Which, by the way, is now a problem for the AI Safety Fundamentals curriculum. When I was giving career advice, most people I talked to didn’t know that the curriculum is publicly available for self-study.
Despite the existence of these older resources, I still think AI Safety Fundamentals is great.
I’m updating the AI Safety Support—Lots of Links page, and came across this post when following trails of potentially useful links.
Are you still doing coaching, and if “yes” do you want to be listed on the lots of links page?
For what it’s worth, I think it was good that Thomas brought this up so that we could respond.
I’m guessing that what Marius means by “AISC is probably about ~50x cheaper than MATS” is that AISC is probably ~50x cheaper per participant than MATS.
Our cost per participant is $0.6k–$3k USD
50 times this would be $30k–$150k per participant.
I’m guessing that MATS is around 50k per person (including stipends).
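To make the comparison concrete, here is a minimal sketch of the per-participant arithmetic, using the figures above (AISC's own $0.6k–$3k range, and my guessed ~$50k for MATS; neither is an official number):

```python
# Rough per-participant cost comparison; all figures are estimates
# from this thread, not official numbers.
aisc_cost_per_participant = [600, 3_000]  # USD, AISC's own range
mats_cost_per_participant = 50_000        # USD, my guess, including stipends

for aisc_cost in aisc_cost_per_participant:
    ratio = mats_cost_per_participant / aisc_cost
    print(f"MATS / AISC cost ratio: {ratio:.0f}x")
# Prints roughly 83x and 17x, which brackets the ~50x claim.
```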
Here’s where the $12k–$30k USD comes from (the arithmetic is reproduced in the sketch below):

Dollar cost per new researcher produced by AISC
The organizers have proposed $60–300K per year in expenses.
The number of non-RL participants in the program has increased from 32 (AISC4) to 130 (AISC9). Let’s assume roughly 100 participants in the program per year given the proposed size of new camps.
Researchers are produced at a rate of 5–10%.
Optimistic estimate: $60K / (10% * 100) = $6K per new researcher
Middle estimate 1: $60K / (5% * 100) = $12K per new researcher
Middle estimate 2: $300K / (10% * 100) = $30K per new researcher
Pessimistic estimate: $300K / (5% * 100) = $60K per new researcher
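For clarity, here is a minimal sketch reproducing these four estimates; the inputs are the assumptions quoted above (proposed budget, roughly 100 participants per year, 5–10% conversion), not official AISC figures:

```python
# Cost per new researcher = yearly budget / (conversion rate * participants).
yearly_budget_usd = [60_000, 300_000]  # proposed yearly expenses
participants_per_year = 100            # rough guess given proposed camp size
conversion_rates = [0.10, 0.05]        # fraction who become researchers

for budget in yearly_budget_usd:
    for rate in conversion_rates:
        cost = budget / (rate * participants_per_year)
        print(f"${budget // 1000}K budget, {rate:.0%} rate"
              f" -> ${cost / 1000:.0f}K per new researcher")
# Prints $6K, $12K, $30K and $60K, matching the four estimates above.
```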
5. Overall, I think AISC is less impactful than e.g. MATS even without normalizing for participants. Nevertheless, AISC is probably about ~50x cheaper than MATS. So when taking cost into account, it feels clearly impactful enough to continue the project. I think the resulting projects are lower quality but the people are also more junior, so it feels more like an early educational program than e.g. MATS.
This seems correct to me. MATS is investing a lot in few people. AISC is investing a little in many people.
I also agree with all the other points.
From Lucius Bushnaq:
I was the private donor who gave €5K. My reaction to hearing that AISC was not getting funding was that this seemed insane. The iteration I was in two years ago was fantastic for me, and the research project I got started on there is basically still continuing at Apollo now. Without AISC, I think there’s a good chance I would never have become an AI notkilleveryoneism researcher.
Full comment here: This might be the last AI Safety Camp — LessWrong
Thanks for this comment. To me this highlights how AISC is very much not like MATS. We’re very different programs doing very different things. MATS and AISC are both AI safety upskilling programs, but we are using different resources to help different people with different aspects of their journey.
I can’t say where AISC falls in the talent pipeline model, because that’s not how the world actually works.
AISC participants have obviously heard about AI safety, since they would not have found us otherwise. But other than that, people are all over the place in where they are on their journey, and that’s OK. This is actually more a help than a hindrance for AISC projects. Some people have participated in more than one AISC. One of last year’s research leads is a participant in one of this year’s projects. This doesn’t mean they are moving backwards in their journey; it means they are lending their expertise to a project that could use it.
So, the appropriate counterfactual for MATS and similar programs seems to be, “Junior researchers apply for funding and move to a research hub, hoping that a mentor responds to their emails, while orgs still struggle to scale even with extra cash.”
This seems correct to me for MATS, and even if I disagreed, you should trust Ryan over me. However, this is very much not a correct counterfactual for AISC.
If all MATS’ money instead went to the LTFF to support further independent researchers, I believe that substantially less impact would be generated.
This seems correct. I don’t know exactly what MATS costs, but assuming the majority of the cost is stipends, then giving this money to MATS scholars with all the MATS support seems just straight up better, even with some overhead cost for the organisers.
I’m less sure about how MATS compares to funding researchers in locations with lower costs than the SF Bay Area and London.
I believe the most taut constraint on producing more AIS researchers is generally training/mentorship, not money.
I’m not so sure about this, but if true, then this is an argument for funnelling more money to MATS, AISC, and other upskilling programs.
Some of the researchers who passed through AISC later did MATS. Similarly, several researchers who did MLAB or REMIX later did MATS. It’s often hard to appropriately attribute Shapley value to elements of the pipeline, so I recommend assessing orgs addressing different components of the pipeline by how well they achieve their role, and distributing funds between elements of the pipeline based on how much each is constraining the flow of new talent to later sections (anchored by elasticity to funding). For example, I believe that MATS and AISC should be assessed by their effectiveness (including cost, speedup, and mentor time) at converting “informed talent” (i.e., understands the scope of the problem) into “empowered talent” (i.e., can iterate on solutions and attract funding/get hired).
I agree that it’s hard to attribute value when someone has done more than one program. The way we asked Arb to address this is by just asking people. This will be in their second report; I don’t know the results of this yet either.
I don’t think programs should be evaluated based on how well they achieve their role in the pipeline, since I reject this framework.
This said, MATS aims to advertise better towards established academics and software engineers, which might bypass the pipeline in the diagram above. Side note: I believe that converting “unknown talent” into “informed talent” is generally much cheaper than converting “informed talent” into “empowered talent.”
We already have some established academics and software engineers joining AISC. Being a part-time online program is very helpful for including people who have jobs but would like to try out some AI safety research on the side. This is one of several ways AISC is complementary to MATS, rather than a competitor.
Several MATS mentors (e.g., Neel Nanda) credit the program for helping them develop as research leads. Similarly, several MATS alumni have credited AISC (and SPAR) for helping them develop as research leads, similar to the way some Postdocs or PhDs take on supervisory roles on the way to Professorship. I believe the “carrying capacity” of the AI safety research field is largely bottlenecked on good research leads (i.e., who can scope and lead useful AIS research projects), especially given how many competent software engineers are flooding into AIS. It seems a mistake not to account for this source of impact in this review.
Thanks. This is something I’m very proud of as an organiser. Although I was not an organiser the year Neel Nanda was a mentor, I’ve heard this type of feedback from several of the research leads from the last cohort.
This is another way AISC is not like MATS. AISC has a much lower bar for research leads than MATS has for its mentors, which has several downstream effects on how we organise our programs.
MATS has a small number of well-known, top-talent mentors. This means that for them, mentor time is a very limited resource, and everything else is organised around this constraint.
AISC has a lower bar for our research leads, which means we have many more of them, letting us run a much bigger program. This is how AISC is so scalable. On the other hand, we have some research leads learning by doing along with everyone else, which creates some potential problems. AISC is structured around addressing this, and it seems to be working.
I don’t like this funnel model, or any other funnel model I’ve seen. It’s not wrong exactly, but it misses so much that it’s often more harmful than helpful.
For example:
- If you actually talk to people, their story is not this linear, and that is important.
- The picture makes it look like AISC, MATS, etc. are interchangeable, or just different-quality versions of the same thing. This is very far from the truth.
I don’t have a nice-looking replacement for the funnel. If I had a nice clean model like this, it would probably be just as bad. The real world is just very messy.
We have reached out to them and gotten some donations.
I misunderstood the order of events, which does change the story in important ways. The way OpenPhil handled this is not ideal for encouraging other funders, but there were no broken promises.
I apologise and I will try to be more careful in the future.
One reason I was too quick on this is that I am concerned about the dynamics that come with having a single overwhelmingly dominant donor in AI Safety (and other EA cause areas), which I don’t think is healthy for the field. But this situation is not OpenPhil’s fault.
Below is the story from someone who was involved. They have asked to stay anonymous; please respect this.