I think there should be an Oxford group whose audience is the people in EA orgs, with activities to improve happiness, productivity, and the attractiveness of these workplaces. That goal is quite different from trying to grow a community of students. On this front, I’ve been spending time finding group housing near the new office. It would also be good to have short-term housing for visitors, as well as dinners and fun activities on a Friday night. In principle, the range of activities that could be helped by proximity to the Oxford orgs is extremely large, but things that interact more closely with the orgs, like grant recommendations or recruitment (to pick a couple of arbitrary examples), would have to be worked out beforehand.
I’m not involved with either of these funds, but here are three projects I really want to see happen:
More recruiting for EA orgs: FHI wants to grow a bunch and could benefit from having more great researchers referred. The same is probably true for other orgs.
Targeted outreach using social media advertisements: EA is currently doing little outreach for fear of dilution, and is thereby forgoing many of the benefits of our surplus of funds and ideas. Maybe we could do more outreach in a way that doesn’t bring about dilution, such as by advertising intellectual content in a way that’s filtered to just intellectual audiences.
EA Oxford community: There are ~45 employees at FHI/GPI/Forethought/CEA-UK, but almost all of the community activities are run by and directed at students.
I think the main answer is that advice for mid- and late-career people is harder to provide. But we can improvise by leveraging the existing research:
Could you land any of the positions on 80,000 Hours’ jobs board?
Could you switch to working on a high-priority area in general?
What are the main skills gained from your career? Are these needed by any of the organizations on the jobs board? Are they needed for starting any new organizations?
Isn’t Matt in HK?
It would be really useful if this were split up into separate comments that could be upvoted/downvoted separately.
It’s a bit surprising to me that you’d want to send all four volumes.
This is a strong set of grants, much stronger than the EA community would’ve been able to assemble a couple of years ago, which is great to see.
When will you be accepting further applications and making more grants?
In keeping with our ethos, we want to collaborate with other EA projects as much as possible. The Hub presently connects with the EA Forum, EA Work Club, PriorityWiki, EA Donation Swap and Effective Thesis.
I’m not sure much integration would be required, but did you consider linking the 80k jobs board? This seems like a really useful recent EA tool that could fit in quite well.
I agree that registering for organ donation after death helps and does no direct harm. But I think we need to have a high bar for including an activity in the typical cache of activities that EAs promote to others. We want the act to be similar to other acts that have near-maximal impact. Donation fits that bill because once you start donating anywhere, you can switch to other donation targets that have a big long-term impact.
For organ donation, though, I don’t think it really gives you ideas about anything that can be done that has real long-term significance. If you go down the organ-donation vertical, you might end up with kidney donation, or with extreme ideas about self-sacrifice. This kind of ideology is really catchy: it brought Zell Kravinsky mild fame, and was a main subject of the book Strangers Drowning. But I don’t think that’s the main way that long-run good is done. I think doing long-run good mostly requires a more analytical or startup mindset. If you do things like live kidney donation, I actually think you might do less good than by working the week of your operation and donating some of that week’s earnings to a top longtermist charity.
I get that my claim is that the second-order effects outweigh the first-order ones here, but I don’t think that should be so surprising in the context of EA outreach. We need to craft an overall package that gets people to do some good in the short run but, most importantly, builds up a productivity mindset and gets people to do a lot of good over the longer term.
I hear that more people do cold outreach about being a researcher than about being an RA, and my guess is that 3–10x more people apply for researcher jobs than RA jobs even when both are advertised. I think it’s a combination of those two factors.
My recommendation would be that people apply more to RA jobs that are advertised, and also reach out to make opportunities for themselves when they are not.
I think about half of researchers can use research assistants, whether or not they are currently hiring for one. A major reason researchers don’t make research assistant positions available is that they don’t expect to find a candidate worth hiring, and so don’t want to incur the administrative burden. Or maybe they don’t feel comfortable asking their bosses for this. But if you are a strong candidate, reaching out cold may result in your being hired, or may trigger a hiring round for that position. That said, the strong candidates would often be people I have met at an EA conference, who got far in an internship application, or who have been referred to me.
I don’t think the salaries would be any lower than competitive rates.
This is an uncharitable reading of my comment in many ways.
First, you suggest that I am worried that you want to recruit people not currently doing direct work. All else being equal, of course I would prefer to recruit people with fewer alternatives. But all else is not equal. If you use people you know for the initial assessments, you will iron out bugs in the process much more quickly. In the testing stages, it’s best to have high-quality workers who can perceive and rectify problems, so this is a good use of time for smart, trusted friends, especially since it lets you postpone the recruitment step.
Second, you suggest that I am in the dark about the importance of consensus-building. But this assumes that I believe the only use for consultation is to reach agreement. Rather, by talking to the groups working in related spaces, like BERI, Brendon, EA Grants, EA Funds, and donors, you will of course learn some things, and your beliefs will probably converge somewhat. In aggregate, your process will improve. But you will also build relationships that will help you to share proposals (and, in my opinion, funders).
Third, you raise the issue of connecting funding with evaluation. The distortionary effect is significant, of course. But I happen to think the effect of creating an incentive for applicants to apply is larger and more important, so funders should be highly engaged. There are also many ways you could have funders be moderately engaged: you could ask what kind of report would help them decide whether to fund something, or check which projects they are more likely to fund.
The more strategic issue is as follows. Consensus is hard to reach. But a funding platform is a good that scales with the size of the network of applicants (and, in my opinion, funders); it is somewhat of a natural monopoly (although we want there to be at least a few funders). You eventually want widespread community support of some form. As you suggest, that means we need some compromise, but I think it also weighs in favour of more consultation, and in favour of a more experimental approach in which projects are started in a simple form.
I’m a big fan of the idea of having a new EA projects evaluation pipeline. Since I view this as an important idea, I think it’s important to get the plan to the strongest point that it can be. From my perspective, there are only a small number of essential elements for this sort of plan: a submissions form, a detailed RFP, some funders, and some evaluators. We don’t yet have all of these (e.g. detail on desired projects, consultation with funders). So I’m confused that some other things are emphasised instead: large initial scale, a process for recruiting volunteer evaluators, and fairly rigid evaluation procedures. I think the fundamentals of the idea are strong enough that this still has a chance of working, but I’d much prefer to see the idea advanced in its strongest possible form. My previous comments on this draft are pretty similar to Oliver’s; here are some of the main ones:
This makes sense to me as an overall idea. I think this is the sort of project where if you do it badly, it might dissuade others from trying the same. So I think it is worth getting some feedback on this from other evaluators (BERI/Brendon Wong). It would also probably be useful to get feedback from 1-2 funders (maybe Matt Wage? Maybe someone from OpenPhil?), so that you can get some information about whether they think your evaluation process would be of interest to them, or what might make it so. It could also be useful to have unofficial advisors.
I predict the process could be refined significantly with ~3 projects.
You only need a couple of volunteers and you know perhaps half of the best candidates, so for the purpose of a pilot, did you consider just asking a couple of people you know to do it?
I think you should provide a ~800-word request for proposals. Then you can give a much more detailed description of who you want to apply. E.g. just longtermist projects? How does this differ from the scope of EA Grants, BERI, OpenPhil, etc.? Is it sufficient to apply with just an idea? Do you need a team? A proof of concept? This would be strengthened somewhat by already having obtained the evaluators, but that may not be important.
I was influenced at that time by people like Matt Fallshaw and Ben Toner, who thought that for sufficiently good intellectual work, funding would be forthcoming. It seemed like insights were mostly what was needed to reduce existential risks...
I thought that more technical skills were rarer, were neglected in some parts of academia (e.g. in history), and were the main thing holding me back from being able to understand papers about emerging technologies… Also, I asked Carl S, and he thought that if I was to go into research, these would be the best skills to get. Nowadays, one could ask many more people.
I don’t think this idea was originally mine, but it would go a long way just to have two pie charts: the current distribution of careers in EA, and the optimal distribution.
Ryan/Tegan: Did you get your “something like thirty times lower” estimate from any particular research organization(s)?
This is an order-of-magnitude estimate based on experience at various orgs. I’ve asked to be a research assistant for various top researchers, and generally I’m the only person asking at that time. I’ve rarely heard from researchers that someone has asked to be their research assistant. Some of this is because RA job descriptions are less common, but I would guess the effect persists even when RA positions are advertised.
Cover letters to core EA orgs from EAs generally indicate interest in EA. It’s sometimes also indicated by involvement in EA groups, through a CV, by referral sources, and by interviews. You can pretty reliably tell.
Hundreds of EA applicants? Most EA org roles don’t have that… I’ve been in or around MIRI, Ought, FHI, and many other EA orgs. It’s common to have about a hundred applicants for a role (research or ops), and the number of EA applicants is usually in the tens.
That would honestly be my guess. Some people would call this cynical, but I think the skills you can impart in 4 days, or even in a very long ~5-week camp, are pretty limited compared to the variation in people’s innate dispositions and the experience they have gained over their whole lives beforehand.