I work on various longtermist things, including movement building.
This is indeed my belief about ex ante impact. Thanks for the clarification.
That might achieve the “these might be directly useful” and “produce interesting content” goals, if the reviewers knew how to summarize the books from an EA perspective, how to do epistemic spot checks, and so on, which they probably don’t. It wouldn’t achieve any of the other goals, though.
Here’s a crazy idea. I haven’t run it by any EAIF people yet.
I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)
Someone picks a book they want to review.
Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
They write a review, and send it to me.
If it’s the kind of review I want, I give them $500 in return for them posting the review to EA Forum or LW with a “This post sponsored by the EAIF” banner at the top. (I’d also love to set up an impact purchase thing but that’s probably too complicated).
If I don’t want to give them the money, they can do whatever with the review.
What books are on topic: Anything of interest to people who want to have a massive altruistic impact on the world. More specifically:
Things directly related to traditional EA topics
Things about the world more generally: eg macrohistory, how governments work, The Doomsday Machine, history of science (eg Asimov’s “A Short History of Chemistry”)
I think that books about self-help, productivity, or skill-building (eg management) are dubiously on topic.
I think that these book reviews might be directly useful. There are many topics where I’d love to know the basic EA-relevant takeaways, especially when combined with basic fact-checking.
It might encourage people to practice useful skills, like writing, quickly learning about new topics, and thinking through what topics would be useful to know more about.
I think it would be healthy for EA’s culture. I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff. I think that this might be improved both by people writing these reviews and people reading them.
Conversely, sometimes I worry that rationalists are too interested in thinking about the world by introspection or weird analogies relative to learning many facts about different aspects of the world; I think book reviews would maybe be a healthier way to direct energy towards intellectual development.
It might surface some talented writers and thinkers who weren’t otherwise known to EA.
It might produce good content on the EA Forum and LW that engages intellectually curious people.
Suggested elements of a book review:
One paragraph summary of the book
How compelling you found the book’s thesis, and why
The main takeaways that relate to vastly improving the world, with emphasis on the surprising ones
Optionally, epistemic spot checks
Optionally, “book adversarial collaborations”, where you actually review two different books on the same topic.
I think that “business as usual but with more total capital” leads to way less increased impact than 20%; I am taking into account the fact that we’d need to do crazy new types of spending.
Incidentally, you can’t buy the New York Times on public markets; you’d have to do a private deal with the family who runs it.
Re 1: I think that the funds can maybe disburse more money (though I’m a little more bearish on this than Jonas and Max, I think). But I don’t feel very excited about increasing the amount of stuff we fund by lowering our bar; as I’ve said elsewhere in the AMA, the limiting factor on a grant usually feels to me more like “is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it” than “is this grant good enough to be worth the money”.
I think that the funds’ RFMF is only slightly real—I think that giving to the EAIF has some counterfactual impact but not very much, and the impact comes from slightly weird places. For example, I personally have access to EA funders who are basically always happy to fund things that I want them to fund. So being an EAIF fund manager doesn’t really increase my ability to direct money at promising projects that I run across. (It’s helpful to have the grant logistics people from CEA, though, which makes the EAIF grantmaking experience a bit nicer.) The advantages I get from being an EAIF fund manager are that EAIF seeks applications and so I get to make grants I wouldn’t have otherwise known about, and also that Michelle, Max, and Jonas sometimes provide useful second opinions on grants.
And so I think that if you give to the EAIF, I do slightly more good via grantmaking. But the mechanism is definitely not via me having access to more money.
Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?
I think that it will be easier to increase our grantmaking for things other than supporting EA-aligned researchers with salaries, because this is almost entirely limited by how many strong candidates there are, and it seems hard to increase this directly with active grantmaking. In contrast, I feel more optimistic about doing active grantmaking to encourage retreats for researchers etc.
Do you think increasing available funding wouldn’t help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?
I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
I think that increasing available funding basically won’t help at all with interventions of the types you listed in your post; all of those are limited by factors other than funding.
(Non-longtermist EA is more funding constrained of course—there’s enormous amounts of RFMF in GiveWell charities, and my impression is that farm animal welfare also could absorb a bunch of money.)
Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too). I think that research on effective giving is particularly useless because projects differ widely in their value, and my impression is that effective giving is mostly going to get people to give to relatively bad giving opportunities.
High Impact Athletes is an EAIF grantee that I feel positive about; I am enthusiastic about them not because they might raise funds but because they might be able to get athletes to influence culture in various ways (eg influencing public feelings about animal agriculture). And so I think it makes sense for them to initially focus on fundraising, but that’s not where I expect most of their value to come from.
I am willing to fund orgs that attempt to just do fundraising, if their multiplier on their expenses is pretty good, because marginal money has more than zero value and I’d rather we had twice as much money. But I think that working for such an org is unlikely to be very impactful.
I am planning on checking in with grantees to see how well they’ve done, mostly so that I can learn more about grantmaking and to know if we ought to renew funding.
I normally didn’t make specific forecasts about the outcomes of grants, because operationalization is hard and scary.
I feel vaguely guilty about not trying harder to write down these proxies ahead of time. But empirically I don’t, and my intuitions apparently aren’t that optimistic about working on this. I am not sure why. I think it’s maybe just that operationalization is super hard and I feel like I’m going to have to spend more effort figuring out reasonable proxies than actually thinking about the question of whether this grant will be good, and so I feel drawn to a more “I’ll know it when I see it” approach to evaluating my past grants.
Like Max, I don’t know about such a policy. I’d be very excited to fund promising projects to support the rationality community, eg funding local LessWrong/Astral Codex Ten groups.
Re 1: I don’t think I would have granted more.
Re 2: Mostly “good applicants with good proposals for implementing good project ideas” and “grantmaker capacity to solicit or generate new project ideas”, where the main bottleneck on the second of those isn’t really generating the basic idea but coming up with a more detailed proposal and figuring out who to pitch on it etc.
Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don’t think that low quality applications make my life as a grantmaker much worse; if you’re reading this, please submit your EAIF application rather than worry that it is not worth our time to evaluate.
Re 4: It varies. Mostly it isn’t that the applicant lacks a specific skill.
Re 5: There are a bunch of things that have to align in order for someone to make a good proposal. There has to be a good project idea, and there has to be someone who would be able to make that work, and they have to know about the idea and apply for funding for it, and they need access to whatever other resources they need. Many of these steps can fail. Eg probably there are people who I’d love to fund to do a particular project, but no-one has had the idea for the project, or someone has had the idea for the project but that person hasn’t heard about it or hasn’t decided that it’s promising, or doesn’t want to try it because they don’t have access to some other resource. I think my current guess is that there are good project ideas that exist, and people who’d be good at doing them, and if we can connect the people to the projects and the required resources we could make some great grants, and I hope to spend more of my time doing this in future.
Re your 19 interventions, here are my quick takes on all of them
Creating, scaling, and/or improving EA-aligned research orgs
Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.
Creating, scaling, and/or improving EA-aligned research training programs
I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor others in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, eg by mentoring someone for an hour or two a week on a project that the mentor finds inside-view interesting (and therefore will be actually bought in to helping with). I think that in situations like this, it’s very helpful for mentors to be judged as Andrew Grove suggests: by the output of their organization plus the output of neighboring organizations under their influence. That is, they should treat one of their key goals as having their research interns do things that the mentor actually thinks are useful. I think that not having this goal makes it much more tempting for mentors to kind of snooze on the job and not really try to make the experience useful.
Increasing grantmaking capacity and/or improving grantmaking processes
Yeah this seems good if you can do it, but I don’t think this is that much of the bottleneck on research. It doesn’t take very much time to evaluate a grant for someone to do research compared to how much time it takes to mentor them.
My current unconfident position is that I am very enthusiastic about funding people to do research if they have someone who wants to mentor them and be held somewhat accountable for whether they do anything useful. And so I’d love to get more grant applications from people describing their research proposal and saying who their mentor is; I can make that grant in like two hours (30 mins to talk to the grantee, 30 mins to talk to the mentor, 60 mins overhead). If the grants are for 4 months, then I can spend five hours a week and do all the grantmaking for 40 people. This feels pretty leveraged to me and I am happy to spend that time, and therefore I don’t feel much need to scale this up more.
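The arithmetic above can be sketched out explicitly (all figures are copied from the text; the variable names are just illustrative):

```python
# Re-deriving the mentored-research grantmaking arithmetic from the text above.

hours_per_grant = 0.5 + 0.5 + 1.0    # grantee call + mentor call + overhead = 2 hours
grant_length_weeks = 4 * (52 / 12)   # four-month grants, roughly 17.3 weeks
num_grantees = 40

weekly_load = num_grantees * hours_per_grant / grant_length_weeks
print(round(weekly_load, 1))  # ≈ 4.6 hours/week, i.e. roughly "five hours a week"
```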
I think that grantmaking capacity is more of a bottleneck for things other than research output.
Scaling Effective Thesis, improving it, and/or creating new things sort-of like it
I don’t immediately feel excited by this for longtermist research; I wouldn’t be surprised if it’s good for animal welfare stuff but I’m not qualified to judge. I think that most research areas relevant to longtermism require high context in order to contribute to, and I don’t think that pushing people in the direction of good thesis topics is very likely to produce extremely useful research.
I’m not confident.
Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc.
The post doesn’t seem to exist yet, so I don’t know.
Increasing and/or improving research by non-EAs on high-priority topics
I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can’t think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I’m excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff.
Creating a central, editable database to help people choose and do research projects
I feel pessimistic; I don’t think that this is the bottleneck. I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn’t need this database as much. This database is mostly helpful in the very-little-supervision world, and so doesn’t seem like the key thing to work on.
Using Elicit (an automated research assistant tool) or a similar tool
I feel pessimistic, but maybe Elicit is really amazing. (It seems at least pretty cool to me, but I don’t know how useful it is.) Seems like if it’s amazing we should expect it to be extremely commercially successful; I think I’ll wait to see if I’m hearing people rave about it, and then try it if so.
Forecasting the impact projects will have
I think this is worth doing to some extent, obviously; my guess is that EAs aren’t as into forecasting as they should be (including me, unfortunately). I’d need to know your specific proposal in order to have more specific thoughts.
Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers)
I think that facilitating junior researchers to connect with each other is somewhat good but doesn’t seem as good as having them connect more with senior researchers somehow.
Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
I’m into this. I designed a noticeable fraction of the Triplebyte interview at one point (and delivered it hundreds of times); I wonder whether I should try making up an EA interview.
Increasing and/or improving career advice and/or support with network-building
Seems cool. I think a major bottleneck here is people who are extremely extroverted and have lots of background and are willing to spend a huge amount of time talking to a huge amount of people. I think that the job “spend many hours a day talking to EAs who aren’t as well connected as would be ideal for 30 minutes each, in the hope of answering their questions and connecting them to people and encouraging them” is not as good as what I’m currently doing with my time, but it feels like a tempting alternative.
I am excited for people trying to organize retreats where they invite a mix of highly-connected senior researchers and junior researchers to one place to talk about things. I would be excited to receive grant applications for things like this.
Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers
I’m not sure that this is better than providing funding to people, though it’s worth considering. I’m worried that it has some bad selection effects, where the most promising people are more likely to have money that they can spend living in closer proximity to EA hubs (and are more likely to have other sources of funding) and so the cheapo EA accommodations end up filtering for people who aren’t as promising.
Another way of putting this is that I think it’s kind of unhealthy to have a bunch of people floating around trying unsuccessfully to get into EA research; I’d rather they tried to get funding to try it really hard for a while, and if it doesn’t go well, they have a clean break from the attempt and then try to do one of the many other useful things they could do with their lives, rather than slowly giving up over the course of years and infecting everyone else with despair.
Creating and/or improving relevant educational materials
I’m not sure; seems worth people making some materials, but I’d think that we should mostly be relying on materials not produced by EAs.
Creating, improving, and/or scaling market-like mechanisms for altruism
I am a total sucker for this stuff, and would love to make it happen; I don’t think it’s a very leveraged way of working on increasing the EA-aligned research pipeline though.
Increasing and/or improving the use of relevant online forums
Yeah I’m into this; I think that strong web developers should consider reaching out to LessWrong and saying “hey do you want to hire me to make your site better”.
Increasing the number of EA-aligned aspiring/junior researchers
I think Ben Todd is wrong here. I think that the number of extremely promising junior researchers is totally a bottleneck and we totally have mentorship capacity for them. For example, I have twice run across undergrads at EA Global who I was immediately extremely impressed by and wanted to hire (they both did MIRI internships and have IMO very impactful roles (not at MIRI) now). I think that I would happily spend ten hours a week managing three more of these people, and the bottleneck here is just that I don’t know many new people who are that talented (and to a lesser extent, who want to grow in the ways that align with my interests).
I think that increasing the number of people who are eg top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity. Though I’d still love to have more of these people. I think that I want people who are between 25th and 90th percentile intellectual promisingness among top schools to try first to acquire some specific and useful skill (like programming really well, or doing machine learning, or doing biology literature reviews, or clearly synthesizing disparate and confusing arguments), because they can learn these skills without needing as much mentorship from senior researchers and then they have more of a value proposition to those senior researchers later.
Increasing the amount of funding available for EA-aligned research(ers)
This seems almost entirely useless; I don’t think this would help at all.
Discovering, writing, and/or promoting positive case studies
Seems like a good use of someone’s time.
This was a pretty good list of suggestions. I guess my takeaways from this are:
I care a lot about access to mentorship
I think that people who are willing to talk to lots of new people are a scarce and valuable resource
I think that most of the good that can be done in this space looks a lot more like “do a long schlep” than “implement this one relatively cheap thing, like making a website for a database of projects”.
I feel very unsure about this. I don’t think my position on this question is very well thought through.
Most of the time, the reason I don’t want to make a grant doesn’t feel like “this isn’t worth the money”, it feels like “making this grant would be costly for some other reason”. For example, when someone applies for a salary to spend some time researching some question which I don’t think they’d be very good at researching, I usually don’t want to fund them, but this is mostly because I think it’s unhealthy in various ways for EA to fund people to flail around unsuccessfully rather than because I think that if you multiply the probability of the research panning out by the value of the research, you get an expected amount of good that is worse than longtermism’s last dollar.
I think this question feels less important to me because of the fact that the grants it affects are marginal anyway. I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make. And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways. And coming up with a more consistent answer to “where should the bar be” seems like a worse use of my time than those other activities.
I think I would rather make 30% fewer grants and keep the saved money in a personal account where I could disburse it later.
(To be clear, I am grateful to the people who apply for EAIF funding to do things, including the ones who I don’t think we should fund, or only marginally think we should fund; good on all of you for trying to think through how to do lots of good.)
re 1: I expect to write similarly detailed writeups in future.
re 2: I think that would take a bunch more of my time and not clearly be worth it, so it seems unlikely that I’ll do it by default. (Someone could try to pay me at a high rate to write longer grant reports, if they thought that this was much more valuable than I did.)
re 3: I agree with everyone that there are many pros of writing more detailed grant reports (and these pros are a lot of why I am fine with writing grant reports as long as the ones I wrote). By far the biggest con is that it takes more time. The secondary con is that if I wrote more detailed grant reports, I’d have to be a bit clearer about the advantages and disadvantages of the grants we made, and this would involve me having to be clearer about kind of awkward things (like my detailed thoughts on how promising person X is vs person Y); this would be a pain, because I’d have to try hard to write these sentences in inoffensive ways, which is a lot more time consuming and less fun.
re 4: Yes I think this is a good idea, and I tried to do that a little bit in my writeup about Youtubers; I think I might do it more in future.
I don’t think this has much of an advantage over other related things that I do, like
telling people that they should definitely tell me if they know about projects that they think I should fund, and asking them why
asking people for their thoughts on grant applications that I’ve been given
asking people for ideas for active grantmaking strategies
A question for the fund managers: When the EAIF funds a project, roughly how should credit be allocated between the different involved parties, where the involved parties are:
The donors to the fund
The rest of the EAIF infrastructure (eg Jonas running everything, the CEA ops team handling ops logistics)
Presumably this differs a lot between grants; I’d be interested in some typical figures.
This question is important because you need a sense of these numbers in order to make decisions about which of these parties you should try to be. Eg if the donors get 90% of the credit, then EtG looks 9x better than if they get 10%.
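A toy version of that multiplier argument (the 90%/10% shares are the hypothetical figures from the text, not real estimates):

```python
# If the value you attribute to earning-to-give scales with the donors'
# credit share, the comparison between scenarios is just a ratio of shares.

donor_share_high = 0.90   # scenario where donors get 90% of the credit
donor_share_low = 0.10    # scenario where donors get 10% of the credit

multiplier = donor_share_high / donor_share_low
print(round(multiplier))  # 9, i.e. EtG looks ~9x better in the first scenario
```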
(I’ll provide my own answer later.)
Incidentally, I think that tracking work time is a kind of dangerous thing to do, because it makes it really tempting to make bad decisions that will cause you to work more. This is a lot of why I don’t normally track it.
EDIT: however, it seems so helpful to track it some of the time that I overall strongly recommend doing it for at least a week a year.
I occasionally track my work time for a few weeks at a time; by coincidence I happen to be tracking it at the moment. I used to use Toggl; currently I just track my time in my notebook by noting the time whenever I start and stop working (where by “working” I mean “actively focusing on work stuff”). I am more careful about time tracking my work on my day job (working on longtermist technical research, as an individual contributor and manager) than working on the EAIF and other movement building stuff.
The first four days this week, I did 8h33m, 8h15m, 7h32m, and 7h48m of work on my day job. I think I did about four hours of work on movement building. So that’s an average of about 9 hours a day. Probably four of those hours are deep work on average.
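The daily average quoted above can be re-derived like this (figures copied from the text; the helper is just illustrative):

```python
# Re-deriving the "about 9 hours a day" figure from the logged times.

def to_hours(h, m):
    """Convert an hours-and-minutes log entry to fractional hours."""
    return h + m / 60

day_job = [to_hours(8, 33), to_hours(8, 15), to_hours(7, 32), to_hours(7, 48)]
movement_building_total = 4.0  # roughly four hours across the four days

avg_per_day = (sum(day_job) + movement_building_total) / len(day_job)
print(round(avg_per_day, 1))  # ≈ 9.0 hours/day
```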
My typical schedule is to do movement building stuff first thing in the morning, eg perhaps 7:30am to 8:30am, and then to do my day job between about 8:30am and 7pm, with a 30m break at 10am to hang out with my girlfriend after she’s woken up, and a maybe 40m break for lunch at about 12:10. I occasionally do some calls in the evenings, or respond to people’s messages about work things. (I usually go to bed between 10:10 and 11pm.)
So my efficiency is probably about two thirds, if you include my morning break and lunch break in the denominator, and 75% if you don’t.
I normally work for a couple hours on the weekend, mostly doing calls, and I also usually do some kind of unstructured and unfocused work like walking around and thinking about lots of stuff which sometimes includes my work. So I guess my total work time per week is probably like 47 hours or something.
My efficiency is highest when I wake up unusually early and work uninterrupted for a long time. It’s also much higher when I’m doing tasks that it’s easy to do for a long time. The most obvious example of this is meetings—they require less concentration than e.g. programming, and so if my day includes a lot of meetings, my efficiency looks higher.
Of course, work efficiency is a dangerous thing to optimize—I actually want to optimize the value of my work output, which is related but importantly different. In particular, sometimes I fall into a trap where I spot some task which I can easily spend lots of time on, but which isn’t actually the most valuable. I try hard in this kind of situation to catch myself and ask “what’s actually the most important thing to do right now”.
My efficiency and total work time has usually been somewhat lower in the past. When I worked at MIRI, I would typically get something like 37 hours of work done in a typical week (roughly 2/3 technical work and 1/3 recruiting work). I also had some bad fatigue problems at various points over the last few years; I think I worked more like 20 hours per week for like a third of 2019, which was very sad and unpleasant. I was kind of depressed for a while last year, which I think took my work down to maybe 30 hours per week. I work more at my current job for a few reasons: it feels more tractable than my MIRI work, I feel more responsibility because I have a more senior role, and I am working more as a manager and so I spend more time doing types of work that I find less tiring.
I think that my current work schedule is basically sustainable for me as long as I feel reasonably happy and satisfied with my life, which is pretty hard for me to ensure.
I can imagine taking other jobs that seemed equally impactful where I’d end up working many fewer hours. And there are a few jobs where I’d end up working more hours (eg jobs where I was constantly talking to people and rarely trying to think hard about stuff).
That seems correct, but doesn’t really defend Ben’s point, which is what I was criticizing.
I am glad to have you around, of course.
My claim is just that I doubt you thought that if the rate of posts like this was 50% lower, you would have been substantially more likely to get involved with EA; I’d be very interested to hear I was wrong about that.
I am not sure whether I think it’s a net cost that some people will be put off from EA by posts like this, because I think that people who would bounce off EA because of posts like this aren’t obviously net-positive to have in EA. (My main model here is that the behavior described in this post is pretty obviously bad, and the kind of SJ-sympathetic EAs who I expect to be net sources of value probably agree that this behavior is bad. Secondarily, I think that people who are really enthusiastic about EA are pretty likely to stick around even when they’re infuriated by things EAs are saying. For example, when I was fairly new to the EA community in 2014, I felt really mad about the many EAs who dismissed the moral patienthood of animals for reasons I thought were bad, but EAs were so obviously my people that I stuck around nevertheless. If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.)
But I think that there are some analogous topics where it is indeed costly to alienate people. For example, I think it’s pretty worthwhile for me as a longtermist to be nice to people who prioritize animal welfare and global poverty, because I think that many people who prioritize those causes make EA much stronger. For different reasons, I think it’s worth putting some effort into not mocking religions or political views.
In cases like these, I mostly agree with “you need to figure out the exchange rate between welcomingness and unfiltered conversations”.
I think that being able to do both well is an important skill for a community like ours to have more of. It’s fine if that’s not your personal priority right now, but I would like community norms to reward learning that skill more. My view is that Will’s comment was doing just that, and I upvoted it as a result.
I guess I expect the net result of Will’s comment was more to punish Hypatia than to push community norms in a healthy direction. If he wanted to just push norms without trying to harm someone who was basically just saying true and important things, I think he should have made a different top level post, and he also shouldn’t have made his other top level comment.
(I’m not saying you disagree with the content of his comment; in fact, you said you agreed with it. But in my view, you demonstrated that you didn’t fully grok it nevertheless.)
There’s a difference between understanding a consideration and thinking that it’s the dominant consideration in a particular situation :)