Program Associate at Open Philanthropy and chair of the Long-Term Future Fund. I spend half my time on AI and half my time on EA community-building. Any views I express on the forum are my own, not the views of my employer.
I think the LTFF will publish a payout report for grants through ~December in the next few weeks. As you suggest, we’ve been delayed because the number of grants we’re making has increased substantially, so we’re pretty limited on grantmaker capacity right now (and writing the reports takes a substantial amount of time).
I like IanDavidMoss’s suggestion of having a simpler list rather than delaying (and maybe we could publish more detailed justifications later); I’ll strongly consider doing that for the payout report after this one.
Confusingly, the report called “May 2021” was for grants we made through March and early April of 2021, so this report includes most of April, May, June, and July.
I think we’re going to standardize now so that reports refer to the months they cover, rather than the month they’re released.
I like this idea; I’ll think about it and discuss with others. I think I want grantees to be able to preserve as much privacy as they want (including not being listed even in really broad pseudo-anonymous classifications), but I’m guessing most would be happy to opt in to something like this.
(We’ve done anonymous grant reports before but I think they were still more detailed than people would like.)
We got feedback from several people that they weren’t applying to the funds because they didn’t want to have a public report. There are lots of reasons that I sympathize with for not wanting a public report, especially as an individual (e.g. you’re worried about it affecting future job prospects, you’re asking for money for mental health support and don’t want that to be widely known, etc.). My vision (at least for the Long-Term Future Fund) is to become a good default funding source for individuals and new organizations, and I think that vision is compromised if some people don’t want to apply for publicity reasons.
Broadly, I think the benefits to funding more people outweigh the costs to transparency.
Another potential reason for optimism is that we’ll be able to use observations from early on in the training runs of systems (before models are very smart) to affect the pool of Saints / Sycophants / Schemers we end up with. I.e., we are effectively “raising” the adults we hire, so it could be that we’re able to detect if 8-year-olds are likely to become Sycophants / Schemers as adults and discontinue or modify their training accordingly.
Sorry this was unclear! From the post:
There is no deadline to apply; rather, we will leave this form open indefinitely until we decide that this program isn’t worth running, or that we’ve funded enough work in this space. If that happens, we will update this post noting that we plan to close the form at least a month ahead of time.
I will bold this so it’s clearer.
Changed, thanks for the suggestion!
There’s no set maximum; we expect to be limited by the number of applications that seem sufficiently promising, not the cost.
Yeah, FWIW I haven’t found any recent claims about insect comparisons particularly rigorous.
Nope, sorry. :) I live to disappoint.
FWIW I had a similar initial reaction to Sophia’s, though reading more carefully I totally agree that it’s more reasonable to interpret your comment as a reaction to the newsletter rather than to the proposal. I’d maybe add an edit to your high-level comment just to make sure people don’t get confused?
Really appreciate the clarifications! I think I was interpreting “humanity loses control of the future” in a weirdly temporally narrow sense that makes it all about outcomes, i.e. where “humanity” refers to present-day humans, rather than humans at any given time period. I totally agree that future humans may have less freedom to choose the outcome in a way that’s not a consequence of alignment issues.
I also agree that value drift hasn’t historically driven long-run social change, though I kind of do think it will going forward, as humanity gains more power to shape its environment at will.
And Paul Christiano agrees with me. Truly, time makes fools of us all.
Wow, I just learned that Robin Hanson has written about this, because obviously, and he agrees with you.
Do you have the intuition that absent further technological development, human values would drift arbitrarily far? It’s not clear to me that they would. In that sense, I do feel like we’re “losing control”: even non-extinction AI is enabling a new set of possibilities that modern-day humans would endorse much less than the decisions future humans would otherwise have made. (It does also feel like we’re missing the opportunity to “take control” and enable a new set of possibilities that we would endorse much more.)
Relatedly, it doesn’t feel to me like the values of humans 150,000 years ago, humans now, and even ems in Age of Em are all that different on some more absolute scale.
I think we probably will seek out funding from larger institutional funders if our funding gap persists. We actually just applied for a ~$1M grant from the Survival and Flourishing Fund.
I agree with the thrust of the conclusion, though I worry that focusing on task decomposition this way elides the fact that the descriptions of the O*NET tasks already assume your unit of labor is fairly general. Reading many of these, I actually feel pretty unsure about the level of generality or common-sense reasoning required for an AI to straightforwardly replace that part of a human’s job. Presumably there’s some restructuring that would still squeeze a lot of economic value out of narrow AIs that could basically do these things, but that restructuring isn’t captured by looking at the list of present-day O*NET tasks.
I’m also a little skeptical of your “low-quality work dilutes the quality of those fields and attracts other low-quality work” fear—since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality.
The difference here is that most academic fields are pretty well-established, whereas AI safety, longtermism, and longtermist subparts of most academic fields are very new. The mechanism for attracting low-quality work I’m imagining is that smart people look at existing work and think “these people seem amateurish, and I’m not interested in engaging with them”. Luke Muehlhauser’s report on case studies in early field growth gives the case of cryonics, which “failed to grow [...] is not part of normal medical practice, it is regarded with great skepticism by the mainstream scientific community, and it has not been graced with much funding or scientific attention.” I doubt most low-quality work we could fund would cripple the surrounding fields this way, but I do think it would have an effect on the kind of people who were interested in doing longtermist work.
I will also say that I think somewhat different perspectives do get funded through the LTFF, partially because we’ve intentionally selected fund managers with different views, and we weigh it strongly if one fund manager is really excited about something. We’ve made many grants that didn’t cross the funding bar for one or more fund managers.
I was confused about the situation with debate, so I talked to Evan Hubinger about his experiences. That conversation was completely wild; I’m guessing people in this thread might be interested in hearing it. I still don’t know exactly what to make of what happened there, but I think there are some genuine and non-obvious insights relevant to public discourse and optimization processes (maybe less to the specifics of debate outreach). The whole thing’s also pretty funny.
I recorded the conversation; I don’t want to share it publicly, but feel free to DM me for access.
Hi Minh, sorry for the confusion! That footer was actually from an older version of the page that referenced eligible locations for the Centre for Effective Altruism’s city and national community building grant program; I’ve now deleted it.
I encourage organizers from any university to apply, including those in Singapore.