Building bespoke quantitative models to support decision-makers in AI and bio. Right now that means: forecasting capabilities gains due to post-training enhancements on top of frontier foundation models, and estimating the annual burden of airborne disease in the US.
Joel Becker
Thank you for posting this beautiful reminder. I’m delighted for Henry’s good news.
Thanks Caleb! Noting that my reply to Asya is relevant here too.
Thank you for the helpful replies Asya.
Re: deemphasizing expertise:
I would imagine that some of the time saved in hiring expert grantmakers could be spent training junior grantmakers. (In my somewhat analogous experience running selection for a highly competitive program, I certainly notice that some considerations that I now think are very important were entirely missing from my early decision-making!) Should I think about your comment as coming from a hypothetical that is net or gross of that time investment?
As for improved set-ups, how about something like:
Junior grantmaker receives disproportionate training on downside considerations.
Junior grantmaker evaluates grants and rates downside risk.
Above some downside risk cut-off, if the junior grantmaker wants to give funding, the senior grantmaker checks in.
Below the cut-off, if the junior grantmaker wants to give funding, the funding is approved without further checks.
(If you think missing great grants is a bigger deal than accepting bad ones, analogously change the above.)
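For concreteness, the triage rule above can be sketched in a few lines of Python. The cut-off value and the outcome labels here are hypothetical, purely for illustration:

```python
# Minimal sketch of the junior/senior review rule described above.
# The 0.7 cut-off and the outcome labels are illustrative, not proposed values.

def route_grant(wants_to_fund: bool, downside_risk: float, cutoff: float = 0.7) -> str:
    """Decide what happens to a grant the junior grantmaker has rated."""
    if not wants_to_fund:
        return "reject"              # junior says no: no senior check needed
    if downside_risk > cutoff:
        return "escalate_to_senior"  # above the cut-off: senior checks in
    return "approve"                 # below the cut-off: fund without further checks
```

(The mirror-image version, escalating rejections rather than approvals, is the same function with the first two branches swapped.)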
Intuitively, I would guess that this set-up could improve quite a bit on the waste-of-your-time and speed problems, without giving up too much on better-grants-by-your-lights. But I’m sure I’m missing helpful context.
Re: comparing to FTXFF and Manifund:
Definitely makes sense that the pitches are different. I guess I would have thought of this as part of “other hiring criteria you might have”—considerations that make it more challenging to select from the pool of people with some grantmaking experience, but for which some people tick the box.
Seems like:
FTXFF did move a lot of money, alas.
Speed due to indecision is a very solvable problem (e.g., tie the funding that a given regrantor can give out to a shorter window, or have a shared pot that other regrantors could use on their preferred projects first).
The indecision is due to the ~need to actively seek out grants, which wouldn’t be a problem for LTFF.
(In case relevant: I am a Manifund regrantor who just got back from holiday and plans to start working on it today! Thank you for the gentle push! :P)
Thank you for sharing these reflections, Asya! And for your service as the LTFF chair!
I feel confused about the difficulty of fund manager hiring. One source of confusion comes from the importance of expertise (/doing-good-direct-work), as you touch on in the post:
Historically, we’ve had trouble hiring fund managers, especially in technical AI alignment, largely for the reasons mentioned above (people generally want to focus on their work). I think there’s an extent to which I’ve contributed to our difficulty in hiring, in that I’m not sold that people doing good direct work should be taking on additional responsibilities as fund managers (so haven’t been great at convincing people to join)
In addition to the high opportunity cost of time for expert fund managers, I would have guessed that small differences between the EVs of marginal grants push in the direction of expertise being less important. But then I don’t understand why hiring fund managers would be unusually challenging. Wouldn’t deemphasizing expertise increase the pool of eligible fund managers, thereby making hiring easier?
(Perhaps I’m confusing relative and absolute difficulty — expertise being less important would make hiring relatively easier, but it’s still absolutely tough?)
The second source of confusion comes from reconciling the difficulty of finding fund managers with the fact that FTXFF and Manifund seemed to find part-time grantmakers quite easily. I don’t know how many regrantors and grant-recommenders FTXFF ended up with, but the last rumour I heard was between 100 and 200. Manifund are currently on 16 and seem keen to expand. I would’ve thought that there is some intersection between regrantors with the top, say, 30% of grantmaking records by your lights, those satisfying other hiring criteria you might have, and those currently willing to work with LTFF.
Is the difference in the scale of grants LTFF fund managers make vs regrantors? Or expectations around regularity of response (regrantors are more flexible)? Or you’re not excited about the records of regrantors in general? Or something else?
I have made early steps towards this. So far funder interest has been a blocker, although perhaps that doesn’t say much about the value of the idea in general.
Announcing the winners of the Reslab Request for Information
This is a phenomenal resource. Well done Aron!
Compared to whatever!
The basic case is that (1) existing investigation of what scientific theories of consciousness imply for AI sentience plausibly suggests that we should expect AI sentience to arrive (via human intention or accidental emergence) in the not-distant future, (2) this seems like a crazy big deal for ~reasons we can discuss~, and (3) almost no-one (inside EA or otherwise) is working on it. That case rhymes quite nicely with the case for work on AI safety.
Feels to me like it would be easy to overemphasize tractability concerns about this case. Again by analogy to AIS:
Seems hard; no-one has made much progress so far. (To first approximation, no-one has tried!)
SOTA models aren’t similar enough to the things we care about. (This might become decreasingly true; in any case, it seems like we could plausibly set ourselves up better using only dissimilar models.)
But I’m guessing that gesturing at my intuitions here might not be convincing to you. Is there anything you disagree with in the above? If so, what? If not, what am I missing? (Is it just a quantitative disagreement about magnitude of importance or tractability?)
Not for the main role any more, but excited to hear about people who might be interested in contributing!
True, but an appropriate number given the topic’s importance and neglectedness?
Agree.
Really glad this work is being done; grateful to Nikos for it! The “yes, and” is that we’re nowhere near the frontier of what’s possible.
You did a great job, Rob (and Luisa)! :)
Thanks for running this, Nuno! I had fun participating!
I agree with
My sense is that similar contests with similar marketing should expect a similar number of entries.
if we’re really strict about “similar marketing.” But, when considering future contests, there’s no need to hold that constant. The fact that e.g. Misha Yagudin had not heard of this prize seems shocking and informative to me. I think you could invest more time into thinking about how to increase engagement!
Relatedly, I have now had the following experience a number of times. I don’t know how to solve some problem in squiggle (charting multiple plots, feeding in large parameter dictionaries, taking many samples of samples, saving samples for use outside of squiggle, embedding squiggle in a web app, etc., etc.). I search the squiggle documentation for a solution and can’t find one. I message one of the squiggle team. The squiggle team member has an easy and (often, but not always) already-implemented-elsewhere solution that is not publicly available in any documentation or similar. I leave feeling very happy about the existence of squiggle and the helpfulness of its team! But another feeling I have is that the squiggle team could be more successful if it invested more time in the final, sometimes boring mile of examples/documentation/evangelism, rather than chasing the next, more intellectually interesting project.
Nice post! I would’ve already said this in feedback but, to reiterate publicly: I thought that the first ever EAGx in Latin America went fantastically! :) Well done to you all!
Not long at all! We’d prefer that anyone interested applies quickly; investment in interviews/reading our plan/etc. can wait.
Hardening pharmaceutical response to pandemics: concrete project seeks project lead
I thought this was great! Thank you for taking the time! Would love for a future episode with Eli to go deeper into the guts of these cruxes.
I am uncertain whether it’s important for program leads to be hard-working for the reason you describe. (I am very confident that hard-working-ness helped me personally a lot, but it doesn’t feel obvious that this went through the ‘understands hard-working-ness in others’ channel.)
Very, very strongly agree with the importance of an environment that values people’s time very highly. Small changes/mindset shifts here can have outsized impact. Lots of room for improvement too.
(Parts of this are covered under “basic amenities” but definitely more to add.)
This post is helpful and appropriately cautious! Thanks Linch.
It feels like adverse selection is a common enough phenomenon that there must be helpful case studies to learn from. I explored this with GPT, and got the following solutions for philanthropic grantmaking:
I’m pleased with (2) -- I’ve been putting time into open feedback on Manifund. And (5) is suggestive of something helpful: when it is ok for projects to receive only partial funding and each project applies to the same set of funders, then funders funding only “their part” reduces possible damage without the need to share private information. (Not putting in their part might be helpful information itself.)
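The “their part” mechanism can be made concrete with a toy sketch. All names and numbers below are made up; the point is just that each willing funder covers only an equal share, so no single funder overexposes itself, and an under-subscribed total is itself informative without anyone sharing private reservations:

```python
# Toy model of the partial-funding idea: each project applies to the same
# set of funders, and each willing funder covers only an equal share of the
# ask. Funder names and amounts are hypothetical.

def fund_shares(ask: float, funders: dict[str, bool]) -> dict[str, float]:
    """Each willing funder covers an equal share of the ask; abstainers give 0."""
    share = ask / len(funders)
    return {name: (share if willing else 0.0) for name, willing in funders.items()}

allocations = fund_shares(90_000, {"A": True, "B": True, "C": False})
# sum(allocations.values()) falls short of the ask, which is itself a signal:
# funder C declined to put in "their part," with no private information shared.
```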
Otherwise, these suggestions seem obvious or unhelpful. But I expect that a couple-of-hours dive into how philanthropists or science funders have dealt with these dynamics would be better. Nice project for someone! (@alex lawsen (previously alexrjl)?)