Scott said in his email that OpenPhil is only taking donations >$250,000. Is this still true?
Bio-risk and AI: AI progress might soon lead to much faster research and engineering
That makes sense, thanks. Although this will not apply to organisations/individuals that were promised funds from the Future Fund but didn’t receive any, right? This case is pretty common, AFAICT.
Scott sent me the following email (reproduced here with his approval). Scott wants to highlight that he doesn't know anything beyond what's in the public posts on this issue.
I’d encourage people to email Scott, it’s probably good for someone to have a list of interested donors.
------------------------------------
Scott's email:

SHORT VERSION
If you want to donate blindly and you can afford more than $250K, read here for details, then consider emailing Open Philanthropy at inquiries@openphilanthropy.org. If less than $250K, read here for details, then consider emailing Nonlinear at katwoods@nonlinear.org. You might want to check the longer section below for caveats first.
If you want to look over the charities available first, you can use the same contact info, or wait for them to email you. I did send them the names and emails of those of you who said you wanted to focus on charities in specific areas or who had other conditions. I hope they’ll get back to you soon, but they might not; I’m sure they appreciate your generosity but they’re also pretty swamped.
LONG VERSION (no need to read this if you’re busy, it just expands on the information above)
Two teams have come together to work on this problem—one from Open Philanthropy Project, and one from Nonlinear.
I know Open Philanthropy Project well, and they’re a good and professional organization. They’re also getting advice from the former FTX Future Fund team (who were foundation staff not in close contact with FTX the company; I still trust them, and they’re the experts on formerly FTX-funded charities). For logistical reasons they’re limiting themselves to donors potentially willing to contribute over $250,000.
I don't know Nonlinear well, although a former ACX Grants recipient works there and says good things about it. Some people on the EA Forum have expressed concerns about them (see https://forum.effectivealtruism.org/posts/L4S2NCysoJxgCBuB6/announcing-nonlinear-emergency-funding); I have no context for this besides the comments there. They don't seem to have a minimum donation. I'm trying to get in touch with them to learn more.
Important consideration: these groups are trying to maximize two imperatives. First, the usual effective altruism do-as-much-good-as-possible imperative. But second, an imperative to protect the reputation of the EA ecosystem as a safe and trustworthy place to do charity work, where your money won’t suddenly disappear, or at least somebody will try to help if it does. I think this means they will be unusually willing to help charities victimized by the FTX situation even if these would seem marginal by their usual quality standards. I think this is honorable, but if you’re not personally invested in the reputation of the EA ecosystem you might want to donate non-blindly or look elsewhere.
Also, FTX Future Fund focused disproportionately on biosecurity, pandemic prevention, forecasting, AI alignment, and other speculative causes, so most of the charities these teams are trying to rescue will probably be in those categories. If you don’t want to be mostly funding those, donate non-blindly or look elsewhere.
I’ve given (or will shortly give) both groups your details; they’ve promised to keep everything confidential and not abuse your emails. If they approach you in any way that seems pushy or makes you regret interacting with them, please let me know so I can avoid working with them in the future.
I can’t give you great answers on ACX Grants now, but I’ll hopefully know more soon, and if things don’t work out with this opportunity I’d be happy to work with you further then.
Thanks again for your generosity, and please let me know if you have any questions.
Yours,
Scott
[Question] How should small and medium-sized donors step in to fill gaps left by the collapse of the FTX Future Fund?
Thanks for investigating this and producing such an extremely thorough write-up, very useful!
I haven’t read the comments and this has probably been said many times already, but it doesn’t hurt saying it again:
From what I understand, you’ve taken significant action to make the world a better place. You work in a job that does considerable good directly, and you donate your large income to help animals. That makes you a total hero in my book :-)
At the same time though, your objection seems to be a fully general argument that fundamental breakthroughs will never be necessary at any point, which seems quite unlikely.
Sorry, what I wanted to say is that it seems unclear whether fundamental breakthroughs are needed. They might be needed, or not. I personally am pretty uncertain about this and think that both options are possible. I think it's also possible that any breakthroughs that do happen won't change the general picture described in the OP much.
I agree with the rest of your comment!
I gave the comment a strong upvote because it's super clear and informative. I also really appreciate it when people spell out their reasons for "scale is not all you need", which doesn't happen that often.
That said, I don't agree with the argument or conclusion. Your argument, at least as stated, seems to be "tasks with the following criteria are hard for current RL with human feedback, so we'll need significant fundamental breakthroughs". The transformer was published 5 years ago. Back then, you could have used a very analogous argument to claim that language models would never do this or that task; but for many of these tasks, language models can perform them now (emergent properties).
Yes, you can absolutely apply for conference and compute funding, separately from an application for salary, or in combination. E.g. if you’re applying for salary funding anyway, it would be very common and normal to also apply for funding for a couple of conferences, equipment that you need, and compute. I think you would probably go for cloud compute, but I haven’t thought about it much.
Sometimes this can create mild tax issues (if you receive the grant in one year but only spend the money on the conference in the next year; or, in some countries, if you receive the funding as a private person and therefore can't deduct expenses).
Some organisations also offer funding via prepaid credit cards, e.g. for compute.
Maybe there are also other options, like getting an affiliation with some place and using their servers for compute, but often this will be hard.
I think you could apply for funding from a number of sources. If the budget is small, I'd start with the Long-Term Future Fund: https://funds.effectivealtruism.org/funds/far-future
relevant tweet I saw recently: https://twitter.com/scholl_adam/status/1556989092784615424
I’m excited about people thinking about this topic. It’s a pretty crucial assumption in the “EA longtermist space”, and relatively underexplored.
This post is a response to the thesis of Jan Brauner’s post The Expected Value of Extinction Risk Reduction Is Positive.
The post is by Jan Brauner AND Friederike Grosse-Holz. I think correcting this is particularly important because the EA community struggles with gender diversity, so dropping the female co-author is extra bad.
Given that Greg trained as an MD because he wanted to do good, this here probably counts: https://80000hours.org/2012/08/how-many-lives-does-a-doctor-save/
(and the many medical doctors and students who read posts like this and then also changed their minds, including me :-) )
This is a bit of a summary of what other people have said, and a bit of my own conceptualisation:
A) If the work is not competitive (not a winner-takes-all market), then:
For some jobs, marginal returns on quality-adjusted time invested will decrease, and you lose less than 20% of impact. This is true for jobs where some activities are clearly more valuable than others, so that you can cut the less valuable ones.
For some jobs, marginal returns on quality-adjusted time invested will increase, and you lose more than 20% of impact. This could be because some maintenance activities are fixed costs (like reading papers to stay up to date), or because you benefit from deep immersion and therefore see increasing returns.
B) If the work is competitive (a winner-takes-all market), either:
you are going to win anyway, in which case the same as above applies, or
you are going to lose anyway, in which case whether or not you spend 20% of your time on something else doesn't matter, or
working less causes you to lose the competition, in which case you lose 100% of the value.
Of course, this is nearly always gradual because the market is not literally winner-takes-all, just winner-takes-a-lot-more-than-second. For example, if you're working towards an academic faculty position, then maybe a position at a tier 1 uni is twice as impactful as one at a tier 2 uni, which is twice as impactful as one at a tier 3 uni, and so on (the tiers would have to be pretty narrow for the difference to be only 2x, though).
On average, the more “competitive” a job, and the closer the distance between you and the competition, the more value you lose from working 20% less.
Nearly every job has some degree of "competitiveness"/"winner-takes-all market" going on, but for some jobs this degree is very small (e.g. employee at an EA org), and for others it's large (academia before you get a tenure-track position, for-profit startup founder).

For academic research, I'd guess that from looking at A) alone, you'd get roughly linear marginal returns, and how much B) matters depends on your career stage. It matters a lot before you get a tenure-track position (because the market is "winner-takes-much-more-than-second" and competition is likely close, since so many people compete for these positions). After you get a tenure-track position, it depends on what you want to do. E.g., if you try to become the world leader in a popular field, then competition is high. If you want to research some niche EA topic well, then competition is low.
I'd guess that quite often you'd either win anyway or lose anyway, and that the 20% doesn't make the difference. There are so many factors that matter for startup founder success (talent, hard work, network, credentials, luck) that it would be surprising if the competition were often so close that a 20% reduction in working time changed things.
Another way to put this: it seems likely that Facebook would still be worth hundreds of billions of dollars, and Myspace ~$0, had the Facebook founders worked 20% less.
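To make the competitiveness point more concrete, here's a minimal toy sketch (all numbers and the win/lose payoffs are invented purely for illustration) of how the expected impact lost by working 20% less depends on how close the race is:

```python
# Toy model (all numbers invented) of a "winner-takes-a-lot-more-than-second" market:
# you either win the competition (impact v_win) or lose it (impact v_lose).

def expected_impact(p_win: float, v_win: float = 100.0, v_lose: float = 10.0) -> float:
    """Expected impact given a probability of winning and the two payoffs."""
    return p_win * v_win + (1 - p_win) * v_lose

# Close race: cutting hours by 20% plausibly shifts the win probability a lot.
close_full = expected_impact(0.55)
close_cut = expected_impact(0.40)
print(f"close race: ~{1 - close_cut / close_full:.0%} of expected impact lost")  # ~23%

# Safe lead (you'd win anyway): the same cut barely moves the win probability.
safe_full = expected_impact(0.95)
safe_cut = expected_impact(0.93)
print(f"safe lead:  ~{1 - safe_cut / safe_full:.0%} of expected impact lost")    # ~2%
```

With these made-up numbers, the close race loses roughly 23% of expected impact while the safe lead loses roughly 2%; that's just the qualitative point above: the closer the competition, the more a 20% cut costs.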
Langzeitmus :-D
FWIW, I am excited about Future Matters. I have experienced them as having great perspectives on how to effect change via policy and how to make movements successful and effective. I think they have a sufficiently different lens and expertise from many EA orgs that I'm really happy to have them working on these causes. I've also repeatedly donated to them over the years (one of my main donation targets).