GCR capacity-building grantmaking and projects at Open Phil.
Eli Rose
I really think you ought to consider renaming this post… Probably about 1000 people will see the title. There’s some chance you could convince someone to stop donating to AMF just from the title—that tends to be how brains work, even though it isn’t very rational.
I think it’s not a good idea to respond to criticism in this way. I imagine myself as an outsider, skeptical of some project, and having supporters of the project tell me, “It’s morally wrong to say we’re not doing good without following our things-to-do-before-critiquing-us checklist, because critiques of us (if improperly done) might cause us to lose support, which is tantamount to causing harm.”
I think this would (and should) make skeptic-me take a dimmer view of the project in question. It’s unconvincing on the object level; to the extent that I already don’t think what you’re doing is valuable, I shouldn’t be moved by arguments about how critiquing it might destroy value. And it pattern-matches to the many other instances of organizations wanting to dictate the terms on which they can be criticized, and leveraging the force of moral arguments to do so. Organizations that do this kind of thing are often not truth-seeking and genuinely open to criticism (even when it’s done “properly” by their lights).
Empirically, in hiring rounds I’ve previously been involved in for my team at Open Phil, it has often seemed to be the case that if the top 1-3 candidates just vanished, we wouldn’t make a hire. I’ve also observed hiring rounds that concluded with zero hires. So, basically I dispute the premise that the top applicants will be similar in terms of quality (as judged by OP).
I’m sympathetic to the take “that seems pretty weird.” It might be that Open Phil is making a mistake here, e.g. by having too high a bar. My unconfident best guess is that our bar has been somewhat too high in the past, though here I’m speaking just for myself. I think that when you have a lot of strategic uncertainty, as GCR teams often do, it pushes towards a higher hiring bar, since you need people with a wide variety of skills.
I’d probably also gently push back against the notion that our hiring pool is extremely deep, though that’s obviously relative. I think e.g. our TAIS roles will likely get many fewer applicants than similar roles doing safety research at labs, for a mix of reasons including salience to relevant people and the fact that OP isn’t competitive with labs on salary.
(As of right now, TAIS has only gotten 53 applicants across all its roles since the ad went up, vs. governance which has gotten ~2x as many — though a lot of people tend to apply right around the deadline.)
If you think the signalling benefits from being veg*n are large, then it seems plausible to me that the signalling benefits from being a “scope-sensitive” or “evidence-sensitive” veg*n are larger, at least depending on your background culture and how high-bandwidth of a message you can send.
My family didn’t ask any questions when I became vegetarian (lots of their friends are vegetarian), but the fact that I still eat oysters causes no end of questions. This leads to conversations about different types of animal sentience that feel more like they’re actually about our treatment of animals than would have happened if I were a “normal” vegetarian.
I’ve had less opportunity to see the effects firsthand, but I think being averse to foods in rough proportion to their suffering per calorie, e.g. eating beef but avoiding eggs (and talking about why you do this, when asked) might have a similar result.
[This isn’t to argue that the signalling benefits outweigh the direct harm.]
I’ll stand by the title here. I think a bilingual person without specific training in translation can have good taste in determining whether or not a given translation is high-quality. These seem like distinct skills, e.g. in English I’m able to recognize a work badly translated from French even if I don’t speak French and couldn’t produce a better one. And having good taste seems like the most important skill for someone who is vetting and contracting with professional translators.
Separately, I also think that many (but not all) bilingual people without specific training in translation can themselves do good translation work. The results of our pilot project moved me towards this view (from a prior position that put a decent amount of weight on it).
As a high-level note, I see the goal here as enabling people to engage with EA ideas where they couldn’t before. It’s important that quality be high enough that the ideas are transmitted with good fidelity. But I don’t think we need to adhere to an extremely high and rigorous standard of the type one might have when translating a literary work, e.g. I don’t think we need translations to read so fluently that one forgets the material was originally written in English. I think this work is urgent and important, and I think the opportunity costs of imposing that kind of standard would be significant.
(I work at Open Phil assisting with this effort.)
-
Any grantee who is affected by the collapse of FTXFF and whose work falls within our focus areas (biosecurity, AI risk, and community-building) should feel free to apply, even if they have significant runway.
-
For various reasons, we don’t anticipate offering any kind of program like this, and are taking the approach laid out in the post instead. Edit: We’re still working out a number of the details, and as the comment below states, people who are worried about this should still apply.
-
I don’t buy your example on 80k’s advice re: climate change. You want to cooperate in prisoner’s dilemmas if you think that doing so will cause the agent you are cooperating with to cooperate more with you in the future. So there needs to be a) another coherent agent, which b) notices your actions, c) takes actions in response to yours, and d) might plausibly cooperate with you in the future. In the climate change case, what is the agent you’d be cooperating with, and does it meet these criteria?
Is it the climate change movement? It doesn’t seem to me that “the climate change movement” is enough of a coherent agent to do things like decide “let’s help EA with their goals.”
Or is it individual people who care about climate change? Are they able to help you with your goals? What is it you want from them?
Unfortunately I think this kind of experimental approach is a bad fit here; opportunity costs seem really high, there’s a small number of data points, and there’s a ton of noise from other factors that language communities vary along.
Fortunately I think we’ll have additional context that will help us assess the impacts of these grants beyond a black-box “did this input lead to this output” analysis.
(I work at Open Phil assisting with this effort.)
We think that people in this situation should apply. The language was intended to include this case, but it may not have been clear.
By fast-growing startup, I mean a company that seems decently likely to be one of the top ~20 highest valued startups founded in a given 5 year period.
This sounds more like “top startup” than “fast-growing”? Not trying to nitpick, the terms just seem pretty different to me.
I think the bar need not be that high for some of the benefits you mention. I had an experience that jibes with this:
“About 6 months after joining, I started leading a team of ~5 engineers on a high-priority engineering project. That was mostly due to the company needing leaders to keep up with our growth, and my hustle and generalist skills making me well-suited for the role. That experience taught me a lot about leadership, management, and long-term engineering projects, and it seems like this type of experience is much more common in fast-growing startups.”
This came from joining a startup that was certainly not one of the top ~20 in a five-year period; it was “just a TechStars company.” I found it really valuable. I probably got fewer of the other benefits you mention around working with the top people in a given industry (this was a “random webapp” startup, not an ML startup, so that didn’t really apply).
(I think the tone of this comment is the reason it is being downvoted. Since we all presumably believe that EA should be evidence-based, rational, and objective, stating it again reads as a strong attack, as if you were trying to point out that no assessment of impact had been done, even though the original post links to some.)
Have you heard of Harry Potter and the Methods of Rationality (http://www.hpmor.com/) and/or Unsong (http://unsongbook.com)? I think they serve some of this role for the community already.
It’s interesting that they are both long-form web fiction; we don’t have EA TV shows or rock bands that I know of.
Yep, this list isn’t intended to rule anything out. We’d certainly be interested in getting applications from people who want to get content translated into Hindi or other Indian languages.
This is an interesting issue; it makes sense that ISIS would be bad at dam maintenance.
Without reading all the sources (so perhaps these are clearly answered somewhere in there), some next questions I’d be curious about:
Where does the “500,000 to 1.5 million” estimate of deaths come from? Is this taking the simulations from the European Commission paper and assuming that anyone affected by water levels over X meters high dies?
Likewise, where do the cost estimates for the solutions come from?
Is it right that if this happened, it would be the most deaths ever caused by a dam failure? Wikipedia seems to suggest so: the worst to date appears to be the 1975 Banqiao Dam failure, with ~20k–200k deaths.
One solution would be to spend $2 billion to finish construction of Badush dam downstream in order to block the floodwaters. If this saved 1 million lives, it would come out at $2000/life saved, better than AMF. Even if that’s likely too optimistic, it’s suggestive: more targeted marginal uses of money probably exist, even if they haven’t been identified yet, which makes this in my opinion a very promising new cause area worth further investigation, with this post as an opener towards further inquiry. I encourage others to do much more detailed expected-value calculations, with open-minded curiosity.
I appreciate that this is just a toy estimate. But I think even at a toy level we could make the estimate more accurate by having a term for “P(dam failure within X years, absent our intervention)”. The dam may not fail within a given timeframe, or it may be fixed by other actors before it fails, etc, and it doesn’t seem like the case is so overwhelming that these outcomes should be ignored. E.g. if you think the dam is 50% likely to fail within 40 years, absent our intervention, then the estimate looks like $4000/life saved in expectation.
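To make the adjustment concrete, here’s a minimal sketch of the arithmetic in Python. The 50% failure probability and 40-year window are purely the illustrative assumptions from the previous sentence, not estimates of mine:

```python
# Toy expected-value adjustment for the Badush dam estimate.
# All numbers are illustrative assumptions from the discussion above.
cost = 2e9              # USD to finish construction of Badush dam
lives_if_failure = 1e6  # lives saved if the dam would otherwise have failed
p_failure = 0.5         # assumed P(dam failure within 40 years, absent intervention)

naive_cost_per_life = cost / lives_if_failure                   # 2000.0
adjusted_cost_per_life = cost / (p_failure * lives_if_failure)  # 4000.0

print(f"Naive: ${naive_cost_per_life:,.0f}/life saved")
print(f"In expectation: ${adjusted_cost_per_life:,.0f}/life saved")
```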
There is/was a debate on LessWrong about how valid the efficient market hypothesis is. I think this is super interesting stuff, but I want to claim (with only some brief sketches of arguments here) that, regarding EA projects, the efficient market hypothesis is not at all valid (that is, I think it’s a poor way to model the situation that will lead you to make systematically wrong judgments). I think the main reasons for this are:
EA, and the availability of lots of funding for it, are relatively new — there just hasn’t been much time for “market inefficiencies” to be filled.
The number of people in EA who are able to get funding for, and excited to start, new projects is really small relative to the number of people doing this in the wider world.
Looks like this already happened, in March 2020: https://lexfridman.com/william-macaskill/
I agree with this intuition. I suspect the question that needs to be asked is “14% chance of what?”
Made the front page of Hacker News. Here are the comments.
The most common pushback (including the first two comments, as of now) is from people who think this is an attempt at regulatory capture by the AI labs, though that view gets a good deal of pushback in turn, and (I thought) there’s some surprisingly high-quality discussion.
I’ve often wished for something like this when doing cost-effectiveness analyses or back-of-the-envelope calculations for grantmaking. (I have perhaps more programming background than the average grantmaker.)
Something like “Guesstimate but with typechecking” would, at first blush, seem to be the most useful version. But perhaps you shouldn’t trust my feedback until I’ve actually used it!
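To gesture at what I mean, here’s a purely hypothetical sketch of what a typechecked back-of-the-envelope calculation could look like (the `Q` class and its interface are invented for illustration, not any real library or proposal):

```python
from dataclasses import dataclass

# Hypothetical sketch of "Guesstimate but with typechecking": quantities carry
# units, so dividing USD by lives yields USD/lives, and adding mismatched units
# fails loudly. This interface is invented for illustration; it is not a real tool.

@dataclass
class Q:
    value: float
    unit: str

    def __truediv__(self, other: "Q") -> "Q":
        return Q(self.value / other.value, f"{self.unit}/{other.unit}")

    def __add__(self, other: "Q") -> "Q":
        if self.unit != other.unit:
            raise TypeError(f"can't add {self.unit} to {other.unit}")
        return Q(self.value + other.value, self.unit)

cost = Q(2e9, "USD")
lives = Q(5e5, "lives")
print(cost / lives)  # Q(value=4000.0, unit='USD/lives')
# cost + lives       # would raise TypeError: can't add USD to lives
```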
Moreover, it’s common to assume that efforts to reduce the risk of extinction might reduce it by one basis point—i.e., 1/10,000. So, multiplying through, we are talking about quite low probabilities. Of course, the probability that any particular poor child will die due to malaria may be very low as well, but the probability of making a difference is quite high. So, on a per-individual basis, which is what matters given contractualism, donating to AMF-like interventions looks good.
It seems like a society where everyone took contractualism to heart might have a hard time coordinating on any large moral issues where the difference any one individual makes is small, including non-x-risk ones like climate change or preventing great power war. What does the contractualist position recommend on these issues?
(In climate change, it’s plausibly the case that “every little bit helps,” while in preventing war between great powers outcomes seem much more discontinuous — not sure if this matters.)
I donated $1000 since it seems to me that something like the EA Hotel really ought to exist, and it would be really sad if it went under.
I’m posting this here so that, if you’re debating donating, you have the additional data point of knowing that others are doing so.