Questions about OP grant to Helena
TL;DR
In November 2022, Open Philanthropy made a $500,000 biosecurity & pandemic preparedness grant to Helena
The grant profile raises questions about how it was justified and approved
There is scant public information that could justify Helena as the best-placed and most appropriate recipient, there is a clear risk of nepotism inherent in the nature of the recipient organization,[1] and there appear to be previous false representations made by the recipient organization
In an effort to support accountability and transparency, this post asks OP to publish further grant details and the investigation documents for the grant
Notes: I apologize in advance, as I am not experienced in writing using the EA syntax and structure; all errors are my own. Full disclosure: I previously applied to and was rejected for a role at Open Philanthropy. I work for an organization that has in the past been a recipient of funding from OP, and I am a person who has an affinity with the short-termist global health and development side of EA but does not identify strongly as an EA. I think it is a good thing for accountability, transparency, and the broader governance of EA that OP listed the grant online, and that I am able to question the grant in this forum and expect that OP will respond in good faith.[2]
Content
I have watched with interest as EA’s influence has grown within global health and development, in particular Open Philanthropy’s work in health security and biosecurity. Recent calls for transparency and accountability within effective altruism have thankfully been met by new initiatives like openbook.fyi. In light of this, one particular grant seems to warrant further inspection and, from OP’s perspective, raises enough questions that it should have been accompanied by more information.
I occasionally trawl the grants databases of philanthropic foundations working in global health & health security and found a $500,000 grant to ‘Helena’ awarded in November 2022. I didn’t think anything of it until I saw it again in the openbook log, at which point I looked into the organization.
I hope that my initial perception of the grant and the organization is entirely wrong, but the grant is striking: there are scant details on what is a sizable grant to an organization with no track record in the subject matter, one that appears to have pretty clearly misrepresented itself in the recent past.
The grant commitment and recipient organization as they stand are emblematic of the challenges in effective and equitable grantmaking for philanthropies like Open Philanthropy. Henry Elkus and Helena’s path can be summed up as:[3]
Henry’s VC dad uses his resources to support Henry’s interests
Henry drops out of school because he thinks he is exceptionally smarter and better equipped to solve ‘our problems’
Henry starts an obtuse org with for-profit venture capital and non-profit 501(c)(3) sides
The organization has a confusing and meandering scope of work that appears to be a collection of personal interests at each point in time
It has no proven tangible value-add beyond ‘networking’, with previous programmatic activities summed up as funding a semi-academic sociology event and acting as a rogue PPE procurement agency during COVID
Helena then gets half a million from OP to explore ‘policy related to health security’ work that they have no track record of doing or apparent expertise in
It feels wrong when Helena (subjectively) seems like a self-aggrandizing nepotism project at best, and (objectively) gets a large grant with no details and no track record in the area of work. It is also notable that Henry and Helena have no track record of participating in EA settings. Participating in transparency and holding your grantmaking to account meaningfully are two different things. This all leads to several questions about the grant:
How did OP justify this grant? What are the activities covered, the case investigation details, how this grant is partitioned from the for-profit side of Helena,[4] and the usual due diligence… There is currently zero practical transparency on what is a large grant
Assuming there is an important, tractable, and neglected opportunity underpinning this grant, how does Helena qualify as the most appropriate recipient?
Is Helena even qualified for the work? At the time of the grant award, Helena does not appear to have the track record or expertise to work on health security policy issues, and the 990 forms show little detail on program activities related to biosecurity policy.[5] Prof Mark Dybul is listed on the website,[6] but they have a history of listing people who have no tangible relationship with Helena.[7] Having notable experts in a subject area as affiliates is also different from having the proven systems, processes, and staff with which to do the legwork of program implementation
If they are qualified, are they the best-placed recipient organization? Best-placed being THE most appropriate, suitable, and acceptable—and where sometimes doing nothing is better than choosing a less appropriate intervention method even if it generates benefit. As noted above, they don’t have a tangible track record of doing anything apart from funding a semi-academic sociology event and acting as a rogue PPE procurement agency during COVID[8] (for which there are global health critiques of such actions). There are several other worthy US-based organizations with proven track records working on and advocating for health security policy development
How did this grant evolve? Did Helena approach OP or the other way around? What personal relationships or conflicts of interest are there between the two organizations?[9]
Given the clear issues apparent across publicly available information, why did OP decide to award a grant of significant size to the recipient organization without providing key clarifying information?[10]
Two articles from when Helena was launched:
I Have No Idea What This Startup Does and Nobody Will Tell Me (gawker.com)
Celebrities deny working with scion’s son | Page Six
Form 990 submissions and 501(c)(3) information:
Helena Group Foundation | Los Angeles, CA | Cause IQ
Helena Group Foundation | Los Angeles, CA | Instrumentl
- ^
Edited for clarity. The nepotism I refer to is the nature of the organization and not the grant
- ^
I chose to write this semi-anonymously here, and accept criticisms of posting in this way. It would not be hard to figure out who I am, but I chose this method for convenience and also because I will continue to work in the same field as Open Philanthropy
- ^
I completely accept that these points are very much my subjective and critical views based on what is available publicly of Henry Elkus and Helena, both of which I have no direct knowledge of or experience working with
- ^
As an aside, it is a peculiar quirk that the 990 forms list the two officers of Helena as working 60 hours a week on the 501(c)(3) side of things when they are also engaged in the work of the for-profit Helena Special Investments. It seems like a pointless and obvious misrepresentation
- ^
The 990 forms across the years do not show much relevant programmatic work for the org. It is just lots of money in, some small grants out (not including expenses from funding the America in One Room event), and a lot of internal staff overhead (none of which is broken out in the highly paid staff section beyond the two officers). Internal expenses for staff pretty much take up whatever Helena 501(c)(3) brings in, with none of the payments to contractors or vendors you would expect from an organization doing a bunch of work. I think this differs from an org like OP where the relationship between the non-profit and for-profit sides has been publicly outlined, and the for-profit side has a visible, distinct, and clear scope of work
- ^
If the grant was aimed at engaging experts like Prof Dybul on this work, would it not make more sense for it to be through a more traditional and direct affiliation (i.e., Georgetown)?
- ^
See the Gawker and Page Six articles
- ^
The COVID PPE procurement activities also do not appear to reconcile in the financials section of the Helena Group Foundation’s 2020 990 form. The accomplishments line item ‘COVID RESPONSE’ describes three separate activities, one of which is $20m being ‘deployed’ for PPE procurement. Total accruals across the three activities for that line item are expenses of $925,044 against $50,000 in grants received by Helena and revenue of $0. Is this due to a reporting quirk of the 990 forms? Could those activities have been ‘deployed’ external to Helena 501(c)(3) and/or out of the VC for-profit side of things, with the 501(c)(3) providing coordination and advice? Or maybe the reconciliation will make sense once the 2021 990 forms are filed with the IRS?
- ^
For clarity, I refer to how any relevant relationships may have influenced the grant development and approval process
- ^
Also, given Holden’s recent post: We’re no longer “pausing most new longtermist funding commitments”—EA Forum (effectivealtruism.org), would this grant have cleared the reevaluated bar for giving?
Hi, thanks for raising these questions. I lead Open Philanthropy’s biosecurity and pandemic prevention work and I was the investigator of this grant. For context, in September last year, I got an introduction to Helena along with some information about work they were doing in the health policy space. Before recommending the grant, I did some background reference calls on the impact claims they were making, considered similar concerns to ones in this post, and ultimately felt there was enough of a case to place a hits-based bet (especially given the more permissive funding bar at the time).
Just so there’s no confusion: I think it’s easy to misread the nepotism claim as saying that I or Open Phil have a conflict of interest with Helena, and want to clarify that this is not the case. My total interactions with Helena have been three phone calls and some email, all related to health security work.
Just noting that this reply seems to be, to me, very close to content-free, in terms of addressing object-level concerns. I think you could compress it to “I did due diligence” without losing very much.
If you’re constrained in your ability to discuss things on the object-level, i.e. due to promises to keep certain information secret, or other considerations like “discussing policy work in advance of it being done tends to backfire”, I would appreciate that being said explicitly. As it is, I can’t update very much on it.
ETA: to be clear, I’m not sure how I feel about the broader norm of requesting costly explanations when something looks vaguely off. My first instinct is “against”, but if I were to adopt a policy of not engaging with such requests (unless they actually managed to surface something I’d consider a mistake I didn’t realize I’d made), I’d make that policy explicit.
The “considered similar concerns to ones in this post” part and the explanation of how much investigation went into making the grant seem like content to me?
I think Robert’s comment recognizes there is content related to process (we investigated X, and considered Y concern). As Elias implies, there’s also a bottom-line conclusion (met the old funding bar on a hits-based rationale).
I think when Robert says there is very little content “in terms of addressing object-level concerns,” he may mean something like “there’s very little content here to explain why OP thought there was a sufficient chance of outsized impact to justify awarding the grant on hits-based reasoning.”
I guess I’m surprised that people are expecting that from OP? They haven’t committed to giving public justifications for their grants, and have written about why they often choose not to.
I got a reply to this as a private message from someone who says they don’t feel comfortable criticizing OP in public:
I replied: I think the name is a somewhat bad fit for the organization, and reflects that their views on transparency changed over time (mostly in their first few years). But they’re still much more open than most foundations, and I don’t think it’s a bad fit to the point that they should change their name.
(They said it was fine to post our exchange publicly.)
That’s their prerogative, sure. It is also individuals’ prerogative to critique them for that lack of transparency, and to judge the grant on the basis of publicly available information if they wish. (I think when an organization accepts the implied public subsidy that tax-advantaged status provides, it opens itself up to a considerably broader range of scrutiny and criticism than would be fair game without that status.)
I think most people realize there may be grant-specific reasons for a low-transparency approach. There are doubtless cases in which very little information should be disclosed, although I think one could safely disclose more information than this in the vast majority of cases. I think Robert’s bottom-line conclusion that there isn’t much in the response to update with (unless you started with concerns about due diligence) makes sense. So I think one ultimately has to judge this grant on a combination of publicly available information and one’s priors about OP’s grantmaking skills.
That’s not where I thought you were going to go with this! If I wrote to a random foundation (ex: Hewlett) and asked why they made a specific grant I think they’d probably decline, and I don’t think there’s any sort of commonly held view that foundations are supposed to be making their evidence and reasoning public in exchange for their tax treatment.
I thought instead you’d say that OP is different because they use the word “open” in their name, initially had a goal of being extremely transparent about their decisions, spun out of GiveWell, cross-pollinate with EA, etc?
I think there are sound practical reasons for OP to be more open-by-default than other foundations. In my view, OP (in conjunction with its financial backers) is the most central animating organization in EA. There are a lot of people who sacrifice a lot for the goals OP is pursuing, and I think having the most central animating organization keep its decisions unnecessarily shrouded in mystery probably hurts community cohesion. But that’s not the reason I think people are entitled to criticize OP here—I think accepting criticism is simply part of the deal when a charity accepts tax-advantaged status.
Another angle, I think, is that the public information on Helena appears to make awarding the grant inconsistent with OP’s stated principles. For instance, if you had a foundation whose stated purpose was to improve the welfare of children, most people would probably look at it funny if it announced a grant to a senior center. Maybe the grant actually serves the foundation’s purposes, but many people wouldn’t be impressed if the hypothetical foundation suggested it was part of a hits-based approach to child welfare with hardly any substantive supporting detail. Given OP’s stated emphasis on effectiveness, and given that Helena appears to be an unusually ineffective charity, that scenario seems somewhat analogous here.
>> OP to be more open-by-default than other foundations
Which foundations do you think are more open than OP? Transparency is a spectrum and OP certainly seems to publish quite a bit. No others have forums dedicated to verbosely tearing apart their grants when they smell weakness.
That said, Open has an oft-forgotten second meaning: OP is open to any cause area in theory, i.e., it is cause-neutral.
I think the primary content in ASB’s comment is actually “hits-based”—i.e. this is a grant with a low probability of a big win.
To think it is obviously bad, as so many commenters here do, you must have e.g. 90% confidence that the grant won’t result in $5M worth of counterfactual x-risk funding. (I do not have inside information that this was the goal/haven’t talked to ASB about it; it just seems like the right kind of goal for an org focused on “elephant bumping”.) Striking this bit out as someone pointed out you might have more rules-based objections to the grant.
“No others have forums dedicated to verbosely tearing apart their grants when they smell weakness.”
How could people’s input be best framed? Do you think this critique adds any value? I guess it does, so how best to frame it?
Apologies for the snarky language, but I did not mean to disparage the criticisms in the slightest. I think they are quite fine as they are, and do add value (80% confidence). I’m just pointing out that people frequently say there is no scrutiny of OP while engaging in an explicit act of scrutiny.
It’s very true that big philanthropies are surprisingly bad at giving reasons for their grants publicly, and that EA orgs are far better. This is part of the reason I’m a fan of EA philanthropies. The only foundation I know of which writes up reasons for every grant is GiveWell, and besides that very few are more transparent than OP.
I think a reasonable response might be for OpenPhil to not change anything and list information the same way they do now, but if scrutiny comes, like here, then perhaps they could respond in detail. Helena and Atlas I think are the only two that have come under the pump recently on the forum—I don’t think moderately in-depth and specific replies would be that difficult. There may on occasion be good reasons for not sharing information; in those cases, saying so in a reply would be good.
I don’t think that this forum is overly dedicated to verbosely tearing apart grants from OpenPhil. A small percentage of OpenPhil grants come under scrutiny here, and a lot of respondents to the posters who criticise make efforts to steelman grants. Also, just looking at karma and agreement voting, there is more support for defence of OP than criticism in this thread at least.
As a tiny note—the hits-based approach is great, no issues there, but with this approach it might be easy and helpful here to outline what the hit would be if successful, and why the org is well placed to potentially hit that home run (even if with a low percentage chance).
If ASB said “there are good reasons not to provide more details”, would you accept that, or ask for the reasons?
I’m not Nick, but if I were concerned enough about the grant, I would ask if it would be possible to disclose those reasons to a few independent community members under an NDA that allows them to tell the community only whether there were good reasons not to disclose more info, and whether the grant was reasonable in their eyes. That would be cumbersome and only appropriate in fairly rare cases.
I am not personally concerned about this grant very much—I can surmise a plausible and defensible reason for making it that Open Phil would prefer not to disclose, and am OK assuming that was the reason.
Yea that’s reasonable.
Thanks, yes I would accept that, and what Jason said below would be the absolute gold standard.
Great.
And in general I’d be interested if there was a format of forum critiques that OP would be most interested in.
Also, for Nuno, I’ll ask when OP is gonna let us bet against them.
“And in general I’d be interested if there was a format of forum critiques that OP would be most interested in.” I really like this, and it feels in the spirit of EA.
Worth noting that OP’s biosecurity funding comes from a single private individual and is conducted via an LLC, so I don’t think this particular argument works in this particular case.
(To be clear, not trying to say here I agree or disagree with the Helena grant or OP’s approach to transparency on grants—just trying to make this much smaller point)
Can you confirm this is true specifically for biosecurity grants made to 501(c)(3)s like Helena? I know a lot of the biosecurity funding needs to be channeled through for-profit channels because of who the recipients are (or because the transaction is actually an investment vs. a pure grant), but it’s not clear to me why you wouldn’t run a pure grant to a 501(c)(3) through tax-advantaged channels.
I cannot confirm that. Reflecting on what I said, I think I may have misunderstood you at first, and think you have a better point than I first thought.
What Jason has said above is the thrust of my inquiry. If Open Phil said that, in weighing the grant impact (as a hits-based grant), the private information they had justified the grant (which is not really what was in the comment on behalf of Open Phil), then I would accept that as a pragmatic response based on the real-world constraints to their grant-making, even if the rationale does not answer my own questions about the grant. I would still expect a little more detail about what the grant actually entails, since it is about the least descriptive grant on the website.
It does open up questions about the importance we place on accountability and transparency in regard to norm- and priority-setting within EA and philanthropic giving.
Basically what Jason said, yes. The process described sounds reasonable but my prior, even given this post, was that it would sound reasonable.
I think it might match your prior, but it doesn’t seem to me like it matches the prior of the person writing the post?
Agree with your take nice one!
What do you mean by “costly explanations”? If it’s time you mean, I wouldn’t have thought OpenPhil spending a couple of hours on a question was that costly given the reputational / community-confidence importance of a post like this. Or do you mean the explanation could reveal something which could cost OpenPhil money in the future? Or legal costs? Sorry, this might be obvious to most people...
Also, I think this grant and org could potentially be interpreted as a bit more than “vaguely off”, given that the org has already received some negative public scrutiny and doesn’t have a clear track record.
But I also think the “vaguely off” assessment could be reasonable too.
Nice one.
There does seem to be non-negligible content in the references to hits-based giving and the lower funding bar, but otherwise I agree.
...coming back to this discussion 6 months later, having had nothing to do with any of this except as an observer, I’m incredibly happy with their recent work. Given that, I think that in retrospect, their work basically fully justifies the grant. (To be clear: a failure does not refute a claim of value based on hits-based giving, at best it functions as weak evidence—but success does strongly justify the claim.)
Thanks ASB
I had hoped Open Phil could be a bit more specific about this grant and the assessment that was made, and add some more details to your general comment. There might be some reason you can’t answer these questions and if so that’s all good.
What was the good work Helena were doing in the health policy space that made you interested in the organisation?
What were the impact claims Helena was making, and what made you think that Helena might be able to achieve that impact?
Was an expected value calculation done before the grant was given?
Thanks.
When I read this part of your bullet point summary, I thought someone at Open Phil might be related to someone at Helena. But then it became clear that you mean that the Helena founder dropped out of college supported with money from his rich investor dad to start a project that you think “(subjectively) seems like” self-aggrandizing.
(The word “inherent” probably makes clear what you mean; I just had a prior that nepotism is a problem when someone receives funding, and I didn’t know that you were talking about other funding that Helena also received.)
The OP (original poster) should not have invoked “nepotism” here at all. The alleged nepotism here is seemingly not worse than a situation in which a person uses their own wealth to fund a Helena-like organization that they lead.
It’s important that such misplaced invocations of nepotism w.r.t. OpenPhil do not distract from concerns that are actually valid. In particular: OpenPhil recommended a $30M grant to OpenAI in a deal that involved HK[1] (then-CEO of OpenPhil) becoming a board member of OpenAI. This occurred no later than March 2017. Later, OpenAI appointed both HK’s then-fiancée and the fiancée’s sibling to VP positions[2].
This comment originally referred to “OP” instead of “HK” by mistake, due to me copying from another comment, sorry about that.
See these two LinkedIn profiles and the “Relationship disclosures” section in this OpenPhil writeup.
I think the stated reasoning there by OP is that it’s important to influence OpenAI’s leadership’s stance and OpenAI’s work on AI existential safety. Do you think this is unreasonable?
To be fair I do think it makes a lot of sense to invoke nepotism here. I would be highly suspicious of the grant if I didn’t happen to place a lot of trust in Holden Karnofsky and OP.
(feel free to not respond, I’m just curious)
I do not think that reasoning was unreasonable, but I also think that deciding to give $30M to OpenAI in 2017 was not obviously net-positive, and it might have been one of the most influential decisions in human history (e.g. due to potentially influencing timelines, takeoff speed and the research trajectory of AGI, due to potentially[1] inspiring many talented people to pursue/invest in AI, and due to potentially[1] increasing the number of actors who competitively pursue the development of AGI).
Therefore, the appointments of the fiancée and her sibling to VP positions, after OpenPhil’s decision to recommend that $30M grant, seems very problematic. I’m confident that HK consciously judged that $30M grant to be net-positive. But conflicts of interest can easily influence people’s decisions by biasing their judgment and via self-deception, especially with respect to decisions that are very non-obvious and deciding either way can be reasonable.
Furthermore, being appointed to VP at OpenAI seems very financially beneficial (in expectation) not just because of the salary from OpenAI. The appointments of the fiancée and her sibling to VP positions probably helped them to successfully approach investors as part of their effort to found Anthropic, which ended up raising at least $704M. HK said in an interview:
You wrote:
I don’t think there is a single human on earth whose judgement can be trusted to be resilient to severe conflicts of interest while making such influential non-obvious decisions (because such decisions can be easily influenced by biases and self-deception).
EDIT: added “potentially”; the $30M grant may not have caused those things if OpenAI would have sufficiently succeeded without it.
The lack of clarity around “nepotism” in the original post is unfortunate. I do think the source of funds is relevant to evaluating Helena, though. Funding through family or family connections doesn’t have the same information value as knowing disinterested third parties have decided to fund. And if someone made a ton of money and launched their own charity, we would know they brought whatever knowledge, skills, and abilities they used to make megabucks to their new profession.
I also understood it this way at first.
My apologies, that was not my intention at all. I am new to posting here, so would accept any recommendation on how to edit it for clarity. [Edit: this was a lazy response and rightfully downvoted. In terms of recommendations, I meant the best way to edit my post using the norms of the forum—i.e., to strikethrough and replace text, add a footnote, or just edit the text. I do wholly and unequivocally apologize that my original post can be read as implying that nepotism directly affected the relationship between Open Phil and Helena.]
A low-effort fix would just be to write a parenthesis after the claim specifying that the nepotism isn’t between Open Phil and the grant recipient.
Edited, as I already have the part about potential relationships between the funder and recipient organizations.
I think it’s valuable to write critiques of grants that you believe to have mistakes, as I’m sure some of Open Philanthropy’s grants will turn out to be mistakes in retrospect and you’ve raised some quite reasonable concerns.
On the other hand, I was disappointed to read the following sentence: “Henry drops out of school because he thinks he is exceptionally smarter and better equipped to solve ‘our problems’”. I guess when I read sentences like that I apply some (small) level of discounting towards the other claims made, because it sounds like a less than completely objective analysis. To be clear, I think it is valid to write a critique of whether people are biting off more than they can chew, but I still think my point stands.
I also found this quote interesting: “What personal relationships or conflicts of interest are there between the two organizations?” since it makes it sound like there are personal relationships or conflicts of interest without actually claiming this is the case. There might be such conflicts or this implication may not be intentional, but I thought it was worth noting.
Regarding this grant in particular: if you view it from the original EA highly evidence-based philanthropy end, then it isn’t the kind of grant that would rate highly in this framework. On the other hand, if you view it from the perspective of hits-based giving (thinking about philanthropy as a VC would), then it looks like a much more reasonable investment from this angle[1], as for instance, Mark Zuckerberg famously dropped out of college to start Facebook. Similarly, most start-ups have some degree of self-aggrandizement and I suspect that it might actually be functional in terms of pushing them toward greater ambition.
That said, if OpenPhilanthropy is pursuing this grant under a hits-based approach, it might be less controversial if they were to acknowledge this.
Though of course, if the grant was made on the basis of details that were misrepresented (I haven’t looked into those claims) then this would undercut this.
In this case — and many, actually — I think it’s fair to assume they are. OP is pretty explicit about taking a hits-based giving approach.
I think the discussion of hits-based giving is a bit beside the point. Many of the criticisms in the OP (“original post”) would speak against the grant even under hits-based giving. The only part where I think “hits-based giving” could be a satisfactory response (if everything else looks promising) is on the issue of prior expertise. If everything else looks promising, then it shouldn’t be a dealbreaker if someone lacks prior experience.
As I understand it, hits-based giving still means you have to be able to point to some specific reasons why the grant could turn out to be very impactful. And I understood the OP to be expressing something like, “I can’t find any specific reasons to expect this grant to be very impactful, except for its focus area – but there are other projects in the same space, so I wonder why this one was chosen.”
Agreed, that a hits-based approach doesn’t mean throwing money at everything. On the other hand, “lack of prior expertise” seems (at least in my books) to be the second strongest critique after the alleged misrepresentation.
So, while I concede it doesn’t really address the strongest argument against this grant, I don’t see addressing the second-strongest argument against the grant as being beside the point.
Hi Chris,
I agree with the points you have raised about subjectivity and my own personal views, and I included the footnote about that collection of dot points being my very subjective read of how things developed. While I have tried my best to be attentive in my presentation of my read of the grant, some of it is just my analysis and it is hard to present it without that color. I think it is legitimate that my subjectivity should influence your perception of my perspective and how seriously you think my inquiry should be taken.
I do think that there is enough objective detail given in my post for it to stand up to scrutiny, in particular the idea that Open Phil could have provided more detail on the grant to clarify and remove some of the concerns I have raised. I am not saying in any way that I have all the answers to the questions raised by the grant profile.
The most relevant bit of the Page Six article:
This post is a bit weak in making its case, but it is blindingly obvious that Helena is a grift, and I’m a bit unimpressed by galaxy-brained reasons (hits-based, etc.) for thinking it might be good.
But in the big picture, occasionally a grant is bad. We can’t treat every bad grant as a scandal.
Though there’s a point of diminishing returns to treating every bad grant as a scandal, $500,000 seems non-negligible and worth scandaling about at least a little. If we do scandals on all large grants, then it incentivizes starting with smaller grants for hits-based giving (where possible).
I disagree with this. For one, OpenPhil has a higher bar now. There’s a lot of work that needs to be done. ASB and others might already think this was a very bad grant. There’s a cost to dwelling on these things, especially as EA Forums drama rather than a high quality post mortem.
I don’t think anyone has actually treated this as a scandal. There have been fair questions and criticisms, but I don’t think anyone has suggested improper conduct by Open Phil.
Only thing I can add here is Helena is a longtime joke in VC/finance circles for being an almost comical example of nepotism and ego. Haven’t seen their name come up for a few years but it was a kind of meme for a while. Their new public facing stuff looks better but it used to be hard to distinguish from parody (Yale rich kid and his buddies behaving as if they were members of some kind of World Congress Davos intelligentsia). I have no idea about their biosecurity work and it could be good / it could be the case everyone has grown up and started doing serious stuff, but it’s worth noting the history and reputation. As someone who thinks EA massively underrated PR/vibe/reputation I thought I’d bring this up. I have a lot of respect for ASB so I assume there’s more to the story.
Bringing up potential issues with a grantee seems useful, especially if they’re things the grantor may have missed or underappreciated. The discussion around a history of claiming false affiliations (see additional details quoted in Shakeel’s comment) especially seems important.
On the other hand, there are several points in the post that, while I would find them apt in a discussion of a grant by GiveWell or one of the EA Funds, don’t make sense in the context of an organization like OP that doesn’t solicit funding from individuals and doesn’t aim for that level of transparency. Specifically, I think it’s fine for them to make a grant in cases where the public information about an organization wouldn’t be enough to justify the grant, the org doesn’t have a public track record, or the org and its leaders haven’t participated in EA. There are many valuable funding opportunities that are only available to grantmakers who are willing to work directly with organizations and make decisions based on privately shared information, and we shouldn’t be pressuring OP to avoid these.
Holden’s 2016 OP blog post, Update on How We’re Thinking about Openness and Information Sharing, gets into some of these questions, and I think it’s pretty reasonable.
Definitely agree that my expectations are higher for a funder that solicits support from the general public.
However, I don’t think meeting the objectives in Holden’s blog post ordinarily requires this level of secrecy. There are a number of possible intermediate points between “full public disclosure” and “not providing for any meaningful external assessment at all.” For example, if OP wanted to, it could presumably gather a panel of respected (and trusted) community members, have them sign an NDA (with two exceptions), and give them the justification for a sufficiently challenged grant. The two exceptions to the NDA would allow panelists to rate on a 1-to-7 scale “Is the amount of transparency appropriate under the circumstances?” and “In light of all the information, how reasonable was the grantmaking decision?”, and disclose the trimmed mean.
Of course, it is OP’s prerogative not to take an intermediate approach, and it would incur some costs. But how many six-figure-plus grants are they making that look this iffy from the external perspective?
A semi-cynical take: Could Open Phil have decided to fund an otherwise weak grant proposal because (a) there is a good chance that Elkus will receive a lot of money for philanthropy from his father; and (b) there is a reasonable chance that funding this grant would influence Elkus toward spending significantly more of that outside-of-EA money on biosecurity and pandemic preparedness work rather than on quasi-academic sociology experiments?
I’m not judging that possible rationale—it feels uncomfortable, but uncomfortable doesn’t mean wrong. I think this is illustrative of a potential challenge with transparency—this rationale would be undermined by its disclosure. If you’re trying to influence a grantee, releasing a statement that says “yeah, we thought their work was weak, but we funded because the CEO’s father is loaded and we were aiming for influence over how future money was spent” isn’t going to help your influence campaign.
A less cynical possibility would be that the CEO’s father is funding most of Helena, but requires it to raise a fraction of the costs of each project (say 20%). If that were the case, the project doesn’t need to clear Open Phil’s bar to justify funding. If we assume that the alternative project Helena would pursue would be essentially worthless, this project would only need to be 21% as effective as the bar to justify funding it.
I also want to specifically address the elephant in the room: Open Phil has significantly raised its bar for funding, and a lot of people are doubtless anxious about that. To the extent that Elkus’ father was a positive factor in evaluating the grant—under an influence theory, a matching-funds theory, or anything else—that is not fair to everyone else who wasn’t born with a silver spoon. Particularly under the current circumstances, it would be completely valid to experience negative emotions at the possibility that parental megawealth could factor into grantmaking decisions.
I agree with your sentiment here, but it might be skipping a few steps ahead in questioning the grant’s development. This is why (while many parts can be considered subjective critique) I have tried to avoid being prescriptive about what might have happened ‘behind the scenes’. Open Phil’s new bar does make this a more present/pressing issue.
That’s fair. I felt that the conversation was starting to veer toward evaluating the justification for the grant solely on a comparison of the marginal benefit of what Helena was going to accomplish with that $500K to Open Phil’s then-current bar. Thus, I felt it was worthwhile to point out that there could be rational (albeit at least somewhat uncomfortable) reasons for awarding a grant even if the proposal were on the weak side.
That probably came across as more aggressive than I meant to be. More to say that I agree with your sentiment but wanted to leave space in my post for me to be wrong and for Open Phil to provide a response
We haven’t seen the grant proposal, but I am thoroughly unimpressed with Helena—both as an organization and as a potential biosecurity grantee—based on its public-facing materials. In addition to the horrible taste the marketing materials leave in my mouth, it is far too spread out in cause areas for its size to make me think it would be effective in a new area.
100% agree. I also don’t like the marketing—it’s fancy, but non-specific and fluffy and makes me feel icky...
Can you expand on what you dislike about the marketing? When looking at their website I was just dazzled at all the animations, my developer brain was trying to figure out how they worked ;)
At launch—per the Gawker article—their priorities seemed to include lining up a bunch of famous people (who may or may not have known they were involved...) and hiring a PR firm for a glitzy launch with lots of meaningless buzzwords, but apparently not having any clue about what the organization would actually accomplish. That strikes me as a prime example of performative charity.
At present, the website is very well-done, but is awfully light on what the organization has accomplished. For example, the second project on their website is “Shield,” accompanied by a long (and generally accurate) description of electrical-grid vulnerability. But people have been engaged in that issue for a while, and it’s unclear what Helena actually accomplished other than promoting two largely symbolic pieces of state legislation. The feel of their marketing doesn’t line up with their 990, which shows low-six-figure spend in this area. It doesn’t dispel my impression that the organization is too interested in looking good rather than being firmly focused on doing good.
There is great focus on the “members,” but it is rather unclear what they do—so it comes across as mostly name-dropping. None are listed as employees or contractors on the 990s, which suggests that their involvement is highly limited. (Although you don’t have to list all employees, you have to specify the number, and that number seems to be filled by “our team”). Again, it feels very flashy but without real substance.
Of course, I could be wrong—I am not evaluating them for a grant, so didn’t read everything or read as carefully as I would if I were making a decision on my analysis.
Love that analysis Jason, my impression as well!
About “PPE procurement activities.”
I was on the board of directors for the NY chapter of AAEM (American Academy of Emergency Medicine) as a student representative. During the first wave of COVID in NYC, we were tasked by the leadership with a PPE procurement drive; this was later delegated to medical students around the city who were eager to help with the COVID effort and who had a lot of free time due to suspended classes. The drive was quite successful at a time when finding the manual resources to track down suppliers and warehouse stocks, and to sift through black-market sources, was difficult for hospital administrators.
Having seen this process up close, done with the volunteer time of medical students, the price tag of ~$1M for this seems exceptionally ridiculous. The PPE procurement activities they proposed must have either been on an entirely different scale/direction or just over-budgeted.
Have you tried writing to OP to ask about the grant?
I think that we should probably trust people to make some more speculative grants if we trust them in general. My sense is the bio team at openphil do pretty good work in general (though I haven’t looked it up) so I’m willing to give a few speculative grants a pass.
I will circle back if this goes terribly or if there are lots more like it.
It’s more that the grant was speculative and still is low-transparency (why did OP think there was a huge potential upside?) There can be rational grounds for making low-transparency grants, though—cf. my speculation in another comment.
I think if you’re going to make speculative, low-transparency grants, you have to be squeaky clean on conflicts of interest and related matters. There’s zero evidence of COI at present, and a plausible rationale for the grant that would be undermined by its disclosure, so I am not terribly worried about this one.
Completely agree about the long applications. I don’t think they are super useful at all. I wasn’t using that as example of good process, just as a recognition I might have some emotional bias against this kind of grant.
The thing i might be most interested in here is the track record Helena has in the area. Have they done good stuff in biosecurity before? For a grant of 500k I would expect a reasonable track record in the field, regardless of how speculative it is or isn’t.
And yes, a big question is how good OpenPhil are at predicting the fruits of this kind of grant. I have no clue. I wonder what odds you would get on forecasting for this one ;).
Thanks for the reply, appreciate it.
Hey Nathan
I’ll flag that I’m emotionally biased as a director of an NGO that often has to write 10-page applications and have 3-5 phone conversations to apply for grants of $5,000 to $25,000 from foundations and donors. $500,000 feels instinctively like an awfully large amount of money to entertain words like “speculative” and “pass”, but maybe I need to move my frame of reference!
I’m intrigued that you are willing to give a grant of $500,000 a “pass”; what do you mean by that exactly? In my mind $500,000 is a large amount of money. To try and steelman a little (with assumptions), do you mean that if the organisation was shown to have a track record in the area, and the approach could have a high expected value if successful (even if with a low chance of success), then you would be OK with the grant even if it didn’t bear fruit?
Personally I don’t believe we should give any grant a “pass” as such. Maybe small grants of a few thousand dollars.
Also, what do you mean by “if this goes terribly”? Do you mean the result of the grant? What would constitute it going terribly?
As a side note, you might well disagree, but I don’t think we should need to rely too much on trust when it comes to grants of this size—even if we do trust the org and the people involved. I know other NGOs and donors don’t get as much scrutiny as EA-associated grants (one of the great things about EA), but I think any grant over $50,000 could at least always carry with it a one-page explainer document which outlines the credentials of the org and what the grant might achieve (even if no math there).
“I’m intrigued that you are willing to give a grant of $500,000 a “pass”; what do you mean by that exactly? In my mind $500,000 is a large amount of money. To try and steelman a little (with assumptions), do you mean that if the organisation was shown to have a track record in the area, and the approach could have a high expected value if successful (even if with a low chance of success), then you would be OK with the grant even if it didn’t bear fruit?”
Yes I do think this.
And the question is whether they are good at predicting. Do you think your long applications help with that? I’ve done huge grant application docs and thought they were largely a waste of everyone’s time.
I imagine I want scrutiny but I currently trust OP. I don’t sense long documents would have helped—I imagine they knew this was risky.
As much as I dislike their marketing (I’m clearly not the target audience), I don’t think it requires much imagination to see why open phil may have gone ahead with the grant.
See, for example, this event they put on:
https://helena.org/magazine/the-story-of-america-in-one-room
https://en.wikipedia.org/wiki/America_in_One_Room
The event was widely covered and Obama himself tweeted about the event. If they came to open phil with some similar idea, intended to make catastrophic risks salient to a wide audience I can see why they would seriously consider funding it.
Open phil aren’t stupid. If they are doing something seemingly stupid, they probably have information we don’t.
That doesn’t feel consistent with a grant to fund Helena’s work “on policy related to health security” as the grant is described on OP’s website.
Open Phil doesn’t necessarily owe anyone an explanation, but the website seems fairly vacuous, and most of their “projects” are just mentioning times when they have invested in someone else’s company. Strong vibe of “all hat, no cattle.”
Thanks so much for asking this question with such grace and reason. I think this question will be well received, because you have asked it well.
One alarm bell is that this relatively new organisation tries (or claims to try) to work on a range of unconnected areas, including pandemic support, AI stuff, and apparently biosecurity risk stuff? One of the biggest strengths of EA orgs, and most Open Phil-supported orgs, is that they focus on doing one thing well.
Am looking forward to the response and like you, I hope that my initial perception of this organisation is wrong.
Thanks for the feedback; on second thought, I think this post was not as graceful as I first thought, so I have retracted the comment. I still think it is a good post and an important question, and am looking forward to the response from OpenPhil.
A bit late to the party here, but I want to note for anyone still looking for more information about Helena that there is a roughly ten-thousand-word essay about the program from the founder on the organization’s website.