This also seems right to me. We roughly try to distribute all the money we have in a given year (with some flexibility between rounds), and aren’t planning to hold large reserves. So based on our decisions alone, we couldn’t ramp up our grantmaking when better opportunities arise.
However, I can imagine donations to us increasing if better opportunities arise, so I do expect there to be at least some effect.
11. I gave significant amounts of money to the Long-Term Future Fund (which funded Catalyst), so I’m glad Catalyst turned out well. It’s really hard to forecast the counterfactual success of long-reach plans like this one, but naively this seems like the right approach to help build out the pipeline for biosecurity.
I am glad to hear that! I sadly didn’t end up having the time to go, but I’ve been excited about the project for a while.
though it’s important to note that that technique was developed at IBM, and then given to the NSA, and not developed internally at the NSA.
So I think this is actually a really important point. I think by default the NSA can contract out various tasks to industry professionals and academics, and on average get results back from them that are better than what they could have done internally. The differential cryptanalysis situation is a key example of that. IBM could have instead been contracted by some random other group and developed the technology for them instead, which means that the NSA had basically no lead in cryptography over IBM.
Even if all of these turn out to be quite significant, that would at most imply a lead of something like 5 years.
The elliptic curve one doesn’t strike me at all as a case where the NSA had a big lead. You are probably referring to this backdoor:
This backdoor was basically immediately identified by security researchers the year it was embedded in the standard. As you can read in the Wikipedia article:
Bruce Schneier concluded shortly after standardization that the “rather obvious” backdoor (along with other deficiencies) would mean that nobody would use Dual_EC_DRBG.
I can’t really figure out what you mean by the DES recommended magic numbers. There were some magic numbers in DES that were used for defense against the differential cryptanalysis technique. Which I do agree is probably the single strongest example we have of an NSA lead, though it’s important to note that that technique was developed at IBM, and then given to the NSA, and not developed internally at the NSA.
To be clear, a 30 (!) year lead seems absolutely impossible to me. A 3 year broad lead seems maybe plausible to me, with a 10 year lead in some very narrow specific subset of the field that gets relatively little attention (in the same way research groups can sometimes pull ahead in a specific subset of the field that they are investing heavily in).
I have never talked to a security researcher who would consider 30 years remotely plausible. The usual impression I’ve gotten from talking to security researchers is that the NSA has some interesting techniques and probably a variety of backdoors, which they primarily installed not through technological advantage but through political maneuvering, but that in overall competence they are probably behind the academic field, and almost certainly not very far ahead.
past leaks and cases of “catching up” by public researchers that they are roughly 30 years ahead of publicly disclosed cryptography research
I have never heard this and would be extremely surprised by it. Like, I’m willing to take a 15:1 bet on this, at least. Probably more.
Do you have a source for this?
Do you have the same feeling about comments on the EA Forum?
Separately, you mentioned OpenPhil’s policy of (non-) disclosure as an example to emulate. I strongly disagree with this, for two reasons.
This sounds a bit weird to me, given that the above is erring quite far in the direction of disclosure.
The specific dimension of the Open Phil policy that I think has strong arguments going for it is being hesitant with recusals. I really want to continue to be very open about our conflicts of interest, and wouldn’t currently advocate for emulating Open Phil’s policy on the disclosure dimension.
I didn’t see any discussion of recusal because the fund member is employed or receives funds from the potential grantee?
Yes, that should be covered by the CEA fund policy we are extending. Here are the relevant sections:
Own organization: any organization that a team member
is currently employed by
was employed by at any time in the last 12 months
reasonably expects to become employed by in the foreseeable future
does not work for, but that employs a close relative or intimate partner
is on the board of, or otherwise plays a substantially similar advisory role for
has a substantial financial interest in
A team member may not propose a grant to their own organization
A team member must recuse themselves from making decisions on grants to their own organizations (except where they advocate against granting to their own organization)
A team member must recuse themselves from advocating for their own organization if another team member has proposed such a grant
A team member may provide relevant information about their own organization in a neutral way (typically in response to questions from the team’s other members).
Which covers basically that whole space.
Note that that policy is still in draft form and not yet fully approved (and there are still some incomplete sentences in it), so we might want to adjust our policy above depending on changes in the CEA fund general policy.
Responding on a more object-level:
As an obviously extreme analogy, suppose that someone applying for a job decides to include information about their sexual history on their CV.
I think this depends a lot on the exact job, and the nature of the sexual history. If you are a registered sex-offender, and are open about this on your CV, then that will overall make a much better impression than if I find that out from doing independent research later on, since that is information that (depending on the role and the exact context) might be really highly relevant for the job.
Obviously, including potentially embarrassing information in a CV without it having much purpose is a bad idea, and mostly signals various forms of social obliviousness, as well as distracting from the actually important parts of your CV, which pertain to your professional experience and the factors that will likely determine how well you will do at your job.
But I’m inclined to agree with Howie that the extra clarity you get from moving beyond ‘high-level’ categories probably isn’t all that decision-relevant.
So, I do think this is probably where our actual disagreement lies. Of the most concrete conflicts of interest that have given rise to abuses of power I have observed, both within the EA community and in other communities, more than 50% were the result of romantic relationships, and were basically completely unaddressed by the high-level COI policies that the relevant institutions had in place. Most of these are in weird grey areas of confidentiality, but I would be happy to talk to you about the details if you send me a private message.
I think being concrete here is actually highly action-relevant, and I’ve seen the lack of concreteness in company policies have very large and concrete negative consequences for those organizations.
less concrete terms is mostly about demonstrating an expected form of professionalism.
Hmm, I think we likely have disagreements on the degree to which I think at least a significant chunk of professionalism norms are the result of individuals trying to limit their own accountability and that of people around them. I generally am not a huge fan of large fractions of professionalism norms (which is not by any means a rejection of all professionalism norms, just of specific subsets of them).
I think newspeak is a pretty real thing, and the adoption of language that is broadly designed to obfuscate and limit accountability is a real phenomenon. I think that phenomenon is pretty entangled with professionalism. I agree that there is often an expectation of professionalism, but I would argue that exactly that expectation is what often causes obfuscating language to be adopted. And I think this issue is important enough that just blindly adopting professional norms is quite dangerous and can have very large negative consequences.
You could do early screening by unanimous vote against funding specific potential grantees, and, in these cases, no COI statement would have to be written at all.
Since we don’t publicize rejections, or even who applied to the fund, I wasn’t planning to write any COI statements for rejected applicants. That’s a bit sad, since it kind of leaves a significant number of decisions without accountability, but I don’t know what else to do.
The natural time for grantees to object to certain information to be included would be when we run our final writeup past them. They could then request that we change our writeup, or ask us to rerun the vote with certain members excluded, which would make the COI statements unnecessary.
This is a more general point that shapes my thinking here a bit, not directly responding to your comment.
If somebody clicks on a conflict of interest policy wanting to figure out if they generally trust the LTF, and they see a bunch of stuff about metamours and psychedelics, that’s going to end up incredibly salient to them, and that’s not necessarily making them more informed about what they actually care about. It can actually just be a distraction.
I feel like the thing that is happening here makes me pretty uncomfortable, and I really don’t want to further incentivize this kind of assessment of stuff.
A related concept in this space seems to me to be the Copenhagen Interpretation of Ethics:
The Copenhagen Interpretation of quantum mechanics says that you can have a particle spinning clockwise and counterclockwise at the same time – until you look at it, at which point it definitely becomes one or the other. The theory claims that observing reality fundamentally changes it.
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster. I don’t subscribe to this school of thought, but it seems pretty popular.
I feel like there is a similar thing going on with being concrete about stuff like sexual and romantic relationships (which obviously have massive consequences in large parts of the world). And maybe more broadly having this COI policy in the first place. My sense is that we can successfully avoid a lot of criticism by just not having any COI policy, or having a really high-level and vague one, because any policy we would have would clearly signal we have looked at the problem, and are now to blame for any consequences related to it.
More broadly, I just feel really uncomfortable with having to write all of our documents to make sense on a purely associative level. I as a donor would be really excited to see a COI policy as concrete as the one above, similarly to how all the concrete mistake pages on EA org websites make me really excited. I feel like making the policy less concrete trades off getting something right, and as such being quite exciting to people like me, in favor of being more broadly palatable to some large group of people, and maybe making a bit fewer enemies. But that feels like it’s usually going to be the wrong strategy for a fund like ours, where I am most excited about having a small group of really dedicated donors who are really excited about what we are doing, much more than being very broadly palatable to a large audience without anyone being particularly excited about it.
being personal friends with someone should require disclosure.
I think this comment highlights some of the reasons for why I am hesitant to just err on the side of disclosure for personal friendships.
I think the onus is on LTF to find a way of managing COIs that avoids this, while also having a suitably stringent COI policy.
I mean, these are clearly trading off against each other, given all the time constraints I explained in a different comment. Sure, you can say that we have an obligation, but that doesn’t really help me balance these tradeoffs.
The above COI policy is my best guess at how to manage that tradeoff. It seems to me that moving towards recusal on any of the above axes will prevent at least some grants from being made, or at least I don’t currently see a way forward that would avoid that. I do think looking into some kind of COI board could be a good idea, but I continue to be quite concerned about having a profusion of boards in which no one has any real investment and no one has time to really think things through, and am currently tending towards that being a bad idea.
I can’t imagine myself being able to objectively cast a vote about funding my room-mate
So, I think I agree with this in the case of small houses. However, I’ve been part of large group houses with 18+ people in them, where I interacted with very few of the people living there, and overall spent much less time with many of my housemates than I did with some very casual acquaintances.
Maybe we should just make that explicit? Differentiate living together with 3-4 other people, from living together with 15 other people? A cutoff at something like 7 people seems potentially reasonable to me.
Yeah, I am not sure how to deal with this. Currently the fund team is quite heavily geographically distributed, with me being the only person located in the Bay Area, so on that dimension we are doing pretty well.
I don’t really know what to do if there are multiple COIs, which is one of the reasons I much prefer us to err on the side of disclosure instead of recusal. I expect if we were to include friendships as sufficient for recusal, we would very frequently have only one person on the fund being able to vote on a proposal, and I expect that to overall make our decision-making quite a bit worse.
So, the problem here is that we are already dealing with a lot of time-constraint, and I feel pretty doomy about having a group that has even less time than the fund already has, to be involved in this kind of decision-making.
I also have a more general concern where, when I look at dysfunctional organizations, one of the things I often see is a profusion of boards upon boards, each of which primarily serves to spread accountability around, overall resulting in a system in which no one really has any skin in the game and in which even very simple tasks often require weeks of back-and-forth.
I think there are strong arguments in this space that should push you towards avoiding the creation of lots of specialized boards and their associated complicated hierarchies, and I think we see that in the most successful for-profit companies. I think the non-profit sector does this more, but I mostly think of this as a pathology of the non-profit sector that is causing a lot of its problems.
That seems good. Edited the document!
Oh, no. To be clear, recusals are generally non-public. The document above should be more clear about that.
Edit: Actually, the document above does just straightforwardly say:
(recusals and the associated COIs are not generally made public)
It seems fairly obvious to me that being in a [...] active collaboration with someone should require recusal
This seems plausibly right to me, though my model is that this should depend a bit on the size and nature of the collaboration.
As a concrete example, my model is that Open Phil has many people who were actively collaborating with projects that eventually grew into CSET, and that that involvement was necessary to make the project feasible, and some of those then went on to work at CSET. Those people were also the most informed about the decisions about the grants they eventually made to CSET, and so I don’t expect them to have been recused from the relevant decisions. So I would be hesitant to commit to nobody on the LTFF ever being involved in a project in the same way that a bunch of Open Phil staff were involved in CSET.
My broad model here is that recusal is a pretty bad tool for solving this problem, and that it should instead be solved by the fund members putting more effort into grants that are subject to COIs, and being more likely to internally veto grants that seem to be the result of COIs. Obviously that has less external accountability, but it is how I expect organizations like GiveWell and Open Phil to manage cases like this. Disclosure feels like the right default in this case, which allows us to be open about how we adjusted our votes and decisions based on the COIs present.
In general I feel CoI policies should err fairly strongly on the side of caution
I don’t think I understand what this means, written in this very general language. Most places don’t have strong COI policies at all, and both GiveWell and OpenPhil have much laxer COI policies than the above, from what I can tell, which seem like two of the most relevant reference points.
Open Phil has also written a bunch about how they no longer disclose most COIs because the cost was quite large, so overall it seems like a bad idea to just blindly err on the side of caution (since one of the most competent organizations in our direct orbit has decided that that strategy was a mistake).
The above COI policy is more restrictive than the policy for any other fund (since it’s supplementary and in addition to the official CEA COI policy), so it’s also not particularly lax in a general sense.
It seems fairly obvious to me that being in a close friendship [...] should require recusal
I am pretty uncertain about this case. My current plan is to have a policy of disclosing these things for a while, and then allow donors and other stakeholders to give us feedback on whether they think some of the grants were bad as a result of those conflicts.
Again, CSET is a pretty concrete example here, with many people at Open Phil being close friends with people at CSET. Or many people at GiveWell being friends with people at GiveDirectly or AMF. I don’t know their internal COI policies, but I don’t expect those GiveWell or Open Phil employees to completely recuse themselves from the decisions related to those organizations.
There is a more general heuristic here, where at this stage I prefer our policies to end up disclosing a lot of information, so that others can be well-informed about the tradeoffs we are making. If you err on the side of recusal, you will just prevent a lot of grants from being made, the opportunity cost of which is really hard to communicate to potential donors and stakeholders, and it’s hard for people to get a sense of the tradeoffs. So I prefer starting relatively lax, and then over time figuring out ways in which we can reduce bad incentives while still preserving the value of many of the grants that are very context-heavy.