Sorry, Constellation is not a secret exclusive office (I’ve been invited, and I’m incredibly miscellaneous and from New Zealand). It’s a WeWork, and from my understanding it doesn’t accept more applications because it’s full.
It’s unlikely Claire gave a grant to Buck since (a) like you said, this is a well-known relationship, and (b) the grant-makers for AI safety are two people, neither of whom is Claire (Asya Bergal and Luke Muehlhauser).
From personal experience, it’s actually really easy to talk about diversity in EA? I literally met someone who is now my friend when I said I believe in critical race theory and they responded that wokeism hurt a lot of their friends; now we’re friends and talk about EA and rationalism stuff all the time. I find most rationalists are so isolated from the blue tribe nowadays that they treat diversity chat with a lot of morbid curiosity, and they’re receptive if you can justify your beliefs well.
Blacklists as I understand them have really high bars and are usually used for assault or when a person is a danger to the community. I also think not inviting someone to a rationality retreat because you don’t want to hang out with them is fine. I would rather die than do circling with someone I disliked, tbh (I still don’t know what circling is, but I just assume every rationality retreat is reading the CFAR handbook or circling at this point).
This breaks the weird default of assuming good faith about everything, but this post reads to me as someone who actually isn’t familiar with EA mechanisms and is grasping at straws due to their anxieties about the EA space, and specifically the Bay Area. Many of the details just hit weird tripwires for me that make me think something’s up (e.g. most of the arguments are poorly made when stronger empirics are available).
Edit: I confused Constellation and Lightcone. I still maintain it’s just an office and people need to chill out about the status anxiety around it.
Constellation isn’t located in a WeWork; the Lightcone Offices are. Constellation doesn’t really have an application process IIRC, while the Lightcone Offices do accept applications (though we are also likely to shut down in March).
Oh sorry, I thought both were definitely WeWorks? I’ll edit that in.
The second half of Point 4 seems like a jumble of words I can’t figure out the meaning or relevance of. Am I missing something?
It says, if I understand correctly, that you don’t know what circling is; that you would rather die than do it with people you dislike; that that’s why people shouldn’t be invited to rationality retreats; and that you don’t know what rationality retreats involve, but (you assume?) they involve circling.
Yeah, I should have written this more clearly. It’s making a few claims:
1. Rationalist retreat-type things often require a level of intimacy and trust, which means it’s probably OK for them to be more sensitive and have a lower bar for not inviting people.
2. A lot of young EAs often have status anxiety about being invited to things that actually matter little for their impact (e.g. circling). I’m signalling that these anxieties are overblown and that the enjoyment and status of these social activities are often overestimated.
People who are agreement-downvoting this: if you don’t agree with part of the comment, please write a reply explaining what you disagree with before downvoting. I see this has many downvotes but I can’t tell what part everyone is objecting to.
I think you’re conflating the Lightcone Office (in a WeWork; does have applications, though I think it did pause at one point because of space issues; run by the LessWrong/Lightcone team; houses mostly lesser-known orgs and independents) with Constellation (a few blocks away, run by Redwood Research, houses several major orgs).
On point 2, I would probably argue that philanthropic agencies shouldn’t give grants to an org that a staff member’s partner is involved with, regardless of whether that staff member is involved in the grant. I have been surprised to read a number of posts where this seems to be happening.
This might seem harsh, but there is value in being squeaky clean and keeping yourself above reproach. Other funding agencies can come in if the work is valuable enough.
I don’t think this is standard anywhere for grantors, but I was unsure, so I checked a few: Carnegie, Gates, and the National Council of Nonprofits’ guidance. All three require disclosure, some cases require recusal, and none of the three bans funding outright.
Does Open Philanthropy have a public document like this?
I hope it at least exists internally, but I think they should follow the example of other well-established organisations like Gates and make it public, especially given the prevalence of polyamory in this community and the “insularity” of the Bay Area as described in this comment.
Edit: I believe this was important enough to turn into a separate post.
There is this, but I agree it would be good if there were one that was substantially more detailed in describing the process.
(You are probably getting downvotes because you brought up polyamory without being specific about exactly how you think it relates to whether Open Phil should have a public COI policy. People are sensitive about the topic, because it personally relates to them and is sometimes conflated with things it shouldn’t be conflated with. Regardless, it doesn’t seem relevant to your actual point, which is just that there should be a public document.)
Not sure if it’s public, but this indicates it exists.
This seems pretty hard to put into practice. Let’s say TLA gets most of its funding from OP. TLA is considering hiring someone: should they ask “are you romantically involved with any of these 80 people” as part of their decision to hire, and weigh employing this particular person against the difficulty of making up the funding shortfall? Or after hiring someone should TLA then ask the question, and just accept losing most of their funding if the answer is yes? Should OP be doing the same? (“Are you romantically involved with any of these ~10,000 people at these organizations we have an ongoing relationship with?”)
The particular situation you’re talking about is with a relatively senior person at OP, but I think not incredibly so? They’re one of 27/80 people who either have “senior” in their title or are co-CEO/President. The person at the grantee org looks to be much more senior, probably the #2 or #3 person at the org. A version of your proposal that only included C-level people at the grantee org and people working on the relevant area (or people who directly or indirectly supervise people who do) at the granting org would make a lot more sense, though I’m not sure it’s a good idea.
(I do think you should have COI policies, but recusal at the granting organization is the standard way to do it outside EA, and I think is pretty reasonable for EA as well.)
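To make the scale argument above concrete, here is a rough back-of-envelope sketch in Python. The 80 and ~10,000 staff figures come from the comment above; the per-pair probability is a made-up number purely for illustration.

```python
# Back-of-envelope sketch of why a blanket "no relationships between any
# funder staff and any grantee staff" rule scales badly. Staff counts are
# taken from the comment above; the per-pair probability is hypothetical.

n_funder_staff = 80        # people at the granting org
n_grantee_staff = 10_000   # people at orgs with an ongoing relationship

# Hypothetical chance that any given funder/grantee pair of people is
# romantically involved. Even a tiny value matters at this scale.
p_pair = 1e-5

expected_conflicts = n_funder_staff * n_grantee_staff * p_pair
print(f"Expected conflicted pairs at any time: {expected_conflicts:.1f}")
# -> 8.0: under these made-up numbers, conflicts exist as a background
#    condition rather than a rare exception.
```

Under these assumptions, a blanket ban would routinely disqualify funding relationships, which is the core of the practicability worry.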
Thanks Jeff, you’ve convinced me that a zero-relationship policy wouldn’t work. I think I didn’t grasp the scale of these orgs and just how unrealistic it would be to avoid romantic entanglement at all levels.
I think something along the lines of your steelmanning of my position here might ensure an extremely low chance of relationship bias affecting grants:
“A version of your proposal that only included C-level people at the grantee org and people working on the relevant area (or people who directly or indirectly supervise people who do) at the granting org would make a lot more sense, though I’m not sure it’s a good idea.”
I know other orgs operate through recusal at the granting org, but romantic bias can still get a foot in the donor’s door, and people may well still struggle to vote against someone’s partner out of loyalty. Recusal helps, and I’m sure it’s happening already, but it doesn’t seem good enough in some situations.
Thanks for the engagement.
I appreciate where the sentiment is coming from (and I’d personally be in favour of stronger COI norms than a lot of EA funders seem to have) but the impact cost of this seems too high as stated.
There’s value in being squeaky clean, but there’s also value in funding impactful projects, and I think having COI policies apply across the whole large org (“If anyone from our org is dating anyone from your org, we can’t fund you”) will end up costing way more value than it gains.
That’s a strong argument, thanks Will. It’s an interesting question whether being squeaky clean is worth some projects perhaps being underfunded. I would hope, though, that it wouldn’t necessarily cost as much as we might think, if other funders could cover shortfalls. OpenPhil isn’t the only donor fish in the sea, although they are perhaps the only Leviathan for some EA-related orgs.
Perhaps this is also part of the argument in favour of having more, slightly smaller funders rather than a few huge ones, to help avoid COIs.
Although I didn’t say it, as I was going for the “squeaky clean” argument, you could also potentially draw a line at no funding to orgs where there are relationships between people at some kind of leadership/decision-making level. This wouldn’t be squeaky clean, but cleaner at least.
“part of the argument in favour of having more, slightly smaller funders rather than a few huge ones”
I’ve heard others suggest this, but don’t know what it means. Do you think Dustin should give half his money to someone else? Or that he should fund two independent grantor organizations to duplicate efforts instead of having only one? Or just that we want more EA billionaires?
I feel NickLaing is encoding an implicit graph-theoretic belief that may not be factually accurate. The premise is that COI opportunities fall with decentralization, but it may be the case that more diffuseness actually leads to problematic intermingling. I don’t have super good graph theory intuitions, so I’m not making a claim about whether this is true, just that it’s a premise and that its truth value matters.
My graph-theoretic intuition is that it depends a lot on the distribution of opportunities. Because EAs tend to both fund and date other EAs, the COI increase/decrease probably depends to some extent on the relative size of the opportunity/recipient network.
My premise may well be wrong, but all I have heard to date is that the conflicts of interest aren’t that big a problem, not a clear argument that more diffuseness could make COIs worse.
If we take an imaginary world where there is only one donor org and many donee organisations, within a small community like EA it seems almost impossible to avoid conflicts of interest in a high proportion of grants.
But I have low confidence in this, and would appreciate someone explaining the arguments in favour of centralisation reducing the potential for COIs.
I think Nick is suggesting that if we had Open Phil split into funders A and B (which were smaller than Open Phil), then A declining to fund an organization due to a COI concern would be somewhat less problematic because it could go to B instead. I’m not a graph theory person either, but it seems the risk of both A and B being conflicted out is lower.
I don’t think that’s a good reason to split Open Phil, although I do think some conflicts are so strong that Open Phil should forward those organizations to external reviewers for determination. For example, I think a strong conflict disqualifies all the subordinates of the disqualified person as well; e.g., I wouldn’t think it appropriate to evaluate the grant proposal of a family member of anyone in my chain of command.
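For what it’s worth, the intuition two comments up (that a grantee is less likely to be conflicted out of both of two smaller funders than out of one big one) can be sanity-checked with a toy Monte Carlo. Everything below is an invented illustration: it assumes the two half-size funders have disjoint staff, and that a single conflict anywhere triggers a hard ban rather than recusal.

```python
import random

# Toy Monte Carlo: does splitting one funder into two disjoint halves
# reduce the chance a grantee loses ALL its funding options to COI?
# Every number here is invented for illustration.

N_TRIALS = 20_000
FUNDER_SIZE = 40     # grant-relevant staff at the single big funder
GRANTEE_SIZE = 10    # staff at the grantee org
P_REL = 0.002        # hypothetical chance a given pair of people are partners

def conflicted(n_funder_staff: int) -> bool:
    """True if at least one funder/grantee pair turns out to be partners."""
    return any(
        random.random() < P_REL
        for _ in range(n_funder_staff * GRANTEE_SIZE)
    )

blocked_single = sum(conflicted(FUNDER_SIZE) for _ in range(N_TRIALS))
blocked_split = sum(
    # With two half-size funders, the grantee is only fully blocked if
    # it has a conflict with BOTH of them.
    conflicted(FUNDER_SIZE // 2) and conflicted(FUNDER_SIZE // 2)
    for _ in range(N_TRIALS)
)

print(f"P(fully blocked, one funder):  {blocked_single / N_TRIALS:.3f}")
print(f"P(fully blocked, two funders): {blocked_split / N_TRIALS:.3f}")
# Under these assumptions the grantee is conflicted out entirely far less
# often with two funders, matching the intuition above.
```

Note this only speaks to the strictest hard-ban policy discussed above; under a recusal regime the calculus would look quite different.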
Correct: a treatment of this question that does not consider BATNAs or counterfactuals would be inaccurate.
Thanks David. I think any of those three might work. I also didn’t know just how much money Dustin was giving until I looked it up just now. Great stuff!
This wasn’t clear, but I don’t think any of the three make sense as solutions. You can’t really tell donors that they don’t get to make decisions about what to fund. Having multiple orgs creates duplication and overlap, which is really costly and wasteful (and if the organizations coordinate, you haven’t really helped). And lastly, sure, I’d love for there to be more donors, but it’s not really actionable, other than telling more EAs to make lots of money. (And they probably should. EA was, is, and will be funding-constrained.)
I think the first two options make some sense, and I don’t think the donor diversity question is simple or has simple answers.
On the first option, of course people can give money where they want, but I think any smart big donor could respond to a good argument for diversification of giving. It’s not about telling anyone to do anything, but about figuring out what is actually the best way to do philanthropy in the EA community in the long term.
I don’t think having multiple orgs is necessarily costly and wasteful. Even if donors co-ordinate to some degree, a donor board with no major COI could make more uncompromised and rational decisions, and also avoid controversy both from within and outside the movement.
Charity Entrepreneurship has invested in foundation entrepreneurship, and makes a number of good arguments for why it can be good to have more funding orgs out there, even if smaller. These benefits include more exploration of different cause areas and potential access to different pools of funding and different donors.
As a side note (although I know it wasn’t intentional), I don’t think it’s a great conversation technique on a forum to suggest three possible solutions which seem to be in good faith, and then turn around and say that they don’t make sense in the next comment. This would work in an in-person discussion I think, but it makes it hard to have a discussion on a forum.
I’ve retracted the original comment as it’s clear to me now that it doesn’t make practical or ethical sense to completely rule out grants to orgs where there is partner entanglement. I still think it’s an important discussion though!