This is a novelty account created by Nuño Sempere for the purpose of providing perhaps-flawed frank criticism.
The problems it’s intended to mitigate are that when providing criticism:
providing criticism which is on-point and yet emotionally thoughtful is doubly hard,
leaving negative criticism can sometimes give the impression that I think a project is not worth it, whereas I think that providing criticism is most valuable for projects which are in fact worth it
conversely, it’s sometimes hard to communicate in a polite manner that a project is ~essentially worthless, and that the author should probably be doing something else. E.g., “your theory of impact sucks”. But someone has to do it
mechanisms to signal good intent can become trite and fake to read and to produce, like shit sandwiches, or the phrase “fairly good.”
mechanisms to signal good intent can vary from culture to culture, and result in miscommunication.
the above points become trickier in the presence of uncertainty about whether the criticism is on-point or stemming from confusion.
And yet, criticism seems worth it in expectation, particularly if it can change the actions of its recipients, or of its readers. For this reason, I thought it would be worth trying to signal potentially flawed and upsetting criticism as such, and maybe developing a set of standard disclaimers around it.
Past examples of the patterns I intend to use this account for include this comment and the examples in this post.
An example interaction with individuals might look like:
NN: Hey, do you want to hear some negative feedback under Crocker’s rules?
A: No, thanks.
or like:
NN: Hey, do you want to hear some negative feedback under Crocker’s rules?
A: Sure, why not.
NN: [negative feedback]
or like:
Someone else: “Hey, you should fund X!”
NN: Funding X sounds like a terrible idea.
I currently consider organizations, particularly large ones, to be fair game.
I am open to negative feedback requests, either here or through my main account.
It is 2AM in my timezone, and come morning I may regret writing this. By way of introduction, let me say that I dispositionally skew towards the negative, and yet I do think that OP is amongst the best if not the best foundation in its weight class. So this comment generally doesn’t compare OP against the rest but against the ideal.
One way you could allow for somewhat democratic participation is through futarchy, i.e., using prediction markets for decision-making. This isn’t vulnerable to brigading, because gaining more influence requires staking proportionally more money; but that same property makes it less democratic.
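As a toy illustration (not a concrete proposal): Hanson-style futarchy ultimately reduces to comparing the prices of two conditional markets. The function name and all numbers below are hypothetical.

```python
def futarchy_decide(price_if_adopted: float, price_if_rejected: float) -> str:
    """Hanson-style decision rule: adopt the policy iff the market
    prices the agreed welfare measure higher conditional on adoption
    than conditional on rejection. (In a real conditional market,
    trades on the branch that isn't chosen are refunded.)"""
    return "adopt" if price_if_adopted > price_if_rejected else "reject"

# Hypothetical prices from two conditional markets on some
# agreed welfare measure:
print(futarchy_decide(0.62, 0.55))  # -> adopt
```

The "putting more money in buys more influence" property falls out of this: moving either price requires taking a correspondingly larger market position.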
More realistically, some proposals in that broad direction which I think could actually be implementable could be:
allowing people to bet against particular OpenPhilanthropy grants producing successful outcomes.
allowing people to bet against OP’s strategic decisions (e.g., against worldview diversification)
I’d love to see bets between OP and other organizations about whose funding is more effective. E.g., I’d love to see a bet between you and Jaan Tallinn on whose approach is better, where the winner gets some large amount (e.g., $200M) towards their philanthropic approach.
I’m particularly attracted to bets which have the shape of “you will change your mind about this in the future”.
At various points in the past, I think I would have personally appreciated having the option to bet...
against hypothetically continued funding towards Just Impact beating GiveDirectly
against your $8M towards INFER having been efficiently spent
that the marginal $5M given out as grants in an ACX Grants-type process would be better than your marginal $5M to forecasting (you are giving more than $5M/year to forecasting, cf. your $8M grant to INFER).
against worldview diversification being evaluated positively by a neutral third party.
for shorter or longer AI timelines.
on more abstract topics, e.g., “your forecasting grantmaking is understaffed/underrated”, or “your forecasting grantmaking is too institutional”, “OP finds it too hard to exercise trust and would obtain better results by having more grant officers”.
at the odds implied by some of your public forecasts.
Note that individual people inside OP may agree with some of the above propositions, even though “OP as a whole” may act as if they believe the opposite.
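To make the last item in the list above concrete, here is a minimal sketch of what betting at the odds implied by a published forecast could look like. The function names, the 80% probability, and the $1,000 stake are all hypothetical.

```python
def implied_odds(p: float) -> float:
    """Decimal odds implied by a forecast probability p."""
    return 1 / p

def fair_forecaster_stake(p_forecaster: float, stake_challenger: float) -> float:
    """Stake the forecaster must post so the bet is fair at their own
    stated probability, i.e., has zero expected value for both sides.

    The challenger risks `stake_challenger` that the event will NOT
    happen; the forecaster risks the returned amount that it WILL."""
    # Forecaster's EV: p * stake_challenger - (1 - p) * stake_forecaster = 0,
    # so stake_forecaster = stake_challenger * p / (1 - p).
    return stake_challenger * p_forecaster / (1 - p_forecaster)

# Hypothetical example: OP publishes an 80% forecast that a grant
# succeeds; a challenger stakes $1,000 that it won't.
print(fair_forecaster_stake(0.8, 1_000))  # forecaster risks ~$4,000
```

The asymmetry of the stakes is the point: a forecaster who publishes a confident probability is, if they take such bets, exposed in proportion to that confidence.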
You could also delegate research into a strategy for democratic participation to other researchers, rather than doing it yourself; e.g., Robin Hanson’s time is probably buyable with money. It would really surprise me if he (or other researchers) couldn’t come up with a few futarchy-adjacent ideas that were at least worth considering.
More broadly, I think that there is a spectrum between:
OpenPhilanthropy makes all decisions democratically and we all sing Kumbaya
Influencing OP decisions requires people to move to the Bay Area and become chummy friends with its grant officers. Karnofsky writes tens of thousands of words in blog posts but does not answer comments. At the same time, OP ultimately makes decisions which steer the EA community and reverberate across many lives.
Both extremes are caricatures, but we are closer to the second. Contrast with the Survival and Flourishing Fund, which has a number of regrantors with pots which grow proportionally to their estimated success.
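The SFF-style mechanism mentioned above can be sketched in a few lines; the scoring process and all numbers here are hypothetical placeholders.

```python
def allocate_pots(success_scores: dict[str, float],
                  total_budget: float) -> dict[str, float]:
    """Split next round's budget across regrantors in proportion to
    each regrantor's estimated past success (scores are assumed to be
    nonnegative outputs of some external evaluation process)."""
    total = sum(success_scores.values())
    return {name: total_budget * score / total
            for name, score in success_scores.items()}

# Hypothetical: three regrantors, a $10M budget, with alice judged
# three times as successful as the others.
pots = allocate_pots({"alice": 3.0, "bob": 1.0, "carol": 1.0}, 10_000_000)
print(pots)  # alice's pot grows relative to bob's and carol's
```

Iterating this each round distributes trust: regrantors with good track records compound their influence without anyone centrally re-deciding who gets to grant.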
I also think that the comparison with FTX’s FF is instructive, because it was willing to trust a larger number of regrantors much earlier, and I think it was able to produce a number of more experimental, ambitious, and innovative grants as a result. For what it’s worth, my impression is that Beckstead, MacAskill, and the others on the FF team did a great job here, one that was pretty much independent of FTX’s fraud.
So anyways, I’ve brought up some mechanisms here:
Allowing people to bet against the success of your grants
Allowing people to bet against the success of your strategic decisions
Allowing people to bet that they are better at giving out grants than OP is
Or generally trying out systems other than grant officers.
Using a wide number of regrantors rather than a small number of grant officers.
which perhaps get some of the same benefits that democratization could produce for decision-making, namely information aggregation from a wider pool and distribution of trust.
My sense is that OP could take these and other steps, and they could have some value of information, while perhaps not being all that risky if tried out at a small scale. It’s unclear though whether the managerial effort would be worth it.
PS: I liked the idea behind the Cause Exploration Prizes, though I think they failed to produce a mechanism for addressing the above points, since the cause proposals were limited to Global Health & Wellbeing and the worldview questions were too specific, whereas I think the most important decisions are at the strategic level.