Some Thoughts on Public Discourse
Thanks to Ben Hoffman and several of my coworkers for reviewing a draft of this.
It seems to me that there have been some disagreements lately in the effective altruism community regarding the proper role and conduct for public discourse (in particular, discussions on the public Web). I decided to share some thoughts on this topic, because (a) my thoughts on the matter have evolved a lot over time; (b) some of the disagreement and frustration I’ve seen has been specifically over the way Open Philanthropy approaches public discourse, and rather than responding to comments piecemeal I thought it would be more productive to lay out my views at a high level.
First I’ll discuss my past and present views on the role of public discourse, and why they’ve changed. (In brief, I see significantly fewer benefits and greater costs to public discourse than I used to, but I still value it.) Then I’ll list some guidelines I follow in public discourse, and some observations about what kinds of responses people are likely to get from the Open Philanthropy Project depending on how they approach it.
By “public discourse,” I mean communications that are available to the public and that are primarily aimed at clearly describing one’s thinking, exploring differences with others, etc. with a focus on truth-seeking rather than on fundraising, advocacy, promotion, etc.
My past and present views on the role of public discourse
Vipul Naik recently quoted a 2007 blog post of mine as saying, “When I look at large foundations making multimillion-dollar decisions while keeping their data and reasoning ‘confidential’ – all I see is a gigantic pile of the most unbelievably mind-blowing arrogance of all time. I’m serious.” (It continues, “Deciding where to give is too hard and too complex – with all the judgment calls and all the different kinds of thinking it involves, there is just no way Bill Gates wouldn’t benefit from having more outside perspectives. I don’t care how smart he is.”)
I’d guess that there are many other quotes in a similar vein. My old writing style tended toward hyperbole rather than careful statement of the strength of my views, but overall, I think this quote captures something I believed. It’s hard to say exactly what I thought more than nine years ago, but I think some key parts of my model were:
On any given topic, knowledge and insight are broadly distributed. It’s hard to predict what sort of person will have helpful input, and hard to assess an idea without subjecting it to a broad “marketplace of ideas.” Thus, the ideal way to arrive at truth would be to broadcast one’s views in as much detail as possible to as many people as possible, and invite maximal input.
Foundations face little downside to public discourse, because they are not accountable to the public at large. Their hesitation to engage in public discourse can most easily be explained by wanting to avoid embarrassment, bad press, etc. - and/or by following habits derived from other kinds of institutions (companies, government agencies) that face more substantive downsides. Because of their lack of accountability to the public at large, foundations are uniquely positioned to raise the level of public discourse and set examples for other institutions, so it’s a shame that they don’t.
Evolution
Over time, I’ve come to estimate both less benefit and more cost to public discourse. The details of this evolution are laid out in Challenges of Transparency (2014) and Update on How We’re Thinking about Openness and Information Sharing (2016).
The biggest surprise for me, over time, has been on the “benefits” side of the ledger. This point is noted in the above blog posts, but it’s worth going into some detail here.
For nearly a decade now, we’ve been putting a huge amount of work into putting the details of our reasoning out in public, and yet I am hard-pressed to think of cases (especially in more recent years) where a public comment from an unexpected source raised novel important considerations, leading to a change in views. This isn’t because nobody has raised novel important considerations, and it certainly isn’t because we haven’t changed our views. Rather, it seems to be the case that we get a large amount of valuable and important criticism from a relatively small number of highly engaged, highly informed people. Such people tend to spend a lot of time reading, thinking and writing about relevant topics, to follow our work closely, and to have a great deal of context. They also tend to be people who form relationships of some sort with us beyond public discourse.
The feedback and questions we get from outside of this set of people are often reasonable but familiar, seemingly unreasonable, or difficult for us to make sense of. In many cases, it may be that we’re wrong and our external critics are right; our lack of learning from these external critics may reflect our own flaws, or difficulties inherent to a situation where people who have thought about a topic at length, forming their own intellectual frameworks and presuppositions, try to learn from people who bring very different communication styles and presuppositions.
The dynamic seems quite similar to that of academia: academics tend to get very deep into their topics and intellectual frameworks, and it is quite unusual for them to be moved by the arguments of those unfamiliar with their field. I think it is sometimes justified and sometimes unjustified to be so unmoved by arguments from outsiders.
Regardless of the underlying reasons, we have put a lot of effort over a long period of time into public discourse, and have reaped very little of this particular kind of benefit (though we have reaped other benefits—more below). I’m aware that this claim may strike some as unlikely and/or disappointing, but it is my lived experience, and I think at this point it would be hard to argue that it is simply explained by a lack of effort or interest in public discourse.
I have also come to have a better understanding of the costs of public discourse. These costs are enumerated in some detail in the posts linked above. A couple aspects that seem worth highlighting here are:
I’ve come to appreciate the tangible benefits of having a good reputation, in terms of hiring, retention, access to experts, etc. I’ve also raised my estimate of how risky public discourse can be to our reputation; I think there are a lot of people who actively seek out opportunities to draw attention by quoting things in bad faith, and a lot of other people who never correct the first impression they get from encountering these quotes.
Because of how much valuable feedback we’ve gotten from “insiders” who know the topic at hand well, I’ve come to feel that careless public discourse would do more harm than good to our ability to learn from feedback, via damaging relationships with the people most likely to give good feedback.
I recognize that an outsider might be skeptical of this narrative, because there is a simpler alternative: that we valued public discourse when we were “outsiders” desperate for more information, and we value it less now that we are “insiders” who usually are able to get the information we want. I think this is, in fact, part of why my attitude has changed; but I think the above factors are more important.
Why I still value public discourse
Despite all of the above considerations, we still engage in a large amount of public discourse by the standards of a funder, and I’m glad we do. Some reasons:
I think our public content helps others understand where we’re coming from and why we do what we do. I think that this has, over time, been a major net positive for our reputation, and in particular for our ability to connect with people who deeply resonate with our values and approach. Such people can later become highly informed, engaged critics who influence our views.
I still empathize with my earlier self, and my frustration at not being able to learn about topics I cared about, understand the thinking of key institutions, and generally get “up to speed” in key areas. I worry about what I perceive as a lack of mentorship in the effective altruism community, and I wonder how the next generation of highly informed, engaged critics (alluded to above) is supposed to develop if all substantive conversations are happening offline.
Writing public content often forces me to clarify my own thinking, helps (via others’ reactions) highlight the most controversial parts of it (which then leads to further reflection), and often leads to better feedback than we would’ve otherwise gotten from highly informed and engaged people.
However, it is much more costly for me to participate in public discourse than it used to be, both because the stakes are higher (calling for more care in communications) and because I have less time.
I’ll add that I don’t see it as a cost to us when someone publicly criticizes our work, and I’d generally like to see more of this rather than less—provided that such criticism does not misrepresent our views and actions. And as discussed below, I think misrepresentation is fairly easy to avoid. It is still the case that the people we most want to reach are people we expect to fairly consider different arguments and reach reasonable conclusions; if public criticism hurt our reputation among such people (without misrepresenting our views), I would by default consider this deserved and good.
How I approach public discourse today
Principles I generally follow in public discourse
I am very selective about where I engage. In general, if I write something publicly, it either (a) lays out a fundamental set of ideas and arguments that I expect to link to repeatedly in order to help people understand something important about my thinking; (b) addresses a criticism/concern that is important to one or more specific people whose relationships I value; or (c) addresses a question posed directly to me, while not saying more than needed to accomplish this.
I strive to convey the nuances of my thinking, and generally prioritize avoiding harm over getting attention. My communications often take patience to read through, but have relatively low risk of leaving people with problematic impressions.
I always seek at least one other pair of eyes to look over what I’ve written before I post it publicly. Usually much more than one.
I always run content by (a sample of) the people whose views I am addressing and the people I am directly naming/commenting on, assuming they are not outright adversaries (e.g., political opponents of ideas the content is arguing for). I consider this a universally, unquestionably good practice. Almost always, I learn some nuance of their views or actions that leads to my improving the content, from both my perspective and theirs. Almost always, they appreciate the chance to comment. Sometimes, they make small requests (e.g. about timing) that I can easily accommodate. I think this practice is good for both the accuracy of the content and my relationships with the people affected. And it does not make criticism more costly for me—quite the contrary, it makes criticism less costly (in terms of relationships, and in terms of time due to improved accuracy), and increases the quantity of criticism I’m willing to make. I see essentially no case against this practice. Note that running content by people is not the same as giving editorial control or seeking their permission (see next point).
When running content by others, I communicate explicitly about my expectations for their feedback. In particular, I am clear about when I expect to publish by default, and clear that I am not offering them editorial control. I am usually happy to delay publication for an agreed-upon, non-excessive amount of time. I will make corrections to my content if it improves accuracy, and sometimes if it offers a major relationship benefit for a negligible substance cost. But I do not wait indefinitely if there’s no response, and I do not accept suggestions that result in my writing something in someone else’s voice, or stating something I don’t believe to be true and fair.
I don’t try to write for everyone. I try to write for our most thoughtful critics and for our most valued present and future relationships. I do not try to address every detail of criticisms and claims people make, or to address every misconception someone might have.
When writing at length, I provide a summary and a roadmap, and I generally try to make it easy for people to quickly understand my major claims and how to find the supporting arguments behind them.
I try to avoid straw-manning, steel-manning, and nitpicking. I strive for an accurate understanding of the most important premises behind someone’s most important decisions, and address those. (As a side note, I find it very unsatisfying to engage with “steel-man” versions of my arguments, which rarely resemble my actual views.)
I try to bear in mind how limited my understanding of others’ views is. I believe it is often prohibitively difficult and time-consuming to communicate comprehensively about the reasoning behind one’s thinking. I often observe others who have extremely inaccurate pictures of my thinking but are quite confident in their analysis, and I don’t want to make that mistake. So I have a high bar for making judgments and assertions about the rationality, character, and values of people based on public discourse, and I generally confine my writing to topics that don’t rely on views about these things. More on this in the note at the bottom.
Especially when dealing with organizations, I restrict my definition of “important disagreements” to “beliefs underlying important actions I disagree with.” I think there is sometimes a practice in the effective altruist and rationalist communities of paying a lot of attention to inconsistencies, or to actions that seem knowably non-optimal, or to other disagreements, even when they don’t pertain to important disagreements on actions, on the grounds that such things demonstrate a lack of good epistemic standards or a lack of value alignment. I think this is misguided. As an organization leader, I am constantly making tradeoffs about when to think carefully about a dilemma and reach a great answer, vs. when to go with a hacky “middle ground” approach that is knowably non-optimal but also minimizes the risk of any particular disaster, vs. when to simply defer to others or stick with inertia and accept the risk of doing something clearly flawed. I strongly feel that one cannot get a read on the values and epistemology of an organization’s leadership by focusing on the decisions that seem simplest to analyze; one must focus on the decisions that are important and that one feels could have been done in a specific better way. Even then, one will often lack a great deal of context.
What to expect in terms of responses from Open Philanthropy
If you have questions or criticisms of Open Philanthropy and are hoping for a direct response, here are some general guidelines for what to expect:
If someone comments on the Open Philanthropy Blog (including one of our regular open threads, and even if the open thread is old) and asks a direct question, I or someone else from Open Philanthropy will answer it. (Tagging me on Facebook or mentioning Open Philanthropy in a forum comment does not have the same effect, at least not consistently. I feel I owe a response to people who specifically “approach” Open Philanthropy with a question, which includes people who email, people who comment on our blog, and people who come to our events; I don’t feel a similar obligation to people who express interest/curiosity in our views but are ultimately having their own discussion.)
If someone whose relationship I value specifically tells me they are curious about my answer to a question or criticism, and that they think it’s worth my time to engage, I generally do.
I generally address specific claims that seem crucial to an argument implying Open Philanthropy’s actions are suboptimal. I often do not respond at all to claims that seem tangential, or to vague expressions of disagreement/dissatisfaction that I can’t pin down to particular claims.
I have limited time for reading as well as writing. When someone writes a long critique of Open Philanthropy, I look to the summary to determine whether there’s anything worth addressing, then drill down to see the supporting arguments behind key points. When the summary is absent or ineffective, this usually means I will not respond in a satisfying way; I do not consider myself obligated to read long (>5pg) pieces just because they address Open Philanthropy.
I do not feel offended when people criticize us, and I also do not generally feel taxed by it (I often feel no obligation to respond, and when responding out of obligation, I often respond quite briefly). I lower my opinion of someone when I feel they are misrepresenting us (or others), but I generally do not lower my opinion of someone simply because they express criticism or disagreement. (I elaborate on this point in a note below, since people who reviewed a draft of this piece generally found this statement surprising.) As noted above, I believe that misrepresentations can wrongfully damage our reputation, but I do not worry about the reputational effects of criticism based on accurate representations. I don’t have a desire for people to criticize us less; I do have a desire for people to be more understanding about the fact that we sometimes do not respond at length or at all.
When people run things by me before posting them, I try to be helpful by correcting any inaccuracies I notice, though I often do not engage much beyond that, and will usually save substantive disagreements for public discourse (or decline to get into them at all).
I have a strong bias to engage people who seem like they are thinking hard, reasoning carefully, engaging respectfully, and doing their best to understand the public content that is already available, even if this doesn’t fit my other criteria.
A note on evaluating people based on public discourse
This section is tangential to the rest of the piece, but I include it as an elaboration on a couple of comments above that stem from a somewhat unusual attitude toward evaluating people.
I think it’s good and important to form views about people’s strengths, weaknesses, values and character. However, I am generally against forming negative views of people (on any of these dimensions) based on seemingly incorrect, poorly reasoned, or seemingly bad-values-driven public statements. When a public statement is not misleading or tangibly harmful, I generally am open to treating it as a positive update on the person making the statement, but not to treating it as worse news about them than if they had simply said nothing.
The basic reasons for this attitude are:
I think it is very easy to be wrong about the implications of someone’s public statement. It could be that their statement was poorly expressed, or aimed at another audience; that the reader is failing to understand subtleties of it; or that the statement is in fact wrong, but that it merely reflects that the person who made it hasn’t been sufficiently reflective or knowledgeable on the topic yet (and could become so later).
I think public discourse would be less costly and more productive for everyone if the attitude I take were more common. I think that one of the best ways to learn is to share one’s impressions, even (especially) when they might be badly wrong. I wish that public discourse could include more low-caution exploration, without the risks that currently come with such things.
I generally believe in evaluating people based on what they’ve accomplished and what they’ve had the opportunity to accomplish, plus any tangible harm (including misinformation) they’ve caused. I think this approach works well for identifying people who are promising and people whom I should steer clear of; I think other methods add little of value and mostly add noise.
I update negatively on people who mislead (including expressing great confidence while being wrong, and especially including avoidable mischaracterizations of others’ views); people who do tangible damage (usually by misleading); and people who create little of value despite large amounts of opportunity and time investment. But if someone is simply expressing a view and being open about their reasons for holding it, I try (largely successfully, I think) not to make any negative updates simply based on the substance.
Thanks for this! It’s mentioned in the post, and James and Fluttershy have made the point, but I just wanted to emphasise the benefits to others of Open Philanthropy continuing to engage in public discourse, especially as this article seems to focus mostly on the costs and benefits to Open Philanthropy itself (rather than to others) of engaging in public discourse.
The analogy of academia was used. One of the reasons academics publish is to get feedback, improve their reputation and to clarify their thinking. But another, perhaps more important, reason academics publish academic papers and popular articles is to spread knowledge.
As an organisation/individual becomes more expert and established, I agree that the benefits to itself decrease and the costs increase. But the benefit to others of their work increases. It might be argued that when one is starting out the benefits of public discourse go mostly to oneself, and when one is established the benefits go mostly to others.
So in Open Philanthropy’s case it seems clear that the benefits to itself (feedback, reputation, clarifying ideas) have decreased and the costs (time and risk) have increased. But the benefits to others of sharing knowledge have increased, as it has become more expert and better at communicating.
For example, speaking personally, I have found Open Philanthropy’s shallow investigations on Global Catastrophic Risks a very valuable resource in getting people up to speed – posts like Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity have also been very informative and useful. I’m sure people working on global poverty would agree.
Again, just wanted to emphasise that others get a lot of benefit from Open Philanthropy continuing to engage in public discourse (in the quantity and quality at which it does so now).
Agreed. OpenPhil has saved me months of time from having to duplicate work and re-research things myself. And often I would have come to lower quality conclusions.
Yes! The conversations and shallow reviews are the first place I start when researching a new area for EA purposes. They’ve saved me lots of time and blind alleys.
OpenPhil might not see these benefits directly themselves, but without information sharing individual EAs and EA orgs would keep re-researching the same topics over and over again and not be able to build on each other’s findings.
It may be possible to have information sharing through people’s networks but this becomes increasingly difficult as the EA network grows, and excludes competent people who might not know the right people to get information from.
Strong agreement. I’d like to add that the general reports on biorisk have also been very valuable personally, including the written-up conversations with experts.
Interesting post.
I wonder if it’d be useful to make a distinction between the “relatively small number of highly engaged, highly informed people” vs “insiders”.
I could easily imagine this causal chain:
1. Making your work open acts as an advertisement for your organization.
2. Some of the people who see the advertisement become highly engaged & highly informed about your work.
3. Some of the highly engaged & informed people form relationships with you beyond public discourse, making them “insiders”.
If this story is true, public discourse represents a critical first step in a pipeline that ends with the creation of new insiders.
I think this story is quite plausibly true. I’m not sure the EA movement would have ever come about without the existence of GiveWell. GiveWell’s publicly available research regarding where to give was a critical part of the story that sold people on the idea of effective altruism. And it seems like the growth of the EA movement led to growth in the number of insiders, whose opinions you say you value.
I can easily imagine a parallel universe “Closed Philanthropy Project” with the exact same giving philosophy, but no EA movement that grew up around it due to a lack of publicly available info about its grants. In fact, I wouldn’t be surprised if many foundations already had giving philosophies very much like OpenPhil’s, but we don’t hear about them because they don’t make their research public.
Source. Similarly, Robin Hanson thinks that a big advantage academics have over independent scholars is the use of open competitions rather than personal connections in choosing people to work with.
So, a power law distribution in commenter usefulness isn’t sufficient to show that openness lacks benefits.
As an aside, I hadn’t previously gotten a strong impression that OpenPhil’s openness was for the purpose of gathering feedback on your thinking. GiveWell was open with its research for the purpose of advising people where to donate. I guess now that you are partnering with Good Ventures, that is no longer a big goal. But if the purpose of your openness has changed from advising others to gathering advice yourself, this could probably be made more explicit.
For example, I can imagine OpenPhil publishing a list of research questions on its website for people in the EA community to spend time thinking & writing about. Or highlighting feedback that was especially useful, to reinforce the behavior of leaving feedback/give examples of the kind of feedback you want more of. Or something as simple as a little message at the bottom of every blog post saying you welcome high quality feedback and you continue to monitor for comments long after the blog post is published (if that is indeed true).
Maybe the reason you are mainly gathering feedback from insiders is simply that only insiders know enough about you to realize that you want feedback. I think it’s plausible that the average EA puts commenting on OpenPhil blog posts in the “time wasted on the internet” category, and it might not require a ton of effort to change that.
To relate back to the Y Combinator analogy, I would expect that Y Combinator gets many more high-quality applications through the form on its website than the average VC firm does, and this is because more people think that putting their info into the form on Y Combinator’s website is a good use of time. It would not be correct for a VC firm to look at the low quality of the applications they were getting through the form on their website and infer that a startup funding model based on an online form is surely unviable.
More broadly speaking this seems similar to just working to improve the state of online effective altruism discussion in general, which maybe isn’t a problem that OpenPhil feels well-positioned to tackle. But I do suspect there is relatively low-hanging fruit here.
I’d like to build on the causal chain point. I think there’s something unsatisfying about the way Holden’s set up the problem.
I took the general thought as: “we don’t get useful comments from the general public, we get useful comments from those few people who read lots of our stuff then talk to us privately”. But if the general way things work is that (1) people read the OPP blog (public) and then (2) talk to OPP privately (perhaps because they don’t believe anyone takes public discourse seriously), and doing (2) means they are no longer part of the general public, then almost by definition public discourse isn’t going to be useful: those motivated enough to engage in private correspondence are now not counted as part of public discourse!
Maybe I’ve misunderstood something, but it seems very plausible to me that the public discourse generates those useful private conversations even if the useful comments don’t happen on public forums themselves.
I’m also uncertain if the EA forum counts as public discourse Holden doesn’t expect to be useful, or private discourse which might be, which puts pressure on the general point. If you typify ‘public discourse’ as ‘talking to people who don’t know much’ then of course you wouldn’t expect it to be useful.
Michael, this post wasn’t arguing that there are no benefits to public discourse; it’s describing how my model has changed. I think the causal chain you describe is possible and has played out that way in some cases, but it seems to call for “sharing enough thinking to get potentially helpful people interested” rather than for “sharing thinking and addressing criticisms comprehensively (or anything close to it).”
The EA Forum counts for me as public discourse, and I see it as being useful in some ways, along the lines described in the post.
Hi John, thanks for the thoughts.
I agree with what you say about public discourse as an “advertisement” and “critical first step,” and allude to this somewhat in the post. And we plan to continue a level of participation of public discourse that seems appropriate for that goal—which is distinct from the level of public discourse that would make it feasible for readers to understand the full thinking behind the many decisions we make.
I don’t so much agree that there is a lot of low-hanging fruit to be had in terms of getting more potentially helpful criticism from the outside. We have published lists of questions and asked for help thinking about them (see this series from 2015 as well as this recent post; another recent example is the Worldview Diversification post, which ended with an explicit call for more ideas, somewhat along the lines you suggest). We do generally thank people for their input, make changes when warranted, and let people know when we’ve made changes (recent example from GiveWell).
And the issue isn’t that we’ve gotten no input, or that all the input we’ve gotten has been low-quality. I’ve seen and had many discussions about our work with many very sharp people, including via phone and in-person research discussions. I’ve found these discussions helpful in the sense of focusing my thoughts on the most controversial premises, understanding where others are coming from, etc. But I’ve become fairly convinced—through these discussions and through simply reflecting on what kind of feedback I would be giving groups like GiveWell and Open Phil, if I still worked in finance and only engaged with their work occasionally—that it’s unrealistic to expect many novel considerations to be raised by people without a great deal of context.
Even if there isn’t low-hanging fruit, there might still be “high-hanging fruit.” It’s possible that if we put enough effort and creative thinking in, we could find a way to get a dramatic increase in the quantity and quality of feedback via public discourse. But we don’t currently have any ideas for this that seem highly promising; my overall model of the world (as discussed in the previous paragraph) predicts that it would be very difficult; and the opportunity cost of such a project is higher than it used to be.
Great points! (An upvote wasn’t enough appreciation, hence the comment as well).
I’m skeptical. The trajectory you describe is common among a broad class of people as they age, grow in optimization power, and consider sharp course corrections less. They report a variety of stories about why this is so, so I’m skeptical of any particular story being causal.
To be clear, I also recognize the high cost of public discourse. But some of those costs are not necessary, and are borne only because EAs are pathologically scrupulous. As a result, letting people shit-talk various things without response causes more worry than is warranted. Naysayers are an unavoidable part of becoming a large optimization process.
There was a thread on Marginal Revolution many years ago about why more economists don’t do the blogging thing given that it seems to have resulted in outsize influence for GMU. Cowen said his impression was that many economists tried, quickly ‘made fools of themselves’ in some minor way, and stopped. Being wrong publicly is very very difficult. And increasingly difficult the more Ra energy one has acquired.
So, three claims.
1. Outside view says we should be skeptical of our stories about why we do things, even after we try to correct for this.
2. Inability to only selectively engage with criticism will lead to other problems/coping strategies that might be harmful.
3. Carefully shepherding the optimization power one has already acquired is a recipe for slow calcification along hard to detect dimensions. The principles section is an outline of a potential future straightjacket.
I don’t find the view that publishing a lot of internal thinking for public consumption and feedback is a poor use of time to be implausible on its face. Here are some reasons:
By the time you know enough to write really useful things, your opportunity cost is high (more and better grants, coaching staff internally, etc).
Thoughtful and informative content tends to get very little traffic anyway because it doesn’t generate controversy. Most traffic will go to your most dubious work, thereby wasting your time, other people’s time and spreading misinformation. I’ve benefitted greatly from GiveWell/OpenPhil investing in public communication (including this blog post for example) but I think I’m in a small minority that arguably shouldn’t be their main focus given the amount of money they have available for granting. If there are a few relevant decision-makers who would benefit from a piece of information, you can just quickly email it to them and they’ll understand it without you having to explain things in great detail.
The people with expertise who provide the most useful feedback will email you or meet you eventually anyway—and often end up being hired. I’d say 80% of the usefulness of feedback/learning I’ve received has come from 5% of providers, who can be identified as the most informed critics pretty quickly.
‘Transparency’ and ‘engaging with negative public feedback’ are applause lights in egalitarian species and societies, like ‘public parks’, ‘community’ and ‘families’. No one wants to argue against these things, so people who aren’t in senior positions remain unaware of their legitimate downsides. And many people enjoy tearing down those they believe to be powerful and successful for the sake of enforced egalitarianism, rather than positive outcomes per se.
The personal desire for attention, and to be adulated as smart and insightful, already pushes people towards public engagement even when it’s an inferior use of time.
This isn’t to say overall people share too much of the industry expertise they have—there are plenty of forces in the opposite direction—but I don’t come with a strong presupposition that they share far too little either.
sharing more things of dubious usefulness is what I advocate.
I am not advocating transparency as their main focus. I am advocating skepticism towards things that the outside view says everyone in your reference class (foundations) does specifically because I think if your methods are highly correlated with others you can’t expect to outperform them by much.
I think it is easy to underestimate the effect of the long tail. See Chalmers’ comment on the value of the LW and EA communities in his recent AMA.
I also don’t care about optimizing for this, and I recognize that if you ask people to be more public, they will optimize for this because humans. Thinking more about this seems valuable. I think of it as a significant bottleneck.
Disagree. Closed is the default for any dimension that relates to actual decision criteria. People push their public discourse into dimensions that don’t affect decision criteria because [Insert Robin Hanson analysis here].
I’m not advocating a sea change in policy, but an increase in skepticism at the margin.
Notably, it’s easy for me to imagine that people who work at foundations outside the EA community spend time reading OpenPhil’s work and the discussion of it in deciding what grants to make. (This is something that could be happening without us being aware of it. As Holden says, transparency has major downsides. OpenPhil is also running a risk by associating its brand with a movement full of young contrarians it has no formal control over. Your average opaquely-run foundation has little incentive to let the world know if discussions happening in the EA community are an input into their grant-making process.)
Thanks for the thoughts!
I’m not sure I fully understand what you’re advocating. You talk about “only selectively engag[ing] with criticism” but I’m not sure whether you are in favor of it or against it. FWIW, this post is largely meant to help understand why I only selectively engage with criticism.
I agree that “we should be skeptical of our stories about why we do things, even after we try to correct for this.” I’m not sure that the reasons I’ve given are the true ones, but they are my best guess. I note that the reasons I give here aren’t necessarily very different from the reasons others making similar transitions would give privately.
I also agree that there is a significant risk that my views will calcify. I worry about this a fair amount, and I am interested in potential solutions, but at this point I believe that public discourse is not promising as a potential solution, for reasons outlined above. I think there is a bit of a false dichotomy between “engage in public discourse” and “let one’s views calcify”; unfortunately I think the former does little to prevent the latter.
I don’t understand the claim that “The principles section is an outline of a potential future straightjacket.” Which of the principles in that section do you have in mind?
Whoops, I somehow didn’t see this until now. Scattered EA discourse, shrug.
I am in support of only engaging selectively.
great!
agreed
the whole thing. Principles are better as descriptions and not prescriptions :)
WRT preventing views from calcifying, I think it is very very important to actively cultivate something similar to
I’ve been researching top and breakout performance and this sort of thing keeps coming up again and again. Fortunately, creative reasoning is not magic. It has been studied and has some parameters that can be intentionally inculcated.
This talk gives a brief overview: https://vimeo.com/89936101
And I recommend skimming one of Edward de Bono’s books, such as Six Thinking Hats. He outlined much of the sort of reasoning of 0 to 1, the Lean Startup, and others way back in the early nineties. It may be that OpenPhil is already having such conversations internally. In which case, great! That would make me much more bullish on the idea that OpenPhil has a chance at outsize impact. My main proxy metric is an Umeshism: if you never output any batshit crazy ideas, your process is way too conservative.
The principles were meant as descriptions, not prescriptions.
I’m quite sympathetic to the idea expressed by your Herbert Simon quote. This is part of what I was getting at when I stated: “I think that one of the best ways to learn is to share one’s impressions, even (especially) when they might be badly wrong. I wish that public discourse could include more low-caution exploration, without the risks that currently come with such things.” But because the risks are what they are, I’ve concluded that public discourse is currently the wrong venue for this sort of thing, and it indeed makes more sense in the context of more private discussions. I suspect many others have reached a similar conclusion; I think it would be a mistake to infer someone’s attitude toward low-stakes brainstorming from their public communications.
Most people wear their hearts on their sleeve to a greater degree than they might realize. Public conservatism of discourse seems a pretty reasonable proxy measure of private conservatism of discourse in most cases. As I mentioned, I am very happy to hear evidence this is not the case for OpenPhil.
I do not think the model of creativity as a deliberate, trainable set of practices is widely known, so I go out of my way to bring it up WRT projects that are important.
+1, excellent comment!
This is my concern (which is not to say it’s Open Phil’s responsibility to solve it).
Agreed that this is a grave concern that worries me a lot.
Where do you feel that the responsibility for solving it lies?
Thanks Holden. This seems reasonable.
A high impact foundation recently (and helpfully) sent me their grant writeups, which are a treasure trove of useful information. I asked them if I could post them here and was (perhaps naively) surprised that they declined.
They made many of the same points as you re: the limited usefulness of broad feedback, potential reputation damage, and (given their small staff size) cost of responding. Instead, they share their writeups with a select group of likeminded foundations.
I still think it would be much better if they made their writeups public, but almost entirely because it would be useful for the reader.
It’s a shame that the expectation of responding to criticism can disincentivise communication in the first place.
(Views my own, not my employer’s)
Thanks for the comments, everyone!
I appreciate the kind words about the quality and usefulness of our content. To be clear, we still have a strong preference to share content publicly when it seems it would be useful and when we don’t see significant downsides. And generally, the content that seems most likely to be helpful has fairly limited overlap with the content that poses the biggest risks.
I have responded to questions and criticisms on the appropriate threads.
Thank you for the illuminative post, Holden. I appreciate you taking the time to write this, despite your admittedly busy schedule. I found much to disagree with in the approach you champion in the post, which I attempt to articulate below.
In brief: (1) Frustrating vagueness and seas of generality in your current post and recent posts, (2) Overstated connotations of expertise with regards to transparency and openness, (3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative, (4) Repeatedly shifting the locus of blame to external critics rather than owning up to responsibility.
I’ll post each point as a reply comment to this since the overall comment exceeds the length limits for a comment.
(1) Frustrating vagueness and seas of generality: This post, as well as many other posts you have recently written (such as http://www.openphilanthropy.org/blog/radical-empathy , http://www.openphilanthropy.org/blog/worldview-diversification , http://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing , http://blog.givewell.org/2016/12/22/front-loading-personal-giving-year/) struck me as fairly vague. Even posts where you were trying to be concrete (e.g., http://www.openphilanthropy.org/blog/three-key-issues-ive-changed-my-mind-about , http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity) were really hard for me to parse and get a grip on your precise arguments.
I didn’t really reflect on this much with the previous posts, but reading your current post sheds some light: the vagueness is not a bug, from your perspective, it’s a corollary of trying to make your content really hard for people to take issue with. And I think therein lies the problem. I think of specificity, falsifiability, and concreteness as keys to furthering discourse and helping actually converge on key truths and correcting error. By glorifying the rejection of these virtues, I think your writing does a disservice to public discourse.
For a point of contrast, here are some posts from GiveWell and Open Phil that I feel were sufficiently specific that they added value to a conversation: http://blog.givewell.org/2016/12/06/why-i-mostly-believe-in-worms/ , http://blog.givewell.org/2017/01/04/how-thin-the-reed-generalizing-from-worms-at-work/ , http://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms , http://blog.givewell.org/2016/12/12/amf-population-ethics/ -- notice how most of these posts make a large number of very concrete claims and highlight their opposition to very specific other parties, which makes them targets of criticism and insult, but really helps delineate an issue and pushes conversations forward. I’m interested in seeing more of this sort of stuff and less of overly cautious diplomatic posts like yours.
One point to add: the frustratingly vague posts tend to get FEWER comments than the specific, concrete posts.
From my list, the posts I identified as clearly vague:
http://www.openphilanthropy.org/blog/radical-empathy got 1 comment (a question that hasn’t been answered)
http://www.openphilanthropy.org/blog/worldview-diversification got 1 comment (a single sentence praising the post)
http://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing got 6 comments
http://blog.givewell.org/2016/12/22/front-loading-personal-giving-year/ got 8 comments
In contrast, the posts I identified as sufficiently specific (even though they tended on the fairly technical side)
http://blog.givewell.org/2016/12/06/why-i-mostly-believe-in-worms/ got 17 comments
http://blog.givewell.org/2017/01/04/how-thin-the-reed-generalizing-from-worms-at-work/ got 14 comments
http://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms got 27 comments
http://blog.givewell.org/2016/12/12/amf-population-ethics/ got 7 comments
If engagement is any indication, then people really thirst for specific, concrete content. But that’s not necessarily in contradiction with Holden’s point, since his goal isn’t to generate engagement. In fact, comment engagement can even be viewed negatively in his framework, because it means more effort is necessary to respond to and keep up with comments.
Just my rough impression, but I find that controversial or flawed posts get comments, whereas posts that make a solid, concrete, well-argued point tend to not generate much discussion. So I don’t think this is a good measure for the value of the post to the community.
Thinking about what to call this phenomenon, because it seems like an important aspect of discourse: namely, making no claims but only distinctions, which generates no arguments. Superintelligence had this distinct flavor, I think intentionally, to create a framework within which to have a dialog absent the usual contentious claims. This was good for that particular use case, but I think that deployed indiscriminately it leads to a kind of big-tent approach inimical to real progress.
I think potentially it is the right thing for OpenPhil to currently be doing since they are first trying to figure out how the world actually is with pilot grants and research methodology testing etc. Good to not let it infect your epistemology permanently though. Suggested counter force: internal non-public betting market.
Or taxonomies. Hence: The Taxoplasma of Ra.
(Sorry, I should post this in DEAM, not here. I don’t even understand this Ra thing.)
But I really like this concept!
Thanks for the thoughts, Vipul! Responses follow.
(1) I’m sorry to hear that you’ve found my writing too vague. There is always a tradeoff between time spent, breadth of issues covered, and detail/precision. The posts you hold up as more precise are on narrower topics; the posts you say are too vague are attempts to summarize/distill views I have (or changes of opinions I’ve had) that stem from a lot of different premises, many hard to articulate, but that are important enough that I’ve tried to give people an idea of what I’m thinking. In many cases their aim is to give people an idea of what factors we are and aren’t weighing, and to help people locate beliefs of ours they disagree (or might disagree) with, rather than to provide everything needed to evaluate our decisions (which I don’t consider feasible).
While I concede that these posts have had limited precision, I strongly disagree with this: “the vagueness is not a bug, from your perspective, it’s a corollary of trying to make your content really hard for people to take issue with.” That is not my intention. The primary goal of these posts has been to help people understand where I’m coming from and where the most likely points of disagreement are likely to lie. Perhaps they failed at this (I suspect different readers feel differently about this), but that was what they were aiming to do, and if I hadn’t thought they could do that, I wouldn’t have written them.
(2) I agree with all of your thoughts here except for the way you’ve characterized my comments. Is there a part of this essay that you thought was making a universal claim about transparency, as opposed to a claim about my own experience with it and how it has affected my own behavior and principles? The quote you provide does not seem to point this way.
(3) My definition of “public discourse” does not exclude benefits that come from fundraising/advocacy/promotion. It simply defines “public discourse” as writing whose focus is on truth-seeking rather than those things. This post, and any Open Phil blog post, would count as “public discourse” by my definition, and any fundraising benefits of these posts would count as benefits of public discourse.
I also did not claim that the reputational effects of openness are skewed negative. I believe that the reputational effects of our public discourse have been net positive. I believe that the reputational effects of less careful public discourse would be skewed negative, and that has implications for how time-consuming it is for us to engage, which in turn has implications for how much we engage.
(4) We have incurred few costs from public discourse, but we are trying to avoid risks that we perceive. As for “who gets the blame,” I didn’t intend to cover that topic one way or the other in this post. The intent of the post was to help people understand how and why my attitude toward public discourse has changed and what to expect from me in the future.
(2) Overstated connotations of expertise with respect to the value of transparency and openness:
“Regardless of the underlying reasons, we have put a lot of effort over a long period of time into public discourse, and have reaped very little of this particular kind of benefit (though we have reaped other benefits—more below). I’m aware that this claim may strike some as unlikely and/or disappointing, but it is my lived experience, and I think at this point it would be hard to argue that it is simply explained by a lack of effort or interest in public discourse.”
Your writing makes it appear as though you’ve left no stone unturned in trying every approach to transparency, and confirmed that the masses are wanting. But digging into the facts supports a much weaker conclusion: for the particular approach that GiveWell used and the particular kind of content that GiveWell shared, the people who responded in ways that made sense to you and were useful to you were restricted to a narrow pool. There is no good reason offered for why these findings would generalize to domains or expository approaches beyond the ones you’ve narrowly tried at GiveWell.
This doesn’t mean GiveWell or Open Phil is obligated to try new approaches—but it does suggest more humility in making claims about the broader value of transparency and openness.
There is a wealth of ways that people seek to make their work transparent. Public projects on GitHub make details about both their code evolution and contributor list available by default, without putting in any specific effort into it, because of the way the system is designed. This pays off to different extents for different kinds of projects; in some cases, there are a lot of issue reports and bugfixes from random strangers, in many others, nobody except the core contributors cares. In some, malicious folks find vulnerabilities in the code because it’s so open. If you ran a few projects on GitHub and observed something about how frequently strangers make valuable commits or file bug reports, it would not behoove you to then use that information to make broad claims about the value of putting projects on GitHub. Well, you seem to be doing the same based on a couple of things you ran (GiveWell, Open Phil).
Transparency/semi-transparency/openness is a complex subject, and a lot of its value comes from a wide variety of downstream effects that differentially apply in different contexts. Just a few of the considerations:
precommitment (which gives more meaning to transparency, think research preregistration);
transparent-by-definition processes and workflows (think tools like git on GitHub, or automatically and transparently updated accounts ledgers such as those on blockchains);
computability and pluggability (stuff that is in a computable format and can therefore be plugged into other datasets or analyses with minimal effort by others, e.g., the Open Philanthropy grants database and the International Aid Transparency Initiative, both of which were used by Issa in collating summary information about grant trends and patterns, and donation logs, which I used to power the donations lists at https://donations.vipulnaik.com/);
integrity and consistency forced by transparency (basically your data has to check out if you are making it transparently available; e.g., when I moved all my contract work payments to https://contractwork.vipulnaik.com/ , I had to make sure the entire payment system was consistent);
etc.
It seems like, at GiveWell, many of the key parts of transparency (precommitment, transparent-by-definition processes and workflows, computability and pluggability, integrity and consistency) are in minimal use. Given this rather abridged use case of transparency (which could be great for you), it really doesn’t make sense to argue broadly about the value of being transparent.
Here is what I’d consider a better way to frame this:
“At GiveWell, we made some of our reasoning and the output of our work transparent, and reaped a variety of benefits. However, we did not get widespread engagement from the general public for our work. Getting engagement from the general public was something we wanted and hoped to achieve but not the main focus of our work. We couldn’t figure out the right strategy for doing it, and have deprioritized it. I hope that others can benefit from what worked and didn’t work in our efforts to engage the public with our research, and come up with better strategies to engender public engagement. I should be clear that I am not making any broader claims about the value of transparency in contexts beyond ours.”
(3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative.
“By “public discourse,” I mean communications that are available to the public and that are primarily aimed at clearly describing one’s thinking, exploring differences with others, etc. with a focus on truth-seeking rather than on fundraising, advocacy, promotion, etc.”
If you exclude from public discourse anything aimed at fundraising, advocacy, and promotion, then you are essentially stacking the deck against public discourse: the benefits that accrue under those headings no longer count, while the reputational risks and time sinks still do.
Here’s an alternate perspective. Any public statement should be thought of both in terms of the object-level points it makes (the information it directly provides, or what it is trying to convince people of) and in terms of how it affects the status and reputation of the person or organization making the statement, and/or their broader goals. For instance, when I wrote http://effective-altruism.com/ea/15o/effective_altruism_forum_web_traffic_from_google/ my direct goal was to provide information about web traffic to the Effective Altruism Forum and what the patterns tell us about effective altruism movement growth, but an indirect goal was to highlight the value of data-driven analytics, and in particular website analytics, something I’ve championed in the past. Whether you choose to label the public statement “fundraising”, “advocacy”, or something else is somewhat beside the point.
(4) Repeatedly shifting the locus of blame to external critics rather than owning up to responsibility: You keep alluding to the costs of publishing your work more clearly, yet you give no examples of how such costs have negatively affected Open Phil, or of the specific monetary, emotional, or other damages you have incurred (this is related to (1), where I criticize your vagueness). This vagueness makes your claims about the risks of openness frustrating to evaluate in your case.
As a more general claim about being public, though, this strikes me as misguided. The main obstacle to writing things up for the public is simply that writing takes a lot of time, and that is mostly a limitation on the part of the writer: the writer does not have a clear picture of what he or she wants to say, does not know how to convey the idea clearly, or lacks the time and resources to put things together. Failure to do this is a failure on the part of the writer. Blaming readers for continually misinterpreting one’s writing, or for carrying out witch hunts, is simply failing to take responsibility.
A more humble framing would highlight this fact, and some of its difficult implications, e.g.: “As somebody in charge of a foundation that is spending ~$100 million a year and recommending tens of millions of dollars in donations by others, I need to be very clear in my thinking and reasoning. Unfortunately, I have found that it’s often easier and cheaper to spend millions of dollars in grants than to write up a clear public-facing document on the reasons for doing so. I’m very committed to writing publicly where possible (and you can see evidence of this in all the grant writeups for Open Phil and the detailed charity evaluations for GiveWell). However, there are many cases where writing up my reasoning is more daunting than signing off on millions of dollars in grants. I hope that we are able to figure out better approaches to reducing the costs of writing things up.”
I believe you when you say that you don’t benefit much from feedback from people not already deeply engaged with your work.
There’s something striking to me about the manner in which you’ve publicly engaged with the EA community through writing recently. You mention that you put a lot of care into your writing, and what stands out most to me is that I can’t find anything you’ve written here that anyone interested in engaging with you might feel threatened or put down by. This might sound like faint praise, but it really isn’t meant to be; writing in such a way is actually fairly resource intensive, in terms of both time and something roughly like mental energy.
(I find it’s generally easier to develop a felt sense for when someone else is paying sufficient attention to conversational nuances regarding civility than it is to point out specific examples, but your discussion of how you feel about receiving criticism is a good example of this sort of civility).
As you and James mention, public writeups can be valuable to readers, and I think this is very much the case.
I’d also say that, just as importantly, writing this kind of well-thought-out post that follows healthy and civil conversational norms creates value from a leadership/coordination point of view. Leadership in the sense of teaching skills and knowledge is important too, but I’m used to thinking of that as separate from leadership in the sense of exemplifying civility and openness to sharing information. If it were more common for people and foundations to write frequently and openly, and to communicate with empathy toward their audiences when they did, I think the world would be the better for it. You and other senior Open Phil and GiveWell staff are very much respected in our community, and I think it’s wonderful when people in that position are willing to set a positive example for others.
(Apologies if I’ve conflated civility with openness to sharing information; these behaviors feel quite similar to me on a gut level, possibly because they both take some effort to do, but also nudge social norms in the right direction while helping the audience.)
This was a great read, and just the kind of post I have been waiting for! I think almost all of the principles are helpful to keep in mind for anyone engaging in this kind of public discourse. In my opinion it is very important to raise the quality of communication, and thus the quality of knowledge, across EA folks, both current and future, both internally and toward the general public; posts like this seem to help with that.
Of course, I might slightly overvalue this kind of discussion, since I don’t know much about the demographics of the EA community, and I might be somewhat similar to the earlier self you mentioned in the post, frustrated at not being able to learn and get up to speed on the topics I care about. I don’t know whether there is demand for this kind of information in the next generation, although it might be interesting and somewhat relevant to find out. Does anyone have a sense of that?
Either way, thank you very much for taking the time to make these thoughts public; I, and it seems many others, really appreciate it!
The following is entirely a “local” criticism: It responds only to a single statement you made, and has essentially no effect on the validity of the rest of what you say.
I found this statement surprising, because it seems to me that this practice has a high cost. It increases the effort it takes to make a criticism, and raising that cost can also make you less likely to consider making one in the first place. There is also a fixed cost in making this into a habit.
Given the situation you describe in the rest of your post, and specifically that you put a lot of effort into your comments in any case, I can see this practice working well for you. However, that does not mean there is no case against it, especially for people who aren’t public figures.