I’m skeptical. The trajectory you describe is common among a broad class of people as they age, grow in optimization power, and consider sharp course corrections less. They report a variety of stories about why this is so, so I’m skeptical of any particular story being causal.
To be clear, I also recognize the high cost of public discourse. But some of those costs are unnecessary, borne only because EAs are pathologically scrupulous. As a result, letting people shit-talk various things without response causes more worry than is warranted. Naysayers are an unavoidable part of becoming a large optimization process.
There was a thread on Marginal Revolution many years ago about why more economists don’t do the blogging thing given that it seems to have resulted in outsize influence for GMU. Cowen said his impression was that many economists tried, quickly ‘made fools of themselves’ in some minor way, and stopped. Being wrong publicly is very very difficult. And increasingly difficult the more Ra energy one has acquired.
So, three claims.
Outside view says we should be skeptical of our stories about why we do things, even after we try to correct for this.
Inability to only selectively engage with criticism will lead to other problems/coping strategies that might be harmful.
Carefully shepherding the optimization power one has already acquired is a recipe for slow calcification along hard-to-detect dimensions. The principles section is an outline of a potential future straightjacket.
I don’t find the view that publishing a lot of internal thinking for public consumption and feedback is a poor use of time to be implausible on its face. Here are some reasons:
By the time you know enough to write really useful things, your opportunity cost is high (more and better grants, coaching staff internally, etc).
Thoughtful and informative content tends to get very little traffic anyway because it doesn’t generate controversy. Most traffic will go to your most dubious work, thereby wasting your time and other people’s time, and spreading misinformation. I’ve benefitted greatly from GiveWell/OpenPhil investing in public communication (including this blog post, for example), but I think I’m in a small minority that arguably shouldn’t be their main focus given the amount of money they have available for granting. If there are a few relevant decision-makers who would benefit from a piece of information, you can just quickly email it to them, and they’ll understand it without you having to explain things in great detail.
The people with expertise who provide the most useful feedback will email you or meet you eventually anyway—and often end up being hired. I’d say 80% of the usefulness of feedback/learning I’ve received has come from 5% of providers, who can be identified as the most informed critics pretty quickly.
‘Transparency’ and ‘engaging with negative public feedback’ are applause lights in egalitarian species and societies, like ‘public parks’, ‘community’ and ‘families’. No one wants to argue against these things, so people who aren’t in senior positions remain unaware of their legitimate downsides. And many people enjoy tearing down those they believe to be powerful and successful for the sake of enforced egalitarianism, rather than positive outcomes per se.
The personal desire for attention, and to be adulated as smart and insightful, already pushes people towards public engagement even when it’s an inferior use of time.
This isn’t to say overall people share too much of the industry expertise they have—there are plenty of forces in the opposite direction—but I don’t come with a strong presupposition that they share far too little either.
sharing more things of dubious usefulness is what I advocate.
I am not advocating transparency as their main focus. I am advocating skepticism towards things that the outside view says everyone in your reference class (foundations) does, specifically because I think that if your methods are highly correlated with others’, you can’t expect to outperform them by much.
I think it is easy to underestimate the effect of the long tail. See Chalmers’ comment on the value of the LW and EA communities in his recent AMA.
I also don’t care about optimizing for this, and I recognize that if you ask people to be more public, they will optimize for this because humans. Thinking more about this seems valuable. I think of it as a significant bottleneck.
Disagree. Closed is the default for any dimension that relates to actual decision criteria. People push their public discourse into dimensions that don’t affect decision criteria because [Insert Robin Hanson analysis here].
I’m not advocating a sea change in policy, but an increase in skepticism at the margin.
Notably, it’s easy for me to imagine that people who work at foundations outside the EA community spend time reading OpenPhil’s work and the discussion of it in deciding what grants to make. (This is something that could be happening without us being aware of it. As Holden says, transparency has major downsides. OpenPhil is also running a risk by associating its brand with a movement full of young contrarians it has no formal control over. Your average opaquely-run foundation has little incentive to let the world know if discussions happening in the EA community are an input into their grant-making process.)
Thanks for the thoughts!
I’m not sure I fully understand what you’re advocating. You talk about “only selectively engag[ing] with criticism” but I’m not sure whether you are in favor of it or against it. FWIW, this post is largely meant to help understand why I only selectively engage with criticism.
I agree that “we should be skeptical of our stories about why we do things, even after we try to correct for this.” I’m not sure that the reasons I’ve given are the true ones, but they are my best guess. I note that the reasons I give here aren’t necessarily very different from the reasons others making similar transitions would give privately.
I also agree that there is a significant risk that my views will calcify. I worry about this a fair amount, and I am interested in potential solutions, but at this point I believe that public discourse is not promising as a potential solution, for reasons outlined above. I think there is a bit of a false dichotomy between “engage in public discourse” and “let one’s views calcify”; unfortunately I think the former does little to prevent the latter.
I don’t understand the claim that “The principles section is an outline of a potential future straightjacket.” Which of the principles in that section do you have in mind?
Whoops, I somehow didn’t see this until now. Scattered EA discourse, shrug.
I am in support of only engaging selectively.
I also agree that there is a significant risk that my views will calcify. I worry about this a fair amount, and I am interested in potential solutions,
great!
I think there is a bit of a false dichotomy between “engage in public discourse” and “let one’s views calcify”; unfortunately I think the former does little to prevent the latter.
agreed
I don’t understand the claim that “The principles section is an outline of a potential future straightjacket.” Which of the principles in that section do you have in mind?
the whole thing. Principles are better as descriptions and not prescriptions :)
WRT preventing views from calcifying, I think it is very very important to actively cultivate something similar to
“But we ran those conversations with the explicit rule that one could talk nonsensically and vaguely, but without criticism unless you intended to talk accurately and sensibly. We could try out ideas that were half-baked or quarter-baked or not baked at all, and just talk and listen and try them again.” -Herbert Simon, Nobel Laureate, founding father of the AI field
I’ve been researching top and breakout performance, and this sort of thing keeps coming up again and again. Fortunately, creative reasoning is not magic. It has been studied and has some parameters that can be intentionally inculcated.
This talk gives a brief overview: https://vimeo.com/89936101
And I recommend skimming one of Edward de Bono’s books, such as Six Thinking Hats. He outlined much of the sort of reasoning found in Zero to One, The Lean Startup, and others decades before them. It may be that OpenPhil is already having such conversations internally. In which case, great! That would make me much more bullish on the idea that OpenPhil has a chance at outsize impact. My main proxy metric is an Umeshism: if you never output any batshit crazy ideas, your process is way too conservative.
The principles were meant as descriptions, not prescriptions.
I’m quite sympathetic to the idea expressed by your Herbert Simon quote. This is part of what I was getting at when I stated: “I think that one of the best ways to learn is to share one’s impressions, even (especially) when they might be badly wrong. I wish that public discourse could include more low-caution exploration, without the risks that currently come with such things.” But because the risks are what they are, I’ve concluded that public discourse is currently the wrong venue for this sort of thing, and it indeed makes more sense in the context of more private discussions. I suspect many others have reached a similar conclusion; I think it would be a mistake to infer someone’s attitude toward low-stakes brainstorming from their public communications.
Most people wear their hearts on their sleeve to a greater degree than they might realize. Public conservatism of discourse seems a pretty reasonable proxy measure of private conservatism of discourse in most cases. As I mentioned, I am very happy to hear evidence this is not the case for OpenPhil.
I do not think the model of creativity as a deliberate, trainable set of practices is widely known, so I go out of my way to bring it up WRT projects that are important.
+1, excellent comment!