Sharing more things of dubious usefulness is what I advocate.
I am not advocating transparency as their main focus. I am advocating skepticism towards things that the outside view says everyone in your reference class (foundations) does, specifically because I think that if your methods are highly correlated with everyone else's, you can't expect to outperform them by much.
I think it is easy to underestimate the effect of the long tail. See Chalmers’ comment on the value of the LW and EA communities in his recent AMA.
I also don’t care about optimizing for this, and I recognize that if you ask people to be more public, they will optimize for it anyway, because humans. Thinking more about this seems valuable; I think of it as a significant bottleneck.
Disagree. Closed is the default for any dimension that relates to actual decision criteria. People push their public discourse into dimensions that don’t affect decision criteria because [Insert Robin Hanson analysis here].
I’m not advocating a sea change in policy, but an increase in skepticism at the margin.
Notably, it’s easy for me to imagine that people who work at foundations outside the EA community spend time reading OpenPhil’s work and the discussion of it in deciding what grants to make. (This is something that could be happening without us being aware of it. As Holden says, transparency has major downsides. OpenPhil is also running a risk by associating its brand with a movement full of young contrarians it has no formal control over. Your average opaquely-run foundation has little incentive to let the world know if discussions happening in the EA community are an input into their grant-making process.)