I think if you think this is an ineffective use of limited resources, you absolutely should feel entitled to critique it! In many ways this is what our movement is about! …
Strongly disagree here—despite liking Linch and respecting his work, I think this mindset is actively harmful, and needs to be pointed out and pushed back against.
The movement is about inspiring people to investigate what is or will be effective at improving the world in an impartial welfarist sense, and then to actively invest personally, financially, and professionally in making that happen. Attacking people for doing something you think is sub-optimal seems completely unrelated—and is often detrimental. I keep seeing the assumption that “someone is involved in EA” implies “I should criticize them if they aren’t doing what I think is optimally good.” That’s both horrible as epistemics, and a recipe for a really dysfunctional community—and it needs to stop.
I downvoted this, mainly for the last sentence (specifically “it needs to stop”), though I quite strongly disagree with the rest as well.
I’m surprised, but happy you engaged—I think that it’s reasonable to disagree, and I’d love to understand more about why. However, I don’t understand the mindset, which you seem to support, that says “it’s fine to tell people you barely know that they are acting sub-optimally, even if plausibly positive” but that it’s wrong, per your downvote, to do what I did, and tell someone to stop doing something that I think is bad for the social dynamics in EA. (And note that I have said very similar things, publicly, more than once, before now.)
I am not saying not to discuss the decision, and whether it was optimal—I am saying that trying to address “the actual decision makers” is not a good norm. Of course, if your position was that it is inappropriate to publicly criticize others generally, it would make sense to agree with my main point, and still downvote me for publicly telling Linch ‘I think this is bad,’ and potentially you could tell me that privately. (But I don’t think that would have been helpful here, especially because a number of people seem to have the same opinion, given the number of disagree votes.)
My point, however, was that criticizing people because they did something less than optimally good is generally unacceptable unless you know them well. In my view, telling people not to do something harmful has a lower bar, and maybe that’s people’s criticism here. (But that’s not how I read most of the discussion—and it definitely wasn’t what Linch’s comment said was “what [EA] is all about”.) I do think I know Linch well enough that he would be OK with me criticizing things he’s doing, though I would likely have done so in private in other circumstances. However, in my view, publicly criticizing people you don’t know for ineffective but plausibly positive things, or worse, what I saw here, encouraging the community generally to publicly criticize specific people for such things, is very harmful, and, as I said, I think it needs to stop.
I think a culture of critique and debate, where people are expected to argue for their resource allocation decisions (or at least major ones involving large amounts of resources), is core to what I see as making EA a promising approach to improving the world. For various reasons, I also prefer as many of these conversations as possible to happen publicly. I'm much less excited about a version of EA where all the important conversations happen in private, and publicly everyone is nice and deferential and stays quiet when they see large amounts of resources being spent in ways they think are ineffective or problematic.
Donating large amounts of money to build a nice theatre is probably mildly good for the world, but if someone was spending EA money to do this I’d absolutely want to see public pushback and critique, and in the absence of that my default assumption would be that the culture would decay to ~uselessness over time.
(I also think it’s generally a mistake to draw a strong qualitative distinction between “harmful” and “suboptimally good”, here and elsewhere. Strong omission/commission distinctions are usually a mistake, and what ultimately matters in both cases is the value of what you did relative to the alternatives.)
In terms of why I downvoted rather than just disagreevoting, I think the comment was phrased as an explicit attempt at moral policing/shaming (“horrible”, “dysfunctional”, “needs to stop”). I would like to see less of this on the Forum, especially given that I think the position being enforced would be bad for the community and the world.
I think that proposing impact models for an intervention someone is considering, and discussing the values of the variables and the structure, is great. That isn't what we're discussing here; this has basically just been social shaming, talking about and playing at level three. Even aside from that, the correct audience for discussions of impact is the people who are considering giving. That means that when GiveWell publishes recommendations, it is suggesting everyone give money, and public criticism is absolutely warranted. Post-hoc "lessons learned" write-ups by uninvolved people seem less defensible, but even those require at least considering the value proposition and proposing what you think is wrong. What happened here was none of that.
I also think that policing optimality (not drawing “a strong qualitative distinction between ‘harmful’ and ‘suboptimally good’”) is even worse than an optimizing mindset, which itself is a problem, as I argued there.
At this point, it might be helpful if you pointed to some specific things you think Linch was endorsing that you think “need to stop”. It sounds here like you have some specific examples in mind, and it’s unclear how much I/you/Linch would have different opinions about those specific cases.
I continue to disagree with your general claims, which seem to point towards a (strong form of a) “our giving is our business” attitude that I think runs counter to building an effective and epistemically healthy EA community, especially once we’re at the scale of £15m gifts.
Regarding optimality, while I disagree with a lot of the pushback against optimising mindset I’ve seen recently, I think focusing on this is something of a red herring in this context; Linch’s original claim that you contested was that we should “feel entitled to critique” “ineffective use of limited resources”. Weakening the goal from finding the optimal thing to merely finding exceptionally good things doesn’t have much bearing on that claim IMO—there will still be many uses of money that fall far short of that bar, and deploying large amounts of resources on those things should result in criticism.
(I also still think “was this harmful or not” is not a particularly useful heuristic in cases close to the zero line, and I don’t think we should draw much of a distinction between “slightly harmful in expectation” and “slightly good in expectation”, as long as both are much worse than other counterfactual options. This claim also survives a weakening of the EA goal away from strict optimisation.)
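To illustrate why, here is a minimal sketch with entirely made-up expected values (the numbers are mine, purely for illustration, in arbitrary "impact units"):

```python
# Illustrative only: made-up expected values for three uses of the same money.
options = {
    "venue purchase (pessimistic)": -0.5,   # slightly harmful in expectation
    "venue purchase (optimistic)":  +0.5,   # slightly good in expectation
    "strong alternative grant":     +10.0,  # a much better counterfactual use
}

best = max(options.values())
for name, ev in options.items():
    # What matters is the gap to the best alternative, not the sign of ev.
    print(f"{name}: EV={ev:+.1f}, shortfall vs. best option={ev - best:+.1f}")
```

The shortfall against the best alternative is almost identical (−10.5 vs. −9.5) whether the project is slightly harmful or slightly good in expectation, which is the sense in which the sign of the expectation isn't the useful quantity.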
Thanks for taking this back to the object level!
Types of things that I object to:
First, I think much of the discussion in the comments on this post is an example. It's generally bad when criticism of what someone else did isn't "this has concrete negative value," or even "this erodes a norm that we have agreed on," but is instead "this will make others think differently in ways that harm reputations, regardless of the object-level impact."
Second, criticism of individuals without any relationship with them. In this case, until we found out that this was funded by an Open Philanthropy grant (which definitely makes criticism far more reasonable), the criticism was of an unknown donor. If Owen had a rich non-EA contact whom he convinced to make the donation, perhaps because they think that academic retreats are great and that more castles should be used as conference centers, I think it would be a very bad idea to publicly tell them, with very little analysis, that they shouldn't have given money to a project you think looks bad.
Third, all resources are by definition limited, and there is a huge difference between criticizing the use of limited community resources and criticizing the use of personal resources. For example, I've had EAs tell me that I'd really be more effective if I moved to a different city. They are correct; I'd be more impactful as an EA if I were located elsewhere. But I have a family, and prioritize them, and really don't think that people who just met me should "feel entitled to critique" the use of my personal limited time and energy. (But, yes, several EAs have done so shortly after meeting me, because that's evidently the norm in the community. Which I think is "horrible", "dysfunctional", and "needs to stop.") Similarly, I sometimes do ineffective things with my money. I think that's actually good, which is why I said so. But even if I weren't interested in publicly defending my donations to my local synagogue, I don't think it's anyone else's place to try to correct me.
Separately, I think we disagree about the expected value of the project. If we ignore PR (which I think we almost always should, in favor of questions of norms and ethics), this is nowhere near "close to the zero line"; I think it's obviously reasonably high expected value, even if it's not as effective as whichever top charity you'd prefer. And I think we agree that there's no useful dividing line between slightly net good and slightly net harmful; I certainly did not intend to imply that the issue here was that the project was close to such a line, and that because it was barely above the line, it shouldn't be criticized. Instead, I'm arguing the point we actually disagree about, which is optimizing mindset, given that I think this was obviously a reasonably valuable investment.
And to explain my claim that it's clearly valuable: first, there is tons of retained value in real estate, so the expected cost of the purchase was very small; the real cost is the opportunity cost of doing other things with the money, which I think was clearly understood to be far lower at the time the decision was made.
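To make the cost logic concrete, here is a minimal back-of-envelope sketch. The purchase price matches the roughly £15m figure mentioned above, but the resale fraction, return rate, and holding period are my hypothetical placeholders, not figures from the actual decision:

```python
# Back-of-envelope sketch: expected net cost of a venue purchase.
# All parameters below are illustrative placeholders, not actual grant figures.

purchase_price = 15_000_000   # GBP, roughly matching the figure discussed
resale_fraction = 0.9         # real estate retains most of its value
annual_return = 0.05          # assumed return on the next-best use of capital
holding_years = 5             # assumed holding period before a potential sale

# If the building is later sold, most of the purchase price is recovered.
expected_resale = purchase_price * resale_fraction

# The real cost is (a) any value lost on resale plus (b) the forgone
# returns from deploying the capital elsewhere in the meantime.
depreciation_cost = purchase_price - expected_resale
opportunity_cost = purchase_price * ((1 + annual_return) ** holding_years - 1)

expected_net_cost = depreciation_cost + opportunity_cost
print(f"Expected net cost: £{expected_net_cost:,.0f} "
      f"(vs. £{purchase_price:,.0f} face value)")
```

On numbers like these the expected net cost comes out at a fraction of the sticker price, though the conclusion is clearly sensitive to the resale fraction and to how valuable the counterfactual use of the capital is.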
And the benefit is potentially very large. There is strong potential for really useful retreats and conferences, better than most of the ones that have occurred within EA so far. I know of several papers that came out of previous GPI conferences, and those conferences would have been much better if everyone hadn't been staying in different parts of Oxford, splitting up and making ad-hoc collaboration harder. In contrast, I found events like "Palmcone," which Lightcone ran over a week at a resort, incredibly valuable, and several important connections and projects were kickstarted there. It was easily worth a multiple of the price of the flight, specifically because it was the type of immersive retreat that this venue would allow: several days of unstructured discussion with a relatively small group of people, which was really helped by being in a very nice location. However, I heard from people at Lightcone that it was only possible because the venue was available at a steeply discounted price due to a cancellation.