I strongly agree with both this specific sentiment and the general attitude that generates sentiments like this.
However, I think it’s worth pointing out that you don’t have to agree with the Labour Party’s current positions, or think that it’s doing a good job, to be a good (honest) member. As long as you sincerely want the party to perform well in elections or to have more influence, even if you hope to achieve that by nudging its policy platform or general strategy in a different direction from the current one, I wouldn’t consider joining to be entryist or dishonest.
(I feel like this criterion is maybe a bit weak, and that there should be some ideological essence of the Labour Party you should agree with before joining, but I’m not sure it would be productive to pin down exactly what that is, and I expect it strongly overlaps with “I want the Labour Party to do well” anyway.)
I actually thought the “of course I’d rather you’d stay a member” part was odd, since nowhere in the post up to that point had you said anything to indicate that you supported Labour yourself. The post doesn’t say anything about whether Labour itself is good or bad, or whether that should factor into your decision to join it at all, but in this comment it sounds like those are crucial questions for whether this step is right or not.
Yeah, I think you have to view this exercise as optimizing for one end of the correctness–originality spectrum. Most of what is submitted is going to be uncomfortable to admit in public because it’s just plain wrong, so if this exercise is to have any value at all, it lies in sifting through all the nonsense, some of it pretty rotten, in the hope of finding one or two actually interesting things in there.
GiveWell used to solicit external feedback a fair bit years ago, but (as I understand it) stopped doing so because it found that it generally wasn’t useful. Their blog post “External evaluation of our research” goes some way towards explaining why. I could imagine a lot of their points applying to CEA too.
I think you’re coming at this from the point of view that “more feedback is always better”, forgetting that making feedback useful can be laborious: figuring out which parts of a piece of feedback are accurate and actionable can be at least as hard as coming up with the feedback in the first place. Soliciting comments gives you raw material, but if your bottleneck is not raw material but deciding which of several competing narratives to trust, you don’t necessarily gain anything by hearing more copies of each.
Certainly you won’t gain anything for free, and you may not be able to afford the non-monetary cost.
The convention in a lot of public writing is to mirror the style of writing for profit, optimized for attention. In a co-operative environment, you instead want to optimize to convey your point quickly, to only the people who benefit from hearing it. We should identify ways in which these goals conflict; the most valuable pieces might look different from what we think of when we think of successful writing.
Consider who doesn’t benefit from your article, and if you can help them filter themselves out.
Consider how people might skim-read your article, and how to help them derive value from it.
Lead with the punchline – see if you can make the most important sentence in your article the first one.
Some information might be clearer in a non-discursive structure (like… bullet points, I guess).
Writing to persuade might still be best done discursively, but if you anticipate your audience already being sold on the value of your information, just present the information as you would if you were presenting it to a colleague on a project you’re both working on.
Why would the whole community read it? You’d set out in the initial post, as Will has done, why people might or might not be interested in what you have to say, and only people who passed that bar would spend any real time on it. I don’t think the bar should be that high.
This is a question I consider crucial in evaluating the work of organizations, so it’s sort of embarrassing I’ve never really tried to apply it to the community as a whole. Thanks for bringing that to light.
I think one thing uniting all your collapse scenarios is that they’re gradual. I wonder how much damage could be done to EA by a relatively sudden catastrophe, or perhaps a short-ish series of catastrophes. A collapse in community trust could be a big deal: say there was a fraud or embezzlement scandal at CEA, OPP, or GiveWell. I’m not sure that would be catastrophic by itself, but perhaps if several of the organizations were damaged at once it would make people skeptical about the wisdom of reforming around any new centre, which would make it much harder to co-ordinate.
Another thing that I see as a potential risk is high-level institutions having a pattern of low-key misbehaviour that people start to see (wrongly, I hope) as an inevitable consequence of the underlying ideas. Suppose the popular perception starts to be “thinking about effectiveness in charity is all well and good, but it inevitably leads down a road of voluntary extinction / techno-utopianism / eugenics / something else low-status or bad”. Depending on how bad the thing is, smart thoughtful people might start self-selecting out of the movement, and the remainder might mismanage perceptions of them even worse.
Recent EA thinking on this is probably mostly:
https://founderspledge.com/stories/climate-change-executive-summary: “the giving recommendations based on this research are The Coalition for Rainforest Nations and Clean Air Task Force”
https://lets-fund.org/clean-energy/, who “are crowdfunding for the Clean Energy Innovation program at the Information Technology and Innovation Foundation”.
Both claim to have done a lot of research, but I don’t think either Founders Pledge or Let’s Fund has a GiveWell-like track record. I’m slightly nervous that we’re repeating the mistake we made (as a community) when we recommended Cool Earth on the basis of Giving What We Can’s relatively cursory investigation, only for an only somewhat less cursory investigation to suggest it wasn’t much use.
I don’t think that “going silent” or failing to report donations is indication that people are not meeting the pledge. Nowadays I don’t pay GWWC as an organisation much / any attention, but I’m still donating 10% a year (and then some).
To be honest I haven’t read closely enough to understand where you do and don’t account for “quiet pledge-keepers” in your analysis, but I at least think stuff like this is just plain wrong:
total number of people ceasing reporting donations (and very likely ceasing keeping the pledge)
I couldn’t find The Clear Fund when I looked just now. Would be interested in someone confirming that it’s still there.
If you want to look up the maths elsewhere, it may help to know that the waiting time under a constant, independent chance of death (or survival) per year follows a geometric distribution (the single-failure special case of the negative binomial).
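This is easy to sanity-check by simulation. Below is a minimal sketch; the 2% annual death probability and sample count are illustrative, not figures from the discussion above.

```python
import random

# Illustrative assumption: a constant, independent 2% chance of death each year.
p = 0.02
random.seed(0)

def years_until_death(p):
    """Count full years survived before the first 'death' event occurs."""
    years = 0
    while random.random() >= p:  # survive this year with probability 1 - p
        years += 1
    return years

samples = [years_until_death(p) for _ in range(100_000)]
mean = sum(samples) / len(samples)

# The waiting time is geometric: expected years survived is (1 - p) / p,
# which for p = 0.02 is 49. The simulated mean should land close to that.
print(mean)
```

The same shape holds for any constant hazard rate: halving p roughly doubles the expected wait, which is why small changes in annual survival probability matter so much over long horizons.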
It sounds like the fact that there was already substantial doubt about whether the program worked was a key part of the decision to shut it down. That suggests that if the same kind of scandal had affected a current top charity, they would have worked harder to continue the project.
I actually think that being accountable only to yourself, justifying yourself only to yourself, is still too low a standard. No one is an island, so we all have a responsibility to the communities we interact with, and it is to some extent up to those communities, not the individuals in isolation, what that responsibility means. If Ben Hoffman wants to have a relationship with EAs (individually or collectively), he needs to meet the standards those individuals, or the community as a whole, set for what’s acceptable.
When you say “you don’t need to justify your actions to EAs”, I have some sympathy, because EAs aren’t special: we’re no particular authority and don’t have internal consensus anyway. But you also seem to be arguing “you don’t need to justify your actions to yourself / at all”. I’m not confident that’s what you’re saying, but if it is, I think you’re setting too low a standard. If people aren’t required to live in accordance with even their own values, what’s the point in having values?
It’s odd to call Boris an opponent of the government. He’s a sitting MP—he’s part of the state. To me this seems to be more about the courts being able to hold Parliament accountable.
I like the idea here a great deal, but I expect there’s going to be a lot of variation in what creates what effect in whom. I wonder if there are better ways to come up with aggregate recommendations, so we can find out what seems to be consistent in its EA appeal versus what’s idiosyncratic.
There’s an unanswered question here of why Good Ventures makes grants that OpenPhil doesn’t recommend, given that GV broadly believes in the OpenPhil approach. But I guess I don’t find it that surprising that they do so. People like to do more than one thing?
Have you attempted to contact GV or OpenPhil directly about this?
I think this is only true with a very narrow conception of what the “EA things that we are doing” are. I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world.
That’s all I believe constitutes “EA things” in your usage. Funding bednets, or policy reform, or AI risk research are each contingent on a combination of those core EA ideas, which we take for granted, and a series of object-level empirical beliefs, almost none of which EAs are naturally “the experts” on. If the global research community on poverty interventions came to the consensus “actually, we think bednets are bad now”, then EA orgs would need to listen to that and change course.
“Politicized” questions and values are no different, so we need to be open to feedback and input from external experts, whatever constitutes expertise in the field in question.