james.lucassen
If this were fiction that would make Buck your manic-pixie-dream-girl and I find that hilarious.
So, pulling out this sentence, because it feels like it’s by far the most important and not that well highlighted by the format of the post:
what is desired is a superficial critique that stays within and affirms the EA paradigm while it also checks off the boxes of what ‘good criticism’ looks like and it also tells a story of a concrete win that justifies the prize award. Then everyone can feel good about the whole thing, and affirm that EA is seeking out criticism.
This reminds me a lot of a point mentioned in Bad Omens, about a certain aspect of EA which “has the appearance of inviting you to make your own choice but is not-so-subtly trying to push you in a specific direction”.
I’ve also anecdotally had this same worry as a community-builder. I want to be able to clear up misunderstandings, make arguments that folks might not be aware of, and make EA welcoming to folks who might be turned off by superficial pattern-matches that I don’t think are actually informative. But I worry a lot that I can’t avoid doing these things asymmetrically, and that maybe this is how the descent into deceit and dogmatism starts.
The problem at hand seems to be basically that EA has a common set of strong takes, which are leaning towards becoming dogma and screwing up epistemics. But EA's identity encourages a self-image as rational/impartial/unbiased, which makes it hard for us to discuss this out loud: it requires first acknowledging that we strive to be rational/impartial/unbiased but are nowhere near that yet.
saving money while searching for the maximum seems bad
In the sense of “maximizing” you’re using here, I agree entirely with this post. Aiming for the very best option according to a particular model and pushing solely on that as hard as you can will expose you to Goodhart problems, diminishing returns, model violations, etc.
However, I think the sense of “maximizing” used in the post you’re responding to, and more broadly in EA when people talk about “maximizing ethics”, is quite different. I understand it to mean something more like “doing the most good possible”, rather than aiming to clear a certain threshold or trading off against other ethical or non-ethical priorities. It’s a philosophical commitment that says “even if you’ve already saved a hundred lives, it’s just as ethically important to save one more. You’re not done.”
It’s possible that a commitment to a maximizing philosophy can lead people to adopt a mindset like the one you describe in this post; to the extent that’s true, I agree they’re making a mistake. But I think there may be a terminological mismatch here that will lead to illusory disagreements.
Suggestion: the Future Fund should take ideas on a rolling basis, and assess them in rounds. EA is the kind of community where potentially good ideas bubble up all the time, and it would be a real shame if those were wasted because the funders only listen during narrow windows. Having an open drop-box to submit ideas costs FF almost nothing, and makes a bias-towards-action and constant passive brainstorming much easier.
Context: this idea
~20, assuming trend is linear. If it’s exponential, god help us all
I’m unsure if I agree or not. I think this could benefit from a bit of clarification on the “why this needs to be retired” parts.
For the first slogan, it seems like you’re saying that this is not a complete argument for longtermism: just because the future is big doesn’t mean it’s tractable, or neglected, or valuable at the margin. I agree that it’s not a complete argument, and if I saw someone framing it that way I would object. But I don’t think that means we need to retire the phrase unless we see it being constantly used as a strawman or something? It’s not complete, but it’s a quick way to summarize a big part of the argument.
For the second one, it sounds like you’re saying this is misleading—it doesn’t accurately represent the work being done, which is mostly on lock-in events, not affecting the long-term future. This is true, but it takes only one extra sentence to say “but this is hard so in practice we focus on lock-in”. It’s a quick way to summarize the philosophical motivations, but does seem pretty detached from practice.
I think my takeaway from thinking through this comment is this:
Longtermism is a complicated argument with a lot of separate pieces
We have slogans that summarize some of those pieces and leave out others
Those slogans are useful in a high-context environment, but can be misleading for those that don’t already know all the context they implicitly rely on
But I am also afraid that … we will see a rush of ever greater numbers of people into our community, far beyond our ability to culturally onboard them
I’ve had a model of community building at the back of my mind for a while that’s something like this:
“New folks come in and pick up knowledge/epistemics/heuristics/culture/aesthetics from the existing group for as long as their ‘state’ (wrapping all these things up in one number for simplicity) is below the community average. But this is essentially a one-way diffusion dynamic, which means the rate at which newcomers pick stuff up is roughly proportional to the gap between their state and the community average, and to the ratio of community size to the number of relative newcomers at any given time.”
The picture this leads to is kind of a blackjack situation. We want to grow as fast as we can, for impact reasons. But if we grow too fast, we can’t onboard people fast enough, the community average starts dropping, and seems unlikely to recover (we go bust). On this view, figuring out how to “teach EA culture” is extremely important—it’s a limiting factor for growth, and failure due to going bust is catastrophic while failure from insufficient speed is gradual.
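To make the blackjack picture concrete, here's a minimal toy simulation of the model above. Every number in it (starting size, onboarding rate, newcomers arriving at state 0) is an illustrative assumption on my part, not an estimate:

```python
# Toy model: each year the community grows by `growth_rate`, newcomers arrive
# below the community-average "state", and then culture diffuses to them at a
# rate proportional to the remaining gap and to the old-members/total ratio.
def simulate(growth_rate, onboarding_rate=0.5, years=20):
    size, avg_state = 100.0, 1.0   # start fully "onboarded"
    for _ in range(years):
        old_size = size
        size += growth_rate * old_size          # newcomers join with state 0
        avg_state *= old_size / size            # dilution from growth
        avg_state += onboarding_rate * (old_size / size) * (1 - avg_state)  # onboarding
    return size, avg_state

# Slow growth keeps the average high; fast growth drags it down and keeps it there.
for g in (0.1, 0.5, 2.0):
    size, state = simulate(g)
    print(f"growth {g:.0%}/yr -> size {size:,.0f}, avg state {state:.2f}")
```

The exact outputs are meaningless; the point is just that in this kind of model the steady-state culture level falls as growth outpaces onboarding capacity, which is the “going bust” failure mode.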
Currently prototyping something at the Claremont uni group to try and accelerate this. Seems like you’ve thought about this sort of thing a lot—if you’ve got time to give feedback on a draft, that would be much appreciated.
Personal check-for-understanding: would this be a fair bullet-point summary?
Enthusiastically engaging with EA in college != actually having an impact
Quantifying the value of an additional high-impact EA is hard
Counterfactual impact of a community-builder is unclear, and plausibly negative
Assorted optics concerns: lack of rigor, self-aggrandizement, elitism, impersonality
Nice work! Lots of interesting results in here that I think lead to concrete strategy insights.
only 7.4% of New York University students knew what effective altruism (EA) is. At the same time, 8.8% were extremely sympathetic to EA … Interestingly, these EA-sympathetic students were largely ignorant about EA; only 14.5% knew about it before the survey.
This is a great core finding! I think I got a couple of important lessons from these three numbers alone. Outreach could probably be a few times bigger before the proportion of EA-sympathetic students who already know about EA gets close enough to 100% for sharply diminishing returns. Knowing what EA is seems like about 2:1 evidence in favor of being EA-sympathetic, which is useful, but not that huge. And getting the impression that “nobody at my school knows about EA :(” isn’t actually very bad news: folks who are interested in EA do know about it at a meaningfully higher rate, so even at the ideal level of outreach, maybe only 50% of students will know what EA is.
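As a sanity check on that 2:1 figure, here's the rough arithmetic, treating the three quoted survey percentages as exact (a simplifying assumption on my part):

```python
# Survey figures quoted above (NYU sample).
p_know = 0.074              # P(student knows what EA is)
p_symp = 0.088              # P(student is extremely sympathetic to EA)
p_know_given_symp = 0.145   # P(knew about EA | sympathetic)

# Law of total probability: back out how often non-sympathetic students know about EA.
p_know_given_not = (p_know - p_know_given_symp * p_symp) / (1 - p_symp)

likelihood_ratio = p_know_given_symp / p_know_given_not
print(f"P(know | not sympathetic) ~= {p_know_given_not:.3f}")
print(f"knowing about EA is ~{likelihood_ratio:.1f}:1 evidence of being EA-sympathetic")
```

This spits out a likelihood ratio of roughly 2.2, which is where the “about 2:1” comes from.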
Strikingly, 47.9% of this group did not consider existential risk mitigation to be a global priority.
This seems to suggest that the recent push to pitch “x-risk” rather than “longtermism”, dropping the argument for the size of the far future, may not be good for outreach. My impression is that the importance of x-risk, even without the “size of the future” piece of the argument, seems obvious to someone who’s been around EA for a while, but it may not be compelling enough on its own for outreach, where attention is a scarce resource: accepting the argument for high x-risk only leads to prioritizing it about half the time. I wonder how including a size-of-the-future argument would change this?
There were few robust demographic predictors of EA agreement. Neither gender, SAT scores, nor most study subjects significantly correlated with it.
This is a big update! I expected correlations on all three of those things. This suggests the current EA stereotype is more due to founder effect than actual difference in affinity for the ideas, which is huge for outreach targeting.
I am pretty excited about the potential for this idea, but I am a bit concerned about the incentives it would create. For example, I’m not sure how much I would trust a bibliography, summary, or investigation produced via bounty. I would be worried about omissions that would conflict with the conclusions of the work, since it would be quite hard for even a paid arbitrator to check for such omissions without putting in a large amount of work. I think the reason this is not currently much of a concern is precisely because there is no external incentive to produce such works—as a result, you can pretty much assume that research on the Forum is done in good faith and is complete to the best of the author’s ability.
Potential ways around this that come to mind:
Maybe linking user profiles on this platform to the EA Forum (kind of like the Alignment Forum and LessWrong sharing accounts) would provide sufficient trust in good intentions?
Maybe even without that, the self-selection effect is strong enough that we can still mostly rely on trust in good intentions?
Maybe this only slightly limits the scope of what the platform can be used for, and preserves most of its usefulness?
I think the best remedy to looking dogmatic is actually having good, legible epistemics, not avoiding coming across as dogmatic by adding false uncertainty.
This is a great sentence, I will be stealing it :)
However, I think it’s partly wishful thinking to expect that “having good, legible epistemics” is sufficient to avoid coming across as dogmatic. A lot of these first impressions are just going to be pattern-matching, whether we like it or not.
I would be excited to find ways to pattern-match better without actually sacrificing anything substantive. One thing I’ve found anecdotally is that a sort of “friendly transparency” works pretty well for this: just be up front about what you believe and why, don’t try to hide ideas that might scare people off, and be open about the optics of things, the ways you’re worried they might come across badly, why those bad impressions are misleading, and so on.
This looks great! I’m concerned that it won’t get the traffic it needs to be useful to people. Have you considered/attempted reaching out to 80K to put a link on the job board or something? That’s my go-to careers resource, and I think it would be the main way I’d learn about the existence of something like this once this post is off the front page.
Thanks for this post—dealing with this phenomenon seems pretty important for the future of epistemics vs dogma in EA. I want to do some serious thinking about ways to reduce infatuation, accelerate doubt, and/or get feedback from distancing. Hopefully that’ll become a post sometime in the near-ish future.
This is good and I want to see explicit discussion of it. One framing that I think might be helpful:
It seems like the cause of a lot of the recent “identity crisis” in EA is that we’re violating good heuristics. It seems like if you’re trying to do the most good, really a lot of the time that means you should be very frugal, and inclusive, and beware the in-group, and stuff like that.
However, it seems like we might live in a really unusual world. If we are in fact massively talent constrained, and the majority of impact comes from really high-powered talent and “EA celebrities”, then maybe we are just in one of the worlds where these heuristics lead us astray, despite being good overall.
Ultimately, I think it comes down to: “if we live in a world where inclusiveness leads to the highest impact, I want EA to be inclusive. If we live in a world where elitism leads to the highest impact, I want EA to be elitist”. That feels really uncomfortable to say, which I think is good, but we should be able to overcome discomfort IF we need to.
I agree that relatively small improvements in public health could potentially be highly beneficial. Research on this might be totally tractable.
What I am concerned might be intractable is deploying results. Public health (and all health-relevant products) is a massive industry, with a lot of strong interests pushing in different directions. It seems entirely possible that all the answers are already out there, just drowned out by food, exercise, sexual health, self-help, and other industries.
There’s so much noise out there, it seems unlikely that a few EAs will be able to get a word in edgewise.
Not entirely sure if I interpreted your intentions right when I tried to write an answer. In particular, I’m confused by the line “I could create just a little more hedonium”. My understanding is that hedonium refers to the arrangement of matter that produces utility most efficiently. Is the narrator deciding whether to convert themselves into hedonium?
I ended up interpreting things as if “hedonium” was meant to mean “utility”, and the narrator is deciding what their last thought should be—how to produce just a little more utility with their last few computations before the universe winds down. Hopefully I interpreted correctly—or if I was incorrect, I hope this feedback is helpful :)
Yup, sounds like we’re on the same page; I think I steelmanned a little too hard. I agree that the people making these criticisms probably do in fact think that being shot by robots or something would be bad.
I’ll hop on the “I’d love to see sources” train to a certain extent, but honestly we don’t really need them. If this is happening it’s super important, and even if it isn’t happening right now it’ll probably start happening somewhat soon. We should have a plan for this.
Hm. I think I agree with the point you’re making, but not the language it’s expressed in? I notice that your suggestion is a change in endorsed moral principles, but you make an instrumental argument, not a moral one. To me, the core of the issue is here:
If EA becomes a very big movement, I predict that individuals on the fringes of the movement will commit theft with the goal to donate more to charity and violence against individuals and organisations who pose x-risks.
This seems to me more a matter of high-fidelity communication than a matter of which philosophical principles we endorse. The idea of ethical injunctions is extremely important, but it is not currently treated as a central EA pillar. I would be very wary of EA self-modifying into a movement that explicitly rejects utilitarianism on the grounds that this will lead to better utilitarian outcomes.
Agree that X-risk is a better initial framing than longtermism—it matches what the community is actually doing a lot better. For this reason, I’m totally on board with “x-risk” replacing “longtermism” in outreach and intro materials. However, I don’t think the idea of longtermism is totally obsolete, for a few reasons:
Longtermism produces a strategic focus on “the last person” that this “near-term x-risk” view doesn’t. This isn’t super relevant for AI, but it makes more sense in the context of biosecurity: pandemics with the potential to wipe out everyone are way worse than pandemics which merely kill 99% of people, and the ways we prepare for them seem likely to differ. On the near-term x-risk view, bunkers and civilizational recovery plans don’t make much sense.
S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren’t captured by the near-term x-risk view.
The numbers you give for why x-risk might be the most important cause area even if we ignore the long-term future ($30 million for a 0.0001% reduction in x-risk) don’t seem totally implausible. The world is big, and if you’re particularly pessimistic about changing it, then this might not be enough to budge you. Throw in an extra 10^30, though, and you’ve got a really strong argument, if you’re the kind of person who takes numbers seriously (rough arithmetic below).
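To spell out why those numbers are merely “not implausible” rather than overwhelming, here's a quick back-of-envelope. The $30 million and 0.0001% figures come from the post you're responding to; the ~8 billion population figure and the global-health comparison are my own illustrative assumptions:

```python
cost = 30e6                 # $30M buys...
risk_reduction = 1e-6       # ...a 0.0001% absolute reduction in x-risk (figures from the post)
present_people = 8e9        # assumed current world population
future_people = 1e30        # the "extra 10^30" if you count the far future

# Ignoring the long-term future: expected present lives saved, and cost per life.
lives_near_term = present_people * risk_reduction   # ~8,000 expected lives
print(f"near-term only: ~${cost / lives_near_term:,.0f} per expected life saved")

# Counting the far future, the same $30M buys astronomically more expected lives.
lives_long_term = future_people * risk_reduction
print(f"with the far future: ~{lives_long_term:.0e} expected lives saved")
```

Roughly $3,750 per expected life is good, but it's in the same ballpark as top global health charities, so a pessimist can shrug; multiply the stakes by 10^30 and it becomes very hard to shrug.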
Submitting this now because it seems important, and I want to give this comment a chance to bubble to the top. Will fill in more reasons later if any major ones come up as I continue thinking.