james.lucassen
Thank you for posting! Many kudos for contributing to the frontpage discussion rather than lurking for years like many people (including me).
I agree with most of your assessment here. But I think rather than “simple altruism”, it would be better to focus on “altruistic intent”. Making this substitution doesn’t change much; the main differences are just that it includes EA itself and excludes cynically motivated giving. The thing I think we care about is people trying to do good, not specifically people doing non-EA things.
That said, increasing altruistic intent is, I think, included under the heading of broad longtermism. I don’t have a source for this, but my impression is that not much work goes towards broad longtermism because it seems really hard and not that urgent, and because EAs tend to be bad at the key skills involved, like persuasion and politics.
I agree that relatively small improvements in public health could potentially be highly beneficial. Research on this might be totally tractable.
What I am concerned might be intractable is deploying the results. Public health (along with everything health-adjacent) is a massive industry, with a lot of strong interests pushing in different directions. It seems entirely possible that all the answers are already out there, just drowned out by the food, exercise, sexual health, self-help, and other industries.
There’s so much noise out there, it seems unlikely that a few EAs will be able to get a word in edgewise.
Bro this is really scary. Well done.
Observation: prion-catalysis or not, any vaccine-evasion measures at all seem extraordinarily dangerous. For a highly infectious threat, the fastest response we have right now is mass vaccine manufacture, and that seems just barely fast enough. But our vaccine tech is public knowledge, and an apocalyptic actor can take all the time they want to design a countermeasure.
Once a threat with any sort of countermeasure is released, we first have to go through a vaccine development cycle just to discover the countermeasure exists, then a research cycle to figure out how to beat it, then a development/deployment cycle to actually put those research results to use. The latter two phases seem quite slow and notably hard to speed up: preparing for them in advance would mean getting ready to do fast research, manufacturing, and deployment in a very general sense, so that we could respond to any plausible anti-vaccine measure.
Not entirely sure if I interpreted your intentions right when I tried to write an answer. In particular, I’m confused by the line “I could create just a little more hedonium”. My understanding is that hedonium refers to the arrangement of matter that produces utility most efficiently. Is the narrator deciding whether to convert themselves into hedonium?
I ended up interpreting things as if “hedonium” was meant to mean “utility”, and the narrator is deciding what their last thought should be—how to produce just a little more utility with their last few computations before the universe winds down. Hopefully I interpreted correctly—or if I was incorrect, I hope this feedback is helpful :)
thank machine doggo
This looks great! I’m concerned that it won’t get the traffic it needs to be useful to people. Have you considered/attempted reaching out to 80K to put a link on the job board or something? That’s my go-to careers resource, and I think it’s the main way I would learn about the existence of something like this once this post is off the front page.
If this were fiction that would make Buck your manic-pixie-dream-girl and I find that hilarious.
~20, assuming the trend is linear. If it’s exponential, god help us all
Suggestion: the Future Fund should take ideas on a rolling basis, and assess them in rounds. EA is the kind of community where potentially good ideas bubble up all the time, and it would be a real shame if those were wasted because the funders only listen during narrow windows. Having an open drop-box to submit ideas costs FF almost nothing, and makes a bias-towards-action and constant passive brainstorming much easier.
Context: this idea
Agree that X-risk is a better initial framing than longtermism—it matches what the community is actually doing a lot better. For this reason, I’m totally on board with “x-risk” replacing “longtermism” in outreach and intro materials. However, I don’t think the idea of longtermism is totally obsolete, for a few reasons:
Longtermism produces a strategic focus on “the last person” that the near-term x-risk view doesn’t. This isn’t super relevant for AI, but it matters more in the context of biosecurity: pandemics with the potential to wipe out everyone are way worse than pandemics which merely kill 99% of people, and the ways we prepare for them seem likely to differ. On the near-term x-risk view, for example, bunkers and civilizational recovery plans don’t make much sense.
S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren’t captured by the short-term x-risk view.
The numbers you give for why x-risk might be the most important cause area even if we ignore the long-term future, $30 million for a 0.0001% reduction in x-risk, don’t seem totally implausible. The world is big, and if you’re particularly pessimistic about changing it, then this might not be enough to budge you. Throw in an extra factor of 10^30 from the long-term future, though, and you’ve got a really strong argument, if you’re the kind of person that takes numbers seriously.
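For concreteness, here’s a rough back-of-envelope version of that comparison. This is a sketch only: the ~8 billion current population figure and the reading of 10^30 as potential future lives are my assumptions, not numbers from the post.

```python
# Back-of-envelope value of a 0.0001% absolute reduction in x-risk.
cost = 30e6               # $30M, the figure from the post
risk_reduction = 1e-6     # 0.0001% expressed as a probability
current_population = 8e9  # assumption: ~8 billion people alive today

# Ignoring the long-term future entirely:
expected_lives_saved_now = risk_reduction * current_population  # ~8,000 expected lives
cost_per_life_now = cost / expected_lives_saved_now             # ~$3,750 per expected life

# Throwing in the long-term future (assumption: 10^30 potential future lives):
future_lives = 1e30
cost_per_life_longterm = cost / (risk_reduction * future_lives)  # ~$3e-17 per expected life

print(f"near-term only: ~{expected_lives_saved_now:,.0f} expected lives, ~${cost_per_life_now:,.0f} each")
print(f"with 1e30 future lives: ~${cost_per_life_longterm:.1e} per expected life")
```

Even the near-term figure is already in the rough ballpark of conventional “cost per life saved” benchmarks, so the argument doesn’t obviously need the 10^30 to get off the ground; that extra factor is just what makes it overwhelming if you take the numbers seriously.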
Submitting this now because it seems important, and I want to give this comment a chance to bubble to the top. Will fill in more reasons later if any major ones come up as I continue thinking.
I’ll hop on the “I’d love to see sources” train to a certain extent, but honestly we don’t really need them. If this is happening it’s super important, and even if it isn’t happening right now it’ll probably start happening somewhat soon. We should have a plan for this.
Oop, thanks for the correction. To be honest I’m not sure what exactly I was thinking originally, but maybe this is true for non-AI S-risks that are slow, like spreading wild animals to space? I think this is mostly just false tho >:/
Hi Organizers! The US requires proof of a negative COVID test to enter the country, even for citizens. Will/could you provide some advice or facilities at the conference for getting this? I (and I imagine many others) know literally nothing about the UK health system, am going to have to fly back to the US after the conference, and really don’t want to get stuck in airport hell :/
Talk to u/Infinity, I see them on the EA subreddit every now and then. They singlehandedly provide like 90% of the memes on there, and they’re pretty good 👍
pointed!
This is good and I want to see explicit discussion of it. One framing that I think might be helpful:
It seems like the cause of a lot of the recent “identity crisis” in EA is that we’re violating good heuristics. If you’re trying to do the most good, a lot of the time that really means you should be very frugal, and inclusive, and beware the in-group, and stuff like that.
However, it seems like we might live in a really unusual world. If we are in fact massively talent constrained, and the majority of impact comes from really high-powered talent and “EA celebrities”, then maybe we are just in one of the worlds where these heuristics lead us astray, despite being good overall.
Ultimately, I think it comes down to: “if we live in a world where inclusiveness leads to the highest impact, I want EA to be inclusive. If we live in a world where elitism leads to the highest impact, I want EA to be elitist”. That feels really uncomfortable to say, which I think is good, but we should be able to overcome discomfort IF we need to.
Yes, 100% agree. I’m just personally somewhat nervous about community building strategy and the future of EA, so I want to be very careful. I tried to be neutral in my comment because I really don’t know how inclusive/exclusive we should be, but I think I might have accidentally framed it in a way that reads as implicitly leaning exclusive, probably because I read the original post as implicitly leaning inclusive.
I’m unsure if I agree or not. I think this could benefit from a bit of clarification on the “why this needs to be retired” parts.
For the first slogan, it seems like you’re saying that this is not a complete argument for longtermism: just because the future is big doesn’t mean it’s tractable, or neglected, or valuable at the margin. I agree that it’s not a complete argument, and if I saw someone framing it that way I would object. But I don’t think that means we need to retire the phrase unless we see it being constantly used as a strawman or something? It’s not complete, but it’s a quick way to summarize a big part of the argument.
For the second one, it sounds like you’re saying this is misleading—it doesn’t accurately represent the work being done, which is mostly on lock-in events, not affecting the long-term future. This is true, but it takes only one extra sentence to say “but this is hard so in practice we focus on lock-in”. It’s a quick way to summarize the philosophical motivations, but does seem pretty detached from practice.
I think my takeaway from thinking through this comment is this:
Longtermism is a complicated argument with a lot of separate pieces
We have slogans that summarize some of those pieces and leave out others
Those slogans are useful in a high-context environment, but can be misleading for those who don’t already know all the context they implicitly rely on
Nice work! Lots of interesting results in here that I think lead to concrete strategy insights.
only 7.4% of New York University students knew what effective altruism (EA) is. At the same time, 8.8% were extremely sympathetic to EA … Interestingly, these EA-sympathetic students were largely ignorant about EA; only 14.5% knew about it before the survey.
This is a great core finding! I think I got a couple of important lessons from these three numbers alone. Outreach could probably be a few times bigger before the proportion of EA-sympathetic students who know about EA gets near enough to 100% for sharply diminishing returns. Knowing what EA is seems like about 2:1 evidence in favor of being EA-sympathetic, which is useful, but not that huge. And getting the impression that “nobody at my school knows about EA :(” isn’t actually very bad news: folks who are interested in EA do know about it at a meaningfully higher rate, so even with ideal outreach, maybe only around 50% of students would know what EA is.
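To make that “2:1” figure concrete, here’s the quick likelihood-ratio check I have in mind, a sketch that assumes all three quoted percentages come from the same sample:

```python
# Quick Bayes check: how strong is "knows what EA is" as evidence of being EA-sympathetic?
p_knows = 0.074              # P(knows what EA is), from the quoted survey
p_sympathetic = 0.088        # P(extremely sympathetic to EA)
p_knows_given_symp = 0.145   # P(knows | sympathetic)

# P(knows | not sympathetic) via the law of total probability:
p_knows_given_not_symp = (p_knows - p_sympathetic * p_knows_given_symp) / (1 - p_sympathetic)

likelihood_ratio = p_knows_given_symp / p_knows_given_not_symp
print(f"P(knows | not sympathetic) ~ {p_knows_given_not_symp:.3f}")  # ~0.067
print(f"likelihood ratio ~ {likelihood_ratio:.1f} : 1")              # ~2.2 : 1
```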
Strikingly, 47.9% of this group did not consider existential risk mitigation to be a global priority.
This seems to suggest that the recent push to frame things as “X-risk” rather than “longtermism”, and to drop the argument about the size of the far future, may not be good for outreach. My impression is that the importance of X-risk, even without the “size of the future” piece of the argument, seems obvious to someone who’s been around EA for a while, but isn’t compelling enough on its own for outreach, where attention is a scarce resource. Accepting the argument that X-risk is high only leads to prioritizing it about half the time. I wonder how including a size-of-the-future argument would change this?
There were few robust demographic predictors of EA agreement. Neither gender, SAT scores, nor most study subjects significantly correlated with it.
This is a big update! I expected correlations with all three of those things. It suggests the current EA stereotype is due more to founder effects than to actual differences in affinity for the ideas, which is huge for outreach targeting.
I am pretty excited about the potential for this idea, but I am a bit concerned about the incentives it would create. For example, I’m not sure how much I would trust a bibliography, summary, or investigation produced via bounty. I would be worried about omissions of material that conflicts with the conclusions of the work, since it would be quite hard for even a paid arbitrator to check for such omissions without putting in a large amount of work. I think the reason this is not currently much of a concern is precisely because there is no external incentive to produce such works; as a result, you can pretty much assume that research on the Forum is done in good faith and is complete to the best of the author’s ability.
Potential ways around this that come to mind:
Maybe linking user profiles on this platform to the EA Forum (kind of like the Alignment Forum and LessWrong sharing accounts) would provide sufficient trust in good intentions?
Maybe even without that, there’s still such a strong self-selection effect anyway that we can still mostly rely on trust in good intentions?
Maybe this only slightly limits the scope of what the platform can be used for, and preserves most of its usefulness?