Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai
You cited… prioritization
OK, so essentially you don’t own up to strawmanning my views?
You… ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”
This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders’ forum. And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
Yes, Gates has thought about cause prio somewhat, but he’s less engaged with it, and especially with its cutting edge, than many others.
You seem to have missed my point. My suggestion is to trust experts to identify the top-priority cause areas, but not on what messaging to use, and instead to authentically present information on those top priorities.
I agree… EA brand?
You seem to have missed my point again. As I said, “It’s [tough] to ask people to switch unilaterally”. That is, when people are speaking to the EA community, and while the EA community is the one that exists, I think it’s tough to ask them not to use the EA name. But at some point, in a coordinated fashion, I think it would be good if multiple groups found a new name and started a new intellectual project around that new name.
Per my bolded text, I don’t get the sense that I’m being debated in good faith, so I’ll try to avoid making further comments in this subthread.
One underlying reason your comment got a lot of upvotes is that the post was viewed many times. Controversy leads to pageviews. Arguably “net upvotes” is an OK metric for post quality (where popularity is important), whereas “net upvotes”/”pageviews” might make more sense for comments (a toy comparison is sketched below).
Side-issue: isn’t Karma from posts weighted at 10x compared to Karma in comments? Or at least, I think it once was. And that would help a bit in this particular instance.
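As a toy illustration of the difference between the two metrics (a minimal sketch; the upvote and pageview numbers below are invented for illustration, not real forum data):

```python
# Toy comparison of "net upvotes" vs "net upvotes / pageviews" as quality
# proxies. All of these numbers are made up purely for illustration.

def scores(net_upvotes: int, pageviews: int) -> tuple[int, float]:
    """Return (raw net upvotes, net upvotes per pageview)."""
    return net_upvotes, net_upvotes / pageviews

# A comment on a controversial, heavily viewed post vs. one on a quiet post.
busy_post_comment = scores(net_upvotes=60, pageviews=5000)   # (60, 0.012)
quiet_post_comment = scores(net_upvotes=20, pageviews=500)   # (20, 0.04)

print(busy_post_comment, quiet_post_comment)
# The busy-post comment wins on raw upvotes; the quiet-post comment wins
# per pageview, which is arguably the better quality signal for comments.
```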
A: I didn’t say we should defer only to longtermist experts, and I don’t see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and the issues with short- and longtermism; I’d just want to see the literature.
I agree that incentives within EA lean (a bit) longtermist. The incentives don’t come from a vacuum. They were set by grant managers, donors, advisors, execs, board members. Most worked on short-term issues at one time, as did at least some of Beckstead, Ord, Karnofsky, and many others. At least in Holden’s case, he switched due to a combination of “the force of the arguments” and being impressed with the quality of thought of some longtermists. For example, Holden writes “I’ve been particularly impressed with Carl Shulman’s reasoning: it seems to me that he is not only broadly knowledgeable, but approaches the literature that influences him with a critical perspective similar to GiveWell’s.” It’s reasonable to be moved by good thinkers! I think that you should place significant weight on the fact that a wide range of (though not all) groups of experienced and thoughtful EAs, ranging from GiveWell to CEA to 80k to FRI have evolved their views in a similar direction, rather than treating the “incentive structure” as something that is monolithic, or that can explain away major (reasonable) changes.
B: I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they’re experts in content selection, then great! But I think authenticity is a strong default.
Regarding naming, I think you might be preaching to the choir (or at least to the wrong audience): I’m already on-record as arguing for a coordinated shift away from the name EA to something more representative of what most community leaders believe. In my ideal universe, the podcast would be called an “Introduction to prioritization”, and also, online conversation would happen on a “priorities forum”, and so on (or something similar). It’s tougher to ask people to switch unilaterally.
This largely seems reasonable to me. However, I’ll just push back on the idea of treating near/long-term as the primary split:
I don’t see people on this forum writing a lot about near-term AI issues, so does it even need a category?
It’s arguable whether near-term/long-term is a more fundamental division than technical/strategic. For example, people sometimes use the phrase “near-term AI alignment”, and some research applies to both near-term and long-term issues.
One attractive alternative might be just to use the categories AI alignment and AI strategy and forecasting.
I was just saying that if you have three interventions whose relative popularity is A<B<C, but whose expected impact, per a panel of EA experts, is C<B<A, then you probably want EA orgs to allocate their resources C<B<A (i.e. most to A).
Is it an accurate summary to say that you think that sometimes we should allocate more resources to B if:
We’re presenting introductory material, and the resources are readers’ attention
B is popular with people who identify with the EA community
B is popular with people who are using logical arguments?
I agree that (1) carries some weight. For (2-3), it seems misguided to appeal to raw popularity among people who like rational argumentation. Better to either (A) present the arguments (e.g. the arguments against Nick Beckstead’s thesis), (B) analyse who the most accomplished experts in this field are, and/or (C) consider how thoughtful people have changed their minds. The EA leaders forum is very longtermist. The most accomplished experts are even more so: Christiano, MacAskill, Greaves, Shulman, Bostrom, etc. The direction of travel of people’s views is even more pro-longtermist. I doubt many of these people wanted to focus on unpopular, niche future topics; as a relative non-expert, I certainly didn’t. I publicly complained about the longtermist focus, until the force of the arguments (and meeting enough non-crazy longtermists) brought me around. If instead of considering (1-3), you consider (1, A-C), you end up wanting a strong longtermist emphasis.
Let’s look at the three arguments for focusing more on shorttermist content:
1. The EA movement does important work in these two causes

I think this is basically a “this doesn’t represent members’ views” argument: “When leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement”. Clearly, to some extent, EA messaging has to cater to what current community-members think. But it is better to defer to reason, evidence, and expertise, to the greatest extent possible. For example, when:
people demanded EA Handbook 2.0 refocus away from longtermism, or
Bostrom’s excellent talk on crucial considerations was removed from effectivealtruism.org
it would have been better to focus on the merits of the ideas, rather than follow majoritarian/populist/democratic rule. The fundamental goal of the movement is to focus on the biggest problems, and everything else has to work around that; the movement is not about us. As JFK once (almost) said: “ask not what EA will do for you but what together we can do for utility”! Personally, I think that by the time a prioritisation movement starts deciding its priorities by vote, it has already reached an existential crisis, because it will take an awful lot of resources to convince the movement’s members of those priorities, which immediately raises the question of whether the movement could be outperformed by one that ditched the voting in favour of reason and evidence. To avoid that crisis, we could train ourselves so that when we hear “this doesn’t represent members’ views”, we hear alarm bells ringing...
2. Many EAs still care about or work on these two causes, and would likely want more people to continue entering them

3. People who get pointed to this feed and don’t get interested in longtermism (or aren’t a fit for careers in it) might think that the EA movement is not for them
Right, but this is the same argument that would cause us to donate to a charity for guide dogs, because people want us to continue doing so. Just as with funds, there are limits to the audience’s attention, so it’s necessary to focus on attracting those who can do the most good in priority areas.
I looked at some literature on this question, considering various reference classes back in 2014: YC founders, Stanford Entrepreneurs, VC-funded companies.
The essence of the problem in my view is 1) choosing (and averaging over) good reference classes, 2) understanding the heavy tails, and 3) understanding that startup founders are selected to be good at founding (a correlation vs causation issue).
First, consider the first two points:
1. Make very sure that your reference class consists mostly of startups, not less-ambitious family/lifestyle businesses.
2. The returns of startups are so heavy-tailed that you can make a fair estimate based on just the richest <1% of founders in the reference class (based on public valuations and any dilution, or based on the likes of Forbes billionaire charts).
For example, in YC, we see that Stripe and Airbnb are worth ~$100B each, and YC has maybe graduated ~2k founders, so each founder might make ~$100M on-expectation.
I’d estimated $6M and $10M on-expectation for VC-funded founders and Stanford founders respectively.
A more controversial reference class is “earn-to-give founders”. Sam Bankman-Fried has made about $10B from FTX. If 50 people have pursued this path, the expected earnings are $200M.
The YC and “earn-to-give” founder classes are especially small. In aggregate, I think we can say that the expected earnings for a generic early-stage EA founder are in the range of $1-100M, depending on their reference class (including the degree of success and situation). Having said this, 60-90% of companies make nothing (or lose money). With such a failure rate, checking against one’s tolerance for personal risk is important.
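A minimal sketch of the reference-class arithmetic above (the valuations and founder counts are the rough figures quoted in this comment; the function is only illustrative, not a real estimation method):

```python
# Rough per-founder expected earnings for a reference class, estimated from
# the heavy tail only. The dollar figures and founder counts are the rough
# ones quoted above, not precise data.

def expected_earnings_per_founder(top_outcomes_usd, n_founders):
    """Value captured by the top few outcomes, averaged over all founders.

    Returns are heavy-tailed enough that the richest <1% of founders
    account for most of the expected value, so this is a fair first cut.
    """
    return sum(top_outcomes_usd) / n_founders

# Y Combinator: Stripe and Airbnb at ~$100B each, over ~2,000 YC founders.
yc = expected_earnings_per_founder([100e9, 100e9], 2_000)   # ~$100M

# "Earn-to-give founders": ~$10B from FTX, over ~50 people on this path.
etg = expected_earnings_per_founder([10e9], 50)             # ~$200M

print(f"YC: ~${yc/1e6:.0f}M; earn-to-give: ~${etg/1e6:.0f}M per founder")
```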
Then, we must augment the analysis by considering the third point:
3. Startup founders are selected to be good at founding (correlation vs causation)
If we intervene to create more EA founders, they’ll perform less well than the EAs that already chose to found startups, because the latter are disproportionately suited to startups. How much worse is unclear—you could try to consider more and less selective classes of founders (i.e. make a forecast that conditions on / controls for features of the founders) but that analysis takes more work, and I’ll leave it to others.
EA popsci would be fun!
§1. The past was totally fucked.
§2. Bioweapons are fucked.
§3. AI looks pretty fucked.
§4. Are we fucked?
§5. Unfuck the world!
Good point—this has changed my model of this particular issue a lot (it’s actually not something I’ve spent much time thinking about).
I guess we should (by default) imagine that if you recruit a person at time T, they’ll do an activity that you would have valued, based on your beliefs at time T.
Some of us thought that recruitment was even better than that, in that the recruited people would update their views over time. But in practice, they only update their views a little bit, so the uncertainty-bonus for recruitment is small. In particular, if you recruit people to a movement based on messaging about cause A, you should expect relatively few of them to switch to cause B on the basis of their group membership, and there may be a lot of within-movement tension between those who do and don’t.
There are also uncertainty-penalties for recruitment. While recruiting, you crystallise your own ideas. You give up time that you might’ve used for thinking, and for reducing your uncertainties.
On balance, recruitment now seems like a pretty bad way to deal with uncertainty.
How the Haste Consideration turned out to be wrong.
In The haste consideration, Matt Wage essentially argued that given exponential movement growth, recruiting someone is very important, and that in particular, it’s important to do it sooner rather than later. Nine years later, no one in the EA movement seems to believe this anymore, but it feels useful to recap what I view as the three main reasons why:
Exponential-looking movement growth will (almost certainly) level off eventually, once the ideas reach the susceptible population. So earlier outreach really only causes the movement to reach its full size at an earlier point (see the sketch after this list). This has been borne out in experience: movement growth was north of 50% per year around 2010, but had tapered to around 10% per year as of 2018-2020. And I’ve seen similar patterns in the AI safety field.
When you recruit someone, they may do what you want initially. But over time, your ideas about how to act may change, and they may not update with you. This has been seen in practice in the EA movement, which was highly intellectual and designed around values rather than particular actions. People were reminded that their role is to help answer a question, not imbibe a fixed ideology. Nonetheless, members’ habits and attitudes crystallised—severely—so that now, when leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement! The same thinking persists several years later. [Edit: this doesn’t counter the haste consideration per se; it’s just one way that recruitment is less good than one might hope. See AGB’s subthread.]
The returns from one person’s movement-building activities will often level off. Basically, it’s a lot easier to recruit your best friends than the rest of your friends, and much easier to recruit your friends of friends than their friends. It also gets harder to recruit once you leave university. I saw this personally: the people who did the most good in the EA movement with me, and/or due to me, were among my best couple of friends from high school, and some of my best friends from the local LessWrong group. These efforts at recruitment during my university days seem potentially much more impactful than my direct actions. More recent efforts at recruitment and persuasion have also made a difference, but a more marginal one, and they seem less impactful than my own direct work.
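As a toy illustration of the first point: a logistic growth curve looks exponential early on but levels off at the same ceiling regardless of when outreach starts, so starting earlier mainly shifts the date of saturation. The ceiling, seed size, and growth rate below are purely illustrative assumptions, not estimates of the EA movement.

```python
import math

# Logistic ("exponential-looking, then levelling off") movement growth.
# Starting outreach earlier shifts the curve left; the ceiling is unchanged.
# Ceiling, seed size, and growth rate are purely illustrative.

def movement_size(year: float, start_year: float, ceiling: float = 10_000,
                  seed: float = 100, rate: float = 0.5) -> float:
    """Members at `year`, growing logistically from `seed` toward `ceiling`."""
    if year < start_year:
        return 0.0
    a = (ceiling - seed) / seed
    return ceiling / (1 + a * math.exp(-rate * (year - start_year)))

for year in (2015, 2025, 2035):
    early = movement_size(year, start_year=2010)
    late = movement_size(year, start_year=2012)
    print(year, round(early), round(late))
# Both runs approach the same ~10,000-member ceiling; the earlier start
# just reaches it a couple of years sooner.
```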
Taking all of this together, I’ve sometimes recommended that university students not spend too much time on recruitment. The advice especially applies to top students, who could become distinguished academics or policymakers later on, as their time may be better spent preparing for that future. My very rough sense is that for some, the optimal amount of time to spend recruiting may be one full-time month; for others, a full-time year. And importantly, our best estimates may change over time!
A step that I think would be good to see even sooner is any professor at a top school getting into the habit of giving talks at high schools for gifted students. At some point, it might be worth a few professors each giving dozens of talks per year, although it wouldn’t have to start that way.
Edit: or maybe just people with “cool” jobs. Poker players? Athletes?
What kinds of names do you think would convey the notion of prioritised action while being less self-aggrandising?
Same energy as:
High impact teachers? (Teaching as Task Y)
The typical view here on high-school outreach seems to be that:
High-school outreach has been somewhat effective, uncovering one highly capable do-gooder per 10-100 exceptional students.
But people aren’t treating it with the requisite degree of sensitivity: they don’t think enough about what parents think, they talk about “converting people”, and there have been incidents of unprofessional behaviour.
So I think high-school outreach should be done, but done differently. Involving some teachers would be a useful step toward professionalisation (separating the outreach from the rationalist community would be another).
But (1) also suggests that teaching at a school for gifted children could be a priority activity in itself. The argument is that if a teacher can inspire a bright student to try to do good in their career, then the student might be manifold more effective than the teacher themselves would have been if they had tried to work directly on the world’s problems. And students at such schools are exceptional enough (Z>2) that this could happen many times throughout a teacher’s career.
This does not mean that teaching is the best way to reach talented do-gooders. But it doesn’t have to be, because it could attract some EAs who wouldn’t suit other outreach paths. It leads to stable and respected employment, involves interpersonal contact that can be meaningful, and so on (at least, some interactions with teachers were quite meaningful to me, well before EA entered my picture).
I’ve said that teachers could help professionalise summer schools, and inspire students. I also think that a new high school for gifted altruists could be a high priority. It could gather talented altruistic students together, so that they have more social support and their curricular needs (e.g. econ, programming, philosophy, research) are better met. I expect that such a school could attract great talent. It would be staffed with pretty talented and knowledgeable teachers, and advised by some professors at top schools. If necessary, by funding scholarships, it could grow its student base arbitrarily. Maybe a really promising project.
A friend’s “names guy” once suggested calling the EA movement “Unfuck the world”...
OK, what names would we expect to promote action-orientation if “GP” wouldn’t?
Yeah, it’s an unfortunate phrasing. Often when people, especially authorities, say that they feel that something is not on the table, they’re in effect declaring that it is off the table, while avoiding the responsibility of explaining why. That probably wasn’t intended, but it still came across as a bit uncool. It’s like: can’t we just figure out whether it’s a good idea, and then decide whether to put it on the table?
I like this style of thinking, but I don’t think it pushes in the direction that you suggest. EA entities with “priorities” in the name disproportionately work on surveys and policy, whereas those with “EA” in the name tend to be communal or meta, e.g. EA Forum, EA Global, EA Handbook, and CEA. Groups that act in the world tend to have neither, like GWWC, AMF, OpenAI.
On balance, I think “global priorities” connotes more concreteness and action-orientation than “EA”, which is more virtue- and identity-oriented. If I were wrong about this, it would partly convince me otherwise.
I kinda think that “I’m an EA/he’s an EA/etc” is mega-cringey (a bad combo of arrogant + opaque acronym + tribal), and that deprecating it is a feature rather than a bug.
Though you can just say “I’m interested in / I work on global priorities / I’m in the prioritisation community”, or anything that you would say about the AI safety community, for example.
TBC, this feels like a bit of a straw man of my actual view, which is that power and communality jointly contribute to risks of cultishness and manipulativeness.