Doing calls, lead gen, application reviewing, and public speaking for the 80k advising team
Mjreard
You’d have to value animals at ~millionths of humans for scale and neglectedness not to be dispositive. The only countervailing considerations are things around cooperativeness, positive feedback loops, and civilizational stability, all of which are speculative and even sign-uncertain.
Quoted Zvi here seems to fail at some combination of thinking on the margin and distinguishing means and ends. GiveWell_USA is also not what Zvi’s take implies you should do.
Quoted Zvi emphasizes that functioning, long-run-sustainable groups/societies are necessary for achieving big things in the world. If you wholly neglect in-group beneficence, your group will fail, and you’ll fail at whatever big thing you were trying to do. This is true and I think any sensible EA acknowledges it.
What is not true, and potentially downright silly, is saying that in-group beneficence must therefore become your terminal goal. I fear that’s what this post suggests.
If your terminal goal is global well-being, you should do whatever amount of in-group beneficence maximizes global well-being. And crucially, it is extremely implausible that this amount is 100% of your effort or anything close to it. People watch Netflix and buy Range Rovers – these do nothing for their communities, yet no one worries that the communities where people do these things are at risk of failure. In fact, leisure hours and the number of nice cars are often how community success is measured. That’s because these consumer goods are treated as terminal goods. Other things can fill that role with similarly low risk to group sustainability.
I think quoted Zvi is attacking a straw man EA who neglects all his relationships and investments to give his last scrap of food to the poor right now. Yes, that straw man EA’s movement won’t last long. EAs with good jobs, getting promotions, donating 10%, or building highly functional, well-paying non-profits are not that straw man, though. They just notice that the marginal income/effort others put towards nicer cars and bigger houses can be applied to something more meaningful.
Separately, I don’t think GiveWell_USA is what Zvi’s quote has in mind. Neighbors and co-nationals are arbitrary groupings in more ways than just morally. I think Zvi’s suggestion is to invest in especially reciprocal and compounding relationships for you and your people, defined by some measure you care about. For most people this is a cluster of family and close friends; for rationalists, I think it’s other rationalists; maybe churches, I don’t know. I suppose you could choose the US as your tribe, but reciprocity is going to be a lot weaker than in the previous examples and will thereby suffer from Zvi’s critique. An American Patriots Union building lots of tiny homes for the American homeless [1] might well see its coffers and membership run dry after a few decades of none of the people they housed feeling particularly inclined to join or otherwise help the APU in particular. Nothing about the beneficiaries being American made this inherently more reciprocal than bednets for foreigners.
If what you want is a guide on how to build a group that endlessly accumulates social capital and never deploys that capital to any end in particular, I think the partnership of major law/consulting/finance firms or the major political parties are the right models to work with. I just don’t think those groups’ practice of self-interest is worthwhile compared to saving lives.
[1] Let’s say, hypothetically, that this is the most QALYs/$ you can do in the US.
All fair points. My biggest point of disagreement is (2). As a real example, only around a third of EAG(x) attendees have applied for advising, and that’s a much more expensive program for both the org and the participant. Tons of relevant people effectively don’t know advising exists (e.g. maybe they’ve heard of it once, but if it didn’t make it to the top of their stack that day, they forget about it). Importantly, our highest counterfactual impact comes from talking to only modestly engaged people (maybe people who’ve done some reading but only know 1-2 other EAs), so the fact that this program is aimed at these not-highly-engaged people makes the call more worthwhile, not less. “You were up for a call if someone leaned on you a bit” seems roughly ideal to me.
I would guess that thinking more about the value of high-impact career plan changes and accelerations would make a $50,000 experiment in scaling them seem not excessive.
For example, it’s not unusual for a hiring manager to say the first-best candidate for a role is 25% better than the second-best candidate. If the role pays $100,000/yr, then naïvely, a single plan change coming from these calls pays for half the program. A bit less naïvely, people at top orgs are often paid multiples less than what a grantmaker would be willing to forgo in donations for their work, so perhaps one year of work from one change pays back the whole investment.
Now of course advising can’t claim full credit for the change; credit is always divided among many sources/investments. But careers are 20+ years long post-change, too. Accelerations will count for less, and of course we’re hoping to drive multiple changes here.
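To make the arithmetic above concrete, here’s a naive back-of-envelope sketch in Python. The 25% quality gap, $100,000/yr salary, $50,000 program cost, and 20-year post-change career come from the points above; the 20% credit share assigned to advising is purely a hypothetical number for illustration.

```python
# Back-of-envelope value of one career plan change (illustrative sketch only).
# Quoted figures: 25% candidate-quality gap, $100,000/yr salary, $50,000 program cost,
# careers lasting 20+ years post-change. The credit share is a hypothetical assumption.

quality_gap = 0.25        # first-best candidate ~25% better than second-best
salary = 100_000          # $/yr for the role
program_cost = 50_000     # $ cost of the referral experiment
credit_share = 0.20       # hypothetical: advising claims 20% of the credit for the change
career_years = 20         # careers are 20+ years long post-change

value_per_year = quality_gap * salary    # $25,000/yr of extra value from the better hire
print(value_per_year / program_cost)     # 0.5 -> one year naively pays for half the program

value_attributed = value_per_year * credit_share * career_years
print(value_attributed / program_cost)   # 2.0 -> pays back the program twice over
```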
A summary frame is that getting the right people into the right roles is high stakes and the cost of missing the best people is extremely high.
This all turns on the ability of the program to generate calls, and of calls to generate changes, of course, so for some number of calls this program will look bad. FWIW, we’re currently near the worst-case scenario (10 qualified referrers with 2 calls each), but still within ~7x of the average call cost. I’m hoping we can bring that down to 2-3x this month. I think the question then becomes whether you think advising overall is excessive, which is a separate conversation, though it’s worth noting that everyone who’s looked at the program would ~emphatically disagree.
I take responsibility for the current situation. I substantially overestimated my ability to get this program into as many minds as I’d hoped. Please help me remedy this!
One last thing I’ll highlight is that I expect people who qualify for and win the grants to be pretty engaged and agentic EAs who will use $5,000 very well to advance their careers and create good down the line. I think this is very substantially different from a cash reward.
One Month Left to Win $5,000 Career Grants from 80k
Have you listened to 80k Actually After Hours/Off the Clock? This is close to what I was aiming for, though I think we still skew a bit more abstract.
The bar for contributing is low. I think the high bar you contrast this with is the bar for being hired or paid to contribute, which is indeed high in the conventional sense.
This is an important distinction because people should not equate contributing with being hired/paid as I think they often do. If you think AI risk is the most important problem, you should strive to move further on the continuum of contribution regardless of whether you cross the threshold of being hired.
I also got the sense that some kind of averaging consideration (or inverse prioritarianism, where what matters is the flourishing of the ~90th percentile along some dimension) was the essential thing for vitalists. Helping the weakest be stronger doesn’t count when your concern is the peak strength of the strongest.
It’s very much an aesthetically grounded ethics, rather than an experientially or scale-grounded one.
This is 80k’s first in-person HQ video pod too! I am (and I imagine the podcast team is) very interested in feedback on this.
The argument for Adequate temporal horizon is somewhat hazier
I read you as suggesting we’d be explicit about the time horizons AIs would or should consider, but it seems to me we’d want them to think very flexibly about the value of what can be accomplished over different time horizons. I agree it’d be weird if we baked “over the whole lightcone” into all the goals we had, but I think we’d want smarter-than-us AIs to consider whether the coffee they could get us in five minutes and one second was potentially way better than the coffee they could get in five minutes, or whether they could make much more money in 13 months vs. a year.
Less constrained decision-making seems more desirable here, especially if we can just have the AIs report the projected trade-offs to us before they move to execution. We don’t know our own utility functions that well, and that’s something we’d want AIs to help with, right?
I’m surprised at the three disagree votes. Most of this seemed almost trivially true to me:
- Popular political issues are non-neglected and likely to be more intractable (people have psychological commitments to one side)
- The reputational cost you bear by turning people off to high-marginal-impact issues through associating with their political enemies is greater than the low marginal benefit to these popular issues
- Make the trade-off yourself, but be aware of the costs
Seems like good advice/a solid foundation for thinking about this.
A minor personal concern I have is foreclosing a maybe-harder-to-achieve but more valuable equilibrium: one where EAs are perceived as quite politically diverse and savvy about both sides of popular politics.
Crucially, this vision depends on EAs engaging with political issues in non-EA fora and not trying to debate which political views are or aren’t “EA” (or tolerated “within EA”). The former is likely to get EA ideas taken more seriously by a wider range of people à la Scott Alexander and Ezra Klein; the latter is likely to push people who were already engaged with EA ideas further towards their personal politics.
Is that just from the tooltip? I’m not sure how anonymous posting works. It’d be interesting to learn who the author was if they didn’t intend to be anonymous and if it was anyone readers would know.
I gave this post a strong downvote because it merely restates some commonly held conclusions without speaking directly to the evidence or experience that supports those conclusions.
I think the value of posts principally derives from their saying something new and concrete, and this post failed to do that. Anonymity contributed to this, because at least knowing that person X with history Y held these views might have been new and useful.
Sadly, I didn’t really know how to give a reliable forecast given the endogenous effect of providing the forecast. I’ll post a pessimistic (for 80k, optimistic for referrers) update to Twitter soon. Basically, I think your chances of winning the funding are ~50% if you get two successful referrals at this point. Five successful referrals probably gets you >80%.
I suspect this will be easy for you in particular, Caleb. Take my money!
Good questions! Yes, they would need to speak and apply in English. There are no barred countries.
To that last point, I’m particularly excited about fans of 80k being referrers for talented people with very little context. If you think a classmate/colleague is incredibly capable, but you don’t back yourself to have a super productive conversation about impactful work with them, outsource that to us!
I wanted to stay very far on the right side of having all our activities clearly relate to our charitable purpose. I know cash indirectly achieves this, but it leaves more room for interpretation, has some arguable optics problems, and potentially leads to unexpected reward hacking. The lackluster reception to the program so far is solid evidence against the latter two concerns.
I think a general career grant would be better and will consider changing it to that. Thanks for raising this question and getting me there!
80k will sponsor conference trips for 10 people who refer others to 80k advising
Leopold’s implicit response as I see it:
- Convincing all stakeholders of high p(doom) such that they take decisive, coordinated action is wildly improbable (“step 1: get everyone to agree with me” is the foundation of many terrible plans and almost no good ones)
- Still improbable, but less wildly so, is the idea that we can steer institutions towards sensitivity to risk on the margin, and that those institutions can position themselves to solve the technical and other challenges ahead
Maybe the key insight is that both strategies walk on a knife’s edge. While Moore’s law, algorithmic improvement, and chip design hum along at some level, even a little breakdown in international willpower to enforce a pause/stop can rapidly convert to catastrophe. Spending a lot of effort to get that consensus also has high opportunity cost in terms of steering institutions in the world where the effort fails (and it is very likely to fail).
Leopold’s view more straightforwardly makes a high risk bet on leaders learning things they don’t know now and developing tools they can’t foresee now by a critical moment that’s fast approaching.
I think it’s accordingly unsurprising that confidence in background doom is the crux here. In Leopold’s 5% world, the first plan seems like the bigger risk. In MIRI’s 90% world, the second does. Unfortunately, the error bars are wide here and the arguments on both sides seem so inextricably priors-driven that I don’t have much hope they’ll narrow any time soon.
Not thinking very hard. I think it’s more likely to be an overestimate of the necessary disparity than an underestimate.
There are about 500m humans in tractably dire straits, so if there were 500t animals in an equivalently bad situation, you might be very naïvely indifferent between intervening on one vs. the other at a million to one. 500t is probably an OOM too high if we’re not counting insects and several OOMs too low if we are.
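Spelling out that ratio with the round numbers above (a naive sketch only, using the same ~500m and ~500t figures):

```python
# Naive indifference ratio between human and animal interventions.
# Round figures from the comment above: ~500 million humans, ~500 trillion animals.
humans_in_dire_straits = 500e6
animals_in_dire_straits = 500e12

# If both groups are in equivalently bad situations, you'd be indifferent when your
# relative valuation of an animal equals the inverse of the population ratio.
ratio = animals_in_dire_straits / humans_in_dire_straits
print(f"{ratio:,.0f}")  # 1,000,000 -> valuing an animal at ~one-millionth of a human
```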
I think the delta for helping animals (life of intense suffering → non-existence) is probably higher (they are in a worse situation) and tractability is lower, but neglectedness is way higher, such that careful interventions might create compounding future benefits in a way I don’t think is very likely in global health, given how established that field is.