Building out LawAI’s research fellowships and public-facing educational programming
Mjreard
An implicit claim I’m making here is that “I don’t do labels” is kind of a bullshit non-response in a world where some labels are more or less descriptively useful and speakers have the freedom to qualify the extent to which the label applies.
Like I notice no one responds to the question “what’s your relationship to Nazism?” with “I don’t do labels.” People are rightly suspicious of that answer, and there just doesn’t seem to be a need for it. You can just defer to the question asker a tiny bit and give an answer that reflects your knowledge of the label if nothing else.
Yeah one thing I failed to articulate is how not-deliberate most of this behavior is. There’s just a norm/trend of “be scared/cagey/distant” or “try [too] hard to manage perceptions about your relationship to EA” when you’re asked about EA in any quasi-public setting.
It’s genuinely hard for me to understand what’s going on here. Like there are vastly worse ~student groups people have been part of, judged from their current professional outlook, that don’t induce this much panic. It seems like an EA cultural tic.
EA Adjacency as FTX Trauma
I overstated this, but disagree. Overall very few people have ever heard of EA. In tech, maybe you get up to ~20% recognition, but even there, the amount of headspace people give it is very small and you should act as though this is the case. I agree it’s negative directionally, but evasive comments like these are actually a big part of how we got to this point.
There’s a lesson here for everyone in/around EA, which is why I sent the pictured tweet: it is very counterproductive to downplay what or who you know for strategic or especially “optics” reasons. The best optics are honesty, earnestness, and candor. If you have to explain and justify why your statements that are perceived as evasive and dishonest are in fact okay, you probably did a lot worse than you could have on these fronts.
Also, on the object level, for the love of God, no one cares about EA except EAs and some obviously bad faith critics trying to tar you with guilt-by-association. Don’t accept their premise and play into their narrative by being evasive like this. *This validates the criticisms and makes you look worse in everyone’s eyes than just saying you’re EA or you think it’s great or whatever.*
But what if I’m really not EA anymore? Honesty requires that you at least acknowledge that you *were.* Bonus points for explaining what changed. If your personal definition of EA changed over that time, that’s worth pondering and disclosing as well.
I think people overrate how predictable the effect of our actions on the future will be (even though they rate it very low in absolute terms); extinction seems like one of the very few (only?) things whose effects will endure throughout a big part of the future. I still buy the theory that moving from 0% to 1% of possible value is as valuable as moving from 98% to 99%; my doubt is just about tractability.
Donated.
I’ve been hugely impressed by the NT fellows and finalists I came across in my work at 80k and it seems like NT was either their first exposure to EA ideas or the first meaningful opportunity to actively apply the ideas (which can be just as important). I imagine uni groups are well in your debt for your role in helping finalists/fellows connect ahead of starting university too.
You’ve decided to give mostly to established institutions (GWWC, 80k, AMF, GW) – why those over more hits-based approaches (including things that wouldn’t be a burden on your time like giving to AIM or deputizing someone else to make risky grants to promising individuals/small orgs on your behalf)?
How do you think about opportunity costs when it comes to earning to give? Are there roles at other firms or in the US where you would expect to make substantially more (including downside risks), but pass on those for personal reasons?
Same for roles where you might make less but pass on those for ETG reasons.
I think earning to give is the correct primary route to impact for the majority of current EAs and a major current shortcoming of the movement is failing to socially reward earning to give relative to pursuing direct work. I worry that this project, if successful, would push this dynamic further in the wrong direction.
The short version of the argument is that excessive praise for ‘direct work’ has caused a lot of people who fail to secure direct work to feel un-valued and bounce off EA. Others have expanded their definitions of what counts as an impactful org to justify themselves according to the direct work standard when they could have more impact ETGing in a conventional job and donating to the very best existing orgs.
All the EA-committed dollars in the world are a tiny drop in the ocean of the world’s problems and it takes really incredible talent to leverage those dollars in a way that would be more effective than adding to them. Finding talent to do that is critical (I do this), but people need to be well calibrated and thoughtful in deciding whether and for how long to pursue particular direct work opportunities vs ETG. I think hurling (competing!) solemn pledges at them is not the way to make this happen.
The trailer for Ada makes me think it falls in a media no-man’s-land between extremely low-cost, but potentially high-virality creator content and high-cost, fully produced series that go out on major networks. Interested to hear how Should We are navigating the (to me) inorganic nature of their approach.
Sounds like Bequest was making a speculative bet on high-cost, fully produced – which I think is worthwhile. When I think about in-the-water ideas like environmentalism and social justice, my sense is they leveraged media by gently injecting their themes/ideas into independently engaging characters and stories (i.e. the kinds of things for-profit studios would want to produce independent of whether these ideas appeared in the plot).
Not thinking very hard. I think it’s more likely to be an overestimate of the necessary disparity than an underestimate.
There are about 500m humans in tractably dire straits, so if there were 500t animals in an equivalently bad situation, you might be very naïvely indifferent between intervening on one vs the other at a million to one. 500t is probably an oom too high if we’re not counting insects and several ooms too low if we are.
I think the delta for helping animals (life of intense suffering → non-existence) is probably higher (they are in a worse situation), tractability is lower, but neglectedness is way higher such that careful interventions might create compounding benefits in the future in a way I don’t think is very likely in global health given how established the field is.
You’d have to value animals at ~millionths of humans for scale and neglectedness not to be dispositive. The only countervailing considerations are things around cooperativeness, positive feedback loops, and civilizational stability, all of which are speculative and even sign-uncertain.
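For concreteness, a minimal back-of-the-envelope sketch of that naive indifference point, in Python. The 500m/500t figures are just the illustrative round numbers from the comment above, not estimates I’d defend.

```python
# Naive indifference point between helping humans and helping animals,
# ignoring tractability and neglectedness entirely.

humans_in_dire_straits = 500e6    # ~500 million humans in tractably dire straits
animals_in_dire_straits = 500e12  # ~500 trillion animals, excluding insects (rough)

# The per-individual moral weight at which intervening on either population
# looks equally valuable on scale alone.
indifference_ratio = humans_in_dire_straits / animals_in_dire_straits
print(f"Indifferent if 1 animal is worth ~{indifference_ratio:.0e} humans")  # ~1e-06
```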
Quoted Zvi here seems to fail at some combination of thinking on the margin and distinguishing means and ends. GiveWell_USA is also not what Zvi’s take implies you should do.
Quoted Zvi emphasizes that functioning, long-run-sustainable groups/societies are necessary for achieving big things in the world. If you wholly neglect in-group beneficence, your group will fail, and you’ll fail at whatever big thing you were trying to do. This is true and I think any sensible EA acknowledges it.
What is not true, and potentially downright silly, is saying that in-group beneficence must therefore become your terminal goal. I fear that’s what this post suggests.
If your terminal goal is global well-being, you should do whatever amount of in-group beneficence maximizes global well-being. And crucially, it is extremely implausible this amount is 100% of your effort or anything close to that. People watch Netflix and buy Range Rovers – these do nothing for their communities, yet no one worries the communities where people do these things are at risk of failure. In fact, the leisure hours and number of nice cars are often how community success is measured. That’s because these consumer goods are treated as terminal goods. Other things can fill that role with similarly low risk to group sustainability.
I think quoted Zvi is attacking a strawman EA who neglects all his relationships and investments to give his last scrap of food to the poor right now. Yes, that strawman EA’s movement won’t last long. EAs with good jobs, getting promotions, donating 10%, or building highly functional, well-paying non-profits are not that strawman, though. They just notice that the marginal income/effort others put towards nicer cars and bigger houses can be applied to something more meaningful.
Separately, I don’t think GiveWell_USA is what Zvi’s quote has in mind. Neighbors and co-nationals are arbitrary groupings in more ways than just morally. I think Zvi’s suggestion is to invest in especially reciprocal and compounding relationships for you and your people, defined by some measure you care about. For most people this is a cluster of family and close friends; for rationalists, I think it’s other rationalists, maybe churches, I don’t know. I suppose you could choose the US as your tribe, but reciprocity is going to be a lot weaker than in the previous examples and will thereby suffer from Zvi’s critique. An American Patriots Union building lots of tiny homes for the American homeless [1] might well see its coffers and membership run dry after a few decades of none of the people they housed feeling particularly inclined to join or otherwise help the APU in particular. Nothing about the beneficiaries being American made this inherently more reciprocal than bednets for foreigners.
If what you want is a guide on how to build a group that endlessly accumulates social capital and never deploys that capital to any end in particular, I think the partnership of major law/consulting/finance firms or the major political parties are the right models to work with. I just don’t think those groups’ practice of self-interest is worthwhile compared to saving lives.
[1] Let’s say, hypothetically, this is the most QALYs/$ you can do in the US.
All fair points. My biggest point of disagreement is (2). As a real example, only around a third of EAG(x) attendees have applied for advising. That’s a much more expensive program for both the org and the participant. Tons of relevant people effectively don’t know advising exists (e.g. maybe they’ve heard of it once, but if it didn’t make it to the top of their stack that day, they forget about it). Importantly, our highest counterfactual impact comes from talking to only modestly engaged people – maybe people who’ve done some reading but only know 1-2 other EAs – so the fact that this program is aimed at those not-highly-engaged people makes the calls more worthwhile, not less. “You were up for a call if someone leaned on you a bit” seems roughly ideal to me.
I would guess that thinking more about the value of high impact career plan changes and accelerations would make a $50,000 experiment in scaling them seem not-excessive.
For example, it’s not unusual for a hiring manager to say the first-best candidate for a role is 25% better than the second-best candidate. If the role pays $100,000/yr, then naïvely a single plan change coming from these calls pays for half the program. A bit less naïvely, people at top orgs are often paid multiples less than a grantmaker’s willingness to forgo donations, so perhaps one year of work from one change pays back the whole investment.
Now of course advising can’t claim full credit for a change; credit is always divided among many sources/investments. But careers are 20+ years long post-change too. Accelerations will count for less, and of course we’re hoping to drive multiple changes here.
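To make the naive math explicit, here’s a rough sketch. The $50,000 program cost, $100,000 salary, and 25% candidate-quality gap are the figures from the example above; the 10% credit share and 20-year horizon are placeholder assumptions for illustration.

```python
# Rough payback sketch for the advising experiment described above.

program_cost = 50_000   # cost of the experiment
salary = 100_000        # annual pay for the example role
quality_gap = 0.25      # first-best candidate ~25% better than second-best

naive_value_per_change = salary * quality_gap  # ~$25k/yr, i.e. half the program cost

career_years = 20       # careers run 20+ years post-change
credit_share = 0.10     # placeholder: advising claims only a slice of the credit

value_per_change = naive_value_per_change * career_years * credit_share
print(f"Discounted value of one plan change: ${value_per_change:,.0f}")  # ~$50,000
print(f"Pays back the program: {value_per_change >= program_cost}")
```

Even with only a 10% credit share, a single plan change roughly covers the whole program on these assumptions.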
A summary frame is that getting the right people into the right roles is high stakes and the cost of missing the best people is extremely high.
This all turns on the ability of the program to generate calls, and of calls to generate changes, of course, so for some number of calls this program will look bad. FWIW, we’re currently near the worst-case scenario (10 qualified referrers with 2 calls each), but still within ~7x of average call cost. I’m hoping we can bring that down to 2-3x this month. I think the question then becomes whether you think advising overall is excessive, which is a separate conversation. Though it’s worth noting that everyone who’s looked at the program would ~emphatically disagree.
I take responsibility for the current situation. I substantially overestimated my ability to get this program in as many minds as I’d hoped to. Please help me remedy this!
One last thing I’ll highlight is that I expect people who qualify for and win the grants to be pretty engaged and agentic EAs who will use $5,000 very well to advance their careers and create good down the line. I think this is very substantially different from a cash reward.
One Month Left to Win $5,000 Career Grants from 80k
Have you listened to 80k Actually After Hours/Off the Clock? This is close to what I was aiming for, though I think we still skew a bit more abstract.
The bar for contributing is low. I think the high bar you contrast this with is the bar for being hired or paid to contribute, which is indeed high in the conventional sense.
This is an important distinction because people should not equate contributing with being hired/paid as I think they often do. If you think AI risk is the most important problem, you should strive to move further on the continuum of contribution regardless of whether you cross the threshold of being hired.
I hope my post was clear enough that distance itself is totally fine (and you give compelling reasons for that here). It’s ~implicitly denying present knowledge or past involvement in order to get distance that seems bad for all concerned. The speaker looks shifty and EA looks like something toxic you want to dodge.
Responding to a direct question by saying “We’ve had some overlap and it’s a nice philosophy for the most part, but it’s not a guiding light of what we’re doing here” seems like it strictly dominates.