Building out LawAI’s research fellowships and public-facing educational programming
Mjreard
This is 80k’s first in-person HQ video pod too! I’m very interested in feedback on this, and I imagine the podcast team is as well.
The argument for Adequate temporal horizon is somewhat hazier
I read you as suggesting we’d be explicit about the time horizons AIs would or should consider, but it seems to me we’d want them to think very flexibly about the value of what can be accomplished over different time horizons. I agree it’d be weird if we baked “over the whole lightcone” into all the goals we had, but I think we’d want smarter-than-us AIs to consider whether the coffee they could get us in five minutes and one second was potentially way better than the coffee they could get in five minutes, or whether they could make much more money in 13 months vs. a year.
Less constrained decision-making seems more desirable here, especially if we can just have the AIs report the projected trade-offs to us before they move to execution. We don’t know our own utility functions that well, and that’s something we’d want AIs to help with, right?
I’m surprised at the three disagree votes. Most of this seemed almost trivially true to me:
Popular political issues are non-neglected and likely to be more intractable (people have psychological commitments to one side)
The reputational cost you bear in terms of turning people off to high-marginal-impact issues by associating with their political enemies is greater than the low marginal benefit to these popular issues
Make the trade-off yourself, but be aware of the costs
Seems like good advice/a solid foundation for thinking about this.
A minor personal concern I have is foreclosing a maybe-harder-to-achieve but more valuable equilibrium: one where EAs are perceived as quite politically diverse and savvy about both sides of popular politics.
Crucially, this vision depends on EAs engaging with political issues in non-EA fora and not trying to debate which political views are or aren’t “EA” (or tolerated “within EA”). The former is likely to get EA ideas taken more seriously by a wider range of people à la Scott Alexander and Ezra Klein; the latter is likely to push people who were already engaged with EA ideas further towards their personal politics.
Is that just from the tooltip? I’m not sure how anonymous posting works. It’d be interesting to learn who the author was if they didn’t intend to be anonymous and if it was anyone readers would know.
I gave this post a strong downvote because it merely restates some commonly held conclusions without speaking directly to the evidence or experience that supports those conclusions.
I think the value of posts principally derives from their saying something new and concrete and this post failed to do that. Anonymity contributed to this because at least knowing that person X with history Y held these views might have been new and useful.
Sadly I didn’t really know how to give a reliable forecast given the endogenous effect of providing the forecast. I’ll post a pessimistic (for 80k, optimistic for referrers) update to Twitter soon. Basically, I think your chances of winning the funding are ~50% if you get two successful referrals at this point. Five successful referrals probably gets you >80%.
I suspect this will be easy for you in particular, Caleb. Take my money!
Good questions! Yes, they would need to speak and apply in English. There are no barred countries.
To that last point, I’m particularly excited about fans of 80k being referrers for talented people with very little context. If you think a classmate/colleague is incredibly capable, but you don’t back yourself to have a super productive conversation about impactful work with them, outsource that to us!
I wanted to stay very far on the right side of having all our activities clearly relate to our charitable purpose. I know cash indirectly achieves this, but it leaves more room for interpretation, has some arguable optics problems, and potentially leads to unexpected reward hacking. The lackluster reception to the program so far is solid evidence against the latter two concerns.
I think a general career grant would be better and will consider changing it to that. Thanks for raising this question and getting me there!
80k will sponsor conference trips for 10 people who refer others to 80k advising
Leopold’s implicit response as I see it:
Convincing all stakeholders of high p(doom) such that they take decisive, coordinated action is wildly improbable (“step 1: get everyone to agree with me” is the foundation of many terrible plans and almost no good ones)
Still improbable, but less wildly, is the idea that we can steer institutions towards sensitivity to risk on the margin and that those institutions can position themselves to solve the technical and other challenges ahead
Maybe the key insight is that both strategies walk on a knife’s edge. While Moore’s law, algorithmic improvement, and chip design hum along at some level, even a little breakdown in international willpower to enforce a pause/stop can rapidly convert to catastrophe. Spending a lot of effort to get that consensus also has high opportunity cost in terms of steering institutions in the world where the effort fails (and it is very likely to fail).
Leopold’s view more straightforwardly makes a high risk bet on leaders learning things they don’t know now and developing tools they can’t foresee now by a critical moment that’s fast approaching.
I think it’s accordingly unsurprising that confidence in background doom is the crux here. In Leopold’s 5% world, the first plan seems like the bigger risk. In MIRI’s 90% world, the second does. Unfortunately, the error bars are wide here and the arguments on both sides seem so inextricably priors-driven that I don’t have much hope they’ll narrow any time soon.
Things downstream of OpenPhil are in the 90th+ percentile of charity pay, yes, but why do people work in the charity sector? Either because they believe in the specific thing (i.e. they are EAs) or because they want the warm glow of working for a charity. Non-EA charities offer more warm glow, but maybe there’s a corner of “is a charity” and “pays well for a charity even though people in my circles don’t get it” that appeals to some. I claim it’s not many, and EA jobs are hard to discover for the even smaller population of people who have preferences like these and are highly competent.
Junior EA roles sometimes pay better than market alternatives in the short run, but I believe high-potential folks will disproportionately track lifetime earnings over short-run pay and do something that’s better career capital.
I’d guess the biggest con is adverse selection. Why is this person accepting a below market wage at a low-conventional-status organization?
An EA might be the most conventionally talented candidate because they’re willing to take the role despite these things.
Agree with the analysis and quite likely to take the Off the Clock suggestion. Thank you!
I think your current outlook should be the default for people who engage on the forum or agree with the homepage of effectivealtruism.com. I’m glad you got there and that you feel (relatively) comfortable about it. I’m sorry that the process of getting there was so trying. It shouldn’t be.
It sounds like the tryingness came from a social expectation to identify as capital ‘E’ capital ‘A’ upon finding resonance with the basic ideas and that identifying that way implied an obligation to support and defend every other EA person and project.
I wish EA weren’t a question of identity, but of behavior. Which actions are you choosing to take and why? Your reasons can draw on many perspectives at once and none needs to dominate. Even the question of “should I do meta EA work/advocacy?” can be taken this way, and I think something like it is what’s at stake in the “should I identify as EA?” question.
I personally do meta work and feel free to criticize particular EA-identifying people and organizations. I also feel free to quit this work altogether and be quieter about cause-spanning features of EA. If I quit and got quieter, I’d feel like that’s just another specific thing I did, rather than some identity line in the sand I’ve crossed. Maybe it’s because my boss sucked; maybe it’s because I felt compelled to focus on tax policy instead of GCRs, but it wouldn’t be because of my general feelings of worthiness or acceptance by the EA monolith.
A lot of what I’m saying is couched in the language of personal responsibility on your part in what kinds of identity questions you ask yourself, but I want to be clear that the salience of the identity question is also socially created by people asking questions in the direction of fidelity to ideas and their logical limits as a sort of status test, e.g. which utilitarian bullets you will or won’t bite. As much as we shouldn’t preoccupy ourselves with such questions, we shouldn’t preoccupy others with them either.
Your feedback for Actually After Hours: the unscripted, informal 80k podcast
If you haven’t, you should talk to the 80k advising team for feedback. We obviously aren’t the hiring orgs ourselves, but I think we have reasonable priors on how they read certain experiences and proposals. We’ve also been through a bunch of EA hiring rounds ourselves and spoken to many, many people on both sides of them.
I think you’ve failed to think on the margin here. I agree that the broad classes of regulation you point to here have *netted out* badly, but this says little about what the most thoughtful and determined actors in these spaces have achieved.
Classically, Germany’s early 2000s investments in solar R&D had enormous positive externalities on climate and the people who pushed for those didn’t have to support restricting nuclear power also. The option space for them was not “the net-bad energy policy that emerged” vs “libertarian paradise;” it was: “the existing/inevitable bad policies with a bet on solar R&D” vs “the existing/inevitable bad policies with no bet on solar R&D.”
I believe most EAs treat their engagement with AI policy as researching and advocating for narrow policies tailored to mitigate catastrophic risk. In this sense, they’re acting as an organized, expert interest group motivated by a good (and, per some polls, even popular) view of the public interest. They are competing with, rather than complementing, the more selfishly motivated interest groups seeking the kind of influence the oil & gas industry did in the climate context. On your model of regulation, this seems like a wise strategy, perhaps the only viable one. Again, the alternative is not no regulation, but regulation that leaves out the best, most prosocial ideas.
To the extent you’re trying to warn EAs not to indiscriminately cheer any AI policy proposal assuming it will help with x-risk, I agree with you. I don’t, however, agree that that’s reflective of how they’re treating the issue.
Tiny nit: I didn’t and don’t read much into the 80k comment on liking nice apartments. It struck me as the easiest way to disclose (imply?) that he lived in a nice place without dwelling on it too much.
I also got the sense that some kind of averaging consideration (or inverse prioritarianism, where what matters is the flourishing of the ~90th percentile along some dimension) was the essential thing for vitalists. Helping the weakest be stronger doesn’t count when your concern is the peak strength of the strongest.
It’s very much an aesthetically grounded, rather than experientially- or scale-grounded ethics.