Doing calls, lead gen, application reviewing, and public speaking for the 80k advising team
Mjreard
EAGx Boston 2018 Postmortem
This interview with Obama that Allan Dafoe once pointed to is pretty instructive on these questions. On reflection, reasonable government actors see the case; it's just really hard to prioritize given short-run incentives ("maybe in 20 years the next generation will see things coming and deal with it").
My basic model is that government actors are all tied up by the stopping problem. If you ever want to do something good, you need to make friends and win the next election. The voters and potential allies are even more short-termist than you. Availability bias explains why people would care about private nuclear weapons. Superintelligence codes as dinner party/dorm room chat. It will sell books, but it's not action relevant.
I was wondering if someone was looking into far-UVC devices. I did briefly and it seems they're rare and maybe only available on a B2B basis. Also, I'd guess someone is currently working on a post about how EAGx Boston caused some higher-than-expected number of cases, so there's an update in favor of extra caution there.
It seems like some discussion of s-risks is called for as they seem to be assumed away, though many longtermists are concerned about them.
I took a development class in law school and thought the total focus on aid/politics/culture was a feature of it being in a law school. I guess not.
“Leadership” and “eco-systems” sound very nice as far as they’re described here but I find this post unhelpful as a guide to what “EA” should do.
Assuming this post is addressing EA funders – rather than the collection of diverse, largely uncoordinated, people, organizations, and perspectives that ‘EA’ is – is the claim that funders should open 20 of these offices? Who do they pay to do that and apply the “high standards” for early membership? What are the standards? Should people have models of the world that distinguish good/bad opportunities and big/small ones? At what point does answering these questions become too analytical?
“Find all the smart altruistic people, point them to each other, give them some money, and let them do what they want” sounds nice, but aren’t there hundreds or thousands of organizations interested in funding various projects, not least of which the whole VC industry? My sense is that not analyzing why you might be the funder of last resort, at least a little bit, is a recipe to crash and burn very quickly. $1m/yr/office could feed a handful of people and keep the lights on, but it’s not scaling any projects. “EA” doesn’t have enough money to last long without a lot of analysis and it’s only been around for ~10 years.
People with diverse, niche interests and moxie have had really outsized influence on the world. It's easy to say "go find them," but the ones who will actually make a difference are very, very few and far between, and it takes some analysis to find them. There are a million people in Port-au-Prince and probably hundreds of discernible perspectives on how to make things better there. Multiply that for other localities. The Future Fund has 30 categories of ideas they want to pursue. Maybe that's "too small," but they're largely unaddressed and really big in scale. If they wanted to count all wins as equal, I don't doubt they could rack up a lot of very concrete wins and cool stories, but that seems to be what… all the rest of philanthropy is doing. And I'm glad they are!
There’s an undergrad econ thing where burning a dollar lowers the price level for all other dollar holders and increases their welfare, but everyone thinks you can do better than that by being more discerning. So just saying “more causes/ideas!” isn’t really helpful without some limiting principle.
Narrow AIs have moved from buggy/mediocre to hyper-competent very quickly (months). If early AGIs are widely copied/escaped, the global resolve and coordination required to contain them would be unprecedented in breadth and speed.
I expect warning shots, and expect them to be helpful (vs no shots), but take very little comfort in that.
1-3 seem good for generating more research questions like ASB’s, but the narrower research questions are ultimately necessary to get to impact. 4-8 seem like things EA is over-invested in relative to what ASB lays out here, not that more couldn’t be done there.
Speaking just to the little slice of the world I know:
Using a legal research platform (e.g., Westlaw, LexisNexis, Casetext) could be really helpful with several of these. If you're good at thinking up search terms and analogous products/actors/circumstances (3D-printed firearms, banned substances, and squatting on patents are good examples here), there's basically always a case where someone wasn't happy someone else was doing X, so they hired lawyers to figure out which laws were implicated by X and file a suit/indict someone, usually on multiple different theories/laws.
The useful part is that courts will then write opinions on the theories which provide a clear, lay-person explanation of the law at issue, what it’s for, how it works, etc. before applying it to the facts at hand. Basically, instead of you having to stare at some pretty abstract, high-level, words in a statute/rule and imagine how they apply, a lot of that work has already been done for you, and in an authoritative, citable way. Because cases rely on real facts and circumstances, they also make things more concrete for further analogy-making to the thing you care about.
The downside is that these tools seem to cost at least ~$150/mo, but you may be able to get free access through a university or find other ways to reduce this. Google Scholar case law is free, but pretty bad.
Openness to working in existential risk mitigation is not a strict requirement for having a call with us, but it is our top priority and the broad area we know and think most about. EA identity is not at all a requirement outside the very broad bounds of wanting to do good and being scope sensitive with regard to that good. Accordingly, I think it's worth the 10 minutes to apply if you've 1) read/listened to some 80k content and found it interesting, and 2) have some genuine uncertainty about your long-run career. I think 1) + 2) describe a broad enough range of people that I'm not worried about our potential user base being too small.
So, depending on how you define EA, I might be fine with our current messaging. If people think you need to be a multiple-EAG attendee who wears the heart-lightbulb shirt all the time to get a call, that would be a problem and I’d be interested to know what we’re doing to send that message. When I look at our web content and YouTube ads for example, I’m not worried about being too narrow.
Don’t be too fixated on instant impact. Take good opportunities as they come of course, but people are often drawn towards things that sound good/ambitious for the problems of the moment even though they might not be best positioned to tackle those things and might burn a lot of future opportunities by doing so. Details will vary by situation of course.
Perhaps surprisingly (and perhaps not as relevant to this audience): take cause prioritization seriously, or more generally, have clarity about your ultimate goals and what you'll look to after the fact to know whether you've made good decisions.
It’s very common that someone wants to do X, I ask them why, they give an answer that doesn’t point to their ultimate priorities in life, I ask them “why [thing you pointed to]?” and they more or less draw a blank/fumble around uncertainly. Granted it’s a big question, but it’s your life, have a sense of what you’re trying to do at a fundamental level.
People are often surprised that full-time advisors only do ~400 calls/year, as opposed to something like 5 calls/day (i.e., ~1,300/yr). For one thing, my BOTEC on the average focus time for an individual advisee is 2.25 hours (between call prep, the call itself, post-call notes/research on new questions, introduction admin, and answering follow-up emails). Beyond that, we have to keep up with what's going on in the world and the job markets we track, as well as skilling up as generalist advisors. There are also more formal systems we need to contribute to, like marketing, impact assessment, and maintaining the systems that get us all the information we use to help advisees and keep that 2.25 hours at 2.25 hours.
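The BOTEC above can be sketched out in a few lines. Only the 2.25 hours/advisee and ~400 calls/year figures come from the comment; the 46-week, 40-hour working year is an assumed round number for illustration.

```python
# Rough BOTEC on advisor call volume.
# From the comment: 2.25 focus-hours per advisee, ~400 calls/year.
# Assumed for illustration: a 46-week, 40-hour working year.

hours_per_call = 2.25        # prep + call + notes + intro admin + follow-ups
calls_per_year = 400

advising_hours = hours_per_call * calls_per_year          # 900.0 h/yr on advisees

working_hours_per_year = 46 * 40                          # 1,840 h (assumption)
remaining_hours = working_hours_per_year - advising_hours # 940.0 h left for
                                                          # research, skilling up,
                                                          # marketing, systems, etc.

print(advising_hours, remaining_hours)  # 900.0 940.0
```

So on these assumptions, advisee-facing work alone already takes roughly half of a full working year, which is why 5 calls/day would be infeasible.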
I think a chatbot fails the cost-benefit analysis pretty badly at this point. There are big reputational hits organizations can take for giving bad advice and potential hallucinations just create a lot of surface area there. Importantly, the upside is quite minimal too. If a user wants to, they can pull up ChatGPT and ask it to act as an 80k advisor. It might do okay (or similarly to how okay it would do if we tried to develop one), only it’d be much clearer that we didn’t sanction its output.
I appreciate the care and detail here, but would guess that wild animals dwarf everything considered here and present a much more difficult + important question.
How bad forests are per unit of land, versus the corn/soy/wheat fields or cattle ranches that have been replacing them, seems like a key question.
I was confused by this:
There were lots of explicit statements that no, of course we did not mean that and of course we do not endorse any of that, no one should be doing any of that. And yes, I think everyone means it. But it’s based on, essentially, unprincipled hacks on top of the system, rather than fixing the root problem, and the smartest kids in the world are going to keep noticing this.
You seem to be saying that Sam was among the smartest and so saw that these were unprincipled hacks and ignored them, yet the rest of the post goes into great detail on how profoundly stupid Sam was for buying into such a naive, simplistic world model that very predictably failed on its own terms. I would expect the smartest people in the world to make better predictions and use better models.
I interpret EA as common sense morality+. Abiding by common sense morality will often leave you with a lot of time and energy left over. If you want to do more good, you should use those to increase welfare, and do so in a scientific and scope-sensitive way. Is that clearly not EA or an unprincipled hack?
I found the cultivated meat one a little surprising so made a market:
Too high. I thought there were huge scaling barriers based on something Linch wrote ~2 years ago. Maybe that’s wrong or been retracted.
As I understand it, [part of] Anthropic’s theory of change is to be a meaningful industry player so its safety agenda can become a potential standard to voluntarily emulate or adopt in policy. Being a meaningful industry player in 2024 means having desirable consumer products and advertising them as such.
It’s also worth remembering that this is advertising. Claiming to be a little bit better on some cherry picked metrics a year after GPT-4 was released is hardly a major accelerant in the overall AI race.
Am I violating Reddiquette by advising people to browse the thread, use ctrl+F, and sort by new to find comments they might enjoy?