Research Engineering Intern at the Center for AI Safety. Helping to write the AI Safety Newsletter. Studying CS and Economics at the University of Southern California, and running an AI safety club there.
aogara
Would really appreciate links to Twitter threads or any other publicly available versions of these conversations. Appreciate you reporting what you’ve seen but I haven’t heard any of these conversations myself.
Plan Your Career on Paper
How to find good 1-1 conversations at EAGx Virtual
Matt Levine seems to agree. Some quotes from his article:
Is Binance paying FTX tens of billions of dollars for its equity? I would be very, very, very surprised!
My main assumption is that if you are a crypto exchange facing a “significant liquidity crunch,” and you call up a bigger crypto exchange to ask for help, and you sign a deal for them to buy you the same day, then the price that they are paying you is, roughly speaking, zero.
There is precedent. In my world, the most famous precedent is probably JPMorgan Chase & Co. buying Bear Stearns Cos. for $2 per share one Sunday in 2008, “less than one-tenth the firm’s market price on Friday.” (Later the price was revised up.) If you need a bailout from JPMorgan over the weekend, JPMorgan will step in and make sure that your business can keep operating and that your customers will get paid and that the financial system does not collapse, but you won’t get paid much.
But in the crypto world, the famous precedents are pretty much Sam Bankman-Fried bailing out crypto lenders this summer. FTX bailed out BlockFi Inc., getting an option to buy it for as little as $15 million or as much as $240 million; it had been valued at $3 billion in 2021. Alameda helped out Voyager Digital with a $75 million loan, and FTX ultimately agreed to buy its assets out of bankruptcy, cashing out customers but paying something like $51 million for its actual business; its equity market capitalization was more than $1 billion in April. This summer large crypto firms were on sale at pennies on the dollar if you had some ready cash and a tolerance for risk; Bankman-Fried did… Now, it seems, his large crypto firm was on sale at pennies on the dollar, and Zhao has the cash.
My 2 cents: Nobody’s going to solve the question of social justice here. The path forward is to agree on whatever common ground is possible, and to make sure that disagreements are (a) clearly defined, avoiding big vague words, (b) narrow enough to have a thorough discussion, and (c) relevant to EA. Otherwise, it’s too easy to disagree on the overall “thumbs up or down to social justice” question, and not notice that you in fact agree on most of the important operational questions of what EA should do.
So “When introducing EA to newcomers, we generally shouldn’t discuss income and IQ, because it’s unnecessary and could make people feel unwelcome at first” would be a good claim to disagree on, because it’s important to EA, and because the disagreement is narrow enough to actually sort out.
Other examples of narrow and EA-relevant claims that therefore could be useful to discuss: “EA orgs should actively encourage minority applicants to apply to positions”; “On the EA Forum, no claim or topic should be forbidden for diversity reasons, as long as it’s relevant to EA”; or “In public discussions, EAs should make minority voices welcome, but not single out members of minority groups and explicitly ask for their opinions/experiences, because this puts them in a potentially stressful situation.”
On the other hand, I think this conversation has lots of claims that are (a) too vague to be true or false, (b) too broad to be effectively discussed, or (c) not relevant to EA goals. Questions like these include “Are women oppressed?”, “Is truth more important than inclusivity?”, or “Is EA exclusionary?” It’s not obvious what it would really mean for these to be true or false, you’re unlikely to change anyone’s mind in a reasonable amount of time, and their significance to EA is unclear.
My guess is that we all probably agree a lot on specific operationalized questions relevant to EA, and disagree much more when we abstract to overarching social justice debates. If we stick to specific, EA-relevant questions, there’s probably a lot more common ground here than there seems to be.
This makes a lot of sense to me. Personally I’m trying to use my career to work on longtermism, but focusing my donations on global poverty. A few reasons, similar to what you outlined above:
I don’t want to place all my bets on longtermism. I’m sufficiently skeptical of arguments about AI risk, and sufficiently averse to pinning all my personal impact on a low-probability high-EV cause area, that I’d like to do some neartermist good with my life. Also, this.
Comparatively speaking, longtermism needs more people and global poverty needs more cash. GiveWell has maintained its bar for funding at “8x better than GiveDirectly”, and is delaying grants that would not meet that bar because it expects to find more impactful opportunities over the next few years. Meanwhile, longtermists seem to have lowered the bar for funding significantly, with funding readily available for any individuals interested in working on or towards impactful longtermist projects. (Perhaps the expected value of longtermist giving still looks good because the scale is so much bigger, but getting a global poverty grant seems to require a much more established organization with a proven track record of success.)
The best pitch for EA in my experience is the opportunity to reliably save lives by donating to global poverty charities. When I tell people about EA, I want to be able to tell them that I do the thing I’m recommending. (Though maybe I should be learning to pitch x-risk instead.)
On the whole, it seems reasonable to me for somebody to donate to neartermist causes despite the fact that they believe in the longtermist argument. This is particularly true for people who do or will work directly on longtermism and would like to diversify their opportunities for impact.
This seems very important and I’m surprised it’s been downvoted. Perhaps they’ve already done so, but it might be valuable for OpenPhil to seriously reconsider its conflict of interest policy.
To answer the question, OpenPhil has a relationship disclosure policy. Before August 2017 they disclosed relationships publicly, but since then have disclosed only internally by default. Unlike the foundations you linked, OpenPhil does not require that employees with conflicts of interest remove themselves from the decision-making process for relevant grants. Instead, these conflicts are considered internally before grantmaking decisions are made.
To point out the most obvious conflict of interest, CEO Holden Karnofsky is married to Daniela Amodei and is the brother-in-law of Dario Amodei. After OpenPhil donated $30M to OpenAI, the Amodei siblings were promoted to VP-level positions at OpenAI. They have since left to cofound Anthropic, which received a $124M Series A from folks including Dustin Moskovitz, the primary funder of OpenPhil. OpenPhil has been fairly transparent about this, stating it in their grant report on OpenAI (and I believe I’ve seen it elsewhere). But with OpenAI and Anthropic both contributing to the emerging arms race in language models, some have criticized the history of decisions that led to the success of these organizations. Putting optics aside, OpenPhil might want to consider whether a stronger stance against conflicts of interest might have led to different decisions, and whether those decisions would have been better or worse.
Very nice analysis. One potential confounder could be the unusually strong performance of startups and tech stocks since 2014 due to expansionary monetary policy. I’d be curious to see these numbers broken down by different timeframes: say, the performance of startups founded in 1995-2001, 2001-2008, 2008-2014, and 2014-2022. Perhaps success over the last decade depends strongly on the macro environment — or, maybe building a startup takes long enough that you can outlast any particular business cycle and achieve stable returns.
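If the underlying data were available, that cohort breakdown could be computed in a few lines of pandas. A minimal sketch, assuming a hypothetical DataFrame with `founded_year` and `return_multiple` columns (both names and all numbers are made up for illustration):

```python
import pandas as pd

# Hypothetical data: one row per startup, with its founding year and
# eventual return multiple. These values are placeholders, not real data.
df = pd.DataFrame({
    "founded_year": [1997, 2003, 2010, 2016, 1999, 2012],
    "return_multiple": [0.5, 3.0, 12.0, 1.2, 0.0, 8.0],
})

# Bucket startups into the macro-era cohorts suggested above.
bins = [1995, 2001, 2008, 2014, 2022]
labels = ["1995-2001", "2001-2008", "2008-2014", "2014-2022"]
df["cohort"] = pd.cut(df["founded_year"], bins=bins, labels=labels,
                      include_lowest=True)

# Compare returns across cohorts to see how much the macro
# environment drives the headline numbers.
print(df.groupby("cohort", observed=True)["return_multiple"]
        .agg(["mean", "median", "count"]))
```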
I spent about an hour today trying to convince a friend who works in private equity that OpenAI is undervalued at $30B. I pitched him on short AI timelines and transformative growth, and he didn’t disagree with those arguments directly. He mostly questioned whether OpenAI would reap the benefits of short timelines. A few of his points:
It’s a competitive industry with other players on par or not far behind. Google, Meta, and Anthropic are there already, and startups like Stability and Cohere could quickly close the gap. This is especially true if “scale is all you need”, rather than human capital or privately generated data.
The main opportunity is B2B, not B2C. Businesses are more cost-sensitive and more interested in cheaper alternatives than consumers, who gladly accept name brands.
Profits often lag behind research breakthroughs for years and even decades. There’s no billion-dollar app for GPT yet. Investors “don’t care about anything that’s more than 15 years away.”
IMO these are boring economic arguments that don’t refute the core thesis of short timelines or AI risk. OpenAI is getting a similar valuation to Grammarly, which also sells an LLM product, but with worse tech and better marketing. It’s being valued on short-term revenue prospects more than on considerations about TAI timelines.
This was my understanding as well, that he actually believes in utilitarianism and was only cynical about individual public stances like the value of regulation or the importance of transparency.
I really like this kind of post from 80,000 Hours: a quick update on their general worldview. Patient philanthropy isn’t something I know much about, but this article makes me take it seriously and I’ll probably read what they recommend.
Another benefit of shorter updates might be sounding less conclusive and more thinking-out-loud. Comprehensive, thesis-driven articles might give readers the false impression that 80K is extremely confident in a particular belief, even when the article tries to accurately state the level of confidence. It’s hard to predict how messages will spread organically over time, but frequently releasing smaller updates might highlight that 80K’s thinking is uncertain and always changing. (Of course, the opposite could be true.)
All four current fund managers at LTFF have degrees in computer science, and none have experience in policy. Similarly, neither of OpenPhil’s two staff members on AI Governance has experience working in government or policy organizations. These grantmakers do incredible work, but this seems like a real blind spot. If there are ways that policy can improve the long-term future, I would expect that grantmakers with policy expertise would be best positioned to find them.
EDIT: See below for the new LTFF grantmaker with exactly this kind of experience :)
I think it’s worth engaging with Carol, the Salinas campaign, and more generally people who have been adversely affected by EA efforts. If EA wants to win elections in party politics, it will require working together with the people who run those parties. Narrowly speaking, you might think that they’re not focused on the most important issues or that you have better policy ideas, and you might be right. But the ability to build coalitions, working together despite disagreements to accomplish common goals, is a central challenge of party politics.
I’m not convinced that EAs should donate to the Salinas campaign. FiveThirtyEight gives her a 78% chance of winning her race, meaning that closer races would offer a better chance for donations to tip the scales. Salinas also doesn’t list pandemic preparedness on her Issues page, which was the key issue of the Flynn campaign and I believe an important and neglected cause. But if the argument for the cost-effectiveness of donations to the Salinas campaign were to change, or if EAs found a more cost-effective way to offset possible harms of the Flynn campaign by continuing to engage with Oregonian or Democratic politics, I would consider supporting such an effort.
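To illustrate why closer races offer more leverage, here’s a toy model with made-up numbers (an assumption-laden sketch, not an estimate of any real race): treat the candidate’s final two-party vote share as roughly normal, and suppose a fixed donation shifts the expected share by a tiny amount.

```python
from scipy.stats import norm

SD = 0.05       # assumed standard deviation of the vote-share forecast
SHIFT = 0.0005  # assumed vote-share shift bought by a fixed donation

def win_prob(mean_share: float) -> float:
    """Probability of finishing above 50% of the two-party vote."""
    return 1 - norm.cdf(0.50, loc=mean_share, scale=SD)

# A toss-up, a ~78% favorite, and a ~95% favorite under this model.
for mean_share in (0.50, 0.54, 0.58):
    gain = win_prob(mean_share + SHIFT) - win_prob(mean_share)
    print(f"P(win)={win_prob(mean_share):.2f}  "
          f"marginal gain from donation={gain:.5f}")
```

Under these assumptions, the same donation buys roughly a quarter as much added win probability for a ~95% favorite as for a toss-up, which is the intuition behind directing money toward closer races.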
More simply, EAs should be kind and understanding in our discussions with Carol and others affected by our work. Maybe they’re interested in the EA mindset, but they’re unsure how to interpret our actions. We should show them good examples of how we think.
I believe we should think in terms of marginal effectiveness rather than offsetting particular harms we (individually or as a community) cause (see the author’s “you will have contributed in a small way to this failure” argument). If you want to offset harm that you have done or if you feel guilty, there’s little reason to do good in that particular domain (in this case, by donating to Salinas) rather than doing good in a more effective manner.
I think many people would disagree, and I expect that they’ll interpret your unwillingness to offset direct harms as a moral failure and an inability to cooperate with others. There are some domains that call for ruthless cost-effectiveness, and others that call for building relationships and trust with people with whom you might not always agree. I think politics is the latter.
Will there be an EAG Virtual? Huge fan of those and might not be able to make any in person. Might be a good contingency plan with Omicron too!
Startups would be another good reference class. VCs are incentivized to scale as fast as possible so they can cash out and reinvest their money, but they rarely give a new organization as much money as Redwood received.
Startups usually receive a seed round of ~$2M cash to cover the first year or two of business, followed by ~$10M for Series A to cover another year or two. Even Stripe, a VC wunderkind that’s raised billions privately while scaling to thousands of employees around the world, began with $2M for their first year, $38M for the next three years (2012-2014), and $70M for the next two years after that.
I’m not sure how long Redwood’s $21M is meant to last, but if it’s less than four years, then they’re spending more than the typical ~$5M/year for a Series A startup. There’s a good argument to be made that OP can be more risk tolerant than most VCs and take a big swing on scaling Redwood quickly. But beyond cost-effectiveness, another downside of fast funding is that scaling organizations effectively is very difficult, and it could be counterproductive to hire quickly before you have senior management in place with clear lines of tractable work.
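As a rough check, even if the $21M were meant to last a full four years (an assumption on my part), the implied burn rate is

$$\frac{\$21\text{M}}{4\ \text{years}} = \$5.25\text{M/year},$$

already above the ~$5M/year typical of a Series A startup; any shorter runway implies a higher rate.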
Some numbers here (https://www.investopedia.com/articles/personal-finance/102015/series-b-c-funding-what-it-all-means-and-how-it-works.asp) and here (https://www.fundz.net/what-is-series-a-funding-series-b-funding-and-more). For Stripe’s funding numbers, search Crunchbase for its Seed / Series A / Series B rounds.
One small personal experience: I worked a non-EA job for three years. None of my close friends were interested in EA, and my job wasn’t in a highly impactful cause area. I developed some other interests during those years, reading a lot about startups and VC and finance. Despite my enthusiasm when I first read Peter Singer and Doing Good Better, I think my interest in working on EA topics could have slowly faded and been replaced with other interesting ideas.
The EA community was a big part of what kept me engaged with EA. This forum was a steady stream of information about how to do good in the world, and one that allowed me to voice my own opinions and have lots of interesting conversations. I attended two online EA Globals which mostly made me identify more as an EA. Later I went back to school, where the university EA group leader reached out and encouraged me to join a reading group. We had weekly dinners and great conversations, and only a few months later, I quit my part-time job at a for-profit startup and began working on AI Safety.
It’s hard to say what the counterfactual is, but I think the odds I’d be working on AI Safety right now would have been much lower without the identity, personal connections, and intellectual engagement from the EA community. Part of it is nerdsniping — it’s not always easy to find smart, sensible conversations about the world, but I’ve always found EA to provide plenty of them. There are real downsides — I used to think that I deferred way too much on my opinions about cause prioritization (I think I’ve improved, but maybe I’ve just lost my independent thinking). Your post is a great analysis of those dynamics and I’m not trying to argue for a bottom line, but just wanted to share one personal benefit of the community.
Yeah, it’s kinda hilarious. Speaking so fast that your opponents can’t follow your arguments and therefore lose the round is common practice in some forms of competitive debate. But in other debate categories, using this tactic would immediately lose you the round. In my own experience of high school debate, the quality of competitive debate depends very heavily on the particular category of debate.
The video above is Policy Debate, the oldest form of debate, which degenerated decades ago into unintelligible speed reading and arguments that every policy would result in worldwide nuclear annihilation. In the 1980s, the National Speech and Debate Association instituted a new form of debate called Lincoln-Douglas that attempted to reground debate in commonsense questions about moral values; but LD has also fallen victim to speed reading and even galaxy-brained “kritiks” arguing that the structure of debate itself is racist or sexist and therefore that the round should be abandoned.
Public Forum debate, invented in 2002 as an antidote to Lincoln-Douglas, is IMO a very healthy and educational form of debate. Here is the final round of the 2018 national championship (starting at 4:05) on the resolution, “On balance, the benefits of United States Participation in the North American Free Trade Agreement outweigh the consequences.” https://m.youtube.com/watch?v=MUnyLbeu7qU&feature=youtu.be
British Parliamentary debate is another form of debate that, in my experience, is more civil and less “game-able” than other forms (though Harrison D disagrees below, with specifics about its pitfalls). One key difference is that, while Public Forum allows and encourages debaters to spend weeks researching and debating a single specific resolution, Parliamentary debate typically involves generalized preparation on a subject or theme and only reveals the specific resolution a few minutes before the round begins. Because of this, I think Public Forum is more educational for debaters, but it’s probably easier to run a one-off Parliamentary tournament because debaters won’t be expected to have done as much preparation.
Extemporaneous Speaking is another category involving less preparation, where participants are asked a question about current affairs or politics and have 30 minutes to prepare a seven-minute off-the-cuff speech. There is no “opponent” in Extemp, perhaps limiting the level of discourse, but it might be easy to introduce EA-related topics because participants are expected to be conversant in a wide range of topic areas.
On the whole, I’m very glad to see this EA debate tournament being run, and would be very excited to see further work bringing EA topics into debate. I can understand why many people might find some debate tactics toxic and counterproductive, particularly in categories like Policy and LD, but I think this is a failure of specific categories and tactics, not an indictment of all adversarial debate. Learning the best arguments for both sides of a resolution certainly teaches a bit of an “arguments as soldiers” approach, but I believe the greater effect is to lead debaters to real truths about which arguments are stronger and improve their personal understanding of the issues. In future EA debate events, I would only suggest that organizers be very conscious of these standards and norms when choosing a specific category of debate.
Just a thank you for sharing, it can be scary to share your personal background like this but it’s extremely helpful for people looking into EA careers.
In general, what do you think of the level of conflicts of interest within EA grantmaking? I’m a bit of an outsider to the meta / AI safety folks located in Berkeley, but I’ve been surprised by the frequency of close relationships between grantmakers and grant recipients. (For example, Anthropic raised a big Series A from grantmakers closely related to their president Daniela Amodei’s husband, Holden Karnofsky!)
Do you think COIs pose a significant threat to EA’s epistemic standards? How should grantmakers navigate potential COIs? How should this be publicly communicated?
(Responses from Linch or anybody else welcome)
I used to expect 80,000 Hours to tell me how to have an impactful career. Recently, I’ve started thinking it’s basically my own personal responsibility to figure it out. I think this shift has made me much happier and much more likely to have an impactful career.
80,000 Hours targets the most professionally successful people in the world. That’s probably the right idea for them—giving good career advice takes a lot of time and effort, and they can’t help everyone, so they should focus on the people with the most career potential.
But, unfortunately for most EAs (myself included), the nine priority career paths recommended by 80,000 Hours are some of the most difficult and competitive careers in the world. If you’re among the 99% of people who are not Google programmer / top half of Oxford / Top 30 PhD-level talented, I’d guess you have slim-to-none odds of succeeding in any of them. The advice just isn’t tailored for you.
So how can the vast majority of people have an impactful career? My best answer: A lot of independent thought and planning. Your own personal brainstorming and reading and asking around and exploring, not just following stock EA advice. 80,000 Hours won’t be a gospel that’ll give all the answers; the difficult job of finding impactful work falls to the individual.
I know that’s pretty vague, much more an emotional mindset than a tactical plan, but I’m personally really happy I’ve started thinking this way. I feel less status anxiety about living up to 80,000 Hours’s recommendations, and I’m thinking much more creatively and concretely about how to do impactful work.
More concretely, here are some ways you can do that:
Think of easier versions of the 80,000 Hours priority paths. Maybe you’ll never work at OpenPhil or GiveWell, but can you work for a non-EA grantmaker reprioritizing their giving to more effective areas? Maybe you won’t end up in the US Presidential Cabinet, but can you bring attention to AI policy as a congressional staffer or civil servant? (Edit: I forgot, 80k recommends congressional staffing!) Maybe you won’t run operations at CEA, but can you help run a local EA group?
The 80,000 Hours job board actually has plenty of jobs that aren’t on their priority paths, and I think some of them are much more accessible for a wider audience.
80,000 Hours tries to answer the question “Of all the possible careers people can have, which ones are the most impactful?” That’s the right question for them, but the wrong question for an individual. For any given person, I think it’s probably much more useful to ask, “What potentially impactful careers could I plausibly enter, and of those, which are the most impactful?” Start with what you already have—skills, connections, experience, insights—and think outwards from there: how can you transform what you already have into an impactful career?
There are tons of impactful charities out there. GiveWell has identified some of the top few dozen. But if you can get a job at the 500th most effective charity in the world, you’re still making a really important impact, and it’s worth figuring out how to do that.
Talk to people working on the most important problems who aren’t in the top 1% of professional success—seeing how people like you have an impact can be really motivating and informative.
Personal donations can be really impactful—not earning to give millions in quant trading, just donating a reasonable portion of your normal-sized salary, wherever it is that you work.
Convincing people you know to join EA is also great—you can talk to your friends about EA, or attend/help out at a local EA group. Converting more people to EA just multiplies your own impact.
Don’t let the fact that Bill Gates saved a million lives keep you from saving one. If you put some hard work into it, you can make a hell of a difference to a whole lot of people.