Research Engineering Intern at the Center for AI Safety. Helping to write the AI Safety Newsletter. Studying CS and Economics at the University of Southern California, and running an AI safety club there.
aogara
The implicit argument here seems to be that, even if you think typical investment returns are too low to justify saving over donating, you should still consider investing in AI because it has higher growth potential.
I totally might be misunderstanding your point, but here’s the contradiction as I see it. If you believe (A) the S&P500 doesn’t give high enough returns to justify investing instead of donations, and (B) AI research companies are not currently undervalued (i.e., they have roughly the same net expected future returns as any other company), then you cannot believe that (C) AI stock is a better investment opportunity than any other.
I completely agree that many slow-takeoff scenarios would make tech stocks skyrocket. But unless you’re hoping to predict the future of AI better than the market, I’d say the expected value of AI is already reflected in tech stock prices.
To invest in AI companies but not the S&P500 for altruistic reasons, I think you have to believe AI companies are currently undervalued.
I think the background assumptions are probably doing a lot of work here. You’d have to go really far into the weeds of AI forecasting to get a good sense of what factors push which directions, but I can come up with a million possible considerations.
Maybe slow takeoff is shortly followed by the end of material need, making any money earned in a slow takeoff scenario far less valuable. Maybe the government nationalizes valuable AI companies. Maybe slow takeoff doesn’t really begin for another 50 years. Maybe the profits of AI will genuinely be broadly distributed. Maybe current companies won’t be the ones to develop transformative AI. Maybe investing in AI research increases AI x-risks, by speeding up individual companies or causing a profit-driven race dynamic.
It’s hard to predict when AI will happen; it’s far harder to translate that into present-day stock-picking advice. If you’ve got a world-class understanding of the issues and spend a lot of time on it, then you might reasonably believe you can outpredict the market. But beating the market is the only way to generate higher-than-average returns in the long run.
Fantastic, I completely agree, so I don’t think we have any substantive disagreement.
I guess my only remaining question would then be: should your AI predictions ever influence your investing vs donating behavior? I’d say absolutely not, because you should have incredibly high priors on not beating the market. If your AI predictions imply that the market is wrong, that’s just a mark against your AI predictions.
You seem inclined to agree: The only relevant factor for someone considering donation vs investment is expected future returns. You agree that we shouldn’t expect AI companies to generate higher-than-average returns in the long run. Therefore, your choice to invest or donate should be completely independent of your AI beliefs, because no matter your AI predictions, you don’t expect AI companies to have higher-than-average future returns.
Would you agree with that?
Really valuable post, particularly because EA should be paying more attention to Future Perfect—it’s some of EA’s biggest mainstream exposure. Some thoughts in different threads:
1. Writing for a general audience is really hard, and I don’t think we can expect Vox to maintain the fidelity standards EA is used to. It has to be entertaining, every article has to be accessible to new readers (meaning you can’t build up reader expectations over time, like a sequence of blog posts or a book would), and Vox has to write for the audience they have rather than wait for the audience we’d like.
In that light, look at, say, the baby Hitler article. It has to be connected to the average Vox reader’s existing interests, hence the Ben Shapiro intro. It has to be entertaining, so Matthews digresses into time travel and The Matrix. Then it has to provide valuable informational content: an intro to moral cluelessness and expected value.
It’s pretty tough for one article to do all that, AND seriously critique Great Man history, AND explain the history of the Nazi Party. To me, dropping those isn’t shoddy journalism; it’s a valuable insight into how to engage the readers you actually have, not the ideal reader.
Bottom line: People who took the 2018 EA Survey are twice as likely as the average American to hold a bachelor’s degree, and 7x more likely to hold a Ph.D. That’s why Robin Hanson and GiveWell have been great reading resources so far. But if we actually want EA to go mainstream, we can’t rely on econbloggers and think tanks to reach most people. We need easier explanations, and I think Vox provides that well.
...
(P.S. Small matter: Matthews does not say that it’s “totally impossible” to act in the face of cluelessness, as you implied—he says the opposite: “If we know the near-term effects of foiling a nuclear terrorism plot are that millions of people don’t die, and don’t know what the long-term effects will be, that’s still a good reason to foil the plot.” That’s a great informal explanation. Edit to correct that?)
2. Just throwing it out there: Should EA embrace being apolitical? As in, possible official core virtue of the EA movement proper: Effective Altruism doesn’t take sides on controversial political issues, though of course individual EAs are free to.
Robin Hanson’s “pulling the rope sideways” analogy has always struck me: In the great society tug-of-war debates on abortion, immigration, and taxes, it’s rarely effective to pick a side and pull. First, you’re one of many, facing plenty of opposition, making your goal difficult to accomplish. But second, if half the country thinks your goal is bad, it very well might be. On the other hand, pulling sideways is easy: nobody’s going to filibuster to prevent you from handing out malaria nets—everybody thinks it’s a good idea.
(This doesn’t mean not involving yourself in politics. 80k writes on improving political decision making or becoming a congressional staffer—they’re both nonpartisan ways to do good in politics.)
If EA were officially apolitical like this, we would benefit by Hanson’s logic: we can more easily achieve our goals without enemies, and we’re more likely to be right. But we could also gain credibility and influence in the long run by refusing to enter the political fray.
I think part of EA’s success is because it’s an identity label, almost a third party, an ingroup for people who dislike the Red/Blue identity divide. I’d say most EAs (and certainly the EAs that do the most good) identify much more strongly with EA than with any political ideology. That keeps us more dedicated to the ingroup.
But I could imagine an EA failure mode where, a decade from now, Vox is the most popular “EA” platform and the average EA is liberal first, effective altruist second. This happens if EA becomes synonymous with other, more powerful identity labels—kinda how animal rights and environmentalism could be their own identities, but they’ve mostly been absorbed into the political left.
If apolitical were an official EA virtue, we could easily disown German Lopez on marijuana or Kamala Harris and criminal justice—improving epistemic standards and avoiding making enemies at the same time. Should we adopt it?
3. I have no personal or inside info on Future Perfect, Vox, Dylan Matthews, Ezra Klein, etc. But it seems like they’ve got a fair bit of respect for the EA movement—they actually care about impact, and they’re not trying to discredit or overtake more traditional EA figureheads like MacAskill and Singer.
Therefore I think we should be very respectful towards Vox, and treat them like ingroup members. We have great norms in the EA blogosphere about epistemic modesty, avoiding ad hominem attacks, viewing opposition charitably, etc. that allow us to have much more productive discussions. I think we can extend that relationship to Vox.
Using this piece as an example, if you were criticizing Rob Wiblin’s podcasting instead of Vox’s writing, I think people might ask you to be more charitable. We’re not anti-criticism—we’re absolutely committed to truth and honesty, which means seeking good criticism—but we also have well-justified trust in the community. We share a common goal, and that makes it really easy to cooperate.
Let’s trust Vox like that. It’ll make our cooperation more effective, we can help each other achieve our common goal, and, if necessary, we can always take back our trust later.
Agreed. If you accept the premise that EA should enter popular discourse, most generally informed people should be aware of it, etc., then I think you should like Vox. But if you think EA should be a small elite academic group, not a mass movement, that’s another discussion entirely, and maybe you shouldn’t like Vox.
I think I’d challenge this goal. If we’re choosing between trying to improve Vox vs trying to discredit Vox, I think EA goals are served better by the former.
1. Vox seems at least somewhat open to change: Matthews and Klein seem genuinely pretty EA, they went out on a limb to hire Piper, and they’ve sacrificed some readership to maintain EA fidelity. Even if they place less-than-ideal priority on EA goals vs. progressivism, profit, etc., they still clearly place some weight on pure EA.
2. We’re unlikely to convince Future Perfect’s readers that Future Perfect is bad/wrong and we in EA are right. We can convince core EAs to discredit Vox, but that’s unnecessary—if you read the EA Forum, your primary source of EA info is not Vox.
Bottom line: non-EAs will continue to read Future Perfect no matter what. So let’s make Future Perfect more EA, not less.
I think Vox’s Future Perfect could be a good platform for this—either one of you writing a guest article, or giving Vox the information and letting them write. It’s an interesting news story to cover these broken commitments, Vox’s readership already is fairly interested in animal rights, and they could build it into an ongoing series of articles tracking progress. Maybe consider reaching out directly to Kelsey Piper/Dylan Matthews/Vox?
Agreed on both, an article along the lines of “The world’s biggest pork producer just broke their animal welfare commitment” seems very valuable and possibly effective as shaming, while “Corporate animal welfare campaigning often fails to deliver” would definitely be counterproductive.
Check out Tyler Cowen’s Emergent Ventures.
We want to jumpstart high-reward ideas—moonshots in many cases—that advance prosperity, opportunity, liberty, and well-being. We welcome the unusual and the unorthodox.
Projects will either be fellowships or grants: fellowships involve time in residence at the Mercatus Center in Northern Virginia; grants are one-time or slightly staggered payments to support a project.
Think of the goal of Emergent Ventures as supporting new ideas and projects that are too difficult, too hard to measure, too unusual, too foreign, too small, or…too something to make their way through the usual foundation and philanthropic process.
Here’s the first cohort of grant recipients. I think your project would fit what they’re looking for, and it’s a pretty low cost to apply.
Really cool idea! Two possibilities:
1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate’s health has nothing to do with the fact that you’re an EA; they’d be just as well off listening to any other trusted pundit. I’m not sure if popularity is really your goal, but I think people would be primarily interested in the EA side of this.
2. It might be a good idea to stick to issues where any EA would agree: animal welfare, foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree—they disagree for empirical reasons. If you stick to something where it’s mostly a values question, people might trust your judgements more.
There are probably people who can answer this better, but here’s my crack at it, from most to least important:
1. If people who care about AI safety also happen to be the best at making AI, then they’ll try to align the AI they make. (This is already turning out to be a pretty successful strategy: OpenAI is an industry leader that cares a lot about risks.)
2. If somebody figures out how to align AI, other people can use their methods. They’d probably want to, if they buy that misaligned AI is dangerous to them, but this could fail if aligned methods are less powerful or more difficult than not-necessarily-aligned methods.
3. Credibility and public platform: People listen to Paul Christiano because he’s a serious AI researcher. He can convince important people to care about AI risk.
I agree that LW has been a big part of keeping EA epistemically strong, but I think most of that is selection rather than education. It’s not that reading LW makes you much clearer-thinking or more focused on truth; it’s that only people who are that way to begin with decide to read LW, and they then get channeled to EA.
If that’s true, it doesn’t necessarily discredit rationality as an EA cause area, it just changes the mechanism and the focus: maybe the goal shouldn’t be making everybody LW-rational, it should be finding the people that already fit the mold, hopefully teaching them some LW-rationality, and then channeling them to EA.
The prize definitely seems useful for encouraging deeper, better content. One question: would a smaller, more frequent set of prizes be more effective? Maybe a prize every two weeks?
My intuition says a $1000 top prize won’t generate twice as much impact as a $500 top prize every two weeks—thinking along the lines of prospect theory, where a win is a win and winning $500 is worth a lot more than half of winning $1000; or the criminal deterrence literature, where a higher certainty of a smaller punishment deters crime more effectively than a small chance of a big punishment.
These prize posts probably create buzz and motivate people to begin, improve, and finish their posts; doubling their frequency and halving their payout could be more effective at the same cost.
(Counterargument: the biggest cost isn’t money, it’s time, and a two week turnaround is a lot for moderators. Not sure how to handle that.)
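The prospect-theory intuition above can be sketched numerically. This is a rough illustration, not a claim about the actual prize structure: the concave value function v(x) = x^0.88 uses Tversky and Kahneman’s estimated gain exponent, and the prize amounts are taken from the example in my comment.

```python
# Prospect-theory-style value function for gains: v(x) = x^alpha with alpha < 1,
# so subjective value is concave in dollars (0.88 is Tversky & Kahneman's estimate).
def subjective_value(dollars, alpha=0.88):
    return dollars ** alpha

# Same monthly cost, two structures:
# one $1000 prize per month vs. two $500 prizes per month.
one_big_win = subjective_value(1000)
two_small_wins = 2 * subjective_value(500)

# Because v is concave, two $500 wins carry more total subjective value
# than one $1000 win, even though the dollar cost is identical.
assert two_small_wins > one_big_win
```

The comparison works for any exponent below 1; the concavity, not the specific 0.88, is what drives the conclusion that more frequent, smaller prizes could motivate more at the same cost.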
As a college student, I volunteer a few hours a week at Faunalytics, an EA-aligned animal welfare advocacy/research group. I think volunteering with Faunalytics is a good candidate for a small-scale Task Y.
I started off by editing their old article archives and updating them to fit their new article formatting. It was pretty boring, but it was useful for Faunalytics because it let them publish their archived research summaries, and it let me show Faunalytics that I was committed and could be trusted with responsibility.
Sometimes I’d rewrite old articles that seemed poorly done, so after a few months, my supervisor liked my writing and moved me up to doing my own research summaries. Each week, I’d be assigned a paper about something relevant to animal or environmental advocacy. I’d write an 800-word summary in the style of a blog post, and Faunalytics would publish it to their library. Here’s some of what I wrote (the tagging system is buggy; it doesn’t list a lot of my articles).
I recently stopped doing research summaries for time reasons, but I’m now working with their research team on analyzing data from their annual Animal Tracker survey poll.
The parts I’ve really enjoyed about the work are:
The papers could be interesting, and I learned a bit about animal topics
I think most of what I wrote was informative and would be useful to e.g. animal activists who wanted to better understand a particular question. Examples: Does ecotourism help or harm local wildlife? What’s the relationship between domestic violence and animal abuse? (But, see below: informative and useful to some people is not necessarily the same as effective in doing good)
Writing research summaries is very engaging work, just the right level of difficulty, and my writing skills markedly improved
It can lead to other opportunities: They now trust me enough to let me do their data analysis project, which is really fun, educational, and (given that I’m a student) will be probably the most legitimate thing I’ve published once it’s done. I’d also be comfortable asking my supervisor for a recommendation letter for a job, and if I wanted to get more involved in EA animal rights, I think I’d be able to make connections through Faunalytics.
The parts that weren’t so great are:
On the whole, I’m not sure I’ve had much impact. If I were convinced that the majority of causes within animal welfare are effective, then I would probably think I’ve had a good positive impact. But I don’t think e.g. the environmental impacts of ecotourism are very important from an altruistic standpoint, which really decreases my value.
Being a low-commitment volunteer is simply a bad arrangement in a lot of ways. At least for me, doing something a few hours a week often leads to doing it zero hours a week, especially in a volunteer relationship where you’ve made very little firm commitment and there are no consequences for being late or failing to deliver. I think I combatted this pretty well by forcing myself to stick to deadlines, but I totally understand the GiveWell position of not accepting volunteers because they’re not committed enough.
On the whole, for anyone looking to explore working in EA more broadly, I think volunteering at Faunalytics is a great idea: the possibility of direct impact, mostly engaging work, and a strong opportunity to prove yourself and make connections that can lead to future opportunities. Check it out here if you’re interested, and feel free to message me with questions.
(Anybody have input on whether I should write a full post about my experience/advertising the opportunity?)
I really like the education review, it seems like a great introduction to the literature on effective education interventions. And it’s even better that you’ll be reviewing health interventions soon, given that they seem generally more effective than education, both in terms of certainty and overall impact.
But I would still have strong confidence that GiveWell’s top charities all have significantly higher expected value than the results of this investigation, for two reasons.
First, GiveWell has access to the internal workings of charities, allowing them to recommend charities that do a better job of achieving their intervention. This goes as far as GiveWell making almost a dozen site visits over the past five years to directly observe these charities in action. There’s just no way to replicate this without close, prolonged contact with all the relevant charities.
Second, GiveWell simply has more experience and expertise in development evaluations than someone doing this in their free time. It’s fantastic that you all are working with these donors, and your actions seem likely to have a strong impact. But GiveWell has 25 staff, a decade of experience in the area, and access to any relevant experts and insider information. It’s very difficult to replicate the quality of recommendations that come from that process. Doing the research yourself has other benefits: it increases engagement with the cause, it teaches a valuable skill, etc. But when there’s a million dollars to be donated, it might be best to trust GiveWell.
If the donors want an intervention that’s both certain and transformative, GiveDirectly seems like an obvious choice.
Just a thank you for sharing, it can be scary to share your personal background like this but it’s extremely helpful for people looking into EA careers.
Good point, I wasn’t fully considering that. I think Michael Plant’s recent investigation into mental health as a cause area is a perfect example of the value of independent research—mental health isn’t something GiveWell has seriously evaluated. While I still think it’s going to be extremely difficult to beat GiveWell at, e.g., evaluating which deworming charity is most effective, or which health intervention tends to be most effective, I do think independent researchers can make important contributions in identifying GiveWell’s “blind spots”.
Mental health and education both could be good examples. At this point, GiveWell doesn’t recommend either. But they’re not areas that GiveWell has spent years building expertise in. So it’s reasonable to expect that, in these areas, a dedicated newcomer can produce research that rivals GiveWell’s in quality.
So I’d revise my stance to: Do your own research if there’s an upstream question (like the moral value of mental suffering, the validity of life satisfaction surveys, or the intrinsic value of education) that you think GiveWell might be wrong about. Often, you’ll conclude that they were right, but the value of uncovering their occasional mistakes is high. Still, trust GiveWell if you agree with their initial assumptions on what matters.
I like the general idea that AI timelines matter for all altruists, but I really don’t think it’s a good idea to try to “beat the market” like this. The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they’ll spend billions bidding up the stock price until they’re no longer undervalued.
Thinking that Google and Co are going to outperform the S&P500 over the next few decades might not sound like a super bold belief—but it should. It assumes that you’re capable of making better predictions than the aggregate stock market. Don’t bet on beating markets.