I can imagine a couple of scenarios:
a) GV asked Open Phil if they had the capacity to look into psychedelics/Alzheimer’s, and Open Phil said “no”
b) GV asked Open Phil for shallow investigations of those areas, and the results weren’t promising enough for Open Phil to want to continue, but weren’t so un-promising that GV gave up
c) GV has some research capacity independent of Open Phil, and decided to use it on these causes (maybe because Dustin/Cari see them as personally motivating/”warm fuzzies”, even if they are potentially high-impact)
...there are plenty of other possibilities I haven’t had time to think of, but some combination of (a) and (c) feels pretty likely to me. (This is entirely speculative; I have no special insight into the relationship between GV and Open Phil.)
“After watching the voting unfold, I think the identity-politics concern is real. Many comments were heavily upvoted soon after being posted.”
I’m not sure what timescale “soon after being posted” represents here. Is your concern more along the lines of:
(a) People seem to have been upvoting comments without having had time to read/think about them,
(b) People seem to have been upvoting comments without having had time to read/think about your post and how it interacted with those comments, or
(c) People seem to have been upvoting comments without having had time to read/think about all the other comments up to that point?
(Or some mix of those, of course.)
I remember taking 5-10 minutes to read some of the shorter arguments, then upvoting them because they made an interesting point or linked me to an article I found useful.
It feels like people aren’t likely to spend more than 10 minutes reading/thinking about a Forum comment except in exceptional cases, but perhaps there are ways you can encourage the right kind of slow thinking in contest posts?
From the edit logs: “almost no unique, well-sourced content here. Merged what was unique to GiveWell”
This is the final note from an editor who deleted the page. This was in early 2017; I’d expect an independent Open Phil page to make a lot more sense now (if they want one to exist).
Dropping in late to note that I really like the meta-point here: It’s easy to get caught up in arguing with the “implications” section of a post or article before you’ve even checked the “results” section. Many counterintuitive arguments fall apart when you carefully check the author’s data or basic logic.
(None of the points I make here are meant to apply to Ben’s points—these are just my general thoughts on evaluating ideas.)
Put another way, arguments often take the form:
A
If A, then B
Therefore, B
It’s tempting to attack “Therefore, B” with anti-B arguments C, D, and E, but I find that it’s usually more productive to start by checking the first two points. Sometimes, you’ll find issues that render “Therefore, B” moot; other times, you’ll see that the author’s facts check out and find yourself moving closer to agreement with “Therefore, B”. Both results are valuable.
For those who haven’t read the Meditation, it’s a discussion of ways in which competitive pressures push civilizations into situations where almost all of our energy and happiness are eaten up by the scramble for scarce resources.
(This is a very brief summary that leaves out a lot of important ideas, and I recommend reading the entire thing, despite its formidable length.)
I especially appreciate the “causal story” section of the post! I’m not sure I fully believe the explanation*, but it’s always good to propose one, rather than handwaving away the reasons that a good cause would be so neglected (an error I frequently see outside of EA, and occasionally in EA-aligned work on other new cause areas).
*The part that rings truest to me is “no ready channels for donation”. Ignorance seems more likely than deliberate neglect; I can picture many large environmental donors being asked about coal seam fires and reacting with “huh, never thought about it” or “is that actually a problem?”
My impression is that few people are researching new interventions in general, whether in climate change or other areas (I could name many promising ideas in global development that haven’t been written up by anyone with a strong connection to EA).
I can’t speak for people who individually choose to work on topics like AI, animal welfare, or nuclear policy, and what their impressions of marginal impact may be, but it seems like EA is just… small, without enough research-hours available to devote to everything worth exploring.
(Especially considering the specialization that often occurs before research topics are chosen; someone who discovers EA in the first year of their machine-learning PhD, after they’ve earned an undergrad CS degree, has a strong reason to research AI risk rather than other topics.)
Perhaps we should be doing more to reach out to talented researchers in fields more closely related to climate change, or students who might someday become those researchers? (As is often the case, “EAs should do more X” means something like “these specific people and organizations should do more X and less Y”, unless we grow the pool of available people/organizations.)
Thanks for sharing this resource!
In future posts like this, I’d recommend including a little more information about the voting system. Does AMF need to win a majority of the votes to receive any money? If so, how close is it to doing so? If not, how much money is a vote likely to be worth?
(There’s a difference between taking time to install a new extension in order to give $50 in expectation vs. giving $0.50 or $0.00 in expectation—the “zero” number is possible if AMF needs a thousand votes to win, given the small size of the EA community.)
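To make that contrast concrete, here’s a rough sketch with made-up numbers; the pool size, allocation rule, and pivotality probabilities are all hypothetical, since I don’t know how this particular contest actually works:

```python
# Back-of-the-envelope sketch of the expected value of casting one vote.
# All numbers here are hypothetical, not taken from the actual contest.

def expected_value_per_vote(pool_dollars, p_pivotal=None, payout_per_vote=None):
    """Expected donation triggered by one additional vote.

    Proportional allocation: each vote is worth a fixed payout.
    Winner-take-all: a vote only matters if it turns out to be pivotal.
    """
    if payout_per_vote is not None:
        return payout_per_vote               # proportional allocation
    return pool_dollars * p_pivotal          # winner-take-all

print(expected_value_per_vote(50_000, p_pivotal=0.001))    # $50: pivotal vote plausible
print(expected_value_per_vote(50_000, p_pivotal=0.00001))  # $0.50: pivotal vote very unlikely
print(expected_value_per_vote(50_000, payout_per_vote=5))  # $5: proportional allocation
```

If AMF needs on the order of a thousand votes it’s unlikely to get, the winner-take-all expectation collapses toward $0.00, which is why knowing the rules matters before asking people to install an extension.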
After Julia decided to step down, I proposed a list of six Forum users who I thought might be good candidates. She and I discussed the options and decided to begin by reaching out to Larks and Khorton, who both accepted; if they hadn’t, I’d have approached other candidates who I believe would also be solid judges.
(There are many more than six contributors who I’d be open to considering; the original shortlist was just six people who quickly came to mind, among whom I expected we’d get at least two “yes” responses.)

I wanted to start with a relatively small addition, but there’s a good chance that the roster will expand later on. I can imagine getting up to a group of 8-10 people without the Prize becoming too difficult to coordinate, and I also wouldn’t be surprised if people sometimes joined up for a couple of months and then stepped down, based on their available time.
What gives you the sense that there’s a lot of causal/top-down planning in EA? It may make more sense to ask: “a lot of causal/top-down planning compared to what?”
On the one hand, the movement’s largest organizations sometimes recommend specific courses of action; on the other hand, they also sometimes recommend “keeping your options open” and “staying flexible”.
Also, EA encompasses a huge range of charities that work on a lot of different things, and new organizations spring up all the time. Overall, even the largest/oldest EA organizations are still practically startups compared with most major American corporations; they frequently make large changes to their mission/strategy on a year-to-year basis, as the result of new data or changes in the resources available to them. (CEA has done many different things over the last five years, GiveWell is undergoing massive change, etc.)
I enjoy posts like these, but it seems difficult to apply them when I’m actually making a charitable donation (or taking other substantive action).
An idea along those lines: Examine the work of an EA organization that has public analysis of the benefits of various interventions (e.g. GiveWell) from the perspective of variable critical-level utilitarianism, and comment on how you’d personally change the way they calculate benefits if you had the chance.
(This may not actually be applicable to GiveWell; if no orgs fit the bill, you could also examine how donations to a particular charity might look through various utilitarian lenses. In general, I’d love to see more concrete applications of this kind of analysis.)
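To show the kind of adjustment I have in mind, here’s a minimal sketch; the welfare numbers and critical levels are made up, and this isn’t how GiveWell (or anyone else) actually models benefits:

```python
# Toy illustration of how a benefit estimate shifts under (variable)
# critical-level utilitarianism. All figures are illustrative only.

def total_value(lifetime_welfare, critical_level=0.0):
    """Each life counts as its lifetime welfare minus the critical level.

    critical_level = 0 reduces to ordinary total utilitarianism;
    a positive critical level counts low-but-positive lives against the total.
    """
    return sum(w - critical_level for w in lifetime_welfare)

# Hypothetical lifetime welfare of lives added or extended by an intervention:
lives = [3.0, 1.5, 0.5, 4.0]

print(total_value(lives))                      # 9.0 under the standard total view
print(total_value(lives, critical_level=1.0))  # 5.0 with a critical level of 1
# A "variable" critical level could average over several plausible values:
print(sum(total_value(lives, c) for c in (0.5, 1.0, 1.5)) / 3)  # 5.0
```

Even a toy comparison like this makes it easier to see which interventions would rise or fall if an evaluator adopted a nonzero critical level.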
Information that makes me lean toward “most giving is local”:
In 2017, roughly 31% of all American donations went to religious institutions, and I’d guess that almost all of that money was for local churches and missions. Only 6% of giving was international.
More than half of all animal-related giving goes to animal shelters (again, I assume these are mostly local shelters).
Many popular giving categories are almost exclusively local: Community centers, food banks, museums, charity hospitals...
I’d also predict similar effects, though with a smaller magnitude, since some of the funding will be chewed up by marketing and processing costs for charities.
Within EA, I’ve seen a couple of examples of limited voucher systems (or systems with some similar properties):
Giving Games give people the chance to allocate funding to one charity out of a small group (usually 2-5)
I’ve personally run a limited “donation voucher” system on Facebook before, offering to donate a small amount ($50, IIRC) to any GiveWell charity on behalf of the first X people who took the offer (I think X = 5, but only three people actually asked)
I’d worry about what the existence of a voucher system might mean for charities that take more risks; if it were sufficiently well-funded, it could increase the attention paid to the philanthropic sector and lead to many more fights over controversial organizations like Planned Parenthood (or even someplace like MIRI).
I also suspect that much of the best philanthropy happens through large donations to lesser-known organizations from people who have the time and money to conduct research (e.g. the kind of work Open Phil does), and that people with less knowledge making smaller donations might not make high-impact choices. (I think it’s very likely that a voucher system would increase the correlation between charities’ spending on marketing and their donation revenue.)
That said, if someone were to propose a bill in the House that redistributed $25 billion in the form of $100 vouchers to every American adult for charitable giving, I might support it, assuming that the costs of the redistribution/vouchering process weren’t too high. The average charity people chose to support (e.g. “buying food for hungry people”) might still have more impact on total welfare than the average use of government funds.
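(As a quick sanity check on the scale, using a round population figure rather than an exact one:)

```python
# Rough consistency check on the $25 billion figure; the population number is approximate.
us_adults = 250_000_000       # round estimate of the number of American adults
voucher = 100                 # dollars per adult
print(us_adults * voucher)    # 25,000,000,000 -> about $25 billion
```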
Are there particular instances of complaints related to voting behavior that you can recall?
I remember seeing a couple of cases over the last ~8 months where users were concerned about low-information downvotes (people downvoting without explaining what they didn’t like). I don’t remember seeing any instances of concern around other aspects of the current system (for example, complaints about high-karma users dominating the perception of posts by strong-voting too frequently). However, I could easily be forgetting or missing comments along those lines.
Currently, you can see the number of votes a post or comment has received by hovering over its karma count. This does let you distinguish between “many upvotes and many downvotes” and “no votes”. Adding a count of upvotes and downvotes would provide more information about the distribution of strong votes (e.g. one strong upvote vs. several weak downvotes, or vice-versa). I can see how that could be useful, and I’ll bring it up with the Forum’s tech team to hear their thoughts. Thank you for the suggestion!
I didn’t intend it as a dodge, though I understand why this information is difficult to provide. I just think that discussing problems may be inherently difficult when one party is anonymous and examples can’t easily come into play.
I could try harder to come up with my own examples for the claims, but that seems like an odd way to handle discussion; it allows almost any criticism to be levied in hopes that the interlocutor will find some fitting anecdote. (Again, this isn’t the fault of the critics; it’s just a difficult feature of the situation.)
What are some EA projects you consider “status quo”, and how is following the status quo relevant to the worthiness of the projects? (Maybe your concern comes from the idea that projects which could be handled by non-contrarians are instead taking up time/energy that could be spent on something more creative/novel?)
Could you say more about your and Vaidehi’s history in the EA movement? What experience are you both bringing to this project that will help you make these connections?
(If you answer here, you may also want to update your Forum bio!)
Not a problem—I posted the reply long after the post went up, so I wouldn’t expect you to recall too many details. No need to send a PM, though I would love to read the article for point four (your link is currently broken). Thanks for coming back to reply!
I work for CEA, but these views are my own—though they are, naturally, informed by my work experience.
First, and most important: Thank you for taking the time to write this up. It’s not easy to summarize conversations like this, especially when they touch on controversial topics, but it’s good to have this kind of thing out in public (even anonymized).
I found the concrete point about Open Phil research hires to be interesting, though the claimed numbers for CFAR seem higher than I’d expect, and I strongly expect that some of the most recent research hires came to Open Phil through the EA movement:
Open Phil recruited for these roles by directly contacting many people (I’d estimate well over a hundred, perhaps 300-400) using a variety of EA networks. For example, I received an email with the following statement: “I don’t know you personally, but from your technical experience and your experience as an EA student group founder and leader, I wonder if you might be a fit for an RA position at Open Philanthropy.”
Luke Muehlhauser’s writeup of the hiring round noted that there were a lot of very strong applicants, including multiple candidates who weren’t hired but might excel in a research role in the future. I can’t guarantee that many of the strong applicants applied because of their EA involvement, but it seems likely.
While I wasn’t hired as an RA, I was a finalist for the role. Bastian Stern, one of the new researchers mentioned in this post, founded a chapter of Giving What We Can in college, and new researcher Jacob Trefethen was also a member of that chapter. If there hadn’t been an EA movement for them to join, would they have heard about the role? Several other Open Phil researchers (whose work includes the long-term future) also have backgrounds in EA community-building.
I’ll be curious to see whether, if Open Phil makes another grant to CFAR, they will note CFAR’s usefulness as a recruiting pipeline (they didn’t in January 2018, but this was before their major 2018 hiring round happened).
Also, regarding claims about 80,000 Hours specifically:
Getting good ops hires is still very important, and I don’t think it makes sense to downplay that.
Even assuming that none of the research hires were coached by 80K (I assume it’s true, but I don’t have independent knowledge of that):
We don’t know how many of the very close candidates came through 80,000 Hours…
...or how many actual hires were helped by 80K’s other resources…
...or how many researchers at other organizations received career coaching.
Open Phil’s enormous follow-on grant to 80K in early 2019 seems to indicate their continued belief that 80K’s work is valuable in at least some of the ways Open Phil cares about.
As for the statements about “compromising on a commitment to truth”… there aren’t enough examples or detailed arguments to say much.
I’ve attended a CFAR workshop, a mini-workshop, and a reunion, and I’ve also run operations for two separate CFAR workshops (over a span of four years, alongside people from multiple “eras” of CFAR/rationality). I’ve also spent nearly a year working at CEA, before which I founded two EA groups and worked loosely with various direct and meta organizations in the movement.
Some beliefs I’ve come to have, as a result of this experience (corresponding to each point):
1. “Protecting reputation” and “gaining social status” are not limited to EA or rationality. Both movements care about this to varying degrees—sometimes too much (in my view), and sometimes not enough. Sometimes, it is good to have a good reputation and high status, because these things both make your work easier and signify actual virtues of your movement/organization.
2. I’ve met some of the most rigorous thinkers I’ve ever known in the rationality movement—and in the EA movement, including EA-aligned people who aren’t involved with the rationality side very much or at all. On the other hand, I’ve seen bad arguments and intellectual confusion pop up in both movements from time to time (usually quashed after a while). On the whole, I’ve been impressed by the rigor of the people who run various major EA orgs, and I don’t think that the less-rigorous people who speak at conferences have much of an influence over what the major orgs do. (I’d be really interested to hear counterarguments to this, of course!)
3. There are certainly people from whom various EA orgs have wanted to dissociate (sometimes successfully, sometimes not). My impression is that high-profile dissociation generally happens for good reasons (the highest-profile case I can think of is Gleb Tsipursky, who had some interesting ideas but on the whole exemplified what the rationalists quoted in your post were afraid of—and was publicly criticized in exacting detail).
I’d love to hear specific examples of “low-status” people whose ideas have been ignored to the detriment of EA, but no one comes to mind; Forum posts attacking mainstream EA orgs are some of the most popular on the entire site, and typically produce lots of discussion/heat (though perhaps less light).
I’ve heard from many people who are reluctant to voice their views in public around EA topics—but as often as not, these are high-profile members of the community, or at least people whose ideas aren’t very controversial.
They aren’t reluctant to speak because they don’t have status — it’s often the opposite, because having status gives you something to lose, and being popular and widely-read often means getting more criticism over even minor points than an unknown person would. I’ve heard similar complaints about LessWrong from both well-known and “unknown” writers; many responses in EA/rationalist spaces take a lot of time to address and aren’t especially helpful. (This isn’t unique to us, of course — it’s a symptom of the internet — but it’s not something that necessarily indicates the suppression of unpopular ideas.)
That said, I am an employee of CEA, so people with controversial views may not want to speak to me at all—but I can’t comment on what I haven’t heard.
4. Again, I’d be happy to hear specific cases, but otherwise it’s hard to figure out which people are “interested in EA’s resources, instead of the mission”, or which “truth-finding processes” have been corrupted. I don’t agree with every grant EA orgs have ever made, but on the whole, I don’t see evidence of systemic epistemic damage.
The same difficulties apply to much of the rest of the conversation—there’s not enough content to allow for a thorough counterargument. Part of the difficulty is that the question “who is doing the best AI safety research?” is controversial, not especially objective, and tinged by one’s perspective on the best “direction” for safety research (some directions are more associated with the rationality community than others). I can point to people in the EA community whose longtermist work has been impressive to me, but I’m not an AI expert, so my opinion means very little here.
As a final thought: I wonder what the most prominent thinkers/public faces of the rationality movement would think about the claims here? My impression from working in both movements is that there’s a lot of mutual respect between the people most involved in each one, but it’s possible that respect for EA’s leaders wouldn’t extend to respect for its growth strategy/overall epistemics.