I’m not sure what the metric for the “good schools” list is, but the ranking seemed off to me. Berkeley, Stanford, MIT, CMU, and UW are generally considered the top CS (and ML) schools. Toronto is also top-10 in CS and particularly strong in ML. All of these rankings are of course a bit silly, but I still find it hard to justify the given list unless being located in the UK is somehow considered a large bonus.
I intended the document to be broader than a research agenda. For instance, I describe many topics that I’m not personally excited about but that other people are, and where the excitement seems defensible. I also go into a lot of detail on the reasons that people are interested in different directions. It’s not a literature review in the sense that the references are far from exhaustive, but I personally don’t know of any better resource for learning about what’s going on in the field. Of course, as the author, I’m biased.
Given that Nick has a PhD in Philosophy, and that OpenPhil has funded a large amount of academic research, this explanation seems unlikely.
Disclosure: I am working at OpenPhil over the summer. (I don’t have any particular private information; both of the above facts are publicly available.)
EDIT: I don’t intend to make any statement about whether EA as a whole has an anti-academic bias; I just mean that this particular situation seems unlikely to reflect one.
I’m worried that you’re misapplying the concept of comparative advantage here. In particular, if agents A and B both have the same values and are pursuing altruistic ends, comparative advantage should not play a role—both agents should just do whatever they have an absolute advantage at (taking into account marginal effects, but in a large population this should often not matter).
For example: suppose that EA has a “shortage of operations people” but person A determines that they would have higher impact doing direct research rather than doing ops. Then in fact the best thing is for person A to work on direct research, even if there are already many other people doing research and few people doing ops. (Of course, person A could be mistaken about which choice has higher impact, but that is different from the trade considerations that comparative advantage is based on.)
I agree with the heuristic “if a type of work seems to have few people working on it, all else equal you should update towards that work being more neglected and hence higher impact”, but the justification for that again doesn’t require any consideration of trading with other people. In general, if A and B can trade in a mutually beneficial way, then either A and B have different values or one of them was making a mistake.
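To make this concrete, here is a minimal sketch (the numbers are made up, and marginal impacts are treated as constant, as in the large-population case): with shared values, each agent just takes the option with the highest absolute marginal impact for them, and this does at least as well as the assignment comparative advantage would suggest.

```python
# Toy version of the argument above. Numbers are hypothetical; marginal impacts
# are treated as constant (the large-population case mentioned in the text).
impact = {
    "A": {"research": 10.0, "ops": 4.0},
    "B": {"research": 3.0, "ops": 2.0},  # B's *comparative* advantage is ops (3/2 < 10/4)
}

def best_option(agent):
    # With shared values and fixed marginal impacts, each agent simply picks
    # the option where their own absolute marginal impact is highest.
    return max(impact[agent], key=impact[agent].get)

assignment = {agent: best_option(agent) for agent in impact}
total = sum(impact[agent][task] for agent, task in assignment.items())
print(assignment)              # {'A': 'research', 'B': 'research'}
print("total impact:", total)  # 13.0, vs. 12.0 for the comparative-advantage split (A: research, B: ops)
```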
FWIW, $50k seems really low to me (but I live in the U.S. in a major city, so maybe it’s different elsewhere?). Specifically, I would be hesitant to take a job at that salary, if for no other reason than that I would think the organization was either dramatically undervaluing my skills or so cash-constrained that I would be pretty unsure whether it would still exist in a couple of years.
A rough comparison: if I were doing a commissioned project for a non-profit that I felt was well-run and value-aligned, my rate would be in the vicinity of $50 USD/hour. I’d currently be willing to go down to $25 USD/hour for a project that is something I basically would have done anyways. Once I get my PhD, I think my going rates would be higher, and for a senior-level position I would probably expect more than either of these numbers, unless it was a small, start-up-y organization that I felt was one of the most promising organizations in existence.
EDIT: So that people don’t have to convert to per-year salaries in their heads, the above numbers, if annualized, would be $100k USD/year and $50k USD/year.
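Those annualized figures correspond to roughly 2,000 working hours per year; a quick sketch of the conversion:

```python
# Hourly-to-annual conversion behind the EDIT above (~2,000 working hours/year,
# which is what the stated hourly and annual figures imply).
HOURS_PER_YEAR = 2000
for hourly_usd in (50, 25):
    annual_k = hourly_usd * HOURS_PER_YEAR / 1000
    print(f"${hourly_usd}/hour  ~=  ${annual_k:.0f}k/year")
# $50/hour  ~=  $100k/year
# $25/hour  ~=  $50k/year
```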
(Speaking for myself, not OpenPhil, who I wouldn’t be able to speak for anyways.)
For what it’s worth, I’m pretty critical of deep learning, which is the approach OpenAI wants to take, and still think the grant to OpenAI was a pretty good idea; and I can’t really think of anyone more familiar with MIRI’s work than Paul who isn’t already at MIRI (note that Paul started out pursuing MIRI’s approach and shifted in an ML direction over time).
That being said, I agree that the public write-up on the OpenAI grant doesn’t reflect that well on OpenPhil, and it seems correct for people like you to demand better moving forward (although I’m not sure that adding HRAD researchers as TAs is the solution; also note that OPP does consult regularly with MIRI staff, though I don’t know if they did for the OpenAI grant).
I think the argument along these lines that I’m most sympathetic to is that Paul’s agenda fits more into the paradigm of typical ML research, and so is more likely to fail for reasons that are in many people’s collective blind spot (because we’re all blinded by the same paradigm).
This doesn’t match my experience of why I find Paul’s justifications easier to understand. In particular, I’ve been following MIRI since 2011, and my experience has been that I didn’t find MIRI’s arguments (about specific research directions) convincing in 2011*, and since then have had a lot of people try to convince me from a lot of different angles. I think pretty much all of the objections I have are ones I generated myself, or would have generated myself. Although, the one major objection I didn’t generate myself is the one that I feel most applies to Paul’s agenda.
(* There was a brief period shortly after reading the sequences when I found them extremely convincing, but I think I was much more credulous then than I am now.)
Shouldn’t this cut both ways? Paul has also spent far fewer words justifying his approach to others, compared to MIRI.
Personally, I feel like I understand Paul’s approach better than I understand MIRI’s approach, despite having spent more time on the latter. I actually do have some objections to it, but I feel it is likely to be significantly useful even if (as I, obviously, expect) my objections end up having teeth.
I already mentioned this in my response to kbog above, but I think EAs should approach this cautiously; AI safety is already an area with a lot of noise, with a reputation for being dominated by outsiders who don’t understand much about AI. I think outreach by non-experts could end up being net-negative.
In general I think this sort of activism has a high potential for being net negative—AI safety already has a reputation as something mainly being pushed by outsiders who don’t understand much about AI. Since I assume this advice is targeted at the “average EA” (who presumably doesn’t know much about AI), this would only exacerbate the issue.
Thanks for clarifying; your position seems reasonable to me.
OpenPhil made an extensive write-up on their decision to hire Chloe here: http://blog.givewell.org/2015/09/03/the-process-of-hiring-our-first-cause-specific-program-officer/. Presumably after reading that you have enough information to decide whether to trust her recommendations (taking into account also whatever degree of trust you have in OpenPhil). If you decide you don’t trust them, then that’s fine, but I don’t think that can function as an argument that the recommendation shouldn’t have been made in the first place (many people, such as myself, do trust them and got substantial value out of the recommendation and out of reading what Chloe has to say in general).
I feel your overall engagement here hasn’t been very productive. You’re mostly repeating the same point, and to the extent you make other points, it feels like you’re reaching for whatever counterarguments you can think of, without considering whether someone who disagreed with you would have an immediate response. The fact that you and Larks are responsible for 20 of the 32 comments on the thread is a further negative sign to me (you could probably condense the same or more information into fewer, better-thought-out comments than you are currently making).
Instead of writing this like some kind of exposé, it seems you could get the same results by emailing the 80K team, noting the political sensitivity of the topic, and suggesting that they provide some additional disclaimers about the nature of the recommendation.
I don’t agree with the_jaded_one’s conclusions or think his post is particularly well-thought-out, but I don’t think raising the bar on criticism like this is very productive if you care about getting good criticism. (If you think the_jaded_one’s criticism is bad criticism, then I think it makes sense to just argue for that rather than saying that they should have made it privately.)
My reasons are very similar to Benjamin Hoffman’s reasons here.
In my post, I said:
“anything I write that wouldn’t incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn’t be worthwhile.”
I would expect that conditioned on spending a large amount of time to write the criticism carefully, it would be met with significant praise. (This is backed up at least in upvotes by past examples of my own writing, e.g. Another Critique of Effective Altruism, The Power of Noise, and A Fervent Defense of Frequentist Statistics.)
I think parts of academia do this well (although other parts do it poorly, and I think it’s been getting worse over time). In particular, if you present ideas at a seminar, essentially arbitrarily harsh criticism is fair game. Of course, this is different from the public internet, but it’s still a group of people, many of whom do not know each other personally, where pretty strong criticism is the norm.
My impression is that criticism has traditionally been a strong part of Jewish culture, but I’m not culturally Jewish so can’t speak directly.
I heard that Bridgewater did a bunch of stuff related to feedback/criticism but again don’t know a ton about it.
Of course, none of these examples address the fact that much of the criticism of EA happens over the internet, but I do feel that some of the barriers to criticism online also carry over in person (though others don’t).
I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.
There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.
I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn’t incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn’t be worthwhile.
While I’m sympathetic to the fact that there’s also a lot of low-quality / lazy criticism of EA, I don’t think responses that involve setting a high bar for criticism are the right way to go.
(Note that I don’t think that EA is worse than is typical in terms of accepting criticism, though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better.)
Okay, thanks for the clarification. I now see where the list comes from, although I personally am bearish on this type of weighting. For one, it ignores many people who are motivated to make AI beneficial for society but don’t happen to frequent certain web forums or communities. For another, in my opinion it underrates the benefit of extremely competent peers and overrates the benefit of like-minded peers.
While it’s hard to give generic advice, I would advocate going to the school that is best at the research topic one is interested in pursuing, or where there is otherwise a good fit with a strong PI (though basing the decision on a single PI rather than one’s top two or three can sometimes backfire). If one’s interests are not developed enough to have a good sense of topic or PI, then I would go with the general strength of the program.