ryancbriggs
I expected you to be right, but when I looked at the 80k job board just now, of the 962 roles: 161 were in AI, 105 were in pandemics, and 308 were in global health and development. It's hard to say exactly how that relates to funding, but regardless I think it shows development is also a major area of focus when measured by jobs instead of dollars.
I completely agree.
The AI Messiah
Thanks for the kind words Richard.
Re: your first point: I agree people have inside view reasons for believing in risk from AGI. My point was just that it’s quite remarkable to believe that, sure, all those other times the god-like figure didn’t show up, but that this time we’re right. I realize this argument will probably sound unsatisfactory to many people. My main goal was not to try to persuade people away from focusing on AI risks, it was to point out that the claims being made are very messianic and that that is kind of interesting sociologically.
Re: your second point: I should perhaps have been clearer: I am not making a parallel to religion as a way of criticizing EA. I think religions are kind of amazing. They’re one of the few human institutions that have been able to reproduce themselves and shape human behaviour in fairly consistent ways over thousands of years. That’s an incredible accomplishment. We could learn from them.
I appreciate the pushback. I’m thinking of all claims that go roughly like this: “a god-like creature is coming, possibly quite soon. If we do the right things before it arrives, we will experience heaven on earth. If we do not, we will perish.” This is narrower than “all transformative change” but broader than something that conditions on a specific kind of technology. To me personally, this feels like the natural opening position when considering concerns about AGI.
I think we probably agree that claims of this type are rarely correct, and I understand that some people have inside view evidence that sways them towards still believing the claim. That’s totally okay. My goal was not to try to dissuade people from believing that AGI poses a possibly large risk to humanity, it was to point to the degree to which this kind of claim is messianic. I find that interesting. At minimum, people who care a lot about AGI risk might benefit from realizing that at least some people view them as making messianic claims.
I really appreciate this response, which I think understands me well. I also think it expresses some of my ideas better than I did. Kudos Thomas. I have a better appreciation of where we differ after reading it.
I’m not sure that it’s purely “how much to trust inside vs outside view,” but I think that is at least a very large share of it. I also think the point on what I would call humility (“epistemic learned helplessness”) is basically correct. All of this is by degrees, but I think I fall more to the epistemically humble end of the spectrum when compared to Thomas (judging by his reasoning). I also appreciate any time that someone brings up the train to crazy town, which I think is an excellent turn of phrase that captures an important idea.
This was a good post overall; I just have one modification.
Your advisor is the most important choice you can make. Talk to as many people as possible in the lab before you join it. If you and your advisor do not get along, your experience will be terrible.
I received this advice, and things worked out for me, but it's dangerously incomplete. It is true that you need a good relationship with an advisor, and their recommendation letter matters when you're on the job market. But in many fields the prestige of the department and university is more important. Put simply: you should probably go to the most prestigious PhD program that will take you. See this for example: "Across disciplines, we find that faculty hiring follows a common and steeply hierarchical structure that reflects profound social inequality." Prestige is especially important if you want an academic position.
This is a good thing to flag. I actually agree re: anthropic reasoning (though frankly I always feel a bit unsettled by its fundamentally unscientific nature).
My main claim re: AI—as I saw it—was that the contours of the AI risk claim match quite closely to messianic prophecies, just in modern secular clothing (I'll note that people both agreed and disagreed with me on this point, and interested people should read my short post and the comments). I still stand by that fwiw—I think it's at minimum an exceptional coincidence.
One underrated response that I have been thinking about was by Jason Wagner, who paraphrased one reading of my claim as:
“AI might or might not be a real worry, but it’s suspicious that people are ramming it into the Christian-influenced narrative format of the messianic prophecy. Maybe people are misinterpreting the true AI risk in order to fit it into this classic narrative format; I should think twice about anthropomorphizing the danger and instead try to see this as a more abstract technological/economic trend.”
In this reading AI risk is real, no one has a great sense of how to explain it because much of its nature is unknown and simply weird, and so we fall back on narratives that we understand—so Christian-ish messiah-type stories.
I'll reword my comment to clarify the part re: "the dangers of anthropic reasoning". I always forget whether "anthropic" refers to the mistake of failing to condition on our existence when making claims, or to the position that we must condition on our existence when making claims.
Results of a survey of international development professors on EA
Thanks. I basically agree with what you say, I’d just note that lots of IDEV profs aren’t economists. I’m writing something I’ll aim at World Development (then JDS, then JID, etc) based on the survey data, for exactly the reasons you describe.
This is the breakdown of “discipline of PhD” in my sample.
| Academic discipline | Canada | United States | United Kingdom |
|---|---|---|---|
| Anthropology | 4 | 13 | 6 |
| Economics | 11 | 47 | 16 |
| Geography | 7 | 8 | 14 |
| History | 2 | 2 | 2 |
| Linguistics and languages | 0 | 1 | 0 |
| Philosophy | 1 | 0 | 1 |
| Political science | 24 | 59 | 17 |
| Psychology | 0 | 1 | 1 |
| Public Policy or Public Administration | 0 | 3 | 0 |
| Sociology | 3 | 12 | 2 |
| Other | 10 | 21 | 25 |
| International Development Studies | 10 | 4 | 23 |
| Nothing selected | 0 | 0 | 1 |

Development economics is a subfield of economics; international development is an interdisciplinary research area. The two are related but not the same. I think most international development people would see development economists as part of their enterprise, but the inverse would typically not be true.
I did not ask for impressions about CGD, JPAL, etc. I did ask an EA “feeling thermometer” question about EA in general (of the subset of people who said they knew enough about EA to discuss with a friend), and I got this (0 is as negative as possible and 100 is as positive as possible):
That spike at 50 is an answer of total indifference, which again affirms that many of the people who said they knew about EA probably didn’t know very much about it.
The question about “which subsets of the profession might be more or less interested in EA” is a very good one. I’m not sure, and I don’t think I can really ask my data to speak to that (but maybe...).
I think the lowest hanging fruit is probably more technically oriented people (economists or quant-oriented political scientists or sociologists), but personally I think a fairly wide cross-section of international development profs could contribute and might be interested in doing so.
You might also be interested in the full report.
I think that’s fair (see also, footnote 2). Fwiw this was the actual question: “Consider a charity whose programs are among the most cost-effective ways of saving the lives of children. In other words, thinking across all charities that currently exist, this one can save a child’s life for the smallest amount of money.
Roughly what do you think is the minimum amount of money that you would have to donate to this charity in order to expect that your money has saved the life of one child?”
Thanks for the kind words and thoughts. I wanted to keep the post short, but if you want more detail there is lots more in the link at the end.
I agree that Q2 has some issues, but what makes Q2 valuable is that other people have used it, and so I have a collection of answers to the question from other samples (the public and experts). That's why I used it (and why I also added my own question, Q1).
There are a lot of economists in my sample, and at least in the US political scientists get a lot of quant methods training so their numeracy tends to be high (in the UK and Canada this varies from place to place). I don’t think the issue is pure innumeracy. I also phrased the question so as to avoid some of the more common misinterpretations.
This was the actual question: “Consider a charity whose programs are among the most cost-effective ways of saving the lives of children. In other words, thinking across all charities that currently exist, this one can save a child’s life for the smallest amount of money.
Roughly what do you think is the minimum amount of money that you would have to donate to this charity in order to expect that your money has saved the life of one child?”
I think that idea has a lot of potential.
I wish I had useful comments Lauren but all I can say is that this was a really interesting read on a topic I haven’t thought much about.
This aligns with my somewhat similar experiences. I hear about profs setting up companies sometimes, and I used to think it was done to make money by taking some idea to market. Lately I've come to think that it's done in large part to dodge university bureaucracy.
I think that longtermism has grown very dramatically, but that it is wrong to equate it with EA (both as a matter of accurate description and for strategic reasons, as are nicely laid out in the above post).
I think the confusion here exists in part because the “EA vanguard” has been quite taken up with longtermism and this has led to people seeing it as more prominent in EA than it actually is. If you look to organizations like The Life You Can Save or Giving What We Can, they either lead with “global health and wellbeing”-type cause areas or focus on that exclusively. I don’t mean to say that this is good or bad, just that EA is less focused on longtermism than people might think based on elite messaging. IIRC this is affirmed by past community surveys.
Personally, I think OpenPhil’s worldview diversification is as good an intellectual frame for holding all this together as I’ve seen. We all get off the “crazy train” at some point, and those who think they’ll be hardcore and bite all bullets eventually hit something like this.