GiveDirectly are doing cash transfers in Yemen to help people there afford enough food to eat.
My brief response: I think it’s bad form to move the discussion to the meta-level (i.e. “your comments are too terse”) instead of directly discussing the object-level issues.
Can this really be your complete response to my direct, thorough answer to your question, which you have asked several times?
For example, can you explain why my lengthy comment isn’t a direct object-level response?
Even much of my second comment points out that you omitted MacAskill expressly answering why he supported funding LEEP, which is another object-level response.
I think the elephant in the room is: “Why are they part-time?”
If making more grants is so important, either hire more people or work full-time, no? This is something I don’t understand about the current status quo.
Hi both,
Yes, behavioural science isn’t a topic I’m super familiar with, but it seems very important!
I think most of the focus so far has been on shifting norms/behaviour at top AI labs, for example nudging Publication and Release Norms for Responsible AI.
Recommender systems are a great example of a broader concern. Another is lethal autonomous weapons, where a big focus is “meaningful human control”. Automation bias is an issue even up to the nuclear level—the concern is that people will more blindly trust ML systems, and won’t disbelieve them as people did in several Cold War close calls (eg Petrov not believing his computer warning of an attack). See Autonomy and machine learning at the interface of nuclear weapons, computers and people.
Jess Whittlestone’s PhD was in Behavioural Science; she’s now Head of AI Policy at the Centre for Long-Term Resilience.
Thanks!
This was very much Ellsberg’s view on eg the 80,000 Hours podcast:
“And it was just a lot better for Boeing and Lockheed and Northrop Grumman and General Dynamics to go that way than not to have them, then they wouldn’t be selling the weapons. And by the way what I’ve learned just recently by books like … A guy named Kofsky wrote a book called Harry Truman And The War Scare of 1947.
Reveals that at the end of the war, Ford and GM who had made most of our bombers went back to making cars very profitably. But Boeing and Lockheed didn’t make products for the commercial market, only for commercial air except there wasn’t a big enough market to keep them from bankruptcy. They had suddenly lost their vast orders for military planes in mid 1945. The only way they could avoid bankruptcy was to sell a lot of planes to the government, military planes. But against who? Not Germany we were occupying Germany, not Japan we were occupying Japan. Who was our enemy that you needed a lot of planes against. Well Russia had been our ally during the war, but Russia had enough targets to justify, so they had to be an enemy and they had to be the enemy, and we went off from there.
I would say that having read that book and a few others I could say, I now see since my book was written nine months ago, that the Cold War was a marketing campaign for selling war planes to the government and to our allies. It was a marketing campaign for annual subsidies to the aerospace industry, and the electronics industry. And also the basis for a protection racket for Europe, that kept us as a major European power. Strictly speaking we’re not a European power. But we are in effect because we provide their protection against Russia the super enemy with nuclear weapons, and for that purpose it’s better for the Russians to have ICBM, and missiles, and H-bombs, as an enemy we can prepare against. It’s the preparations that are profitable. All wars have been very profitable for the arms manufacturers, nuclear war will not be, but preparation for it is very profitable, and therefore we have to be prepared.”
Location: Bristol, UK
Remote: Yes
Willing to relocate: Yes, but only within Europe.
Skills: JavaScript + TypeScript, ReactJS and React Native (2 years professional experience), HTML, CSS and Sass; some experience with GraphQL, Node.js, PostgreSQL, WordPress, PHP, Python and Django.
Resume: https://www.linkedin.com/in/matthew-goodman-96ab691b8/
Email: mattgoodman95@gmail.com
A point about hiring and grantmaking, that may or may not already be conventional wisdom:
If you’re hiring for high-autonomy roles at a non-profit, or looking for non-profit founders to fund, then advice derived from the startup world is often going to overweight the importance of entrepreneurialism relative to self-skepticism and reflectiveness.[1]
Non-profits, particularly non-profits with longtermist missions, are typically trying to maximize something that is way more illegible than time-discounted future profits. To give a specific example: I think it’s way harder for an organization like CEA to tell if it’s on the right track than it is for a company like Zoom to tell if it’s on the right track. CEA can track certain specific metrics (e.g. the number of “new connections” reported at each EAG), but it will often be ambiguous how strongly these metrics reflect positive impact—and there will also always be a risk that various negative indirect effects aren’t being captured by the key metrics being used. In some cases, evaluating the expected impact of work will also require making assumptions about how the world will evolve over the next couple decades (e.g. assumptions about how pressing risks from AI are).
I think this means that it’s especially important for these non-profits to employ and be headed by people who are self-skeptical and reflect deeply on decisions. Being entrepreneurial, having a bias toward action, and so on, don’t count for much if the organisation isn’t pointed in the right direction. As Ozzie Gooen has pointed out, there are many examples of massive and superficially successful initiatives (headed by very driven and entrepreneurial people) whose theories-of-impact don’t stand up to scrutiny.
A specific example from Ozzie’s post: SpaceX is a massive and extraordinarily impressive venture that was (at least according to Elon Musk) largely started to help reduce the chance of human extinction, by helping humanity become a multi-planetary species earlier than it otherwise would. But I think it’s hard to see how their work reduces extinction risk very much. If you’re worried about the climate effects of nuclear war, for example, then it seems important to remember that post-nuclear-war Earth would still have a much more hospitable climate than Mars. It’s hard to imagine a disaster scenario where building Martian colonies would be much better than (for example) building some bunkers on Earth.[2] So—relative to the organization’s stated social mission—all the talent, money, and effort SpaceX has absorbed might not ultimately come out to much.
A more concise way to put the concern here: Popular writing on hiring is often implicitly asking the question “How can we identify future Elon Musks?” But, for the most part, longtermist non-profits shouldn’t be looking to put future Elon Musks into leadership positions.[3]
I have in mind, for example, advice given by Y Combinator and advice given in Talent. ↩︎
Another example: It’s possible that many highly successful environmentalist organizations/groups have ended up causing net harm to the environment, by being insufficiently self-skeptical and reflective when deciding how to approach nuclear energy issues. ↩︎
A follow-up thought: Ultimately, outside of earning-to-give ventures, we probably shouldn’t expect the longtermist community (or at least the best version of it) to house many extremely entrepreneurial people. There will be occasional leaders who are extremely high on both entrepreneurialism and reflectiveness (I can currently think of at least a couple); however, since these two traits don’t seem to be strongly correlated, this will probably only happen pretty rarely. It’s also, often, hard to keep extremely entrepreneurial people satisfied in non-leadership positions—since, almost by definition, autonomy is deeply important to them—so there may not be many opportunities, in general, to harness the talents of people who are high on entrepreneurialism but low on reflectiveness. ↩︎
I think we should move away from messaging like “Action X only saves 100 lives. Spending money on malaria nets instead would save 10000 lives. Therefore action X sucks.” Not everyone trusts the GiveWell numbers, and it really is valuable to save 100 lives in any absolute way you look at it.
I understand why doctors might come to EA with a bad first impression, given the anti-doctor sentiment. But we need doctors! We need doctors to help develop high-impact medical interventions, design new vaccines, work on anti-pandemic plans, and so much else. We should have an answer for doctors asking what the most good they can do with their work is, not merely ask them to donate money.
I don’t think the recent diff-in-diff literature is a huge issue here—you’re computing a linear approximation, which might be bad if the actual effect isn’t linear, but this is just the usual issue with linear regression. The main problem the recent diff-in-diff literature addresses is that terrible things can happen if (a) effects are heterogeneous (probable here!) and (b) treatment timing is staggered (I’m not super concerned here, since the analysis is so coarse and assumes roughly similar timing for all units getting potatoes).
They try to establish something like a pre-trends analysis in Table II, but I agree that it would be helpful to have a lot more—an event-study-style plot would be nice, for example. In general, diff-in-diff is a nice way to get information about really hard-to-answer questions, but I wouldn’t take the effect-size estimates too literally.
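For concreteness, here’s a minimal sketch of what such an event-study plot could look like, in Python with statsmodels. Everything here is hypothetical: it assumes a panel file panel.csv with unit, year, outcome, and treat_year columns (treat_year being the first treated year, missing for never-treated units), bins event time at ±5 years, and simply parks never-treated units at the reference bin—a simplification, not how I’d do it in a careful analysis.

```python
# Sketch of an event-study plot for a (coarse) diff-in-diff.
# Hypothetical panel: columns unit, year, outcome, treat_year (NaN if never treated).
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

df = pd.read_csv("panel.csv")  # hypothetical data file

# Event time relative to treatment, binned at +/-5 years.
# Never-treated units are parked at the reference bin (-1) as a simplification.
df["event_time"] = (df["year"] - df["treat_year"]).fillna(-1).clip(-5, 5).astype(int)

# Two-way fixed effects with event-time dummies, t = -1 omitted as the baseline.
base = "C(event_time, Treatment(reference=-1))"
model = smf.ols(f"outcome ~ {base} + C(unit) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit"]}
)

# Collect the event-time coefficients and their 95% confidence intervals.
rows = []
for t in range(-5, 6):
    if t == -1:
        continue
    name = f"{base}[T.{t}]"
    if name in model.params.index:
        lo, hi = model.conf_int().loc[name]
        rows.append((t, model.params[name], lo, hi))

est = pd.DataFrame(rows, columns=["t", "coef", "lo", "hi"]).sort_values("t")
plt.errorbar(est["t"], est["coef"],
             yerr=[est["coef"] - est["lo"], est["hi"] - est["coef"]], fmt="o")
plt.axhline(0, linestyle="--")
plt.xlabel("Years relative to treatment")
plt.ylabel("Estimated effect")
plt.show()
```

The coefficients to the left of zero are the pre-trends check: if those are far from zero, the parallel-trends story behind the headline estimate is already in trouble.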
Agreed. One book that made it really clear for me was The Alignment Problem by Brian Christian. I think that book does a really good job of showing how it’s all part of the same overarching problem area.
I’m not Hayden, but I think behavioural science is a useful area for thinking about AI governance, in particular the design of human-computer interfaces. One example involving currently deployed AI systems is recommender engines (not an HCI example). I’m trying to understand the tendencies of recommenders towards biases like concentration, or contamination problems, and how these impact user behaviour and choice. I’m also looking at how what recommenders optimise for does or doesn’t capture users’ values—whether because of a misalignment of values between the user and the company, or because human preferences are complex and genuinely hard to learn. In doing this, it’s really tricky to distinguish in the wild between the choice architecture (the behavioural parts) and the algorithm when attributing users’ actions to one or the other.
Yes, that is correct, I am using the linguistic sense, similar to “implication” or “suggestion”.
I don’t think that’s the bottleneck in economic development
I think it’s too simplistic to say there’s a single bottleneck.
such as economics classes for youngsters, or funding more economists in these countries, or sending experts from top universities to teach there, etc.
The latter two seem consistent with my proposal. Part of the problem is that there aren’t many economists in developing countries, hence the need to train more. And ASE does bring experts to teach at their campus.
I have not looked into it in detail (read: at all), but this comment is very negative on the World Food Program. Some people may appreciate a deeper dive, however.
Location: New Jersey, USA
Remote: Yes
Willing to relocate: Likely Yes
Skills:
- Machine Learning (Python, TensorFlow, Sklearn): Familiar with creating custom NNs in Keras, properly using packaged ML algorithms, and (mostly) knowing what to use and when. I haven’t reproduced an ML paper in full, but probably could after a decent amount of time. I am in the process of submitting a paper on ensemble learning for splice site prediction to IEEE Access (late submission).
- Python, R, HTML, CSS: I am competent in Python (5 years experience), and am familiar with R, HTML, and CSS. My website: https://rodeoflagellum.github.io
- Forecasting: Top 75 on Metaculus. I believe I am slightly above average at making and updating forecasts. Look through my comments for some applications of time series models.
- Writing: Examples (many incomplete) can be found on my website. One is the essay I wrote on Forecasting Designer Babies, which placed among the top submissions in the Impactful Forecasting Prize (on EAF).
- Education: BA Math and Neuroscience
Resume: Available upon request
Email: rodeoflagellum AT gmail DOT com
My brief response: I think it’s bad form to move the discussion to the meta-level (i.e. “your comments are too terse”) instead of directly discussing the object-level issues.
Instead of what you are suggesting in this ellipsis, it seems like a reasonable first-pass perspective is given directly by the interview you quoted from. I think omitting this is unreasonable.
To be clear, you’re using the linguistic sense of ‘ellipsis’, and not the punctuation mark?
I am no expert, but by far the biggest org is the UN’s World Food Program.
I don’t see much reporting on them from GiveWell, but they get 4/4 from Charity Navigator.
I think so. Not sure where to donate though.
To be clear, I accuse you of engaging in bad faith rhetoric in your above comment and your last response, with an evasion that I specifically anticipated (“this allows the presenter pretend that they never made the implication, and then rake the respondent through their lengthy reply”).
Here are some previous comments of yours that are more direct and don’t use the patterns you are now using, where your views and attitudes are clearer.
If you just kept it in this longtermism/neartermism online thing (and drafted on the sentiment from one of the factions there), that’s OK.
This seems bad because I suspect you are carrying some of the same rhetorical patterns into unrelated technical discussions (for example, in economics), which I view as pretty bad, especially as it’s sort of flying under the radar.