This response feels like it is making unnecessary concessions in an attempt to appease someone who will probably never be satisfied. For example, Habiba says
Of course we should be working on harms of tech right now also!
But this is not at all obvious! There are strong arguments that the contemporary ‘harms’ of tech are vastly overstated, and even if they were not, it seems unlikely that we should be working on them, given their vastly lower scope/neglectedness/tractability than other issues EAs focus on. I would be very surprised if any credible CBA suggested that short-term tech harms were a better cause area than third world poverty, factory farms and existential risks.
Similarly, Habiba contrasts
cold “number crunching”
caring, thoughtful folks who truly care about helping others
But these by no means need to be in conflict. I think any reasonable evaluation of EAs will find many who are quite unemotional and do a lot of number crunching—the latter is, after all, a core part of cost-effectiveness estimates, and hence of the EA movement. But that doesn’t mean they don’t “truly care”—rather, number-crunching is the best way of executing on that caring.
Despite what seem to me like large concessions, I doubt this sort of approach will ever convince people like Gebru. Her argumentative approach, here and elsewhere, relies heavily on rhetorical/emotional appeals and is generally skeptical of the role of impartial reason. Her epistemic approach seems incompatible with the one the EA movement is trying to promote. For example, it is natural for EAs to want to make comparisons between things—e.g. to say “depression is worse than malaria, but it’s cheaper to fix malaria”—in a way that seems profane to such people. I’m not sure if the suggestion here results from simply misunderstanding the nature of analogy, but it is clearly not the case that we can take arguments about historical moral progress, replace ‘disabled people’ with ‘sighted, hearing people’, and act as if this does not change the argument! Such comparisons are a necessary part of EA thought. Similarly, the principle of charity—trying to understand others’ points of view and address the most plausible version of them, rather than attacking strawmen and making ad hominem accusations—is an important part of EA epistemic practice.
Rather than trying to paper over our differences, I think we should offer a polite but firm defense of our views. Pretending there is no conflict between EA and ‘woke’ ideology seems like it could only be achieved by sacrificing a huge part of what makes EA a distinct and valuable social movement.
This post has five comments; one offers a critique, and the other four are extremely positive. All four positive comments come from accounts which, like that of the post, are newly registered with no sign of other interaction. Unlike PabloAMC’s comment, none of the four display much familiarity with EA principles or considerations. They have low karma/vote ratios*, suggesting they were upvoted by sub-1000 karma accounts, and possibly strongly upvoted once by a 10+ karma account.

* 6⁄5, 5⁄4, 4⁄4, 4⁄3 at time of writing.
Thanks for writing this; very interesting topic.
You have some great charts on life satisfaction reports by age. One way to investigate tractability might be to look for significant groups of people and see whether any do not show this trough during adolescence:
Kids in alternative schools (e.g. charter schools or Montessori schools)?
Amish kids, who get introduced to purposeful work much earlier?
The situation could easily reverse in a short time if awareness of AI risk causes a wave of new research interest, or if 80,000 Hours, the AGI Safety Fundamentals Curriculum, AI Safety Camp and related programs are able to bring more people into the field. So just because we have a funding glut now doesn’t mean we should assume it will continue through 2023, the time period this NSF RfI pertains to.
Could you put some numbers on this please—e.g. how much do you think we might be able to get the NSF to spend on this? I think we have a big difference in our models here; I can’t construct any scenario of the sort you seem to have in mind where this looks plausible.
For context, it looks like the NSF currently spends around $8.5bn a year, and this particular program was only $12.5m. It seems unlikely to me that we could get them to spend 2% of their budget ($170m) on AI safety in 2023. In contrast, if there were somehow $170m of high quality grant proposals, I’m pretty confident the existing EA funding system could fund them all.
This might make sense if all the existing big donors suddenly decided that AI safety was not very important, so we were very short on money. But if that happens it’s probably because they have become aware of compelling new arguments not to fund AI safety, in which case the decision is probably reasonable!
How high quality do you think the grants the NSF will make would be?
Right now there is a very large amount of EA money available for AI safety research, spread between at least four different major groups. Each makes use of well-connected domain experts to solicit and evaluate grants, has an application process designed to be easy for applicants to complete, and distributes funds rapidly. Awards are predictable enough that it is possible to build a career on a single one of these funders. However, the pool of good applicants is extremely small—all the organisations have struggled to spend this money effectively, and actively look for new ways of spending it faster.
In contrast, I would be pessimistic about the quality of potential NSF grants. My concern is that, while we might be able to influence the NSF to fund something called ‘AI safety’, it would not actually be closely related to the thing we care about. The grant reviewers chosen could have many prestigious qualifications yet lack a deep understanding of the problem. NSF grants can, I understand, take a long time to apply for, and the evaluation is also very slow—over six months—and even then success is not assured. So it’s plausible that many high quality safety researchers would prefer dedicated EA funding anyway. Who does this leave getting NSF grants? I worry it would be largely existing academics in related fields who are able to reframe their research as ‘safety’, thereby diluting the field without significantly contributing to the project—similar to ‘AI Ethics’.
I do agree that NSF funding could significantly raise the prestige of the field, but before promoting the idea I would want to be confident that it would be transmitted to grantmakers with high fidelity, and I’m not sure how we can ensure that degree of alignment.
Additionally, sometimes the question seems to ask about one specific cost or benefit of a policy, and respondents are unsure how to answer if they think that issue is unimportant but agree/disagree with the policy for other reasons.
Great post, I think this captures something very important about how the increasing size of, and focus on, externalities leads to more stakeholder vetos.
I think this dynamic affects private companies, governments and more. While “red tape” appears in institutions, its underlying cause often lies in the behavior of private individuals. And I don’t think the world becoming more “libertarian” (at least in the narrow sense of seeking to shrink government) would necessarily solve much (at least, I wouldn’t expect it to lead to more subway stations!)
I think you’re correct that the underlying cause is individuals, but I do think there is something to this regarding the solution. Private businesses have always had to deal with stakeholders, like suppliers and workers, and have historically managed this relatively well, because costs to these stakeholders could be compensated with fungible dollars. This allows for mutually beneficial agreements, competition and so on. In contrast, many of the stakeholder vetoes created by government do not allow such solutions: paying off stakeholders is considered bribery rather than legitimate compensation. It’s true that making such rights alienable is compatible with a relatively high degree of government oversight, but most people would probably regard it as a move in a libertarian direction.
Thanks as always for sharing your detailed thoughts with us!
Yeah, I was considering writing something up if I had free time, but I probably wouldn’t be able to fulfill this requirement at decent quality within a reasonable timeframe.
Thanks for writing this good overview of a perennial topic.
Paying people higher salaries for EA jobs might be an alternative approach to at least part of this problem. It would allow people to save to protect themselves from future unemployment, without the difficult vetting and bad incentive effects of ‘insurance’. It doesn’t help people very early on in their careers, but probably no insurance product would either, as these people would often not have built up a credible history of contribution anyway.
Freedom of speech and freedom of research are important, and as long as someone doesn’t call to intentionally harm or discriminate against another, it’s important that we don’t condition funding on agreement with the funders’ views.
This seems a very strange view. Surely an animal rights grantmaking organisation can discriminate in favour of applicants who also care about animals? Surely a Christian grantmaking organisation can condition its funding on agreement with the Bible? Surely a civil rights grantmaking organisation can decide to only donate to people who agree about civil rights?
How is this a false dilemma?
Stop all technological progress
Advance low carbon technology
Technically it omits a third option (technological progress in areas other than low carbon technology), but it certainly seems to cover all the relevant possibilities to me. Whether we have carbon taxes and so on is a somewhat separate issue: Halstead is arguing that, without technological progress, sufficiently high carbon taxes would be ruinously expensive.
I enjoyed some of the discussion of emergency powers. It could be good to mention the response to covid. Leaving aside whether such policies were justified (they do seem to have saved many lives), country-wide lockdowns were surely among the most illiberal policies enacted in history, and were explicitly motivated by trying to address a global disaster. Outside of genocide and slavery, I struggle to think of many greater restrictions on individuals’ freedom than confining essentially the entire population to semi-house arrest. In many cases these rules were brought in under special emergency powers, and some were later determined to be illegal after judicial review. However, these policies were often extremely popular with the general population, so I’m not sure they fit the democracy-vs-illiberalism dichotomy the article is sort of going for.
The linked article seems to overstate the extent to which EAs support totalitarian policies. While it is true that EAs are generally left-wing and have more frequently proposed increases in the size and scope of government than reductions, Bostrom did commission an entire chapter of his book on the dangers of a global totalitarian government from one of the world’s leading libertarians/anarchists, and longtermists have also often been supportive of things that tend to reduce central control, like charter cities, cryptocurrency and decentralised pandemic control.
Indeed, I find it hard to square the article’s support for ending technological development with its opposition to global governance. Given the social, economic and military advantages that technological advancement brings, it seems hard to believe that the US, China, Russia etc. would all forgo scientific development, absent global coordination/governance. It is precisely people’s skepticism about global government that makes them treat AI progress as inevitable, and hence seek other solutions.
You are correct that this would be much more useful—indeed, this is essentially what I included in an earlier draft. Unfortunately the specific nature of the other ethical constraint makes it difficult to share even the existence of the conflict with any specific group/individual.
Thanks, fixed. The Redwood comment was an artifact of an earlier version of the sentence that referred to ‘well funded groups’ more generally.
You’re right, that was out of date; fixed in both copies.
Open Phil supports Criminal Justice Reform …
Historically yes, but not any more:
[W]e think the top global aid charities recommended by GiveWell (which we used to be part of and remain closely affiliated with) present an opportunity to give away large amounts of money at higher cost-effectiveness than we can achieve in many programs, including CJR, that seek to benefit citizens of wealthy countries.