Enthusiastic utilitarian and moral realist. I made this anonymous account to talk about the controversial stuff.
AnonymousQualy
Disclaimer: this is very much just me spitballing. And I only know about the U.S. And I ditched the tractability requirement (see below).
Reasonably high confidence that these are high-impact (though I don’t claim to be an expert):
Unconditional cash transfers (weaker form: child tax credit (big effects on child poverty))
Liberalizing land-use restrictions (like zoning)
Land value tax (this op-ed based on this research was making waves today)
Prison reform (I understand this used to be an EA cause area, but I don’t hear about it much anymore?)
Pandemic resistance (pretty sure there are EAs working on this right now?)
Welfare reform (this is a massive category that deserves to be broken down more, but in the interest of not pouring too much work into this post I’ll just say: housing vouchers are way too difficult to get, and too many programs use work requirements that don’t do much good)
Proportional representation (or, a weaker but perhaps more achievable electoral reform: ranked choice voting)
Liberalizing immigration
It felt wrong not to include something healthcare-related here, but health policy is an area I don’t know much about. As I understand it, the highest-impact reforms in the U.S. would need to involve both moving in a single-payer direction and supply-side interventions to decrease the costs of care and drugs (see patent idea below)
These policies are also interesting to think about, though I have lower confidence in their impact:
Sectoral bargaining
Occupational licensing reform (too many jobs require too many hoops to jump through)
Patent reform (could we get the same innovation with prizes? I don’t know but it seems worth considering)
Public school funding (LARGE CAVEAT: there are clear benefits to higher-quality teachers, but it’s less clear how much you’d need to pay high-quality teachers to get them into the disadvantaged classrooms where they’re needed)
I ditched the “tractability” requirement here because I’m not sure how to think about it. A lot of great policies already have people working on them but still aren’t getting enough attention. Could EA move the needle? Idk, maybe (though frankly, I think our current brand is probably too politically toxic). But then there’s also the fact that once any good policy starts getting attention, it triggers political resistance.
Also, I’m all but certain that policies in low-income countries have way more impact than here in the U.S. I just don’t know enough to speak to those.
My personal consumption decisions just have such a tiny effect compared to my career/donation decisions that it feels like I shouldn’t pay much attention to their direct consequences
This isn’t an argument against veganism; it’s an argument against prioritizing veganism as a cause area. And EA isn’t prioritizing veganism as far as I can tell?
I worry it could be really easy for EA to become a community where people rationalize doing bad things on account of the fact that those things are just a little bit bad compared to all the good things they do.
I really don’t want EA to become that.
This might just be an object-level disagreement about where EA’s main positive impact is likely to come from, on our respective models of the world. E.g., if you think EA mainly has a positive impact via increasing donations to GiveDirectly, then I buy that EA’s current idea pipeline might be a lot weirder than optimal for that.
This is something of an argument for not including such different cause areas under the same banner.
Strong upvote.
Three additional arguments in favor of (marginally!!!!) greater social norm enforcement:
(1)
A movement can only optimize for one thing at a time. EA should be optimizing for doing the most good.
That means sometimes, EA will need to acquiesce to social norms against behaviors that—even if fine in isolation—pose too great a risk of damaging EA’s reputation and through it, EA’s ability to do the most good.
This is trivially true; I think people just disagree about where the line should be drawn. But I’m honestly not sure we’re drawing any lines right now, which seems suboptimal.
(2)
Punishing norm violations can be more efficient than litigating every issue in full (this is in part why humans evolved punishment norms in the first place).
And sometimes, enforcing social norms may not just be more efficient; it may be more likely to reach a good outcome. For example, when the benefits of a norm are diffuse across many people and gradual, but the costs are concentrated and immediate, a collective action problem arises: the beneficiaries have little incentive to litigate the issue, while those hurt have a large incentive. Note how this interacts with point (1): reputation damages to EA at large are highly diffuse.
To strengthen this point, social norms often pass down knowledge that benefits adherents without their ever realizing it. Humans aren’t good at getting the best outcomes from our individual reasoning; we’re good at collective learning.
(3)
There are a lot more people in the world interested in norm violation than in doing the most good. Therefore, we should expect that a movement too tolerant of weirdness will create too high a ratio of norm-violators to helpful EAs (this is the witch hunt point made in the OP).
I agree! When I say “wing” I mean something akin to “AI risk” or “global poverty”—i.e., an EA cause area that specific people work on.
I agree! Greater leniency across cultural divides is good and necessary.
But I also think that:
(1) That doesn’t apply to the Bostrom letter
(2) There are certain areas where we might think our cultural norms are better than many alternatives; in these situations, it would make sense to tell the person from the alternate culture about our norm and try to persuade them to abide by it (including through social pressure). I’m pretty comfortable with the idea that there’s a tradeoff between cultural inclusion and maintaining good norms, and that the optimal balance between the two will be different for different norms.
Agreed.
I’m no cultural conservative, but norms are important social tools we shouldn’t expect to entirely discard. Anthropologist Joe Henrich’s writing really opened my eyes to how norms pass down complex knowledge that would be inefficient for an individual to try to learn on their own.
I wholeheartedly agree that EA must remain welcoming to neurodiverse people. Part of how we do that is being graceful and forgiving toward people who inadvertently violate social norms in pursuit of EA goals.
But I worry this specific comment overstates its case by (1) leaving out both the “inadvertent” part and the “in pursuit of EA goals” part, which implies that we ought to be fine with gratuitous norm violation, and (2) incorporating political bias. You say:
If we impose standard woke cancel culture norms on everybody in EA, we will drive away [neurodiverse people]. Politically correct people love to Aspy-shame. They will seek out the worst things a neurodiverse person has ever said, and weaponize it to destroy their reputation, so that their psychological traits and values are allowed no voice in public discourse.
I don’t want to speak for anyone with autism. However, as best I can tell, this is not at all a universal view. I know multiple people who thrive in lefty spaces despite seeming (to me at least) like high decouplers. So it seems more plausible to me that this isn’t narrowly true about high decouplers in “woke” spaces; it’s broadly true about high decouplers in communities whose political/ethical beliefs the decoupler does not share.
I also think that, even for a high decoupler (which I consider myself to be, though as far as I know I’m not on the autism spectrum) the really big taboos—like race and intelligence—are usually obvious, as is the fact that you’re supposed to be careful when talking about them. The text of Bostrom’s email demonstrates he knows exactly what taboos he’s violating.
I also think we should be careful not to mistake correlation for causation when looking at EA’s success and the traits of many of its members. For example, you say:
[if we punish social norm violation] we will drive away everybody with the kinds of psychological traits that created EA, that helped it flourish, and that made it successful
There are valuable EA founders/popularizers who seem pretty adept at navigating taboos. For example, every interview I’ve seen with Will MacAskill involves him reframing counterintuitive ethics to fit with the average person’s moral intuitions. This seems to have been really effective at popularizing EA!
I agree that there are benefits from decoupling. But there are clear utilitarian downsides too. Contextualizing a statement is often necessary to anticipate its social welfare implications. Contextualizing therefore seems necessary to EA.
Finally, I want to offer a note of sympathy. While I don’t think I’m autistic, I do frequently find myself at odds with mainstream social norms. I prefer more direct styles of communication than most people. I’m a hardcore utilitarian. Many of the left-wing shibboleths common among my graduate school classmates I find annoying, wrong, and even harmful. For all these reasons, I share your feeling that EA is an “oasis.” In fact, it’s the only community I’m a part of that reaffirms my deepest beliefs about ethics in a clear way.
But ultimately, I think EA should not optimize to be that sort of reaffirming space for me. EA’s goal is wellbeing maximization, and anything other than wellbeing maximization will sometimes—even if only rarely—have to be compromised.
Lying to meet goals != contextualizing
It’s hard for me to follow what you’re trying to communicate. Are you saying that high contextualizers don’t/can’t apply their morals universally while high decouplers can? I don’t see any reason to believe that. Are you saying that decouplers are more honest? I also don’t see any reason to believe that.
I’d be very interested in seeing a more political wing of EA develop. If folks like me who don’t really think the AGI/longtermist wing is very effective can nonetheless respect it, I’m sure those who believe political action would be ineffective can tolerate it.
I’m not really in the position to start a wing like this myself (currently in grad school for law and policy) but I might be able to contribute efforts at some point in the future (that is, if I can be confident that I won’t tank my professional reputation through guilt-by-association with racism).
I think this is a much needed corrective.
I frequently feel there’s a subtext here that high decouplers are less biased (whether the bias is racial, confirmation, in-group, status-seeking, etc.). Sometimes it’s not even a subtext.
But I don’t know of any research showing that high decouplers are less biased in all the normal human ways. The only trait “high decoupler” describes is tending to decontextualize a statement. And context frequently has implications for social welfare, so it’s not at all clear that high decoupling is, on average, useful to EA goals—much less a substitute for a group-level check on bias.
I say all this while considering myself a high decoupler!
I think it is trivially true that we sometimes face a tradeoff between utilitarian concerns arising from social capital costs and epistemic integrity (see this comment).
But I don’t think the Bostrom situation boils down to this tradeoff. People like me believe Bostrom’s statement and its defenders don’t stand on solid epistemic ground. But the argument for bad epistemics has a lot of moving parts, including (1) recognizing that the statement and its defenses should be interpreted to include more than their most limited possible meanings, and that its omissions are significant, (2) recognizing the broader implausibility of a genetic basis for the racial IQ gap, and (3) recognizing the epistemic virtue in some situations of not speculating about empirical facts without strong evidence.
All of this is really just too much trouble to walk through for most of us. Maybe that’s a failing on our part! But I think it’s understandable. To convincingly argue points (1) through (3) above I would need to walk through all the subpoints made on each link. That’s one heck of a comment.
So instead I find myself leaving the epistemic issues to the side, and trying to convince people that voicing support for Bostrom’s statement is bad on consequentialist social capital grounds alone. This is understandably less convincing, but I think the case for it is still strong in this particular situation (I argue it here and here).
I’m arguing not for a “conflict of principles” but a conflict of impulses/biases. Anecdotally, I see a bias for believing that the truth is probably norm-violative in rationalist communities. I worry that this biases some people such that their analysis fails to be sufficiently consequentialist, as you describe.
Decoupling by definition ignores context. Context frequently has implications for social welfare. Utilitarian goals therefore cannot be served without contextualizing.
I also dispute the idea that the movement’s founders were high decoupler rationalists to the degree that we’re talking about here. While people like Singer and MacAskill aren’t afraid to break from norms when useful, and both (particularly Singer) have said some things I’ve winced at, I can’t imagine either saying anything remotely like Bostrom’s statement, nor thinking that defending it would be a good idea.
it is not clear to me which opinion you believe Bostrom had in the 90s
I don’t know and am not really interested in whatever Bostrom’s actual opinion in the 90s was because I’m a consequentialist, not a virtue ethicist. Susan II’s post above highlights the reasons Bostrom should have expected his statement to be interpreted as a racist one, and why it was in fact reasonable for people (both those who agree and those who disagree with it) to interpret it that way.
was something important missing? was something present that shouldn’t have been?
I think that drawing attention to racial gaps in IQ test results without highlighting appropriate social context is in and of itself racist. We live in a world where ideas about differences in intelligence between races have caused a lot of suffering—more suffering than most other ideas out there.
I think the ideal apology would have at least walked through the history of claims of racial differences in intelligence and the harms they motivated, acknowledged their continued ability to cause harm, provided appropriate social context for the difference in IQ scores and apologized for the lack of it in the statement from the 90s, and highlighted the implausibility of a genetic basis for the difference.
If we disagree about the implausibility of a genetic basis for the difference in IQ scores, I’m not really interested in debating it. My view is that:
I find the research suggesting no genetic basis for racial IQ differences credible
I do not find the survey that people cite to the opposite effect compelling (it acknowledges that it is highly unrepresentative—as an internet survey with a high nonresponse rate would be)
I believe the scientists who say that race is a social construct, not a biological one
I believe the scientists who point to clear environmental influences on IQ
I think there’s a decent case to be made that a lot of social justice norms (though certainly not all) can be arrived at by utilitarian reasoning (“normie EA”) while a lot of opposition to social justice norms can be arrived at through a sort of truth seeking that actively eschews social norms (“rationalist”).
I think this is a decent argument, but I probably disagree. I think most high decouplers aren’t utilitarian or utilitarian-adjacent, and aren’t inclined to optimize for social welfare the way I think it’s important for EA to. I have another comment arguing somewhat provocatively that rationalist transplants may harm the EA movement more than they help it by being motivated by norm-violative truth seeking over social welfare.
But as I say in the other post, I wouldn’t point out any individual rationalists/high-decouplers as bad for the movement; my argument is just about group averages ;)
FWIW, I’m highly skeptical of longtermism for epistemic reasons, so I value the influx of people who care a lot about AGI alignment and whatnot much less than most people on here.
I might be wrong! But I stand by it. I don’t believe myself to be in an ideological bubble. I grew up in the south, went to college in a highly rural area, and have friends across the political spectrum. Most of my friends from college are actually Republican; a few are even Trump supporters (honestly, I think they have some racial bias, but if you asked them “is saying white people have higher IQs than black people racist?” I’m highly confident they would say yes).
The current controversy is pretty easily explainable to me without updating my priors: the EA community has attracted a lot of high decoupler rationalists who don’t much care about mainstream norms (which again, is a virtue in many cases—but not this one).
Here’s a VERY uncharitable idea (that I hope will not be removed because it could be true, and if so might be useful for EAs to think about):
Others have pointed to the rationalist transplant versus EA native divide. I can’t help but feel that this is a big part of the issue we’re seeing here.
I would guess that the average “EA native” is motivated primarily by their desire to do good. They might have strong emotions regarding human happiness and suffering, which might bias them against a letter using prima facie hurtful language. They are also probably a high decoupler and value stuff like epistemic integrity—after all, EA breaks from intuitive morality a lot—but their first impulses are to consider consequences and goodness.
I would guess that the average “rationalist transplant” is motivated primarily by their love of epistemic integrity and the like. They might have a bias in favor of violating social norms, which might bias them in favor of a letter using hurtful language. They probably also value social welfare (they wouldn’t be here if they didn’t) but their first impulses favor finding a norm-breaking truth. It may even be a somewhat deontological impulse: it’s good to challenge social norms in search of truth, independent of whether it creates good consequences.
I believe the EA native impulse is more helpful to the EA cause than the rationalist impulse.
And I worry the rationalist impulse may even be actively harmful if it dilutes EA’s core values. For example, in this post a rationalist transplant describes themself as motivated by status instead of morality. This seems very bad to me.
Again, I recognize that this is a VERY uncharitable view. I hasten to add that there are probably a great many rationalist transplants whose commitment to advancing social welfare is equal to or greater than mine, as an EA native. My argument is about group averages, not individual characteristics.
...
Okay, yes, I found that last sentence really enjoyable to write, guilty as charged
What makes the best solution the longtermists breaking off, instead of everyone else breaking off?
I more or less agree with this post that (1) longtermism is dominant, (2) longtermism is a bad cause area, and (3) longtermism is bad for PR reasons. But I don’t think we can divorce EA from a cause area a majority of its members (and associated organizations!) find compelling. Even if we could, the PR damage that’s already been caused wouldn’t go away.
So it seems more realistic for exclusively near-termist EAs to try to carve out a separate space for ourselves. Obviously that’s a huge logistical task. I don’t really expect it to be successful. But I rate its chances of success higher than cutting longtermism out of EA.