Sam Battis
I blog about political/economic theory and moral philosophy.
Lightning Career Advice—A Forum Feature Suggestion
A coherent view there: the likely flow-through effects of not saving a child right in front of you on your psychological wellbeing, community, and future social functioning, especially compared to the counterfactual, are drastically worse than those of not donating enough to save two children on average. Moreover, the intuition one could expect to feel in such a situation, saying that you should save the child, is so powerful that numbing or ignoring it is likely to damage the strength of that moral intuition or compass, which could be wildly imprudent. In essence:
-psychological and flow-through effects of helping those in proximity to you are likely undervalued in extreme situations where you are the only one capable of mitigating the problem
-community flow-through effects in developed countries from altruistic social acts in general may be undervalued, especially if such acts uniquely foster one’s own well-being or moral character through exercise of a “moral muscle”
-it is imprudent to ignore strong moral intuition, especially in emergency scenarios, and it is important to make a habit of not ignoring strong intuition (unless further reflection leads to the natural modification/dissipation of that intuition)
To me, naive application of utilitarianism often leads to underestimating these considerations.
Hi Brad,
The counterfactual is definitely something that I think I should examine in more detail.
Agreed that the marginal effect would be fairly logarithmic, and I probably should have considered the fact that there is quite a lot of competition for employment at Earthjustice (i.e. one would need to be in the top 0.001% of lawyers to have counterfactual impact).
I am actually almost completely convinced by the argument that seeking to work for Earthjustice is worse than ETG, so I might go and make some rather sweeping modifications to the post.
I think that the exercise does at least stand as a demonstration of the potential impact of systems change nonprofits with a new/neglected focus, and that Earthjustice is a success story in this realm.
Do you have a high level of confidence that Earthjustice is too large/established for donations to it to compete with funding new and/or neglected projects?
Hi Jason, thanks for the response.
Agree that marginal increases have lower impact. I assume GiveWell-style research on the inner workings of the organization would be needed to see if funding efficacy is actually currently comparable to AMF, and I don’t presume to have that level of know-how. I’m just hoping to bring more attention to this area.
What tools are used to assess likely funging? Is a large deficit as a percentage of operating costs a sign that funging would be relatively low, or are most organizations that don’t have the explicit goal of continuing to scale assumed to have very high funging costs of, say, 50% or higher?
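For concreteness, here is the sort of toy arithmetic I have in mind when thinking about funging (a minimal sketch in Python; the linear funging model and all numbers are my own assumptions, not anyone’s actual evaluation methodology):

```python
# Illustrative only: a toy funging adjustment, not GiveWell's actual method.
# The linear model and the example numbers are hypothetical assumptions.

def funging_adjusted_cost_per_outcome(nominal_cost_per_outcome: float,
                                      funging_rate: float) -> float:
    """If a fraction `funging_rate` of each marginal dollar merely displaces
    funding that would have arrived anyway, only (1 - funging_rate) of the
    donation buys counterfactual impact, raising the effective cost."""
    if not 0 <= funging_rate < 1:
        raise ValueError("funging_rate must be in [0, 1)")
    return nominal_cost_per_outcome / (1 - funging_rate)

# Example: a nominal $5,000 per life saved at a 50% funging rate
# becomes $10,000 per life saved in counterfactual terms.
print(funging_adjusted_cost_per_outcome(5000, 0.5))  # 10000.0
```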
Are systems change nonprofits effective? A shallow estimate of the efficacy of nonprofit legal work
Other species are instrumentally very useful to humans, providing ecosystem functions, food, and sources of material (including genetic material).
On the AI side, it seems possible that a powerful misaligned AGI would find ecosystems and/or biological materials valuable, or that it would be cheaper to use humans for some tasks than machines. I think these factors would raise the odds that some humans (or human-adjacent engineered beings) survive in worlds dominated by such an AGI.
I think it is potentially difficult to determine how good the average doctor is in a particular place and how much better one would be than that average, but if one could be reasonably confident of making a large counterfactual impact on patient outcomes, the impact could be significant. The easiest way to be sure of these factors that I can think of would be to go somewhere with a well-documented shortage of good doctors, while trying to learn about and emulate the attributes of good doctors.
Being a doctor may not be one of the highest impact career paths on Earth, but it might be the highest impact and/or the most fulfilling for a particular person. High impact and personal fit/fulfillment are fairly highly correlated, I think, and it’s worth exploring a variety of career options in an efficient way while making those decisions. In my experience, it can be very difficult to know what one’s best path is, but the things that have helped me the most so far are experiences that let me get a taste for the day-to-day in a role, as well as talking to people who are already established in one’s prospective paths.
EA should add systems change as a cause area—MacAskill or Ord v. [someone with a view of history that favors systems change more who’s been on 80,000 Hours].
From hazy memory of their episodes it seems like Ian Morris, Mushtaq Khan, Christopher Brown, or Bear Braumoeller might espouse this type of view.
True. I think they meant that it’s plausible humans would convert the entire population of cows into spare parts, instead of just the ones that have reached a certain age or state, if it served human needs better for cows to not exist.
I agree that activism in particular has a lot of idiosyncrasies, even within the broader field of systems change, that make it harder to model or understand but do not invalidate its worth. I think it is worthwhile to attempt to better understand activism, and systems change in general, and to do so, EA methodology would need to be comfortable with much looser expected value calculations than it normally uses. In particular, a separate system from ITN may be preferable in this context, because “scope, neglectedness, and tractability” may be less useful for deciding what kind of activism to do than concepts like “momentum, potential scope, likely impact of a movement at maximum scope, likely impact at minimum or median scope/success, personal skill/knowledge fit, personal belief alignment,” etc.
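To make that concrete, here is a purely illustrative sketch of how such criteria might be scored and compared (the criteria names come from the list above; the weights, the 0-10 scale, and the additive aggregation are hypothetical choices of mine, not a proposed standard):

```python
# A toy scoring sketch for the alternative criteria named above.
# Weights, scale, and aggregation are purely hypothetical assumptions.

CRITERIA_WEIGHTS = {
    "momentum": 1.0,
    "potential_scope": 1.5,
    "impact_at_max_scope": 2.0,
    "impact_at_median_scope": 1.5,
    "personal_fit": 1.0,
    "belief_alignment": 0.5,
}

def score_movement(ratings: dict) -> float:
    """Weighted sum of 0-10 ratings, one rating per criterion."""
    return sum(CRITERIA_WEIGHTS[name] * ratings.get(name, 0)
               for name in CRITERIA_WEIGHTS)

# Example: a hypothetical activism opportunity.
housing_reform = {"momentum": 7, "potential_scope": 6,
                  "impact_at_max_scope": 5, "impact_at_median_scope": 4,
                  "personal_fit": 8, "belief_alignment": 9}
print(score_movement(housing_reform))
```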
I think it’s worth attempting these sorts of napkin calculations, and inventing frameworks for things in the category of “things that don’t usually meet the minimum quantifiability bar for EA,” as a thought exercise to clarify one’s beliefs if nothing else. Besides, regardless of whether moderately rigorous investigation endorses the efficacy of various systems change mechanisms, it seems straightforwardly good to develop tools that help those interested in systems change to maximize their positive impact. Even if the EA movement itself remained less focused on systems change, people in EA are capable of producing accurate and insightful literature/research on the huge and extremely important fields of public policy and social change, and those contributions may be taken up by other groups, hopefully raising the sanity waterline on the meta-decision of which movements to invest time and effort in. After all, there are literally millions of activist groups and systems-change-focused movements out there, and developing tools to make sense of that primordial muck could aid many people in their search for the most impactful and fulfilling movements to engage with.
We may never know whether highly quantifiable non-systems-change interventions or harder-to-quantify systems change interventions are more effective, but it seems plausible that developing an effectiveness methodology for both spheres is better than restricting one’s contributions to one. For example, spreading good ideas in the other sphere may boost the general influence of a group’s set of ideals and methodologies, and also provide benefits in the form of cross-pollination from advances in that sphere. If EA maximizes for peak highly-quantifiable action, ought there to be a subgroup that maximizes for peak implementation of “everything that doesn’t make the typical minimum quantifiability bar for EA”?
Race for the Galaxy is an excellent game.
Gratitude can sound cheesy, but it’s one of the most scientifically backed ways to make humans happier. I’ve found that a nice ritual to do with the people you live with is to go around the table before eating dinner together and have each person say something they’re grateful for.
It’s true that you didn’t technically advocate for it, but in context it’s implied that subsidies for abortion for people who are addicted to drugs would be a good policy to consider.
“We are not going to stop hearing about eugenics. Every time someone tries to call it something different, the “e” word and its association with historic injustice and abuse is invoked to end the discussion before it can begin.
When someone says that screening embryos for genetic diseases, giving educated women incentives to have children (like free child care for college educated women), or offering subsidized abortions for women addicted to drugs is “eugenics” they are absolutely using the term correctly.”
I accept that the idea “abortion is eugenics” is already advanced by some conservatives. However, I think that the policy of targeted abortion subsidies would convince more people that “abortion is eugenics,” and I think that this would make it easier to ban abortion.
I think the fact that Israel already has a very different cultural environment regarding genetic interventions means that those examples of targeted subsidies may well be much more controversial in other countries.
I’m glad you agree on that last point.
For me it’s been good to make a habit of looking for the least controversial policy that achieves desired goals. I often discover reasons that the more controversial options were actually less desirable in some way than the less controversial ones. This isn’t always the case, but in my experience it has been a definite pattern.
I suspect that the social cost of making “I have [better/worse genetics] than this person” a widespread, politically relevant, and socially permissible subject outweighs the potential benefits of policies like subsidized abortions for people addicted to drugs and special incentives for educated women to have kids.
With regard to targeted abortion subsidies, what about the risk of reanimating the “abortion is eugenics” argument against its legality, particularly in the US, where abortion has been banned in many states? If you believe that abortion’s legality has had very positive genetic effects, then shouldn’t preserving that legality be an extremely high priority, and the political cost of proposing this policy prohibitive?
If you want to make abortions more accessible, why not make them free for everyone, making abortion a better option for people who might not be able to face the short-term financial burden of its cost, reducing the odds of it being banned by reinforcing the idea that it is a right, and avoiding the backlash that a targeted subsidy would unleash?
It seems like ensuring that everyone gets to decide when they are best prepared to have and raise kids, if ever, has such incredibly high positive externalities that if anything there is a strong argument that abortion should be free already. The same goes for condoms, birth control pills, and IUDs.
Why not advocate for new, universal public goods, rather than policies that unnecessarily risk negative social and political impacts?
Agreed on all counts.
Right, I was just looking for some ways to apply it to EA. I figured you were recommending that post-political-ness become a more explicit part of EA or more frequently used by EAs in their public or private evaluation of policy.
I agree this sort of loose framework of sentience maxing should be used by EAs when evaluating policy interventions, and it seems to be used, so I agree it should continue. And then on top of that, if someone EA-aligned wanted a potentially high-impact way to spend time advocating for post-political views, I would recommend the forum project.
When you say this is the only possible political framework for a utilitarian—if you’re referring to sentience maxing with whatever tools available, I agree. If you’re saying utilitarians should ignore the tools of political culture entirely and their instrumental uses, including supporting the rights and other deontological rules that utilitarians sometimes find justified, then I would disagree for the reasons stated.
For example, assuming democracy is the most effective government form, I would want some amount of pro-democracy emotional content in K-12 schools and a broader social penalty for advancing anti-democratic ideas like reducing voter eligibility/access, in order to safeguard it against short-term cultural shifts and meddling. I think hard-coding things that we are pretty sure are good or bad into culture is wise, so that we avoid having to rehash the same issues generation to generation. In this case “dogma” is basically just “accepting a moral conviction that has been baked into your culture through historical experience,” which is often quite useful.
If you’re saying that the economic system you outlined (which if I understand correctly is limited to a private market and wealth transfers, implying no public goods) is the only defensible one, then that’s also a separate debate we could have. I’m not sure if this is what you’re referring to when you say this is the only possible political framework.
I agree that to the extent that EA engages in policy evaluation or political/economic evaluation more generally, it should use a sentient-experience maximization framework, while discarding the labels of particular political theories, in the way that you described. And I think that so far every discussion I’ve seen of those matters in EA follows that framework, which is great.
With regard to specific arguments about post-politics:
I thought you made a strong case for post-politics in general, but the claim that a specific economic strategy is the best possible beyond all doubt is much more difficult to defend, and besides, does not seem very post-political. In general, a post-political person might argue for any economic strategy under the sun as optimal for sentient beings, though of course some arguments will be stronger than others.
Also, regardless of the systems they believe to be optimal, post-political people should be sure to entertain the possibility that they, others, or a given group or polity are actually too post-political—not having the optimal amount or type of cultural orthodoxy/dogma and being unwilling to address that issue.
This may come into play when an individual or group’s conviction in rights and other deontological tools becomes too weak, or the deontological rules and norms they follow are worse than competing options.
After all, an orthodoxy or political culture beyond post-politics is necessary for “tie-breaking” or producing decisions in situations where calculation is inconclusive. Some political culture beyond post-politics will inevitably sway people in such situations, and it is worth making sure that that political culture is a good one.
An individual post-political thinker therefore may embrace advocacy of a certain political culture because they think it is so valuable and under-utilized that advocating for it is a more efficient use of their resources than advocating for post-politics.
Generally I would say most people and institutions could stand to be more post-political, but I am not sure whether post-politics is currently a better advocacy target than other cultural/political movements.
If one were to advocate for such a movement, I’d guess the best way would be to create a political forum based on those principles and try to attract the people who like participating in political forums. Then the goal would be to make sure the discourse is focused on doing the most good for the most people, with rigorous evidence-based breakdowns for particular policies. This might be a decent use of time given that this post-political approach could improve thousands of people’s decisions as they relate to systems change.
If something like this was created, I would recommend adding a system for users to catalogue, quantify, and compare their political positions, including their degree of confidence in each position. The capability to quickly and easily compare political positions between individuals seems like a very fast way to advance the accuracy of individuals’ beliefs, especially in a community dedicated to strong beliefs lightly held and finding the policies that do the most good.
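As a rough sketch of the kind of comparison feature I mean (the data shape and the confidence-weighted distance metric here are hypothetical design assumptions, just to show the idea):

```python
# A minimal sketch of the position-comparison feature described above.
# Assumed data shape: position id -> (stance in [-1, 1], confidence in [0, 1]).
# Both the shape and the metric are hypothetical design choices.

def disagreement(user_a: dict, user_b: dict) -> float:
    """Confidence-weighted average distance between two users' stances
    on the positions they have both catalogued."""
    shared = user_a.keys() & user_b.keys()
    if not shared:
        return 0.0
    total = 0.0
    for pos in shared:
        stance_a, conf_a = user_a[pos]
        stance_b, conf_b = user_b[pos]
        # Weight each gap by how confident both users are in their stances.
        total += abs(stance_a - stance_b) * conf_a * conf_b
    return total / len(shared)

alice = {"carbon_tax": (0.9, 0.8), "ubi": (0.2, 0.4)}
bob = {"carbon_tax": (-0.5, 0.9), "ubi": (0.3, 0.6)}
print(disagreement(alice, bob))  # flags carbon_tax as the main divergence
```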
Good point. I’d imagine that this objection stems from the perspective “basically all the highest utility/dollar interventions are in x-risk, but continuing global health interventions costs us little because we already have those systems in place, so it’s not worth abandoning them.”
From this perspective, one might think that even maintaining existing global health interventions is a bad util/dollar proposition in a vacuum (as those resources would be better spent on x-risk), but for external reasons, splintering EA is not worth pressuring people to abandon global health.
Let’s imagine splintering EA to mean nearly only x-riskers being left in EA, and maybe a group of dissidents creating a competing movement.
These are the pros for x-riskers post-split:
Remaining EAs are laser-focused on x-risk, and perhaps more people have shifted their focus from partly global health and partly x-risk to fully x-risk than vice versa. (More x-risk EAs, and x-risk EAs are more effective.)
These are the cons for x-riskers post-split:
Remaining EAs have less broad public support and less money going into “general EA stuff” like community building and conferences, because some of the general EA money and influence was coming from people who mostly cared about global health. As a related consequence, it becomes harder to attract people initially interested in global health and convert them into x-riskers. (Fewer x-risk EAs, and x-risk EAs are less effective.)
It seems that most x-riskers think the cons outweigh the pros, or a split would have occurred—at least there would be more talk of one.
The thing is, refraining from adding climate change as an EA focus would likely have a similar pro/con breakdown to removing global health as an EA focus:
Pros: No EAs are persuaded to put money/effort that might have gone to x-risk into climate change.
Cons:
Loss of utils due to potentially EA-compatible people who expend time or money on climate change prevention/mitigation not joining the movement and adopting EA methods.
Loss of potential general funding and support for EA from people who think that the top climate change interventions can compete with the util/dollar rates of top global health and x-risk interventions, plus from the hordes of people who aren’t necessarily thinking in terms of utils/dollar yet and just instinctively feel climate change is so important that a movement ignoring it can’t possibly know what it’s doing. Even if someone acting on instinct rather than utils/dollar won’t necessarily improve the intellectual richness of EA, their money and support would be pretty unequivocally helpful.
These are basically the same pros and cons to kicking out global health people, plus an extra cost to not infiltrating another cause area with EA methods.
Therefore, I would argue that any x-risker who does not want to splinter EA should also support EA branching out into new areas.
Worth noting that although those factors likely increase the expected strength of the relationship between money and happiness, when it comes to interpreting that strength, there are factors potentially reducing the proportion of the relationship that can be explained by “more money causes more happiness”:
Reverse causality (Being happier plausibly makes you earn more money through various social and health impacts)
Confounding variables (E.g. being a hard worker makes you happier, and being a hard worker just happens to make you more money on average)
Also, if you perceive success (i.e. status) as being strongly associated with having higher income, then the actual mechanism by which higher income raises your happiness may be through a status gain, rather than a consumption increase. And as status is a positional good, this would be a reason to down-revise our expectation that increasing a person’s consumption at these levels will actually increase a society’s net happiness.
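To illustrate the confounding point, here is a toy simulation in which a hidden “work ethic” variable raises both income and happiness, so the raw income-happiness correlation comes out far larger than the true causal effect (all coefficients are arbitrary, purely for illustration):

```python
# Toy simulation of confounding: "work ethic" raises both income and
# happiness, inflating the raw correlation relative to the causal effect.
# All coefficients are arbitrary illustrative assumptions.
import random

def simulate(n=100_000, causal_effect=0.1):
    incomes, happiness = [], []
    for _ in range(n):
        work_ethic = random.gauss(0, 1)           # hidden confounder
        income = work_ethic + random.gauss(0, 1)  # work ethic raises income
        happy = causal_effect * income + work_ethic + random.gauss(0, 1)
        incomes.append(income)
        happiness.append(happy)
    # Pearson correlation, computed by hand to stay dependency-free.
    mi = sum(incomes) / n
    mh = sum(happiness) / n
    cov = sum((x - mi) * (y - mh) for x, y in zip(incomes, happiness)) / n
    vi = sum((x - mi) ** 2 for x in incomes) / n
    vh = sum((y - mh) ** 2 for y in happiness) / n
    return cov / (vi * vh) ** 0.5

# Prints roughly 0.57, far above the ~0.14 the causal effect alone implies.
print(simulate())
```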
I don’t believe Facebook’s structure, or people’s prior associations with the quality of discussion that occurs on Facebook, would enable rational debate at the level of the EA Forum. But on any platform, I would agree that if a line in the sand is crossed and discussions of specific policies become conceived of as “Politics,” with tribalism creeping in, the results are usually quite bad.
I can’t imagine that political tribalism would fly on the EA forum, although of course it is necessary to be vigilant to maintain that. Indeed, if I were to rewrite that post today I would revise it to express much less confidence in a particular view of global systems, and focus more on the potential for thinking about global systems to offer opportunities for large impacts.
I think there is evidence EA is capable of doing this without damaging epistemics. It is currently widely accepted to talk about AI or nuclear regulations that governments might adopt, and I haven’t seen anything concerning in those threads. My argument is essentially just that policy interventions of high neglectedness and tractability should not be written off reflexively.
Earthjustice and other law groups (there’s a YIMBY Law group as well that is probably less impactful but at least worth looking into) are nice because they improve de facto systems, but don’t need to engage with the occasional messiness of overt system change. Instead, they ensure local governments follow the laws that are already in place.
Yeah, honestly I don’t think there is a single true deontologist on Earth. To say that anything is good or addresses the good, including deontology, one must define the “good” being aimed at.
I think personal/direct situations entail a slew of complicating factors that a utilitarian should consider. As a response to that uncertainty, it is often rational to lean on intuition. And, thus, it is bad to undermine that intuition habitually.
“Directness” inherently means a higher level of physical/emotional involvement, a different (likely closer-to-home) social landscape and stakes, etc. So constructing an “all else being equal” scenario is impossible.
Related to the initial deontologist point: when your average person expresses a “directness matters” view, it is very likely they are expressing concern for these considerations, rather than actually holding a diehard deontologist view (even if they use language that suggests that).