Some of this discussion reminds me of Mill's discussion in his super underrated essay "Utility of Religion". He proposes there a kind of yangy humanistic religion, against a backdrop of atheism and concern about the evils of nature. Worth a read.
Thanks for the comment!
I agree that there's a mixed case for political tractability. I'm curious why you don't find compelling the argument that the particular people who have influence on AI policy are more amenable to animal-related concerns? (To put it bluntly, EAs care about animals and are influential in AI, and animal ag industry lobbying hasn't really touched this issue yet.)
I like the analogy to cage-free campaigns, although I think I would draw different lessons from the analogy. I don't really think that the support for cage-free campaigns comes from support for restrictions that help individual animals rather than support for restrictions that limit the total number of farmed animals. Instead, I think it comes from support for traditional and "natural" ways of farming (where the chickens are imagined to roam free) rather than industrialised, modern, and intensive farming methods. On this view, cage-free campaigns succeed because they target only the farming methods that the public disapproves of. This theory can also explain why people express disapproval of factory farming, but strong approval of farming and farmers.
I think PLF is a politically tractable target for regulation because, like cage-free campaigns, it targets only the type of farming people already dislike. When I say "End AI-run factory farms!", the slogan makes the technological, non-natural, industrial nature of the farming method inherently salient. Restrictions here might not be perceived as restrictions on farming; they'll be perceived only as restrictions on a certain sinister form of unnatural industrialised farming. (The general public mostly doesn't realise that most farming is industrialised.) To put this another way: I think the most politically tractable pro-animal movements are the ones that explicitly restrict their focus to Big Evil Factory Farms, and leave Friendly Farmer Joe alone. I think PLF restrictions share this character with cage-free campaigns.
And we know from cage-free campaigns that people are sometimes willing to tolerate restrictions of this sort even if they are personally costly.
Animal advocates should campaign to restrict AI precision livestock farming
I basically fail to imagine a scenario where publishing the Trust Agreement is very costly to Anthropic (especially just sharing certain details, like sharing percentages rather than saying "a supermajority"), except that the details are weak and would make Anthropic look bad.
Anthropic might be worried that the details are strong, and would make Anthropic look vulnerable to governance chaos similar to what happened at OpenAI during the board turnover saga. A large public conversation on this could be bad for Anthropic's reputation among its investors, team, or other stakeholders, who have concerns other than long-term safety, or might think that Anthropic's non-profit-motivated governance is opaque or bad for whatever other reason. To put this another way: Anthropic is probably reputation-managing, but it might not be their safety reputation that they are trying to manage. It might be their reputation (to potential investors, say) as a reliable actor with predictable decision-making that won't be upturned at the whims of the trust.
I would expect, though, that Anthropic's major investors know the details of the governance structure and mechanics.
I'm in the early stages of corporate campaign work similar to what's discussed in this post. I'm trying to mobilise investor pressure to advocate for safety practices at AI labs and chipmakers. I'd love to meet with others working on similar projects (or anyone interested in funding this work!). I'd be eager for feedback.
You can see a write-up of the project here.
Frankenstein (Mary Shelley): moral circle expansion to a human-created AI, kinda.
Elizabeth Costello (J. M. Coetzee): novel about a professor who gives animal rights lectures. The chapter that's most profoundly about animal ethics was published as "The Lives of Animals", which was printed with commentary from Peter Singer (in narrative form!).
Darkness at Noon (Arthur Koestler): novel about an imprisoned old Bolshevik reflecting on his past revolutionary activity. Interesting reflections on ends vs. means reasoning, and on weighing considerations of moral scale / the numbers affected vs. personal emotional connection in moral tradeoff scenarios.
[Linkpost] Eric Schwitzgebel: AI systems must not confuse users about their sentience or moral status
Thanks for putting this together! Super helpful.
I really appreciated this post and its sequel (and await the third in the sequence)! The "second mistake" was totally new to me, and I hadn't grasped the significance of the "first mistake". The post did persuade me that the case for existential risk reduction is less robust than I had previously thought.
One tiny thing. I think this should read "from 20% to 10% risk":
"More rarely, we talk about absolute reductions, which subtract an absolute amount from the current level of risk. It is in this sense that a 10% reduction in risk takes us from 80% to 70% risk, from 20% to 18% risk, or from 10% to 0% risk. (Formally, absolute risk reduction by f takes us from risk r to risk r − f.)"
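To spell out the distinction (my own notation, not from the original post): an absolute reduction by f subtracts f outright, while a relative reduction by f scales the risk down by a factor of (1 − f):

$$\text{absolute: } r \mapsto r - f \qquad\qquad \text{relative: } r \mapsto r(1 - f)$$

On the absolute reading, a 10% reduction takes 20% risk to 10%; the "from 20% to 18%" in the quoted passage is what the relative reading would give.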
Thanks for writing this! Hoping to respond more fully later.
In the meantime: I really like the example of what a "near-term AI-Governance factor collection could look like".
So the question is "what governance hurdles decrease risk but don't constitute a total barrier to entry?"
I agree. There are probably some kinds of democratic checks that honest UHNW individuals don't mind, but that would meaningfully improve epistemics and reduce community risk. Perhaps there are ways to add incentives for agreeing to audits or democratic checks? It seems like SBF's reputation as a businessman benefited somewhat from his association with EA (I am not too confident in this claim). Perhaps offering some kind of "Super Effective Philanthropist" title/prize/trophy to particular UHNW donors who agree to subject their donations to democratic checks or financial audits might be an incentive? (I'm pretty skeptical, but unsure.) I'd like to do some more creative thinking here.
I wonder if submitting capital to your proposal seems a bit too much like the latter.
Probably.
I think this is a great post, efficiently summarizing some of the most important takeaways from recent events.
I think this claim is especially important:
"It's also vital to avoid a very small number of decision-makers having too much influence (even if they don't want that level of influence in the first place). If we have more sources of funding and more decision-makers, it is likely to improve the overall quality of funding decisions and, critically, reduce the consequences for grantees if they are rejected by just one or two major funders."
Here's a sketchy idea in that vein for further consideration. One additional way to avoid extremely wealthy donors having too much influence is to try to insist that UHNW donors subject their giving to democratic checks on their decision-making from other EAs. For instance, what if taking a Giving What We Can pledge entitled you to a vote of some kind on certain fund disbursements or other decisions? What if Giving What We Can pledgers could put forward "shareholder proposals" on strategic decisions (subject to getting fifty signatures, say) at EA orgs, which other pledgers could then vote on? (Not necessarily just at GWWC.) Obviously there are issues:
voters may not be the epistemic peers of grantmaking experts / EA organization employees
voters may not be the epistemic peers of the UHNW donors themselves, who have more reputational stake in ensuring their donations go well
UHNW donors have a lot of bargaining power when dealing with EA institutions and few incentives to open themselves up to democratic checks on their decision-making
determining who gets to vote is hard
some decisions need to be made quickly
sometimes there are infohazards
But there are advantages too, and I expect that often they outweigh the disadvantages:
wisdom of crowds
diversified incentives
democracy is a great look
This comment seems to be generating substantial disagreement. I'd be curious to hear from those who disagree: which parts of this comment do you disagree with, and why?
Hi Cesar! You might be interested to check out the transparency page for the Against Malaria Foundation: https://www.againstmalaria.com/transparency.aspx
I'd be interested in surveying whether people believe that AI [could presently/might one day] do a better job governing the [United States/major businesses/US military/other important institutions] than [elected leaders/CEOs/generals/other leaders].
I don't think this is true. Dunbar's number is a limit on the number of social relationships an individual can cognitively sustain. But the sorts of networks needed to facilitate productive work are different from those needed to sustain fulfilling social relations. If there is a norm that people are willing to productively collaborate with the unknown contact of a known contact, then surely you can sustain a productive community with approximately Dunbar's number squared people (if each member of my Dunbar-sized community has their own equivalently-sized community with no shared members).
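To put rough numbers on this (a back-of-the-envelope illustration, using the commonly cited figure of about 150 for Dunbar's number and assuming, unrealistically, zero overlap between members' contact circles):

$$150 \times 150 = 22{,}500 \text{ potential collaborators at one degree of separation}$$

Real contact circles overlap, so the true figure would be smaller, but the one-hop network still dwarfs Dunbar's number itself.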
Thanks for contributing this critique, your invitation for argument, and your open-mindedness!
I think one important inequality in the distribution of power is that between presently living people and future generations. The latter have not only no political power, but no direct causal power at all. While we might decry a world where we have to persuade or compel billionaires (or seek to become billionaires ourselves) to have much hope of large-scale influence, these tools are much better than anything future generations have got. Our power over future generations is asymmetric and terrifying: their mere existence may depend on our present choices. To the extent that we might care about the distribution of power intrinsically, and not just because of its effects on welfare (I don't personally find this view compelling), it seems like the highest-priority redistributions of power are to those who have the least at present. One avenue of EA research I am excited about focuses on how we can build institutions and new systems of power to represent the interests of future generations in present political arrangements. You might also be interested in this analysis of opportunities for improving institutions by the Effective Institutions Project, which I think is very good EA writing on power.
Animals find themselves in a somewhat similar political situation to future generations: that is, basically powerless. Albeit for different reasons, of course.
Yes, and how many people we project will have this association in the future. I think it's reasonably likely that this view will pick up steam among vaguely activisty people on college campuses in the next five years. That's an important demographic for growing EA.
Great piece, I thought. I think Carrick Flynn's loss may in no small part be due to accidentally cultivating a white crypto-bro aesthetic. If that's right, it is a case of aesthetics mattering a fair amount. Personally, I'd like to see EA do more to avoid donning this aesthetic, which anecdotally seems to turn a lot of people off.
Thanks for the comment. I was clearly too quick with that opening statement. Perhaps in part I let my epistemic guard down there out of general frustration at the neglectedness of the topic, and a desire to attract some attention with a bold opener. So much harm could accrue to nonhuman animals relative to humans, and I really want more discussion of this. PLF is (I've argued, anyway) a highly visible threat to the welfare of zillions, but one that is rarely mentioned. I hope you'll forgive an immodest but emotional claim.
I've edited the opener and the footnote to be more defensible, in response to this comment.
I actually don't believe, in the median scenario, that AIs are likely to both outnumber sentient animals and have a high likelihood of suffering, but I don't really want that to be the focus of this piece. And either way, I don't believe that with high certainty: in that respect, the statement was not reflective of my views.