I'd be interested in surveying people on whether they believe that AI [could presently/might one day] do a better job governing the [United States/major businesses/US military/other important institutions] than [elected leaders/CEOs/generals/other leaders].
I don't think this is true. Dunbar's number is a limit on the number of social relationships an individual can cognitively sustain. But the sorts of networks needed to facilitate productive work are different from those needed to sustain fulfilling social relations. If there is a norm that people are willing to productively collaborate with the unknown contact of a known contact, then surely you can sustain a productive community of approximately Dunbar's number squared people (if each member of my Dunbar-sized community has their own equivalently-sized community with no shared members).
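A quick back-of-the-envelope check of that arithmetic (assuming Dunbar's number is roughly 150 and the idealised best case of fully disjoint contact sets):

```python
# Back-of-the-envelope: reachable collaborators under a
# "friend-of-a-friend" collaboration norm.
# Assumes Dunbar's number ~150 and fully disjoint contact sets
# (the idealised best case).

DUNBAR = 150

direct = DUNBAR                        # my own contacts
second_degree = DUNBAR * (DUNBAR - 1)  # each contact's *other* contacts

total = direct + second_degree
print(f"{total:,} people")  # 22,500 -- exactly Dunbar's number squared
```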
Thanks for contributing this critique, for inviting argument, and for your open-mindedness!
I think one important inequality in the distribution of power is that between presently living people and future generations. The latter have not only no political power, but no direct causal power at all. While we might decry a world where we have to persuade or compel billionaires -- or seek to become billionaires ourselves -- to have much hope of large-scale influence, these tools are much better than anything future generations have got. Our power over future generations is asymmetric and terrifying: their mere existence may depend on our present choices. To the extent that we might care about the distribution of power intrinsically, and not just because of its effects on welfare (I don't personally find this view compelling), it seems like the highest-priority redistributions of power are to those who have the least at present. One avenue of EA research I am excited about focuses on how we can build institutions and new systems of power to represent the interests of future generations in present political arrangements. You might also be interested in this analysis of opportunities for improving institutions by the Effective Institutions Project, which I think is very good EA writing on power.
Animals find themselves in a somewhat similar political situation to future generations: basically powerless, albeit for different reasons, of course.
Yes, and how many people we project will have this association in the future. I think it's reasonably likely that this view will pick up steam among vaguely activisty people on college campuses in the next five years. That's an important demographic for growing EA.
Great piece, I thought. I think Carrick Flynn's loss may in no small part be due to accidentally cultivating a white crypto-bro aesthetic. If that's right, it is a case of aesthetics mattering a fair amount. Personally, I'd like to see EA do more to avoid donning this aesthetic, which anecdotally seems to turn a lot of people off.
I'd be a little bit concerned by this. I think there's a growing sentiment among young people (especially on university campuses) that classicism reads aesthetically as regressive, retrograde, old-white-man stuff. Here's a quote from a recent New York Times piece:
"Long revered as the foundation of 'Western civilization,' [classics] was trying to shed its self-imposed reputation as an elitist subject overwhelmingly taught and studied by white men. Recently the effort had gained a new sense of urgency: Classics had been embraced by the far right, whose members held up the ancient Greeks and Romans as the originators of so-called white culture. Marchers in Charlottesville, Va., carried flags bearing a symbol of the Roman state; online reactionaries adopted classical pseudonyms; the white-supremacist website Stormfront displayed an image of the Parthenon alongside the tagline 'Every month is white history month.'"
Edit: this is a criticism of classicism as a useful aesthetic, not of the Enlightenment. Potentially they're severable.
I'm curious whether community size, engagement level, and competence might matter less than the general perception of EA among non-EAs.
Not just because poor general perception of EA makes it harder to attract highly engaged, competent EAs, but also because general positive perception matters even if it never results in conversion: it increases our ability to cooperate with and influence non-EA individuals and institutions.
Suppose an aggressive community-building tactic attracts one HEA of average competence. In addition, it gives some number of people n a slightly negative view of EA: not a strongly felt opposition, just enough of a dislike that they sometimes mention it in conversations with other non-EAs. What n would we accept for this community-building tactic to be expected-value neutral? (This piece seems to suggest that many current strategies fit this model.)
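To make the question concrete, here is a toy break-even calculation; every number in it is an assumption invented for illustration, not an estimate:

```python
# Toy break-even model: how many mildly negative impressions (n)
# offset recruiting one highly engaged EA (HEA) of average competence?
# All parameter values below are illustrative assumptions.

value_of_hea = 1.0           # normalise: one average HEA = 1 unit of value
cost_per_impression = 0.001  # harm from one person's mild dislike
spread = 3                   # others each detractor mentions their dislike to

cost_per_detractor = cost_per_impression * (1 + spread)
break_even_n = value_of_hea / cost_per_detractor
print(f"EV-neutral at n = {break_even_n:,.0f}")  # 250 with these made-up numbers
```

The interesting empirical question is what the real values of these parameters are.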
I'm currently evaluating the feasibility and expected value of building a proxy voting advisory firm that would make EA-aligned voting recommendations. Would love to meet with you or anyone with expertise.
I think the virtues of moral expansiveness and altruistic sympathy for moral patients are really important for EAs to develop, and I think being vegan increased my stock of these virtues by reversing the "moral dulling" effect you postulate. (This paper makes the case for utilitarians to develop a set of similar virtues: https://psyarxiv.com/w52zm.) I've also developed a visceral disgust response to meat as a result of being vegan, which is for me probably inseparable from the motivating feeling of sympathy for animals as moral patients.
When I was a nonvegan, I underestimated the extent to which eating meat was morally dulling to me, and I suspect this is common. It was hard to know how morally dulled I was until I experienced otherwise.
If a community claims to be altruistic, it's reasonable for an outsider to seek evidence: acts of community altruism that can't be equally well explained by selfish impulses, like financial reward or desire for praise. In practice, that seems to require that community members make visible acts of personal sacrifice for altruistic ends. To some degree, EA's credibility as a moral movement (that moral people want to be a part of) depends on such sacrifices. GWWC pledges help; as this post points out, big spending probably doesn't.
One shift that might help is thinking more carefully about who EA promotes as admirable, model, celebrity EAs. Communities are defined in important ways by their heroes and most prominent figures, who not only shape behaviour internally, but represent the community externally. Communities also have control over who these representatives are, to some degree: someone makes a choice over who will be the keynote speaker at EA conferences, for instance.
EA seems to allocate a lot of its prestige and attention to those it views as having exceptional intellectual or epistemic powers. When we select EA role models and representatives, we seem to optimise for demonstrated intellectual productivity. But our selections are not necessarily the people who have made the greatest personal altruistic sacrifices. Often, they're researchers who live in relative luxury, even if they've taken a GWWC pledge. Perhaps we should be more deliberate about elevating the EA profile of people like those in MacFarquhar's Strangers Drowning: people who have made exceptional sacrifices to make the world better, rather than people who have been most successful at producing EA-relevant intellectual output. Maybe the keynote speaker at the next EA conference should be someone who once undertook an effective hunger strike, say. (Maybe even regardless of whether they have heard of EA, or consider themselves EA.)
There's an obvious reason to instead continue EA's current role-model selection strategy: having a talk from a really clever researcher is helpful for internal community epistemics. We want to grant speaking platforms to those who might be able to offer the most valuable information or best thought-through view. And it's valuable for the external reputation of our community epistemics to have such people be the face of EA. We also don't want to promote the idea that the size of one's sacrifice is what ultimately matters.
But there are internal and external reasons to choose a role model based on the degree of inspiring altruistic sacrifice that person has made, too. Just as Will MacAskill can make me a little more informed, or guide my thinking in a slightly better direction, an inspiring story of personal sacrifice can make me a little more dedicated, a little more willing to work hard and sacrifice to make the world better. And externally, such a role model signals community focus on altruistic commitment.

My low-confidence guess is that the optimum allocation of prestige still gives most EA attention and admiration to those with the greatest demonstrated intellectual or epistemic power -- but not all. Those who've demonstrated acts of moral sacrifice should be held up as exemplars too, especially in external-facing contexts.
Proportional Chances Voting is basically equivalent to a mechanism where one vote is selected at random to be the deciding vote, as Newberry and Ord register in a footnote (they refer to it as "Random Dictator"; I've also seen it described as "lottery voting"). Newberry and Ord do say that Proportional Chances is supposed to be different because of the negotiation period, but I don't see how Random Dictator is incompatible with negotiation.
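To illustrate the equivalence (my own sketch, not code from the paper): drawing one ballot uniformly at random to decide the outcome gives each option a winning probability exactly equal to its vote share, which is just proportional chances.

```python
import random

# "Random Dictator" / lottery voting sketch: one ballot, drawn
# uniformly at random, decides the outcome. Each option therefore
# wins with probability equal to its share of the votes.
# Illustrative only -- not from the Newberry & Ord paper.

def random_ballot_winner(ballots):
    return random.choice(ballots)

ballots = ["A"] * 60 + ["B"] * 40  # A has a 60% vote share
trials = 100_000
a_wins = sum(random_ballot_winner(ballots) == "A" for _ in range(trials))
print(f"A won {100 * a_wins / trials:.1f}% of trials (expected ~60%)")
```

Nothing in this mechanism prevents a negotiation period before the ballots are cast.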
Anyway, some of the literature on this mechanism may be of interest here, given footnotes 8-9. This paper proposes such a mechanism and defends its plausibility: Saunders, Ben. "Democracy, Political Equality, and Majority Rule." Ethics 121, no. 1 (2010): 148–77. I haven't read any good papers offering interesting critiques of Saunders, but the paper seems to be influential, so maybe someone else knows of one?
As for calling new votes (footnote 9 of this post), votes could be scheduled by a separate body from the one doing the voting, or could be scheduled by some regularised rule. For instance, in Kira's Dinner, the thought experiment in the Newberry and Ord paper, votes on what Kira should eat are scheduled according to the regular rhythm of Kira getting hungry. The voters take the votes as given; I think there are usually similar ways to establish systems like this in real-world multi-person organizations.
To the extent average utilitarianism is motivated by avoiding the Repugnant Conclusion, I suspect that most average utilitarians would be as disturbed by aggregating over time as they are by aggregating within a generation, since we can establish a Repugnant Conclusion over times pretty straightforwardly. That said, to the extent intuitions differ when we aggregate over times, I can see that this could pose a challenge to average utilitarians.
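One straightforward construction (a toy example of mine, with invented numbers, for a view that averages welfare within each generation but sums those averages across generations):

```python
# Toy "Repugnant Conclusion over times": average welfare within each
# generation, then sum across generations. All numbers are invented.

def score(generations):
    """Sum of per-generation average welfare."""
    return sum(sum(g) / len(g) for g in generations)

world_a = [[100.0] * 10]              # one small, flourishing generation
world_z = [[0.001] * 10] * 1_000_000  # a million barely-positive generations

print(score(world_a))  # 100.0
print(score(world_z))  # ~1000.0 -- Z beats A: the repugnant result, over times
```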
I can't recall any work on this argument off the top of my head, but I did recently come across a hint of a related argument directed against distributive egalitarianism. From https://globalprioritiesinstitute.org/economic-inequality-and-the-long-term-future-andreas-t-schmidt-university-of-groningen-and-daan-juijn-ce-delft/: "An additional question is whether distributive egalitarianism should extend to inequalities across generations." This links to a footnote: "One of us elsewhere argues that distributive egalitarianism is implausible, because its extension to intergenerational distributions is necessary yet implausible [redacted]." Not sure why the citation is redacted, but I think "one of us" refers to Andreas Schmidt. Of course, extending the analysis to future generations threatens average utilitarianism and distributive egalitarianism in different ways. But the fact that both are threatened by this type of argument suggests to me that a lot of moral theories ought to be stress-tested against "what about across generations?" arguments. I agree that there's an interesting set of questions here.
Hi Cesar! You might be interested in checking out the transparency page for the Against Malaria Foundation: https://www.againstmalaria.com/transparency.aspx