There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention
My guess is that pesticides’ impact on insect welfare probably falls into this category.
Some Positions In EA Leadership Should Be Elected
We should also think about why we want democracy. Intra-communal democracy is not an inherent good; indeed, the EA community is not here for its own sake, but rather to have positive impact. However, we might think that ‘democratising’, or whatever we want to call it, may play important ethical or epistemic roles when we think a) diversifying viewpoints is important and b) justification and accountability are important. I think neither of these is best served by elections.
For diversifying viewpoints: our epistemic situation might suggest that a more ‘diverse’ decision-making body (diverse perhaps only along certain axes, e.g. expertise, assumptions, political viewpoint/party) is necessary. I certainly think this is true in a fair few areas EA operates in. However, it isn’t clear that elections, which often reward popularity or consensus, actually achieve this. Maybe we’d be better off with some sort of deliberately diverse expert elicitation panel, or simply caring more about (relevant forms of) diversity in our hiring. For example, perhaps grantmakers should be making an effort to hire people with experience in conservative policy circles. Or maybe we simply do this through community-building efforts aimed at a more pluralistic ‘community’. Notably, I think EA (or certain parts of it, for example AI) has got MUCH MUCH better at this over the last few years, such that it’s not actually obvious how much concerted effort is needed.
Accountability may be another reason. EAs tie a lot of our identity to this community, and also much of our professional reputation. As such, we might want to be able to hold representatives accountable. However, it isn’t obvious that we can’t trust well-constructed boards to do this, for example. Otherwise, I could imagine a scheme where a certain designated body (say, all people who have attended two EAGs or been employed at a certain list of organisations, etc.) can petition to remove someone from an important leadership role, and if a supermajority votes to remove them then they are removed. But this doesn’t really seem like an election.
More generally, it just isn’t clear to me what sorts of roles we would want elected. The two main levers of power in EA are a) money and b) prestige. A lot of prestige is generated by who speaks at EAGs, appears on the 80,000 Hours podcast, etc., and it’s really unlikely that having an elected person making these decisions would actually change very much, or encourage the sorts of outcomes wanted. Maybe there are better ways to harness the collective wisdom of the community in these decisions, but I think they are unlikely to look like elections. And for grantmaking, there also seems to be minimal reason to hold elections. The main issue with grantmaking in this vicinity is how few grantmakers there are (although this is maybe better than it was), which creates centralisation and thus likely a sub-optimal tailoring of the landscape to the preferences of existing grantmakers, and a tying of reputations to those grantmakers. This problem is not at all solved by elections, and might get worse rather than better; it is solved by bringing more money from different sources into EA.
I think the best argument for elections is that they would reduce the ‘who you know’ component of EA. But a) I think this is just a lot better now than it was (as the community has grown, much of this has been adjusted), and b) it’s not obvious to me that elections wouldn’t optimise for something similar.
I think the argument that insect suffering is of overwhelming importance doesn’t actually require pure utilitarianism. It probably works for any form of aggregationist, and maybe even partially aggregationist, ethics. Indeed, it’s not clear the problem isn’t worse under certain formulations of deontological views, where discounting the life of an insect relative to a human would be unacceptable.
Are the annoying happy lightbulbs when you upvote something here to stay, or are they just an April Fool’s thing that hasn’t been removed yet?
I think you should delete the post and resend it out another day (maybe on the 3rd?)
In fairness to Richard, I think it comes across a lot more strongly in text than, in my view, it did when listening on YouTube.
I really like this piece, and I think I share in a lot of these views. Just on some fairly minor points:
Deep Incommensurability. It seems like incommensurability helps with avoiding MPL, but not actually that much. For example, there seem to be many moral theories (i.e. something somewhat like Person-Affecting Views) that are incommensurable (or indifferent) between worlds of different sizes, but not between worlds of different qualities. So they may really care whether it is a world of humans, or insects, or hedonium.
I can imagine views for which this would be a real problem (they do run into non-identity, but maybe there are ways of formulating them that don’t). For example, imagine a view that holds that simulated human existence is the best form of life, but is indifferent between that and non-existence. Such a view won’t care whether we leave the universe insentient, but faced with a pairwise choice between hedonium and simulated humans, it will take the simulated humans every time. So its holders don’t care much if we go extinct, but do care if the hedonistic utilitarians win. Indeed, these views may be even less willing to take trades than many views that care about quantity. I imagine many religions, particularly universalist religions like Christianity and Islam, may actually fall into this category.
I think some more discussion of the ‘kinetics’ vs ‘equilibrium’ point you sort of allude to would be pretty interesting. You could reasonably hold the view that rational (or sensing, or whatever other sort of) beings converge to moral correctness given infinite time. But we are likely not waiting infinite time before locking in decisions that cannot be reversed. Because irreversible moral decisions could occur at a faster rate than correct moral convergence (i.e. the kinetics of the process matter more than its equilibrium), we shouldn’t expect the equilibrium outcome to dominate. I think you gesture towards this, but further exploration of the ordering would be very interesting.
I also wonder if views that are pluralist rather than monist about value may make the MPL problem worse or better. I think I could see arguments either way, depending on exactly how those views are formulated, but would be interesting to explore.
Very interesting piece anyway, thanks a lot; it really resonates with a lot of what I’ve been thinking about.
I’m sure I’ll have a few more comments at some point as I revisit the essay.
Yeah, I might be wrong, but something like Larry Temkin’s model might work best here (it’s been a while since I read it, so I may be getting it wrong).
I think averagists may actually also care about the long-term future a lot, and it may still have an MPL if they don’t hold (rapid) diminishing returns to utility WITHIN lives (i.e. it is possible for the average life to be a lot worse or a lot better than today). Indeed, given (potentially) plausible views on interspecies welfare comparisons, and how bad the lives of lots of non-humans seem today, this just does seem to be true. Now, it’s not clear they shouldn’t be at least a little more sympathetic to us converging on the ‘right’ world (since it seems easier), but it doesn’t seem like they get out of much of the argument either.
I think a really important question in addressing this is something like: does the USA remain ‘unfanatical’ if the shackles are taken off powerful people? This is where I think the analysis of the USA goes a little bit wrong: we need to think about what the scenario looks like if it is possible for power to be much, much more extremely concentrated than it is now. Certainly, in such a scenario, it’s not obvious that it will be true post-AGI that “even a polarizing leader cannot enforce a singular ideology or eliminate opposition”.
You’re sort of right on the first point, and I’ve definitely counted that work in my views on the area. I generally prefer to refer to it as ‘making sure the future goes well for non-humans’, but I’ve had that misinterpreted as just being focused on animals.
I think for me the fact that the minds will be non-human, and probably digital, matters a lot. Firstly, I think arguments for longtermism probably don’t work if the future is mostly just humans. Secondly, the fact that these beings are digital minds, and maybe digital minds very different from us, means a lot of common responses given for how to make the future go well (e.g. make sure their preferred government ‘wins’ the ASI race) look less promising to me. Plus you run into trickier problems like what Carlsmith discusses in his Otherness and Control series, and, on the other end, if conscious AIs are ‘small minds’ à la insects (lots of small conscious digital minds that are maybe not individually very smart), you run into a bunch of the same issues of how to treat them adequately. So this is sort of why I call it ‘digital minds’, but I guess that’s fairly semantic.
On your second point, I basically think it could go either way. This depends on a bunch of things, including whether we get ‘lock-in’, how strong it is and what type (i.e. what values are encoded), how ‘adaptive’ consciousness is, etc. At least to me, it could go either way (not saying 50-50 credence towards both, but my guess is I’m at least less skeptical than you). Also, it’s possible that these are the scenarios more likely to have abundant suffering (although this also isn’t obvious to me, given potential motivations for causing deliberate suffering).
I wish more work on digital minds really focused on answering the following questions, rather than merely investigating how plausible it is that digital minds similar to current-day AIs could be sentient:
1. What do good sets of scenarios for post-AGI governance need to look like to create good futures and avoid terrible ones (or whatever normative focus we want), assuming digital minds are the dominant moral patients going into the future?
1a. How does this differ depending on what sorts of things can be digital minds, e.g. whether sentient AIs are likely to happen ‘by accident’ in the course of creating useful AIs (including ASI systems or sub-systems) vs whether sentient AIs have to be deliberately built? How do we deal with this trade-off?
2. Which of these good sets of scenarios require certain actions to be taken pre-ASI development (actions beyond simply ensuring we don’t all die)? And therefore, what actions would we ideally take now to help bring about such good futures? This includes, in my view, what, if any, thicker concept of alignment than ‘intent alignment’ we ought to use.
3. Given the strategic, political, geopolitical and technological situation we are in, how, if at all, can we make concrete progress towards this? We obviously can’t just ‘do research’ and hope this solves everything. Rather, we ought to use it to guide specific actions that can have impact. I guess this step is rather hard to do without 1 and 2, but also, as far as I can tell, no one is really doing it?
I’m sure someone has expressed this same set of questions elsewhere, but I’ve not seen them yet, and at least to me they seem pretty neglected and important.
Factory farming?
Just flagging, it seems pretty strange to have something about career choice in ‘Community’
Pretty sure EA basically invented that (yes, people were working on the relevant topics before then and outside of it, but that still seems different to ‘reinventing the wheel’).
I see no legitimate justification for attitudes that would consider humans important enough that global health interventions would beat out animal welfare, particularly given the sheer number and scale of invertebrate suffering. If invertebrates are sentient, it seems animal welfare could definitely absorb 100m and remain effective on the margin, and probably also if they are not (which seems unlikely). The reason I am not fully in favour is mostly that the interaction of animal welfare with population ethics is far stronger than that of global health and development, and given the significant uncertainties involved in population ethics, I can’t be sure these don’t at least significantly reduce the benefits of AW over GH work.
Also not Holly, but another response might be the following:
Pausing in the very near future without a rise in political salience is just very, very unlikely. The pause movement gaining large influence is similarly unlikely without such a rise in political salience.
If a future rise in political salience occurs, it will likely approximate a ‘pivotal point’ (and if it doesn’t, well, policymakers are unlikely to agree to a pause at a pivotal point anyway).
Thus, what advocacy now is predominantly doing is laying the groundwork for a movement and set of ideas that can be influential when the time comes.
I think this approach runs real risks, which I’d be happy to discuss, but also strikes me as an important response to the Shulman take.