Update: I have since been told that the deadline is going to be sooner, August 4th! So sorry for the late change.
August 18th and unfortunately US only—I’m hoping to change that someday but Vox has not taken the legal and regulatory steps that’d make it possible for them as a US-based company to make hires outside the US.
One way in which geoengineering increases societal fragility: if we pump particles into the atmosphere, find ourselves obliged to keep pumping them in order to maintain the effects, and then suffer a collapse of infrastructure severe enough that we can no longer do so, the result could be extremely sudden warming and a rapid, unpredictable change in weather patterns. Something would have to go very wrong first, of course, but it could compound an existing catastrophe and take it from recoverable to irrecoverable.
Hmm. I think I’m thinking of concern for justice-system outcomes as a values difference rather than a reasoning error, and so treating it as legitimate feels appropriate in the same way it feels appropriate to say ‘an AI with poorly specified goals could wirehead everyone, which is an example of optimizing for one thing we wanted at the expense of other things we wanted’ even though I don’t actually feel that confident that my preferences against wireheading everyone are principled and consistent.
I agree that most people’s conceptions of fairness are inconsistent, but that’s only because most people’s values are inconsistent in general; I don’t think it means they’d necessarily have my values if they thought about it more. I also think that ‘the U.S. government should impose the same prison sentence for the same crime regardless of the race of the defendant’ is probably correct under my value system, which probably influences me towards thinking that other people who value it would still value it if they were less confused.
Some instrumental merits of imposing the same prison sentence for the same crime regardless of the race of the defendant:
I want to gesture at something in the direction of pluralism: we agree to treat all religions the same, not because they are of equal social value or because we think they are equally correct, but because this is social technology to prevent constantly warring over whose religion is correct/of the most social value. I bet some religious beliefs predict less recidivism, but I prefer not using religion to determine sentencing because I think there are a lot of practical benefits to the pluralistic compromise the U.S. uses here. This generalizes to race.
There are ways you can greatly exacerbate an initially fairly small difference by updating on it in ways that are all technically correct. I think the classic example is a career path with lots of promotions, where one thing people are optimizing for at each level is the odds of being promoted to the next level; this will result in a very small difference in average ability producing a huge difference in odds of reaching the highest level. I think it is good for systems like the U.S. justice system to try to adopt procedures that avoid this, where doing so is sane and the tradeoffs are relatively small.
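To make the compounding concrete, here’s a toy calculation (all numbers invented, and the rounds are treated as independent performance draws, which is a simplification): a 0.1-standard-deviation gap in per-round performance, filtered through five promotion rounds that each keep roughly the top 10%, more than doubles one group’s odds of reaching the top level.

```python
from statistics import NormalDist

# Hypothetical numbers: group B's per-round performance averages just
# 0.1 standard deviations above group A's, and each promotion round
# keeps roughly the top 10% of performers (cutoff set at the 90th
# percentile of group A's distribution).
cutoff = NormalDist(0, 1).inv_cdf(0.90)      # ~1.28 sd above A's mean

p_a = 1 - NormalDist(0.0, 1).cdf(cutoff)     # ~0.10 pass rate per round
p_b = 1 - NormalDist(0.1, 1).cdf(cutoff)     # ~0.12 pass rate per round

rounds = 5
ratio_top = (p_b / p_a) ** rounds            # compounds over five rounds

print(round(p_b / p_a, 2))                   # per-round ratio, ~1.19
print(round(ratio_top, 2))                   # at the top level, ~2.4x
```

No single round is unreasonable; the large disparity at the top comes purely from compounding a tiny per-round edge.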
(least important): Justice systems run on social trust. If they use processes which undermine social trust, even if they do this because the public is objectively unreasonable, they will work less well; people will be less likely to report crimes, cooperate with police, testify, serve on juries, make truthful decisions on juries, etc. I know that when crimes are committed against me, I weigh whether I expect the justice system to behave according to my values when deciding whether to report the crimes. If this is common, there’s reason for justice systems to use processes that people consider aligned. If we want to change what people value, we should use instruments for this other than the justice system.
This is not for criminal investigation. This is for, when a person has been convicted of a crime, estimating when to release them (by estimating how likely they are to commit another crime).
Expanding on this: I don’t think ‘fairness’ is a fundamental part of morality. It’s better for good things to happen than bad ones, regardless of how they’re distributed, and it’s bad to sacrifice utility for fairness.
However, I think there are some aspects of policy where fairness is instrumentally really useful, and I think the justice system is the single place where it’s most useful, and the will and preferences of the American populace are demonstrably for a justice system that embodies fairness, and so it seems to me that we’re missing a really important point if we decide that it’s not a problem for a justice system to badly fail to embody the values it was intended to embody just because we don’t non-instrumentally value fairness.
Huh, yeah, I disagree. It seems to me pretty fundamental to a justice system’s credibility that it not imprison one person and free another when the only difference between them is the color of their skin (or, yes, their height), and it makes a lot of sense to me that U.S. law mandates sacrificing predictive power in order to maintain this feature of the system.
Similarly, I don’t think all of the restrictions the legal system imposes on what kinds of evidence to use are, in fact, motivated by long-term harm-reduction considerations. I think they’re motivated by wanting the system to embody the ideal of justice. EAs are mostly consequentialists (I am) and mostly only interested in harm, not in fairness, but I think it’s important to realize that the overwhelming majority of people care a lot about whether a justice system is fair in addition to whether it is harm-reducing, and that this is the actual motivation for the laws I discuss above, even if you can technically propose a defense of them in harm-reduction terms.
I agree with Habryka here—it seems potentially very damaging to EA for arguments to be advanced with obvious holes in them, especially if the motivation for that seems to be political. In that spirit I want to find a better source to cite for the point I’m trying to make here. I think EA is really hard. I think we’ll consistently get things wrong if we relax our standards for accuracy at all.
I do think criminal justice predictive algorithms are a decent example of ML interpretability concerns and ‘what we said isn’t what we meant’ concerns. I think most people do not actually want a system which treats two identical people differently because one is black and one is white; human values include ‘reduce recidivism’ but also ‘do not evaluate people on the basis of skin color’. But because of the statistical problem, it’s actually really hard to prevent a system from either using race or guessing race from proxies and using its best guess of race.

That’s illegal under current U.S. antidiscrimination law, and I do think it’s not really what we want—that is, I think we’re willing to sacrifice some predictive power in order to not use race to decide whether people remain in prison or not, just like we’re willing to sacrifice predictive power to get people lawyers and willing to sacrifice predictive power to require cops to have a warrant and willing to sacrifice predictive power to protect the right not to incriminate yourself. But none of that nebulous stuff makes it into the classifier, and so the classifier is genuinely exhibiting unintended behavior—and unintended behavior we struggle to make it stop exhibiting, since it’ll keep trying to find proxies for race and using them for prediction.
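Here’s a toy sketch of the proxy problem (all data and numbers invented): a “model” that only ever sees zip code still ends up scoring the two groups differently, because zip code correlates with group membership and measured rearrest rates differ by group.

```python
import random
from statistics import mean

random.seed(1)

# Invented setup: measured rearrest rates differ by group (0.40 vs 0.20),
# and zip code correlates with group (an 80/20 residential split).
people = []
for _ in range(20_000):
    race = random.choice(["X", "Y"])
    in_zip1 = random.random() < (0.8 if race == "X" else 0.2)
    rate = 0.40 if race == "X" else 0.20
    people.append({
        "race": race,
        "zip": 1 if in_zip1 else 2,
        "rearrested": int(random.random() < rate),
    })

# "Train" on zip code alone: predicted risk is the mean rearrest rate
# observed in each zip. Race is never an input.
risk_by_zip = {
    z: mean(p["rearrested"] for p in people if p["zip"] == z)
    for z in (1, 2)
}

# Average predicted risk by race: group X scores higher than group Y
# even though the model never saw race, because zip code proxies for it.
for r in ("X", "Y"):
    avg = mean(risk_by_zip[p["zip"]] for p in people if p["race"] == r)
    print(r, round(avg, 2))
```

Dropping race from the feature set doesn’t remove the disparity: any feature correlated with race partially reconstructs it, which is why ‘don’t use race’ is so hard to specify to a classifier.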
I’m curious if Larks/others think that this summary is decent and would avoid misleading someone who didn’t know the stats background; if so, I’ll try to write it up somewhere in more depth (or find it written up in more depth) so I can link that instead of the existing links.
I just want to quickly call attention to one point: “these are still pure benefits” seems like a mistaken way of thinking about this—or perhaps I’m just misinterpreting you. To me “pure benefits” suggests something costless, or where the costs are so trivial they should be discarded in analysis, and I think that really underestimates the labor that goes into building inclusive communities. Researching and compiling these recommendations took work, and implementing them will take a lot of work. Mentoring people can have wonderful returns, but it requires significant commitments of time, energy, and often other resources. Writing up community standards about conduct tends to be emotionally exhausting work which demands weeks of time and effort from productive and deeply involved community members who are necessarily sidelining other EA projects in order to do it.
None of this is to say ‘it isn’t worth it’. I expect that some of these things have great returns to the health, epistemic standards, and resiliency of the community, as well as, like you mentioned, good returns for the reputation of EA (though from my experience in social justice communities, there will be articles criticizing any movement for failures of intersectionality, and the presence of those articles isn’t very strong evidence that a movement is doing something unusually wrong). My goal is not to say ‘this is too much work’ but simply ‘this is work’: because if we don’t acknowledge that it requires work, then the work probably will not get done (or will not be acknowledged and appreciated).
Once we acknowledge that these are suggestions which require varying amounts of time, energy and access to resources, and that they impose varying degrees of mental load, then we can start figuring out which ones are good priorities for people with limited amounts of all of the above. I’ve seen a lot of social justice communities suffer because they’re unable to do this kind of prioritization and accordingly impose excessively high costs on members and lose good people who have limited resources.
So I think it’s a bad idea to think in terms of ‘pure benefit’. Here, like everywhere else, if we want to do the most good we need to keep in mind that not all actions are equally good or equally cheap so we can prioritize the effective and cheap ones.
I’m also curious why you think the magnitude of the current EA movement’s contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans. To be clear about where I’m coming from, I think the most important thing the EA community can do is be a community that fosters fast progress on the most important things in the world. Obviously, this will include being a community that takes contributions seriously regardless of their origins and elicits contributions from everyone with good ideas, without making any of them feel excluded because of their background. But that makes diversity an instrumental goal, a thing that will make us better at figuring out how to improve the world and acting on the evidence. From your phrasing, I think you might believe that harmful societal structures in the western world are one of the things we can most effectively fix? Have you expanded on that anywhere, or is there anyone else who has argued for that who you can point me to?
All your posts on cause prioritization have been really valuable for me but I think this is my favorite so far. It clearly motivates what you’re doing and the circumstances under which we’ll end up being forced to do it, it compares the result you got from using a formal mathematical model to the result you got when you tried to use your intuitions and informal reasoning on the same problem, which both helps sanity-check the mathematical model and helps make the case that it’s a useful endeavor for people who already have estimates they’ve arrived at through informal reasoning, and it spells out the framework explicitly enough other people can use it.
I’m curious why you don’t think the specific numbers can be trusted. My instincts are that the cage-free numbers are dubious as to how much they improve animal lives, that your choice of prior will affect this so much it’s probably worth having a table of results given different reasonable priors, and that “the value of chickens relative to humans” is the wrong way to think about how good averting chicken suffering is compared to making people happier or saving their lives (chickens are way less valuable to me than humans, but chicken-torture is probably nearly as morally bad as human-torture; I am not sure that the things which make torture bad vary between chickens and people). Are those the numbers that you wanted us to flag as dubious, or were you thinking of different ones?
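On prior sensitivity, here’s a minimal sketch of the kind of table I mean (all numbers invented; a conjugate normal-normal update standing in for whatever model is actually being used):

```python
# Hypothetical: a noisy cost-effectiveness estimate of 100 (sd 50), in
# arbitrary welfare-per-dollar units, combined with three different
# normal priors. All numbers are invented for illustration.
estimate, estimate_sd = 100.0, 50.0

priors = [
    ("skeptical", 1.0, 10.0),    # (label, prior mean, prior sd)
    ("moderate", 10.0, 30.0),
    ("credulous", 50.0, 100.0),
]

posteriors = {}
for name, mu0, sd0 in priors:
    # Conjugate normal-normal update: precision-weighted average of
    # the prior mean and the observed estimate.
    w = (1 / sd0**2) / (1 / sd0**2 + 1 / estimate_sd**2)
    posteriors[name] = w * mu0 + (1 - w) * estimate
    print(f"{name:9s} prior mean {mu0:5.1f} -> posterior {posteriors[name]:5.1f}")
```

The posterior mean here ranges from under 5 to 90 depending only on the prior, which is the sense in which a single point estimate can mislead.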
I really agree here—other factors that make Facebook conversations particularly inflammatory include Facebook’s lack of threading, which makes it hard to see whom a person is responding to and whether the tone of the response is appropriate to the original post; the way comment threads rapidly stack up with hundreds of comments, some only tangentially related to the original post; and the wide variance in moderation schemes. I’ve been disillusioned by some of the conversations on Facebook, but this comment made me more optimistic that this is a platform issue, not a problem with open discussion of EA concerns.
No, sorry, they are not. And not all of these are pitfalls I’ve witnessed specifically in EA outreach—the atheist/skeptic community and campus conservative/libertarian groups are where I watched a lot of these mistakes get made.
I think they were concerned that the Stanford brand name would be used for publicity and /or fundraising by organizations outside their control.
The question we’ve had the most success with for a regular/weekly meetup is “what is something interesting you’ve learned/read/thought about recently”. The advantage to keeping it consistent is that people know what to expect; this question also avoids most of the disadvantages of keeping the question consistent (namely that people repeat themselves and get bored). It also tends to provoke fascinating answers.