Is it just something like “preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)”?
This sounds right. Before 2016, I would have said that rough value alignment (normatively “suffering-focused”) is very close to necessary, but we updated away from this condition and have for quite some time now held the view that it is not essential if people are otherwise a good fit. We still expect researchers to think about research-relevant background assumptions in ways that are not completely different from ours on every issue, but a single disagreement is practically never a dealbreaker. We’ve had qualia realists both on the team (part-time) and as interns, and some team members now don’t hold strong views on the issue one way or the other. Brian especially is a really strong advocate of epistemic diversity and goes much further with it than I think most people would.
People who are not so sure about consciousness anti-realism tend to be less certain about their values as a result, and hence don’t focus on suffering as much.
Hm, this does not fit my observations. We had, and still have, people on our team who don’t have strong confidence in either view, and there is also a sizeable cluster of people who seem highly confident in both qualia realism and morality being about reducing suffering, the most notable example being David Pearce.
The one view that seems unusually prevalent within FRI, apart from people self-identifying with suffering-focused values, is a particular anti-realist perspective on morality and moral reasoning in which valuing open-ended moral reflection is not always regarded as the “prudent” thing to do by default. This is far from a consensus, and many team members value moral reflection a great deal, but many of us expect less “work” to be done by value-reflection procedures than others in the EA movement seem to expect. Perhaps this is due to different ways of thinking about extrapolation procedures, or perhaps it’s because we have made stronger lock-ins to certain aspects of our moral self-image.
Paul Christiano’s indirect normativity write-up, for instance, deals with the “Is ‘Passing the Buck’ Problematic?” objection in a way I find unsatisfying. Working towards a situation where everyone has much more time to think about their values is more promising the more likely it is that there is “much to be gained,” normatively. But this somewhat begs the question: if one finds suffering-focused views very appealing, other interventions become more promising. There seems to be high value of information in narrowing down one’s moral uncertainty in this domain (much more so, arguably, than with questions of consciousness or which computations to morally care about). One way to reduce one’s moral uncertainty and capitalize on this value of information is to think more about the object-level arguments in population ethics; another is to think more about the value of moral reflection itself: how much it depends on intuition or self-image-based “lock-ins,” and how much it (either in general or in one’s personal case) rests on other things that are more receptive to information gains or intelligence gains.
Personally, I would be totally eager to place the fate of “Which computations count as suffering?” into the hands of some in-advance specified reflection process, even though I don’t fully understand how moral reflection would work out in the details of such a complex algorithm. I’d be less confident in my current understanding of consciousness than in my ability to pick a reassuring-seeming way of delegating the decision-making to smarter advisors. However, I get the opposite feeling when it comes to questions of population ethics. There, I feel like I have thought about the issue a lot, and I experience it as easier and more straightforward to think about than consciousness and whether I care about insects or electrons or Jupiter brains. I have strong intuitions and aspects of my self-identity tied to the matter, and I am unsure in which legitimate ways (as opposed to failures of goal preservation) I could gain evidence that would strongly change my mind. It would feel wrong to me to place the fate of my values into some in-advance specified, open-ended deliberation algorithm where I won’t really understand how it will play out and what initial settings make which kind of difference to the end result (and why). I’d be fine with quite “conservative” reflection procedures where I could be confident that they would likely output something not too far from my current thinking, but I would be gradually more worried about more open-ended ones.
The one view that seems unusually prevalent within FRI, apart from people self-identifying with suffering-focused values, is a particular anti-realist perspective on morality and moral reasoning in which valuing open-ended moral reflection is not always regarded as the “prudent” thing to do by default.
Thanks for pointing this out. I’ve noticed this myself in some of FRI’s writings, and I’d say this, along with the high amount of certainty on various object-level philosophical questions that presumably causes the disvaluing of reflection about them, is what most “turns me off” about FRI. I worry a lot about potential failures of goal preservation (i.e., value drift) too, but because I’m highly uncertain about just about every meta-ethical and normative question, I see no choice but to try to design some sort of reflection procedure that I can trust enough to hand off control to. In other words, I have nothing I’d want to “lock in” at this point, and since I’m by default constantly handing off control to my future self with few safeguards against value drift, doing something better than that default is one of my highest priorities. If other people are also uncertain and place high value on (safe/correct) reflection as a result, that helps with my goal (because we can then pool resources to work out what safe/correct reflection is), so it’s regrettable to see FRI people sometimes argue for more certainty than I think is warranted, and especially to see them argue against reflection.
That makes sense. I do think that, as a general policy, valuing reflection is more positive-sum, and if one does not feel like much is “locked in” yet, it becomes very natural too. I’m not saying that people who value reflection more than I do are doing it wrong; I think I would even argue for reflection being very important and recommend it to new people if I felt more comfortable that they’d end up pursuing things that are beneficial from all or most plausible perspectives. What I find regrettable, though, is that the “default” interventions that are said to be good from as many perspectives as possible often do not seem great from a suffering-focused perspective.
I really enjoyed your linked piece on meta-ethics. Short but insightful. I believe I’d fall into the second bucket.
If you’re looking for what (2) might look like in practice, and how we might try to relate it to the human brain’s architecture/drives, you might enjoy this: http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/
I’d also agree that designing trustworthy reflection procedures is important. My intuitions here are:
(1) value-drift is a big potential problem with FRI’s work (even if they “lock in” caring about suffering, if their definition of ‘suffering’ drifts, their tacit values do too);
(2) value-drift will be a problem for any system of ethics that doesn’t cleanly ‘compile to physics’. (This is a big claim, centering around my Objection 6, above.)
Perhaps we could generalize this latter point as “if information is physical, and value is informational, then value is physical too.”