What would you say are the philosophical or other premises that FRI does accept (or tends to assume in its work), which distinguish it from other people/organizations working in a similar space such as MIRI, OpenAI, and QRI? Is it just something like “preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)”?
It seems to me that a belief in anti-realism about consciousness explains a lot of Brian’s (near) certainty about his values and hence his focus on suffering. People who are not so sure about consciousness anti-realism tend to be less certain about their values as a result, and hence don’t focus on suffering as much. Does this seem right, and if so, can you explain what premises led you to work for FRI?
Is it just something like “preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)”?
This sounds right. Before 2016, I would have said that rough value alignment (normatively “suffering-focused”) is close to necessary, but we updated away from this condition and have for quite some time now held the view that it is not essential if people are otherwise a good fit. We still expect researchers to think about research-relevant background assumptions in ways that are not completely different from ours on every issue, but single disagreements are practically never a dealbreaker. We’ve had qualia realists both on the team (part-time) and as interns, and some team members now don’t hold strong views on the issue one way or the other. Brian especially is a really strong advocate of epistemic diversity and goes much further with it than I think most people would.
People who are not so sure about consciousness anti-realism tend to be less certain about their values as a result, and hence don’t focus on suffering as much.
Hm, this does not fit my observations. We had, and still have, people on our team who don’t have strong confidence in either view, and there is also a sizeable cluster of people who seem highly confident both in qualia realism and in morality being about reducing suffering, the most notable example being David Pearce.
The one view that seems unusually prevalent within FRI, apart from people self-identifying with suffering-focused values, is a particular anti-realist perspective on morality and moral reasoning, on which valuing open-ended moral reflection is not always regarded as the prudent default. This is far from a consensus, and many team members value moral reflection a great deal, but many of us expect less “work” to be done by value-reflection procedures than others in the EA movement seem to expect. Perhaps this is due to different ways of thinking about extrapolation procedures, or perhaps it’s due to us having made stronger lock-ins to certain aspects of our moral self-image.
Paul Christiano’s indirect normativity write-up, for instance, deals with the “Is ‘Passing the Buck’ Problematic?” objection in a way I find unsatisfying. Working towards a situation where everyone has much more time to think about their values is more promising the more likely it is that there is “much to be gained,” normatively. But this somewhat begs the question. If one finds suffering-focused views very appealing, other interventions become more promising. There seems to be high value of information in narrowing down one’s moral uncertainty in this domain (much more so, arguably, than with questions of consciousness or which computations to morally care about). One way to attempt to reduce one’s moral uncertainty and capitalize on this value of information is to think more about the object-level arguments in population ethics; another is to think more about the value of moral reflection itself: how much it depends on intuition or self-image-based “lock-ins,” versus how much it (either in general or in one’s personal case) is based on other things that are more receptive to information gains or intelligence gains.
Personally, I would be totally eager to place the fate of “Which computations count as suffering?” into the hands of some in-advance-specified reflection process, even though I don’t fully understand how moral reflection would work out in the details of such a complex algorithm. I’d be less confident in my current understanding of consciousness than in my ability to pick a reassuring-seeming way of delegating the decision-making to smarter advisors. However, I get the opposite feeling when it comes to questions of population ethics. There, I feel like I have thought about the issue a lot; I experience it as easier and more straightforward to think about than consciousness (or whether I care about insects or electrons or Jupiter brains); I have strong intuitions and aspects of my self-identity tied up in the matter; and I am unsure in which legitimate ways (as opposed to failures of goal preservation) I could gain evidence that would strongly change my mind. It would feel wrong to me to place the fate of my values into some in-advance-specified, open-ended deliberation algorithm where I won’t really understand how it will play out and what initial settings make which kind of difference to the end result (and why). I’d be fine with quite “conservative” reflection procedures where I could be confident the output would not be too far from my current thinking, but I would grow increasingly worried the more open-ended they become.
The one view that seems unusually prevalent within FRI, apart from people self-identifying with suffering-focused values, is a particular anti-realist perspective on morality and moral reasoning, on which valuing open-ended moral reflection is not always regarded as the prudent default.
Thanks for pointing this out. I’ve noticed this myself in some of FRI’s writings, and I’d say this, along with the high degree of certainty on various object-level philosophical questions that presumably causes the disvaluing of reflection about them, is what most “turns me off” about FRI. I worry a lot about potential failures of goal preservation (i.e., value drift) too, but because I’m highly uncertain about just about every meta-ethical and normative question, I see no choice but to try to design some sort of reflection procedure that I can trust enough to hand off control to. In other words, I have nothing I’d want to “lock in” at this point, and since I’m by default constantly handing off control to my future self with few safeguards against value drift, doing something better than that default is one of my highest priorities. If other people are also uncertain and place high value on (safe/correct) reflection as a result, that helps with my goal (because we can then pool resources to work out what safe/correct reflection is), so it’s regrettable to see FRI people sometimes argue for more certainty than I think is warranted, and especially to see them argue against reflection.
That makes sense. I do think that, as a general policy, valuing reflection is more positive-sum, and if one does not feel like much is “locked in” yet, it becomes very natural too. I’m not saying that people who value reflection more than I do are doing it wrong; I think I would even argue for reflection being very important and recommend it to new people if I felt more comfortable that they’d end up pursuing things that are beneficial from all or most plausible perspectives. What I find regrettable, though, is that the “default” interventions said to be good from as many perspectives as possible often do not seem great from a suffering-focused perspective.
I’d also agree that designing trustworthy reflection procedures is important. My intuitions here are:
(1) value-drift is a big potential problem with FRI’s work (even if they “lock in” caring about suffering, if their definition of ‘suffering’ drifts, their tacit values do too);
(2) value-drift will be a problem for any system of ethics that doesn’t cleanly ‘compile to physics’. (This is a big claim, centering around my Objection 6, above.)
Perhaps we could generalize this latter point as “if information is physical, and value is informational, then value is physical too.”
Rather than put words in the mouths of other people at FRI, I’d rather let them personally answer which philosophical premises they accept and what motivates them, if they wish.
For me personally, I’ve just had, for a long time, the intuition that preventing extreme suffering is the most important priority. To the best that I can tell, much of this intuition can be traced to having suffered from depression and general feelings of crushing hopelessness for large parts of my life, and wanting to save anyone else from experiencing a similar (or worse!) magnitude of suffering. I seem to recall that I was less suffering-focused before I started getting depressed for the first time.
Since then, that intuition has been reinforced by reading up on other suffering-focused works; something like tranquilism feels like a sensible theory to me, especially given some of my own experiences with meditation, which are generally compatible with the kind of theory of mind that tranquilism implies. That’s something that has come later, though.
To clarify, none of this means that I would only value suffering prevention: I’d much rather see a universe-wide flourishing civilization full of minds in various states of bliss, than a dead and barren universe. My position is more of a prioritarian one: let’s first take care of everyone who’s experiencing enormous suffering, and make sure none of our descendants are going to be subject to that fate, before we start thinking about colonizing the rest of the universe and filling it with entirely new minds.
Is it just something like “preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)”?
I also don’t want to speak for FRI as a whole, but yeah, I think it’s safe to say that a main thing that makes FRI unique is its suffering focus.
My high confidence in suffering-focused values results from moral anti-realism generally (or, if moral realism is true, then my unconcern for the moral truth). I don’t think consciousness anti-realism plays a big role because I would still be suffering-focused even if qualia were “real”. My suffering focus is ultimately driven by the visceral feeling that extreme suffering is so severe that nothing else compares in importance. Theoretical arguments take a back seat to this conviction.
Interesting. I’m a moral anti-realist who also focuses on suffering, but not to the extent that you do (e.g., not worrying that much about suffering at the level of fundamental physics). I would have predicted that theoretical arguments were what convinced you to care about fundamental physics suffering, not any sort of visceral feeling.
Sorry, I meant that emotion is what makes me care about (extreme) suffering in the first place. With that foundation, one should use arguments to clarify what reducing suffering looks like in practice and what “suffering” even means. Also, there’s some blending of rational arguments and emotion. I now care a bit about suffering in fundamental physics on an emotional level because my conception of suffering has been changed by learning more about the world and philosophy of mind. (That said, I still care a lot about animals.)
I really enjoyed your linked piece on meta-ethics. Short but insightful. I believe I’d fall into the second bucket.
If you’re looking for what (2) might look like in practice, and how we might try to relate it to the human brain’s architecture/drives, you might enjoy this: http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/