The report aims to be a “direct inquiry into moral status,” but because it does so from an anti-realist perspective, a certain notion of idealized preferences comes into play. In other words: if you don’t think “objective values” are “written into the fabric of the universe,” then (according to one meta-ethical perspective) all that exists are particular creatures that value things, and facts about what those creatures would value if they had more time to think about their values and knew more true facts and so on. I won’t make the case for this meta-ethical approach here, but I link some relevant sources in the report, in particular in footnote 239.
This is one reason I say at the top of the report that:
This report is unusually personal in nature, as it necessarily draws heavily from the empirical and moral intuitions of the investigator. Thus, the rest of this report does not necessarily reflect the intuitions and judgments of the Open Philanthropy Project in general. I explain my views in this report merely so they can serve as one input among many as the Open Philanthropy Project considers how to clarify its values and make its grantmaking choices.
And in fact, as I understand it, the people involved in making Open Phil grantmaking decisions about farm animal welfare do have substantial disagreements with my own moral intuitions, and are not making their grantmaking decisions solely on the basis of my own moral intuitions, or even solely on the basis of guesses about what my “idealized” moral intuitions would be.
Why? It’s not more widely accepted than realism, and arguably it’s decision-irrelevant regardless of its plausibility as per Ross’s deflationary argument (http://www-bcf.usc.edu/~jacobmro/ppr/deflation-ross.pdf).
a certain notion of idealized preferences comes into play. In other words: if you don’t think “objective values” are “written into the fabric of the universe,” then (according to one meta-ethical perspective) all that exists are particular creatures that value things, and facts about what those creatures would value if they had more time to think about their values and knew more true facts and so on.
But there are plenty of accounts of anti-realist ethics, and they don’t all make everything reducible to preferences or provide this account of what we ought to value. I still don’t see what makes this view more noteworthy than the others and why Open Phil is not as interested in either a general overview of what many views would say or a purely empirical inquiry from which diverse normative conclusions can be easily drawn.
And in fact, as I understand it, the people involved in making Open Phil grantmaking decisions about farm animal welfare do have substantial disagreements with my own moral intuitions, and are not making their grantmaking decisions solely on the basis of my own moral intuitions, or even solely on the basis of guesses about what my “idealized” moral intuitions would be.
I don’t see what reason they have to take them into account at all, unless they accept ideal advisor theory and are using your preferences as a heuristic for what their preferences would be if they did the same thing that you are doing. Is ideal advisor theory their view now? And is it also the case for other moral topics besides animal ethics?
Also: can we expect a reduced, empirical-only version of the report to be released at some point?
(Just FYI, I’m drafting a reply to this, but it might be a while before it’s ready to post.)
These are reasonable questions, and I won’t be able to satisfactorily address them in a short comment reply. Nevertheless, I’ll try to give you a bit more of a sense of “where I’m coming from” on the topic of meta-ethics.
As I say in the report,
I suspect my metaethical approach and my moral judgments overlap substantially with those of at least some other Open Philanthropy Project staff members, and also with those of many likely readers, but I also assume there will be a great deal of non-overlap with my colleagues at the Open Philanthropy Project and especially with other readers. My only means for dealing with that fact is to explain as clearly as I can which judgments I am making and why, so that others can consider what the findings of this report might imply given their own metaethical approach and their own moral judgments.
We don’t plan to release an “empirical-only” version of the report, but I think those with different meta-ethical views will be able to read the empirical sections of the report — e.g. most of section 3, appendices C-E, and some other sections — and think for themselves about what those empirical data imply given their own meta-ethical views.
However, your primary question seems to be about why Open Phil is willing to make decisions that are premised on particular views about meta-ethics that we find plausible (e.g. ideal advisor theory) rather than a broader survey of (expert?) views about meta-ethics. I’ll make a few comments about this.
First: the current distribution of expert opinion is a significant input to our thinking, but we don’t simply defer to it. This is true with respect to most topics that intersect with our work, not just meta-ethics. Instead, we do our best to investigate decision-relevant topics deeply enough ourselves that we develop our own opinions about them. Or, as we said in our blog post on Hits-based giving (re-formatted slightly):
We don’t defer to expert opinion or conventional wisdom, though we do seek to be informed about them… following expert opinion and conventional wisdom is likely to cut against our goal of seeking neglected causes… We do think it would be a bad sign if no experts… agreed with our take on a topic, but when there is disagreement between experts, we need to be willing to side with particular ones. In my view, it’s often possible to do this productively by learning enough about the key issues to determine which arguments best fit our values and basic epistemology.
Second: one consequence of investigating topics deeply enough to form our own opinions about them, rather than simply deferring to what seems to be the leading expert opinion on the topic (if any exists), is that (quoting again from “Hits-based giving”) “we don’t expect to be able to fully justify ourselves in writing.” That is why, throughout my report, I repeat that my report does not really “argue” for the assumptions I make and the tentative conclusions I come to. But I did make an effort to refer the reader to related readings and give some sense of “where I’m coming from.”
Third: even if I spent an entire year writing up my best case for (e.g.) ideal advisor theory, I don’t think it would convince you, and I don’t think it would be thoroughly convincing to myself, either. We can’t solve moral philosophy. All we can do is take pragmatic steps of acceptable cost to reduce our uncertainty as we aim to (as I say in the report) “execute our mission to ‘accomplish as much good as possible with our giving’ without waiting to first resolve all major debates in moral philosophy.”
In the end, anyone who is trying to do as much “good” as possible — or even just “more good than bad” — must either (1) wrestle with the sorts of difficult issues we’re wrestling with (or the similarly unsolved problems of some other moral framework), and come to some “best guesses for now,” or (2) implicitly make assumptions about ~all those same fraught issues anyway, but without trying to examine and question them. (At least, this is true so long as “good” isn’t just defined with respect to domain-narrow, funder-decided “goods” like “better scores by American children on standardized tests.”)
We don’t think it’s possible for Open Phil or any other charitable project to definitively answer such questions, but we do prefer to act on questioned/examined assumptions rather than on largely unexamined assumptions. Hence our reports on difficult philosophical questions summarize what we did to examine these questions and what our best-guess conclusions are for the moment, but those reports do not convincingly argue for any solid “answers” to these questions. (Besides the moral patienthood report, see also e.g. here and here.)
Of course, you might think the above points are reasonable, but still want to know more about why I find ideal advisor theory particularly compelling among meta-ethical views. I can’t think of anything especially brief to say, other than “that’s the family of views I find most plausible after having read, thought, and argued about meta-ethics for several years.” I haven’t personally written a defense of ideal advisor theory, and I’m not aware of a published defense of ideal advisor theory that I would thoroughly endorse. If you’re curious to learn more about my particular views, perhaps the best I can do is point you to Pluralistic moral reductionism, Mixed Reference: The Great Reductionist Project, ch. 9 of Miller (2013), and Extrapolated volition (normative moral theory).
Another question you seem to be asking is why Open Phil chose to produce a report with this framing first, as opposed to “a general overview of what many [meta-ethical] views would say.” I think this is because ideal advisor theory is especially popular among the people at Open Phil who engage most deeply with the details of our philosophical framework for giving. As far as I know, all these people (myself included) have substantial uncertainty over meta-ethical views and normative moral theories (see footnote 12 on normative uncertainty), but (as far as I know) we put unusually high “weight” on ideal advisor theories — either as a final “theory” of normative morality, or as a very important input to our moral thinking. Because of this, it seemed likely to be more informative (to our decision-making about grants) per unit effort to conduct an investigation that was a mix of empirical data (not premised on any meta-ethical theory) and moral philosophy (premised on some kind of ideal advisor theory), rather than to produce a more neutral survey of the implications of a large variety of meta-ethical theories, most of which we (the people at Open Phil who engage most deeply with the details of our philosophical framework for giving) have considered before and decided to give little or no weight to (again, as far as I know).
One more comment on ideal advisor theory: What I mean by ideal advisor theory might be less narrow than what you’re thinking of. For example, on my meaning, ideal advisor theory could (for all I know) result in reflective equilibria as diverse as contractarianism, deontological ethics, hedonic utilitarianism, egoism, or a thorough-going nihilism, among other views.
That said, as I say in the report, I don’t think my tentative moral judgments in the report depend on my meta-ethical views, and the empirical data I present don’t depend on them either. Also, we continue to question and examine the assumptions behind our current philosophical framework for giving, and I expect that framework to evolve over time as we do so.
A final clarification: another reason I discuss my meta-ethical views so much (albeit mostly in the appendices) is that I suspect one’s ethical views unavoidably infect one’s way of discussing the relevant empirical data, and so I chose to explain my ethical views in part so that people can interpret my presentation of the empirical data while having some sense of what biases I may bring to that discussion as a result of my ethical views.
We can’t solve moral philosophy. All we can do is take pragmatic steps of acceptable cost to reduce our uncertainty as we aim to (as I say in the report) “execute our mission to ‘accomplish as much good as possible with our giving’ without waiting to first resolve all major debates in moral philosophy.”
Yeah, but what you’re doing is antithetical to that. You’re basically assuming that you have solved a major debate in philosophy and not paying attention to uncertainty. At the very least, we should know more clearly whether Open Phil is going to be an Ideal Advisor Theory grantmaking organization from now on. Meta-ethics is what you use to figure out how to decide what your mission should be in the first place. Introducing it at this stage and in this manner is kind of weird.
Another question you seem to be asking is why Open Phil chose to produce a report with this framing first, as opposed to “a general overview of what many [meta-ethical] views would say.” I think this is because ideal advisor theory is especially popular among the people at Open Phil who engage most deeply with the details of our philosophical framework for giving. As far as I know, all these people (myself included) have substantial uncertainty over meta-ethical views and normative moral theories (see footnote 12 on normative uncertainty), but (as far as I know) we put unusually high “weight” on ideal advisor theories — either as a final “theory” of normative morality, or as a very important input to our moral thinking.
To be quite honest, it is hard to believe that a significant portion of the staff at Open Phil independently reviewed the philosophical arguments and independently arrived at the same relatively niche meta-ethical view. It sounds a lot more like an information cascade.
One more comment on ideal advisor theory: What I mean by ideal advisor theory might be less narrow than what you’re thinking of. For example, on my meaning, ideal advisor theory could (for all I know) result in reflective equilibria as diverse as contractarianism, deontological ethics, hedonic utilitarianism, egoism, or a thorough-going nihilism, among other views.
But that just makes the whole methodology even more confusing, since you are talking about meta-ethics and empirical issues at the same time, while not talking about the normative issues in the middle, and then coming to normative conclusions. If you really use ideal advisor theory as a meta-ethical approach, then you should use it to determine a model of normative ethics, and then match that with the science of consciousness. Two people with the same meta-ethical views could have very different normative views, but you’re not explicating this possibility. At the same time, you might have the same normative views as someone else, but there is no way to tell, since you’re only talking about meta-ethics.
A final clarification: another reason I discuss my meta-ethical views so much (albeit mostly in the appendices) is that I suspect one’s ethical views unavoidably infect one’s way of discussing the relevant empirical data, and so I chose to explain my ethical views in part so that people can interpret my presentation of the empirical data while having some sense of what biases I may bring to that discussion as a result of my ethical views.
Ethical views might, but it’s not clear how meta-ethical views would.