Thanks for this. It was great, particularly hearing about how people think about things rather than just the outcomes reached.
A couple of comments.
Could you explain what you meant about beliefs? I’m unclear about what you think a belief is and what would be a good account of forming, having or reporting beliefs. This isn’t a critical comment asking you to produce a full theory of mind; rather, what you say sounds interesting but unclear, and I’d like you to expand on it.
Reading this, I got a sense you were having to reinvent the philosophical wheel whilst trying to avoid doing so. You seem to be doing lots of what is straightforwardly, though implicitly, moral philosophy without making it explicit. Whilst that has an appeal (people don’t agree in moral philosophy, and educating people is not really what OxPrio is about), I think it might just be easier to get people’s assumptions on the table so you can see what follows from them.
As a couple of examples: if you’re comparing GiveDirectly to MIRI, you have to make implicit assumptions about population axiology (i.e. how much future people matter). It’s not that your view on future people is one part of the calculation; it basically is the whole calculation. Alternatively, if you’re comparing AMF to GiveDirectly and considering only present people, the result is going to be very substantially determined by your view about the badness of death.
I wonder if it would help to run through some candidate theories in moral philosophy so that people can use them to form part of their model, rather than having to generate a new theory for themselves on the fly.
A further thought: it would be really nice to get a handle on which prioritisation questions were truly empirical and which philosophical.
Excellent stuff, look forward to reading more.
When Tom and I came up with that, I don’t think we meant “belief” to be imbued with the usual philosophical connotations. Rather, we intended it to mean something like “an action-guiding, introspectively accessible representation of a state of affairs, existing independently of whether it is queried”.
When people ask me what I think about the world, I can often come up with lots of intelligent-sounding answers, but it is unfortunately rarer that my actual actions, plans and normative evaluations are suitably hooked up to, and crucially depend upon, those answers.