A separate comment I had been writing about that section of the interview:
Ajeya and Rob discussed “Fairness agreements”. This seemed to me like a novel and interesting approach that could be used for normative/moral uncertainty (though Open Phil seem to be using it for worldview uncertainty, which is related but a bit different)
I currently feel more inclined towards some other approaches to moral uncertainty
But at this stage where the topic of moral uncertainty has received so little attention, it seems useful to come up with additional potential approaches
And it may be that, for a while, it remains useful to have multiple approaches one can bring to bear on the same question, to see where their results converge and diverge
On a meta level, I found it interesting that the staff of an organisation primarily focused on grantmaking appear to have come up with what might be a novel and interesting approach to normative/moral uncertainty
That seems like the sort of abstract theoretical philosophy work that one might expect to only be produced by academic philosophers, rather than people at a more “applied” org
A more direct response to your comment:
I hadn’t heard of the idea before, and I’d read a decent amount on moral uncertainty around the start of 2020. That, plus the way the topic was introduced in this episode, makes me think this might be a new idea that hasn’t been publicly written up yet.
(See also the final bullet point here)
[Update: I was slightly mistaken; there are a few written paragraphs on the idea here]
I think it’s understandable to have been a bit confused by that part; I don’t think I fully understood the idea myself, and I got the impression that it was still at a somewhat fuzzy stage
(I’d guess that with an hour of effort I could re-read that part of the transcript and write an ok explainer, but unfortunately I don’t have time right now. But hopefully someone else will be able to do that, ideally better and more easily than I could!)
No worries that you don’t have time to explain it, Michael! I’m glad to hear that others haven’t heard of the idea before and that it seems to be new. Hopefully someone else can explain it in more depth. I think concepts featured in 80K podcast episodes or other EA content can sometimes be really hard to grasp, and maybe others can create visuals, videos, or better explanations to help.
An example of another hard-to-grasp topic from 80K’s past episodes is complex cluelessness. I think Hilary Greaves and Arden did a decent job of explaining it, and I roughly get the idea, but it would be hard for me to explain without looking up the paper, re-reading the transcript, or listening to the podcast again.
I also still find the concept of complex cluelessness slippery, and I’m under the impression that many EAs misunderstand and misuse the term relative to Greaves’ intention. But if you haven’t seen it already, you may find this talk from Greaves helpful.