And to get a little meta, it seems worth pointing out that you could be taking this whole episode as an empirical update about how attractive these ideas and actions are to constituents you might care about, and instead your conclusion is “no, it is the constituents who are wrong!”
>> Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time.
Indeed, “if I have the time” is precisely the problem. I can’t know everyone in this community, and I’ve disagreed with the specific outcomes on too many occasions to trust by default. We started by trying to take a scalpel to the problem, and I could not tie initial impressions at grant time to those outcomes well enough to feel that was a good solution. Empirically, I don’t sufficiently trust OP’s judgement either.
There is no objective “view from EA” that I’m standing against, much as people portray it that way here; just a complex jumble of opinions, path dependence, and personalities with all kinds of flaws.
>> Also, to be clear, my current (admittedly very limited sense) of your implementation, is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas.
So with that in mind, this is the statement that felt like an accusation of lying (not an accusation of a history of lying), and I think we have arrived at the reconciliation that doesn’t involve lying: broad strokes were pragmatically needed in order to sufficiently reduce the priority areas that were causing issues. I can’t know all our grantees, and my estimation is that I can’t divorce myself from responsibility for them, reputationally or otherwise.
After much introspection, I came to the conclusion that I would rather leave potential value on the table than persist in that situation. I don’t want to be responsible for that community anymore, even if it seems to have positive EV.
(Just want to say, I really appreciate you sharing your thoughts and being so candid, Dustin. I find it very interesting and insightful to learn more about your perspective.)
>> this is the statement that felt like an accusation of lying (not an accusation of a history of lying), and I think we have arrived at the reconciliation that doesn’t involve lying: broad strokes were pragmatically needed in order to sufficiently reduce the priority areas that were causing issues. I can’t know all our grantees, and my estimation is that I can’t divorce myself from responsibility for them, reputationally or otherwise.
I do think the top-level post could have done a better job of communicating the blacklist-like nature of this new policy, but I greatly appreciate you clarifying that more in this thread (and I also would not have described what’s going on in the top-level post as “lying”).
Your summary here also seems reasonable, based on my current understanding, though of course the exact nature of the “broad strokes” is important to be clear about.
Of course, there is lots of stuff we continue to disagree on, and I will again reiterate my willingness to write back and forth with you, or talk with you, about these issues as much as you are interested, but I don’t want to make you feel stuck in a conversation where, realistically, we are not going to make much progress in this specific context.
I definitely think some update of that type is appropriate; our discussion just didn’t go in that direction (and bringing it up felt a little too meta, since it takes the conclusion of the argument we are having as a given, which in my experience is a hard thing to discuss at the same time as the object level).
I expect that in a different context, where your conclusions here aren’t the very thing we are debating, I will concede the cost of you being importantly alienated by some of the work I am in favor of.
Though to be clear, an important belief of mine, which I am confident the vast majority of readers here will disagree with, is that the aggregate portfolio of Open Phil and Good Ventures is quite bad for the world (especially now, given the updated portfolio).
As such, it’s unclear to me what I should feel about a change where some of the things I’ve done are less appealing to you. You are clearly smart and care a lot about the same things I care about, but I also genuinely think you are causing pretty huge harm to the world. I don’t want to alienate you or others, and I would really like to maintain good trade relationships insofar as that is possible, since we clearly have identified very similar crucial levers in the world, and I do not want to spend our resources in negative-sum conflict.
I still think that hearing the kind of integrity I try to champion and care about failed to resonate with you, and failed to compel you to take better actions in the world, is crucial evidence for me. You are clearly smart and thoughtful about these topics, and I care a lot about the effect of my actions on people like you.
(This comment overall isn’t obviously immediately relevant, and probably isn’t worth responding to, but I felt bad having my previous comment up without giving this important piece of context on my beliefs.)
>> The aggregate portfolio of Open Phil and Good Ventures is quite bad for the world… I also genuinely think you are causing pretty huge harm to the world.
Can you elaborate on this? Your previous comments explain why you think OP’s portfolio is suboptimal, but not why you think it is actively harmful. It sounds like you may have written about this elsewhere.
>> This comment overall isn’t obviously immediately relevant
My experience of reading this thread is that it feels like I am missing essential context. Many of the comments seem to be responding to arguments made in previous, perhaps private, conversations. Your view that OP is harmful might not be immediately relevant here, but I think it would help me understand where you are coming from. My prior (which is in line with your prediction that the vast majority of readers would disagree with your comment) is that OP is very good.
He recently made this comment on LessWrong, which expresses some of his views on the harm that OP causes.