Have you read the whole Twitter thread including Jaime’s responses to comments? He repeatedly emphasises that it’s about his literal friends, family and self, and hypothetical moderate but difficult trade offs with the welfare of others.
When I click the link I see three posts that go Sevilla, Lifland, Sevilla. I based my comments above on those. I haven’t read through all the other replies by others or posts responding to them. If there is relevant context in those or elsewhere, I’m open to changing my mind based on it.
He repeatedly emphasises that it’s about his literal friends, family and self, and hypothetical moderate but difficult trade offs with the welfare of others.
Can you say what statements led you to this conclusion? For example, could you quote him saying something I haven’t seen, perhaps from a part of the thread I didn’t read?
“But I want to be clear that even if you convinced me somehow that the risk that AI is ultimately bad for the world goes from 15% to 1% if we wait 100 years I would not personally take that deal. If it reduced the chances by a factor of 100 I would consider it seriously. But 100 years has a huge personal cost to me, as all else equal it would likely imply everyone I know [italics mine] being dead. To be clear I don’t think this is the choice we are facing or we are likely to face.”
To me, this seems to confirm what I said above:
Based on my read of the thread, the comment was in response to a question about benefiting people sooner rather than later. This is why I say it reduces to an existing-person-affecting view (which, as far as I am aware, is not an unacceptable position to hold in EA). The question is functionally about current vs future people, not literally Sevilla’s friends and family specifically.
Yes, Sevilla is motivated specifically by considerations about those he loves, and yes, there is a trade-off, but that trade-off is really about current vs future people. People who aren’t longtermists, for example, would face this same trade-off. I don’t think Sevilla would be getting the same reaction here if he had simply said he isn’t a longtermist. Because of the nature of the available actions, the interests of Sevilla’s loved ones are aligned with those of current people (but not necessarily future people). The reason why “everyone [he] know[s]” will be dead is because everyone will be dead, in that scenario.
You might think that having loved ones as a core motivation above other people is inherently a problem. I think this is answered above by Jeff Kaufman:
I don’t think impartiality to the extent of not caring more about the people one loves is a core value for very many EAs? Yes, it’s pretty central to EA that most people are excessively partial, but I don’t recall ever seeing someone advocate full impartiality.
I agree with this statement. Therefore my view is that simply stating that you’re more motivated by consequences to your loved ones is not, in and of itself, a violation of a core EA idea.
Jason offers a refinement of this view. Perhaps what Kaufman says is true, but what if there is a more specific objection?
There are a number of jobs and roles that expect your actions in a professional capacity to be impartial in the sense of not favoring your loved ones over others. For instance, a politician should not give any more weight to the effects of proposed legislation on their own mother than the effect on any other constituent.
Perhaps the issue is not necessarily that Sevilla has the motivation itself, but that his role comes with a specific conflict-of-interest-like duty, which the statement suggests he is violating. My response was addressing this argument. I claim that the duty isn’t as broad as Jason seems to imply:
It seems like the view expressed reduces to an existing-person-affecting view. Is there any plausible mechanism by which an action by Epoch is supposed to impact Sevilla’s friends/relatives specifically? I seriously doubt it. The only plausible mechanism would be that AI goes well instead of poorly, which would benefit all existing people. This makes the politician comparison, as stated, disanalogous. Would you say that if a politician said their motivation to become a politician was to make a better world for their children, for example, that would somehow violate their duties? It seems like a lot of politicians would have a problem if that were the case.
Does a politician who votes for a bill and states they are doing so to “make a better world for their children” violate a conflict-of-interest duty? Jason’s argument seems to suggest they would. Let’s assume they are being genuine: they really are significantly motivated by care for their children, more than for a random citizen. They apply more weight to the impact of the legislation on their children than to others, violating Jason’s proposed criterion.
Yet I don’t think we would view such statements as disqualifying for a politician. The reason is that the mechanism by which they benefit their children operates only by also helping everyone else. Most legislation won’t have any different impact on their children compared to any other person. So while the statement nominally suggests a conflict of interest, in practice the politician’s incentives are aligned: the only way that voting for this legislation helps their children is that it helps everyone, and that includes their children. If the legislation plausibly did have a specific impact on their child (for example, impacting an industry their child works in), then that really could be a conflict of interest. My claim is that there needs to be some greater specificity for a conflict to exist. Sevilla’s case is more like the first case than the second, or at least that is my claim:
Is there any plausible mechanism by which an action by Epoch is supposed to impact Sevilla’s friends/relatives specifically? I seriously doubt it. The only plausible mechanism would be that AI goes well instead of poorly, which would benefit all existing people.
So, what has Sevilla done wrong? My analysis is this. It isn’t simply that he is more motivated to help his loved ones (Kaufman’s argument). Nor is it something like a conflict of interest (my argument). In another comment on this thread I said this:
People can do a bad thing because they are just wrong in their analysis of a situation or their decision-making.
I think, at bottom, the problem is that Sevilla makes a mistake in his analysis and/or decision-making about AI. His statements aren’t norm-violating, they are just incorrect (at least some of them are, in my opinion). I think it’s worth having clarity about what the actual “problem” is.
The reason why “everyone [he] know[s]” will be dead is because everyone will be dead, in that scenario.
We are already increasing maximum human lifespan, so I wouldn’t be surprised if many people who are babies now are still alive in 100 years. And even if they aren’t, there’s still the element of their wellbeing while they are alive being affected by concerns about the world they will be leaving their own children to.
Although I haven’t thought deeply about the issues you raise, you could definitely be correct, and I think they are reasonable things to discuss. But I don’t see their relevance to my arguments above. The quote you reference is itself discussing a quote from Sevilla that analyzes a specific hypothetical. I don’t necessarily think Sevilla had the issues you raise in mind when he was addressing that hypothetical. I don’t think his point was that, based on forecasts of life-extension technology, he had determined that acceleration was the optimal approach in light of his weighing of 1-year-olds vs 50-year-olds. I think his point is more similar to what I mention above about current vs future people. I took a look at more of the X discussion, including the part where that quote comes from, and I think it is pretty consistent with this view (although of course others may disagree). Maybe he should factor in the things you mention, but to the extent his quote is being used to determine his views, I don’t think the issues you raise are relevant unless he was considering them when he made the statement. On the other hand, I think discussing those things could be useful in other, more object-level discussions. That’s kind of what I was getting at here:
I think, at bottom, the problem is that Sevilla makes a mistake in his analysis and/or decision-making about AI. His statements aren’t norm-violating, they are just incorrect (at least some of them are, in my opinion). I think it’s worth having clarity about what the actual “problem” is.
I know I’ve been commenting here a lot, and I understand my style may seem confrontational and abrasive in some cases. I also don’t want to ruin people’s day with my self-important rants, so, having said my piece, I’ll drop the discussion for now and let you get on with other things.
(Although if you would like to respond you are of course welcome; I just mean to say I won’t continue the back-and-forth after, so as not to create pressure to keep responding.)
I don’t think you’re being confrontational, I just think you’re over-complicating someone saying they support things that might bring AGI forward to 2035 instead of 2045 because otherwise it will be too late for their older relatives. And it’s not that motivating to debate things that feel like over-complications.