The reason "everyone [he] know[s]" will be dead is that everyone will be dead in that scenario.
We are already increasing maximum human lifespan, so I wouldn’t be surprised if many people who are babies now are still alive in 100 years. And even if they aren’t, there’s still the element of their wellbeing while they are alive being affected by concerns about the world they will be leaving their own children to.
Although I haven’t thought deeply about the issues you raise, you could definitely be correct, and I think they are reasonable things to discuss. But I don’t see their relevance to my arguments above. The quote you reference is itself discussing a quote from Sevilla that analyzes a specific hypothetical. I don’t necessarily think Sevilla had the issues you raise in mind when he was addressing that hypothetical. I don’t think his point was that, based on forecasts of life extension technology, he had determined that acceleration was the optimal approach in light of his weighing of 1 year-olds vs. 50 year-olds. I think his point is more similar to what I mention above about current vs. future people. I took a look at more of the X discussion, including the part where that quote comes from, and I think it is pretty consistent with this view (although of course others may disagree). Maybe he should factor in the things you mention, but to the extent his quote is being used to determine his views, I don’t think the issues you raise are relevant unless he was considering them when he made the statement. On the other hand, I think discussing those things could be useful in other, more object-level discussions. That’s kind of what I was getting at here:
I think, at bottom, the problem is that Sevilla makes mistakes in his analysis and/or decision-making about AI. His statements aren’t norm-violating, they are just incorrect (at least some of them are, in my opinion). I think it’s worth having clarity about what the actual “problem” is.
I know I’ve been commenting here a lot, and I understand my style may seem confrontational and abrasive in some cases. I also don’t want to ruin people’s day with my self-important rants, so, having said my piece, I’ll drop the discussion for now and let you get on with other things.
(although if you would like to respond you are of course welcome, I just mean to say I won’t continue the back-and-forth after, so as not to create a pressure to keep responding.)
I don’t think you’re being confrontational, I just think you’re over-complicating someone saying they support things that might bring AGI forward to 2035 instead of 2045 because otherwise it will be too late for their older relatives. And it’s not that motivating to debate things that feel like over-complications.