I’m not sure this is super relevant to our core disagreement (if we have one), but how are you counting this? Glancing at that article, it looks like a pessimistic take on climate change’s harm puts excess deaths at around 10m per year, and such damage would persist much more than 10 years.
But I don’t see why we’re talking about scale. Are you defining neglectedness as a ratio of <people potentially killed in worst case>/<dollars spent>?
How exactly could that be true?
Because coders who don’t work explicitly on AI alignment still spend their working lives trying to get code to do what they want. The EA/rat communities tend not to count that as ‘AI safety’, but it seems prejudicial not to do so under the widest sense of the concept.
We might consider ‘jobs with “alignment” or “safety” in the title’ to be a neglected and/or more valuable subfield, but to do so IMO we have to acknowledge the OP’s point.
I was going from this: “The DICE baseline emissions scenario results in 83 million cumulative excess deaths by 2100 in the central estimate. Seventy-four million of these deaths can be averted by pursuing the DICE-EMR optimal emissions path.” I didn’t get into deaths vs DALYs (excess deaths among those with less life left to live), chances of scenarios, etc, and gave ‘on the order of’ for slack.
“But I don’t see why we’re talking about scale. Are you defining neglectedness as a ratio of <people potentially killed in worst case>/<dollars spent>?”
Mean, not worst case, and not just death. That’s the shape of the most interesting form to me. You could say that cash transfers in every 1,000-person town in a country with a billion people (and a uniform cash transfer program) have a millionfold less impact and are a million times more neglected than cash transfers to the country as a whole, cancelling out, but those semantics aren’t really going to be interesting to me.
I think it’s fairly clear that there is a vast difference between the work that those concerned with catastrophic AI safety as such have been doing vs random samples of Google staff, and that in relevant fields (e.g. RLHF, LLM red-teaming, or AI forecasting) they are quite noticeable as a share of global activity. You may disagree. I’ll leave the thread at that.
I’m happy to leave it there, but to clarify I’m not claiming ‘no difference in the type of work they do’, but rather ‘no a priori reason to write one group off as “not concerned with safety”’.