Ahem. What if AGI won’t be developed with current ML techniques? Data poisoning is a thing: AI models need a lot of data, AI-generated content is piling up on the internet, and when models are trained on that content they start to perform worse. There’s also an issue with scaling: to make a model marginally better you need to scale it exponentially. AI models sit in data centers that need to be cooled with water, and building microprocessors takes a lot of water for mining and processing the metals. That water is scarce, it’s also needed for agriculture, and once it has been used to produce microprocessors it can’t really be used for anything else. So there might be resource constraints on building better AI models, especially if AI becomes monopolized by a few big tech companies (open-source models seem smaller, and you can develop one on a PC). Maybe AI won’t be a big issue, unless wealthy countries wage wars over water availability in poorer countries. But I didn’t put any effort into writing this comment, so I’m wrong with a probability of 95% ± 5%. Here you have it, I just wrote a dumb comment. Yay for dumb space!
I think this is 100% wrong, but 100% the correct[1] way to reason about it!
I’m pretty sure water scarcity is a distraction wrt modelling AI futures; but it’s best to just assert a model to begin with, and take it seriously as a generator of your own plans/actions, just so you have something to iterate on. If you don’t have an evidentially-sensitive thing inside your head that actually generates your behaviours relevant to X, then you can’t learn to generate better behaviours wrt X.
Similarly: to do binary search, you must start by planting your flag at the exact middle of the possibility-range. You don’t have a sensor wrt the evidence unless you plant your flag down.
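Concretely, a throwaway sketch of the analogy (Python; the `is_too_low` predicate is a made-up stand-in, just for illustration): you commit to the exact midpoint, and each comparison against that commitment is the sensor that tells you which half of the range to keep.

```python
def binary_search(lo: int, hi: int, is_too_low) -> int:
    """Return the smallest value in [lo, hi] for which is_too_low is False,
    assuming is_too_low is True below some threshold and False from there on."""
    while lo < hi:
        mid = (lo + hi) // 2   # plant the flag at the exact middle of the range
        if is_too_low(mid):    # the "sensor": evidence about which half holds the answer
            lo = mid + 1       # answer lies above the flag
        else:
            hi = mid           # answer lies at or below the flag
    return lo

# E.g., the smallest x in [0, 100] with x**2 >= 2000:
print(binary_search(0, 100, lambda x: x * x < 2000))  # 45
```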
One plausible process-level critique is that… perhaps this was not actually your best effort, even within the constraints of producing a quick comment? It’s important to be willing to risk thinking & saying dumb things, but it’s also important that the mistakes are honest consequences of your best effort.
A failure-mode I’ve commonly inhabited in the past is to semi-consciously handicap myself with visible excuses-to-fail, so that if I fail or end up thinking/saying/doing something dumb, I always have the backup-plan of relying on the excuse / crutch. E.g.:
While playing chess, I would be extremely eager to sacrifice material in order to create open tactical games; and when I lost, I’d remind myself that “ah well, I only lost because I deliberately have an unusual playstyle; not because I’m bad or anything.”
Thanks a lot for your feedback!
Why do you think that data poisoning, scaling, and water scarcity are a distraction from issues like AI alignment and safety? Am I missing something obvious? Have conflicts over water happened too rarely (or not at all)? Can we easily deal with data poisoning and model scaling? Are AI alignment and safety that much bigger issues?
To clarify, I’m mainly just sceptical that water-scarcity is a significant consideration wrt the trajectory of transformative AI. I’m not here arguing against water-scarcity (or data poisoning) as an important cause to focus altruistic efforts on.
Hunches/reasons that I’m sceptical of water as a consideration for transformative AI:
I doubt water will be a bottleneck to scaling
My doubt here mainly just stems from a poorly-argued & uncertain intuition that other factors are more relevant. If I were to look into this more, I would try to find some basic numbers (the rough shape of the calculation is sketched below) about:
How much water goes into the maintenance of data centers relative to other things fungible water-sources are used for?
What proportion of a data center’s total expenditures are used to purchase water?
I’m not sure how these things work, so don’t take my own scepticism as grounds to distrust your own (perhaps-better-informed) model of these things.
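The arithmetic itself is trivial; all the work is in finding trustworthy inputs. Roughly the sketch below, where every number is a made-up placeholder rather than an estimate (the point is just which two ratios I’d want to know):

```python
# Back-of-envelope shape of the calculation. Every input is a placeholder,
# NOT a real figure; the outputs only illustrate the two ratios of interest.

datacenter_water_m3_per_year = 1e6       # placeholder: one data center's annual water use
regional_agri_water_m3_per_year = 1e9    # placeholder: agricultural water use in the same region

water_price_per_m3 = 1.0                 # placeholder: price the data center pays per cubic metre
datacenter_annual_opex = 1e8             # placeholder: the data center's total annual expenditures

share_of_regional_water = datacenter_water_m3_per_year / regional_agri_water_m3_per_year
share_of_own_spending = (datacenter_water_m3_per_year * water_price_per_m3) / datacenter_annual_opex

print(f"water use relative to regional agriculture: {share_of_regional_water:.2%}")
print(f"water as a share of the data center's own spending: {share_of_own_spending:.2%}")
```

If both ratios came out tiny, water probably isn’t the binding constraint on scaling; if either came out large, my first hunch above is probably wrong.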
Assuming scaling is bottlenecked by water, I think great-power conflicts are unlikely to be caused by it
Assuming conflicts do happen due to a water bottleneck, I don’t think this will significantly influence the long-term outcome of transformative AI
Note: I’ll read if you respond, but I’m unlikely to respond in turn, since I’m trying to prioritize other things atm. Either way, thanks for an idea I hadn’t considered before! : )
Sounds good! I’ll try it!