This is excellent. Personally, (3) does everything for me. I don’t need to think I’m especially clever if I think I’m ok being dumb. I’m not causing harm if I express my thoughts, as long as I give people the opportunity to ignore or reject me if they think I don’t actually have any value to offer them. Here are some assorted personal notes on how being dumb is ok, so you don’t need to be smart in order not to worry about it.
Exhibit A: Be conspicuously dumb as an act of altruism!
It must be ok to be dumber than average in a community, otherwise it will iteratively evaporate half its members until only one person remains: each time the below-average half feels pressured to leave, the average recomputes over whoever is left, and a new below-average half appears. If a community is hostile to the left half of the curve, the whole community suffers. And the people who are safely in the top 10% are only “safe” because the dumber people stick around.
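(A toy sketch of that dynamic, entirely my own illustration with made-up “smartness” scores: if everyone below the current community average leaves each round, the average keeps recomputing and the group collapses to a single member in roughly log2(n) rounds.)

```python
import random

# Toy illustration (hypothetical scores, not data): if being below the
# community average is enough to push you out, the average recomputes
# over whoever remains, so a new below-average half appears every round.
def rounds_until_one(n=1024, seed=0):
    rng = random.Random(seed)
    members = [rng.gauss(100, 15) for _ in range(n)]  # made-up "smartness" scores
    rounds = 0
    while len(members) > 1:
        avg = sum(members) / len(members)
        members = [m for m in members if m >= avg]  # everyone below average leaves
        rounds += 1
    return rounds

print(rounds_until_one())  # 1024 people collapse to 1 in roughly log2(n) rounds
```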
So if you’re worried about being too dumb for the community… consider that maybe you’re actually just contributing to lowering the debilitating pressure felt by the community as a whole. Perhaps even think of yourself as a hero, shouldering the burden of being dumber-than-average so that people smarter than you don’t have to. Be conspicuously safe in your own stupidity, and you’re helping others realise that they can be safe too. ^^
Exhibit B: Naive kindness perpetuates shame
Self-fulfilling norm tragedies: when the very mechanism by which good people naively try to make something better ends up making it worse instead.
1. No one wants intelligence to be the sole measure of a human’s worth. Everyone affirms that “all humans are created equal.”
2. Everyone worries that other people think dumb people are worth less because they’re dumb.
3. So everyone also worries that other people will think they think that dumb people are worth less. They don’t want to be seen as offensive, nor do they want to accidentally cause offense. They want to be good and be seen as good.
4. That’s why they’re overly cautious about even speaking about dumbness, to the point of pretending it doesn’t even exist. (Remember, this follows from their kind motivations.)
5. But by being overly cautious about speaking about dumbness, and by pretending it doesn’t exist, they’re also unwittingly reinforcing the impression that dumbness is shamefwl. Heck, it’s so shamefwl that people won’t even talk about it!
You can find similar self-reinforcing patterns for other kinds of discrimination/prejudices. All of it seems to share a common solution: break down barriers to talking openly about so-called “shamefwl” things. I didn’t say it was easy.
Exhibit C: Why I use the word “dumb”
I’m in favour of using the word “dumb” as a non-derogatory antonym of “smart”.
The way society is right now, you’d think the sole measure of human worth is how smart you are. My goal here is to make it feel alright to be dumb. And a large part of the problem is that no one is willing to point at the thing (dumbness) and treat it as a completely normal, mundane, and innocuous part of everyday life.
Every time you use an obvious euphemism for it like “less smart” or “specialises in other things”, you are making it clear to everyone that being dumb is something so shamefwl that we need to pretend it doesn’t exist. And sure, when you use the word “dumb” instead, someone might misunderstand and conclude that you think dumb people are bad in some way. But euphemisms *guarantee* that people learn the negative association.
Compare it to how children learn social norms. The way to teach your child that being dumb is ok is to actually behave as if that’s true, and euphemisms are doing the exact opposite. We don’t use “not-blue” to refer to brown eyes, but if we did, you can be sure children would try to pretend their eyes were blue.
Exhibit D: You need a space where you can be dumb
Where’s the space in which you can speak freely, ask dumb questions, reveal your ignorance, display your true stupidity? You definitely need a space like that. And where’s the space in which you must speak with care, try to seem smarter and more knowledgeable than you are, and impress professionals? Unfortunately, this too becomes necessary at times.
Wherever those spaces are, keep them separate. And may the gods have mercy on your soul if you only have the latter.
Sounds good! I’ll try it!
Ahem. What if AGI won’t be developed with current ML techniques? Data poisoning is a thing: AI models need a lot of data, AI-generated content accumulates on the internet, and when models are trained on that data they start to perform worse. There’s also an issue with scaling: to make an AI model marginally better you need to scale it exponentially. AI models sit in data centers that need to be cooled with water, and building microprocessors consumes a lot of water too, both in mining the metals and in manufacturing. That water is scarce and is also needed for agriculture, and once it has been used for producing microprocessors it can’t really be used for other stuff. This means there might be resource constraints on building better AI models, especially if AI becomes monopolized by a few big tech companies (open-source models seem smaller, and you can develop one on a PC). Maybe AI won’t be a big issue, unless wealthy countries wage wars over water availability in poorer countries. But I didn’t put any effort into writing this comment, so I’m wrong with a probability of 95% +- 5%. There you have it, I just wrote a dumb comment. Yay for dumb space!
I think this is 100% wrong, but 100% the correct[1] way to reason about it!
I’m pretty sure water scarcity is a distraction wrt modelling AI futures; but it’s best to just assert a model to begin with, and take it seriously as a generator of your own plans/actions, just so you have something to iterate on. If you don’t have an evidentially-sensitive thing inside your head that actually generates your behaviours relevant to X, then you can’t learn to generate better behaviours wrt X.
Similarly: To do binary search, you must start by planting your flag at the exact middle of the possibility-range. You don’t have a sensor wrt the evidence unless you plant your flag down.
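(A minimal sketch of the analogy, standard binary search rather than anything from the original comment: committing to the midpoint is what lets each piece of evidence, “too high” or “too low”, actually move you.)

```python
# Binary search: plant your flag at the middle, then let the evidence move it.
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2              # commit to the middle of the current range
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1                  # evidence says "higher": shift the range up
        else:
            hi = mid - 1                  # evidence says "lower": shift the range down
    return -1                             # target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # -> 3
```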
One plausible process-level critique is that… perhaps this was not actually your best effort, even within the constraints of producing a quick comment? It’s important to be willing to risk thinking & saying dumb things, but it’s also important that the mistakes are honest consequences of your best effort.
A failure-mode I’ve commonly inhabited in the past is to semi-consciously handicap myself with visible excuses-to-fail, so that if I fail or end up thinking/saying/doing something dumb, I always have the backup-plan of relying on the excuse/crutch. E.g.:
While playing chess, I would be extremely eager to sacrifice material in order to create open tactical games; and when I lost, I reminded myself that “ah well, I only lost because I deliberately have an unusual playstyle; not because I’m bad or anything.”
Thanks a lot for your feedback!
Why do you think that data poisoning, scaling, and water scarcity are a distraction from issues like AI alignment and safety? Am I missing something obvious? Have conflicts over water happened too rarely (or not at all)? Can we easily deal with data poisoning and model scaling? Are AI alignment and safety that much bigger issues?
To clarify, I’m mainly just sceptical that water-scarcity is a significant consideration wrt the trajectory of transformative AI. I’m not here arguing against water-scarcity (or data poisoning) as an important cause to focus altruistic efforts on.
Hunches/reasons that I’m sceptical of water as a consideration for transformative AI:
1. I doubt water will be a bottleneck to scaling. My doubt here mainly just stems from a poorly-argued & uncertain intuition about other factors being more relevant. If I were to look into this more, I would try to find some basic numbers about (a rough sketch of the arithmetic follows this list):
   - How much water goes into the maintenance of data centers, relative to the other things fungible water-sources are used for?
   - What proportion of a data center’s total expenditures is used to purchase water?
   I’m not sure how these things work, so don’t take my own scepticism as grounds to distrust your own (perhaps better-informed) model of these things.
2. Assuming scaling is bottlenecked by water, I think great-power conflicts are unlikely to be caused by it.
3. Assuming conflicts do happen due to the water-bottleneck, I don’t think this will significantly influence the long-term outcome of transformative AI.
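(A rough back-of-envelope sketch of how those two numbers would combine; every value below is an obvious placeholder to be replaced with real data, not an actual statistic — only the arithmetic is the point.)

```python
# Back-of-envelope sketch: all numbers are placeholders, not real figures.
water_spend_per_year   = 1.0    # hypothetical: a data center's annual water bill ($M)
total_opex_per_year    = 100.0  # hypothetical: that data center's total annual spend ($M)
datacenter_water_use   = 1.0    # hypothetical: regional water used by data centers (any unit)
agricultural_water_use = 100.0  # hypothetical: regional water used by agriculture (same unit)

# If water is a tiny share of costs, it's unlikely to be the binding constraint on scaling;
# if data centers use little water relative to agriculture, it seems a weak driver of conflict.
print(f"water as share of data-center spend: {water_spend_per_year / total_opex_per_year:.1%}")
print(f"data-center vs agricultural water:   {datacenter_water_use / agricultural_water_use:.1%}")
```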
Note: I’ll read if you respond, but I’m unlikely to respond in turn, since I’m trying to prioritize other things atm. Either way, thanks for an idea I hadn’t considered before! : )