I’ve been growing skeptical that we will make it through AI, due to
1) civilizational competence (or rather, the lack of it), and
2) the apparent fact that all human cognition is built on largely subjective metaphors and radial categories with arbitrary internal asymmetries, which we have no chance of teaching a hand-coded AI in time.
This is on top of all the other impossibilities (solving morality, consciousness, the grounding problem, or at least their substitute: value loading).
So it seems more and more to me that we have to go with the forms of AI that have some small chance of converging naturally toward human-like cognition, such as neuromorphic AI or whole brain emulation (WBE). Since those are already low-probability paths to begin with (see e.g. Superintelligence), my growing impression is that we are very, very likely doomed.
So far the arena of people doing control-problem-related work has been dominated by pessimists (say, people who think we have less than an 8% chance of making it through). Over time, as more people join, it is likely that more optimists will join too. How will that affect our outcomes? Are optimists more likely to underestimate important strategic considerations?
A separate question: what does EA look like in a doomed world? Suppose we knew for certain that AGI would destroy life on Earth; what are the most altruistic actions we could take between now and then? Is postponing the end by a few days more valuable than donating to effective sub-Saharan charities?
These thoughts are not fully formed, but I wanted people to give their own opinions on these issues.
It makes sense that the earliest adopters of the idea of existential risk are more pessimistic and risk-aware than average. It's good to attract optimists, both because it's good to attract anyone and because optimistic rhetoric might help drive political change.
I think it would be pretty hard to know with probability >0.999 that the world was doomed, so I’m not that interested in thinking about it.
The underlying assumption is that, for many people, working on probability shifts that lie between 0 and 1 percent is not desirable. They would be willing to work for the same one-point shift if it were between, say, 20 and 21 percent, but not if the baseline is too low. This is an empirical fact about people; I'm not claiming it is a morally relevant fact.
Yeah, so if it started to look like the world was doomed, then fewer people would work on x-risk. True.