To the title question: no, it's not, for we have not observed any superintelligent behavior.
“Intelligence” does not necessarily need to have anything to do with “our” type of intelligence, where we steadily build on historic knowledge; indeed this approach naturally falls prey to preferring “hedgehogs” (as compared to “foxes” in the hedgehogs-vs-foxes comparison in Tetlock’s “Superforecasting”)
Foxes absolutely build on historic knowledge. “Our” (i.e. human) intelligence can be either foxlike or hedgehoglike, after all both of them were featured in Tetlock’s research, and in any case this is not what FHI means by the idea of a unitary superintelligent agent.
- who are worse than random at predicting the future;
Hedgehogs are not worse than random at predicting the future; in Tetlock’s research they were only modestly less accurate than foxes.
AI has already far surpassed our own level of intelligence
Only in some domains, and computers have been better at some domains for decades anyway (e.g. arithmetic).
this represents a real, deep, and potentially existential threat that the EA community should take extremely seriously.
The fact that corporations maximize profit is an existential threat? Sure, in a very broad sense it might lead to catastrophes. Just like happiness-maximizing people might lead to a catastrophe, career-maximizing politicians might lead to a catastrophe, security-maximizing states might lead to a catastrophe, and so on. That doesn’t mean that replacing these things with something better is feasible, all things considered. And we can’t even talk meaningfully about replacing them until a replacement is proposed. AFAIK the only way to avoid profit maximization is to put business under public control, but that just replaces profit maximization with vote maximization, and possibly creates economic problems too.
It is also at the core of the increasingly systemic failure of politics
Is there any good evidence that politics is suffering an increasing amount of failure, let alone a systemic one?
Before you answer, think carefully about all the other 7 billion people in the world besides Americans/Western Europeans. And what things were like 10 or 20 years ago.
this is particularly difficult for the EA community to accept given the high status they place on their intellectual capabilities
I don’t know of any evidence that EAs are irrational or biased judges of the capabilities of other people or software.
potentially the prime purpose of politics should thus be to ensure that corporations act in a way that is value aligned with the communities they serve, including international coordination as necessary.
The prime purpose of politics is to ensure security, law, and order. This cannot be disputed, as every other policy goal is impossible until governance is achieved, and anarchy is the worst state of affairs for people to be in. Maybe you mean that the most important political activity for EAs, at the margin right now, is to improve corporate behavior. Potentially? Sure. In reality? Probably not, simply because there are so many other possibilities that must be evaluated as well: foreign aid, defense, climate change, welfare, technology policy, etc.