The implicit framing of this post was that, if individuals just got smarter, everything would work out much better. That is true to some extent. But I’m concerned this perspective overlooks something important: it’s very often clear what should be done for the common good, yet society doesn’t organise itself to do those things because many individuals don’t want to. For discussion, see the recent 80k podcast on institutional economics and corruption. So I’d like to see a bit more emphasis on collective decision-making, rather than just on individuals getting smarter.
This tension is one reason why I called this “wisdom and intelligence”, and tried to focus on that of “humanity”, as opposed to just “intelligence”, and in particular “individual intelligence”.
I think that “the wisdom and intelligence of humanity” is much safer to optimize than “the intelligence of a bunch of individuals in isolation”.
If it were the case that “people all know what to do, they just won’t do it”, then I would agree that wisdom and intelligence aren’t that important. However, I think these cases are highly unusual. From what I’ve seen, in most cases of “big coordination problems”, there are considerable amounts of confusion, deception, and stupidity.
Your initial point reminds me, in some sense, of Nick Bostrom’s orthogonality thesis, but applied to humans. Throughout history, high-IQ individuals have pursued completely different goals, so we can’t automatically assume that improving humanity’s intelligence as a whole would guarantee a better future for anyone.
At the same time, I think we can be fairly confident that a higher IQ level across humanity would at least enable more individuals to find solutions that minimize the risk of undesirable moral outcomes from the actions of highly intelligent but morally questionable individuals, while also letting us work more efficiently on other, more pressing problems.