I think this point basically comes down to “we don’t want to make the appearance of power-seeking worse.”
It’s not about appearances for me. A portion of EA is seeking power, by any normal definition of the word. In AI, we want to influence government policy on AI regulation and to influence the direction of AI companies and AI company research. This has resulted in EA people holding significant power in the AI space. I would guess that all of the top AI companies have a significant EA presence in some form or another.
Seeking power is not a bad thing. There will always be power in the world, and I would rather it be in the hands of good people with ethical goals. I personally want people who share my political ideals to seek and gain power so they can implement their beneficial policies.
The problem is that if all the people actually gaining power come from a highly privileged, homogeneous minority of the population, there is a high chance of blind spots, bias, and self-interest creeping in. This could easily lead to that power being deployed, even unintentionally, in service of the privileged minority at the expense of everyone else. This has happened so many times in history that it’s hard to count, and I think it’s naïve to pretend it couldn’t happen to us.
Another problem is that even if the elite were much more likely to have good ideas and other merits, the non-elite vastly outnumber them. By leaving them disempowered, we leave an enormous number of potential allies in making the best existence possible sitting on the sidelines.
And even where some of the disempowered masses do get funding or other power, it is typically by serving the priorities of this elite. At best, some ideas will reach the elite through those best able to navigate their social spaces and speak their language, but most of humanity’s potential value is likely squandered when we, the masses, routinely sacrifice most of our time and energy working for the man in our 9-to-5s.
EA has been great at identifying moral patients and determining cost-effective means to better their conditions.
I think, though, that EA has not been so great at empowering moral agents across the spectrum of alignment.