The lists were interesting in how they allude to the different psychologies and motivations of EAs in the two camps. I hope someday I can have a civil discussion with someone who isn't directly benefiting from AIA (e.g., by being involved in the research). As an aside, I have a friend who's crazy about futurism and 2045 Initiative propaganda, and in love with everything Musk says on Twitter.
"The idea of 'helping the worst off' is appealing." Why wouldn't it be? See the Copenhagen Consensus.
"Their reaction when they look at extinction risk or AI safety is nonsensical": it's imaginary and completely unknown, with zero tractability. There's no evidence to go on, since such technology does not exist. Why give to CS grad students? It's like trying to fund a mission to Mars: not a priority. It's like funding time-travel safety research: a non sequitur.
"They are generally an unhappy person." I just had to laugh and compare how someone interested in AI safety matches up. It's a neo-Freudian, Jung/MBTI type of deal, almost like zodiac signs. Then again, the Minnesota Multiphasic Personality Inventory (MMPI) is rigorous, so who am I to judge this informal inventory?
Anyway, I simply do not see how individual action or donations to AIA research have measurable outcomes. We're talking about Strong AI here; it doesn't even exist! Not that it couldn't, though. In the future, even the medium-term future, general standards of living could be significantly improved. Synthetic meat at production scale is a much more realistic research area (or even anti-malaria mosquitoes) than making a fuss about imaginary, theoretical events. We're at a unique sliver in time where it is extremely practical to lessen the suffering of humans and animals in the near and medium term (i.e., we have rapid transportation and instant information transfer).
Just because an event is theoretical doesn't mean it won't occur. An asteroid hitting the Earth is theoretical, but I think you'd agree it becomes quite real when it impacts.
Some say that superintelligence doesn't have precedent, but I think that's overlooking a key fact. The rise of Homo sapiens has radically altered the world, and all signs point toward intelligence as the cause. Our current understanding is that intelligence is just a matter of information processing, so there should be a way for our own computers to do it some day, if only we figure out the right algorithms to implement.
If we learn that superintelligence is impossible, that means our current best scientific theories are wrong, and we will have learned something new: it would indicate that humans are somehow cosmically special, or at least have hit the ceiling for general intelligence. On the flip side, if we do create superintelligence, none of our current theories about how the world operates has to be wrong.
That's why it's important to take this seriously: the best evidence we have available tells us that it's possible, not that it's impossible.