I’m worried that a lot of these “questions” seem like you’re trying to push a belief, but phrasing it as a question in order to avoid actually providing evidence for that belief.
Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a promising avenue for improving the quality of AI safety research?
First, AI safety people here tend to think that super-AI is imminent within a decade or so, so none of this stuff would kick in in time. Second, this stuff is a form of eugenics, which has a fairly bad reputation and raises thorny ethical issues even divorced from its traditional role in murder and genocide. Third, it’s all untested and based on questionable science, and I suspect it wouldn’t actually work very well, if at all.
Has anyone considered the possible perverse incentives facing the aforementioned CEA Community Health team, in that they may be motivated to exaggerate problems in the community to justify their own existence? If so, what makes CEA as a whole think that their continued existence is worth the cost?
Have you considered that the rest of EA is incentivised to pretend there aren’t problems in EA, for reputational reasons? If so, why shouldn’t community health be expanded instead of reduced?
This question is basically just a baseless accusation rephrased into a question in order to get away with it. I can’t think of a major scandal in EA that was first raised by the community health team.
Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the “TESCREAL” conspiracy theory and antisemitic conspiracy theories?
Because this is a dumb and baseless parallel? There’s a lot more to antisemitic conspiracy theories than “powerful people controlling things”. In fact, the general accusation used by Torres is to associate TESCREAL with white supremacist eugenicists, which feels kinda like the opposite end of the scale.
Why aren’t there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.
Because this is a terrible idea, and on multiple occasions has already led to harmful cult-like organisations. AI safety people have already spilled a lot of ink about why a maximising AI would be extremely dangerous, so why the hell would you want to do maximising yourself?
First off, I specifically spoke to the LessWrong moderation team in advance of writing this, with the intention of rephrasing my questions so they didn’t sound like I was trying to make a point. I’m sorry if I failed in that, but making particular points was not my intention. Second, you seem to be taking a very adversarial tone toward my post, even though an adversarial tone was not my intention.
Now, on to my thoughts on your particular points.
I have in fact considered that the rest of EA is incentivized to pretend that there aren’t problems. In fact, I’d assume that most of EA has. I’m not accusing the Community Health team of causing any particular scandal; just of broadly introducing an atmosphere where comparatively minor incidents may potentially get blown out of proportion.
There seem to be clear and relevant parallels here. Seven of the fifteen people named as TESCREALists in the First Monday paper are Jewish, and many stereotypes attributed to TESCREALists in this conspiracy theory (victimhood complex, manipulating our genomes, ignoring the suffering of Palestinians) line up with antisemitic stereotypes and go far beyond just “powerful people controlling things.”
I want to do maximizing myself because I was under the impression that EA is about maximizing. In my mind, if you just wanted to do a lot of good, you’d work in just about any nonprofit. In contrast, EA is about doing the most good that you can do.
Don’t forget that maximizing is perilous.
I understand that it’s perilous, but so is donating a kidney, and a large number of EAs have done that anyway.