Unfortunately I think that a subject like bacteriology is more resistant to bad epistemics than something like AI alignment or effective altruism.
And I think this critique generalizes to a broader critique of EA: if you want to make a widget that’s 5% better, you can specialize in widget-making and then go home and believe in crystal healing and diversity and inclusion after work.
But if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies, because correct strategies for improving the world tend to require an accurate world model, including accuracy about things that are controversial.
Communism is perhaps the prime example of this from the 20th century: many people seriously believed that communism was good, and they believed it so strongly that they rejected evidence to the contrary. Entire continents were ravaged as a result.
HBD denial (race communism) is the communism of the present.
The ELYSIUM Proposal
> “group differences discourse is worth dying over,” … implicitly signalling that this is an important topic. But, as I argued, I don’t think it is
Human group differences are probably the most important topic in the world outside of AI/Singularity. The reason people are so keen to censor the topic is because it is important.
> I personally think the best way to make any sort of dialogue saner is NOT by picking the most controversial true thing you can think of
Making dialogue saner is a nice goal, but people can unilaterally make dialogue insane by demanding that a topic is banned, or that certain opinions are immoral, etc.
> I would trust an AI alignment researcher who supported Lysenkoism almost as much as an otherwise-identical seeming one who didn’t.
How far are you willing to go with this?
What about a researcher who genuinely believes all the most popular political positions:
- gender is a social construct and you can change your gender just by identifying differently
- affirmative action and diversity makes companies and teams more efficient and this is a solid scientific fact
- there are no heritable differences in cognitive traits, all cognitive differences in humans are 100% exclusively the result of environment
- we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits
- there’s a climate emergency that will definitely kill most people on the planet in the next 10 years if we don’t immediately change our lifestyles to consume less energy
- all drugs should be legal and ideally available for free from the state
Do you think that people who genuinely believe these things will create an intellectual environment that is conducive to solving hard problems?
Would you trust an AI alignment researcher who supported Lysenkoism in the era when it was popular in the Soviet Union?
> it’s more like maybe HBD and wokeism are two sides of a toxic dynamic where it would be better if we could get back to other concerns.
Unfortunately this “toxic dynamic” is also known as truth-based versus consequences-based epistemology, and that is a dynamic you absolutely cannot escape, because the ability to alter social consensus beliefs for the benefit of special interests is a generic problem. It will also pop up in the AI debate.
> If someone is interested in (3), they’ll hopefully understand that a lot of things that are pressing problems today will either no longer matter in 1-20 years because we’re all dead, or they’ll be more easily solvable with the help of aligned powerful AIs
Actually I think that people’s thinking about AI has become somewhat broken and I’m starting to see the same dark-side epistemology that gave us HBD-denialism seep into the AI community.
But, take a step back.
Suppose you have a community of biologists who all believe in Lysenkoism. Then, despite their repeated failures to improve crop yields, the country is bailed out by a large external food source.
Would you be willing to overlook their belief in Lysenkoism and have these people start working on cancer biology, aging and other areas?
Or, look at another example. You ask a self-professed mathematician whether he thinks that all continuous functions are differentiable. He says they are, and that it’s so obvious it requires no proof. Do you trust this mathematician to generally provide good advice?
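To spell out just how wrong that answer is: f(x) = |x| is continuous everywhere but not differentiable at 0, because the one-sided difference quotients disagree:

$$\lim_{h\to 0^-}\frac{|h|-|0|}{h}=-1\neq +1=\lim_{h\to 0^+}\frac{|h|-|0|}{h}.$$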
My point is that process matters in science and epistemology. You can’t sweep the bad process of creationists under the carpet and expect them to continue to produce good results on other issues. Their process is broken.
> How is HBD action-relevant for EA in a pre-AGI world?
I don’t think it is, because I think AI will replace humans in all economic roles within 5-15 years. But I think the same dark-side intellectual tactics that gave rise to HBD-denialism will contaminate our thinking about AI, just in different ways.
If the Effective Altruism movement turns into a human biodiversity denial movement, the harm from that will definitely outstrip all the potential good it could do, with the possible exception of AI alignment.
Human biodiversity and human capital are the hidden variables that drive almost all outcomes in our world. They are not really something you can adopt beliefs about in an unrigorous, signaling-driven manner and still expect to be right about the world. I would encourage people to read about Lysenkoism to see what happens when ideology overrides science as the basis for epistemology.
https://en.wikipedia.org/wiki/Lysenkoism
Apparently nobody else can do any better. Anyway, the community seems somewhat insane about this, like it’s a sacred subject that we dare not do a quick summary of.
ok. So the total comp really is 75k.
But it includes accommodation within that?
> would naturally include integration with existing social systems
I wouldn’t limit AI goalcraft to integration with existing social systems. It may be better to use the capabilities of AI to build fundamentally better preference aggregation engines. That’s the idea of CEV and its ilk.
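To make “preference aggregation” concrete at the most basic level, here is a toy sketch of the simplest kind of aggregator, a Borda count over ranked ballots. This is purely illustrative: CEV-style proposals are about extrapolating idealized preferences rather than counting votes, and the agents, options, and rankings below are made up.

```python
# Toy preference aggregation: a plain Borda count over ranked ballots.
# Illustrative only; not a model of CEV or any real proposal.
from collections import defaultdict

def borda(ballots: list[list[str]]) -> dict[str, int]:
    """A ballot ranks options best-to-worst; the top pick of an n-option
    ballot earns n-1 points, the last pick earns 0."""
    scores: dict[str, int] = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, option in enumerate(ballot):
            scores[option] += n - 1 - rank
    return dict(scores)

# Hypothetical agents ranking three policy options best-to-worst.
ballots = [
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "C", "B"],
]
print(borda(ballots))  # {'A': 5, 'B': 3, 'C': 1}
```

The appeal of “fundamentally better engines” is precisely that an AI need not be restricted to fixed voting rules like this one.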
“AI Alignment” is a Dangerously Overloaded Term
Apparently 77 people chose to downvote this without offering an alternative 100-word summary.
In what way, specifically?
Does someone have a 100-word summary of the whole affair?
My impression is that two Nonlinear employees were upset that they weren’t getting paid enough and had hurt feelings about some minor incidents, like not getting a veggie burger, so they wrote some mean blog posts about the Nonlinear leadership; the Nonlinear leadership responded that actually they were getting paid enough (it seems to amount to something like $100k/yr all in) and that they’d mostly made it up.
Is that accurate?
Long post on eugenics, −1 points right now and lots of comments disagreeing.
Looks like this is a political battle; I’ll skip actually reading it and note that these kinds of issues are not decided rationally but politically. EA is a left-wing movement, so eugenics is axiomatically bad.
From a right-wing point of view one can even see it as a good thing that the left is irrational about this kind of thing, it means that they will be late adopters of the technology and fall behind.
There’s lots of helium in the solar system. You could harvest the atmospheres of Jupiter or Saturn for it. Obviously this has an energy cost but, as stated in the article, energy is the only real resource that we use up. Everything else is renewable.
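As a rough back-of-the-envelope estimate (my own numbers, not the article’s): lifting harvested gas out of Jupiter’s gravity well costs at least the escape energy per unit mass,

$$\frac{E}{m}=\tfrac12 v_{\mathrm{esc}}^2\approx\tfrac12\,(5.95\times10^{4}\ \mathrm{m/s})^2\approx 1.8\ \mathrm{GJ/kg}\approx 490\ \mathrm{kWh/kg},$$

before separation and transport costs, which is a lot of energy per kilogram but still just energy.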
You would miss out on the “Crossposted from LessWrong. Click to view X comments.” button at the bottom, but you can also add an identical comment manually.
Oh, I see. So there isn’t much difference.
If you mean some parts of technical alignment, then perhaps that’s true, but I mean alignment in the broad sense of creating good outcomes.
> AI researchers are more in the former category
Yeah, but people in the former category don’t matter much in terms of outcomes. Making a better sparse autoencoder won’t change the world at the margin, just like technocrats working to make Soviet central planning better ultimately didn’t change the world, because they were making incremental progress in the wrong direction.