I’m a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
titotal
I think jailtime counts as social sanction!
I want to remind people that there are severe downsides of having these race and eugenics discussions like the ones linked on the EA forum.
1. It makes the place uncomfortable for minorities and people concerned about racism, which could someday trigger a death spiral where non-racists leave, making the place more racist on average, causing more non-racists to leave, etc.
2. It creates an acrimonious atmosphere in general, by starting heated discussions about deeply held personal topics.
3. It spreads ideas that could potentially cause harm, and lead uninformed people down racist rabbitholes by linking to biased racist sources.
4. It creates bad PR for EA in general, and provides easy ammunition for people who want to attack EA.
5. In my opinion, the evidence and arguments are generally bad and rely on flawed and often racist sources.
6. In my opinion, most forms of eugenics (and especially anything involving race) are extremely unlikely to be actually effective cause areas in the near future, given the backlash, unclear benefit, potential to create mass strife and inequality, etc.
Now, this has to be balanced against a desire to entertain unusual ideas and to protect freedom of speech. But these views can still be discussed, debated, and refuted elsewhere. It seems like a clearly foolish move to host them on this forum. If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake.
I think any AI that is capable of wiping out humanity on Earth is likely to be capable of wiping us out on all the planets in our solar system. Earth is far more habitable than those other planets, so colonies elsewhere would be correspondingly fragile and easier to take out. I don’t think the distance would be much of an advantage: a current-day spaceship takes only about ten years to reach Pluto, so the playing field is not very large.
I think your point about motivation is important, but it also applies within Earth. Why would an AI bother to kill off isolated Sentinelese islanders? A lot of the answers to that question (like it needs to turn all available resources into computing power) could also motivate it to attack an isolated Pluto colony. So if you do accept that AI is an existential threat on one planet, space settlement might not reduce it by very much on the motivation front.
I want to encourage more papers like this and more efforts to lay an entire argument for x-risk out.
That being said, the arguments are fairly unconvincing. For example, the argument for premise 1 completely skips the step where you sketch out an actual path for AI to disempower humanity if we don’t voluntarily give up power. “AI will be very capable” is not the same thing as “AI will be capable of conquering all of humanity with 100% certainty”; you need a joining argument in the middle.
Conferences are pretty great. In particular, chatting to people in person gives you a way of finding the information that isn’t optimised for in the journal publication system, such as the things someone tried that didn’t work out, or didn’t end up publishable.
I like encouraging outsiders to go to conferences, but I would strongly caveat that you should be an outsider who at least has some related expertise. If you go to a chemistry conference with no knowledge of chemistry (or overlapping fields like physics and materials science), the vast majority of talks and posters will be incomprehensible to you, and you won’t know enough to ask insightful questions. Even for an experienced insider, talks from a different subfield can be completely useless because you don’t have the necessary background knowledge to make sense of them.
I find the most interesting/valuable talks/posters are the ones that are in my field and share a bit with my research, but are off in a different direction, so I’m being exposed to very new ideas, but still have the background to engage.
Interesting! I’m glad to see engagement with Thorstadt’s work; this is an area I found myself less convinced by.
Interstellar colonisation is insanely difficult and resource intensive, so I expect any widespread dispersal of humanity beyond our solar system to be extremely far off in the future. If you think that existential risk is high, there may be only an extremely small chance we survive to that point.
I’m also not sure about your point on “misaligned AIs”. Firstly, this should be “extinctionist AIs” or something similar, as it seems very unlikely that all misaligned AIs would actively want to hunt down tiny remnants of humanity. But if they were out to kill us, why would they need a receiver? It’s far easier to send an automated killer probe long distances than to send a human colony, so it seems they’d be able to hunt down colonies physically if they needed to.
If you don’t think misalignment automatically equals extinction, then the argument doesn’t work. The neutral world is now competing with “neutral world where the software fucks up and kills people sometimes”, which seems to be worse.
In the 1990s and 2000s, many people, such as Eric Drexler, were extremely worried about nanotechnology and viewed it as an existential threat through the “gray goo” scenario. Yudkowsky predicted Drexler-style nanotech would arrive by 2010, using very similar language to what he is currently saying about AGI.
It turned out they were all being absurdly overoptimistic about how soon the technology would arrive, and the whole Drexlerite nanotech project flamed out by the end of the 2000s and has pretty much not progressed since. I think a similar dynamic playing out with AGI is less likely, but still very plausible.
A lot of people here donate to givedirectly.org, with the philosophy that we should let the world’s poorest decide where money needs to be spent to improve their lives. Grassroots projects like this seem like a natural extension of that, where a community as a whole decides where it needs resources in order to uplift everyone. I’m no GHD expert, and I would encourage an in-depth analysis, but it’s at least plausible that this could be more effective than GiveDirectly, as this project is too large to be paid for under that model.
Grassroots organising seems like a good idea in general: by cutting most of the westerners out of the process, the money goes into the third world economy. We could also see knock-on effects: maybe altruistic philosophy becomes more popular throughout Uganda, and they are more receptive to, say, animal rights later on in their development.
I think more cost-effectiveness estimates are a good idea, but EA has funded far more speculative and dubious projects in recent memory. I would encourage EA funders to give the proposal a fair shot.
I’m fine with CEAs; my problem is that this one seems to have been trotted out selectively in order to dismiss Anthony’s proposal in particular, even though EA discusses and sometimes funds proposals that make the supposed “16 extra deaths” look like peanuts by comparison.
The Wytham Abbey project has been sold, so we know its overall impact was to throw something like a million pounds down the drain (once you factor in stamp duty, etc.). I think it’s deeply unfair to frame Anthony’s proposal as possibly letting 16 people die, while not doing the same for Wytham, which (in this framing) definitively let 180 people die.
Also, the cost effectiveness analysis hasn’t even been done yet! I find it kind of suspect that this is getting such a hostile response when EA insiders propose ineffective projects all the time with much less pushback. There are also differing factors here worth considering, like helping EA build links with grassroots orgs, indirectly spreading EA ideas to organisers in the third world, etc. EA spends plenty of money on “community building”, would this not count?
The HPMOR thing is a side note, but I vehemently disagree with your analysis, and the initial grant, because the counterfactual in this case is not doing nothing: it’s sending them a link to the website where HPMOR is hosted for free for everybody, which costs nothing. Plus, HPMOR only tangentially advocates for EA causes anyway! A huge number of people have read HPMOR, and only a small proportion have gone on to become EA members. Your numbers are absurdly overoptimistic.
Okay, that makes a lot more sense, thank you.
I think the talk of transition risks and sail metaphors isn’t actually that relevant to your argument here? Wouldn’t a gradual and continuous decrease in state risk, like the Kuznets curve shown in Thorstadt’s paper here, have the same effect?
I guess at a very high level, I think: either there are accessible arrangements for society at some level of technological advancement which drive risk very low, or there aren’t. If there aren’t, it’s very unlikely that the future will be very large. If there are, then there’s a question of whether the world can reach such a state before an existential catastrophe.
This reasoning seems off. Why would it have to drive things to very low risk, rather than to a low but still significant level of risk, like we have today with nuclear weapons? Why would it be impossible to find arrangements that keep the level of state risk at, say, 1%?
AI risk thinking seems to have a lot of “all or nothing” reasoning that seems completely unjustified to me.
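To make the 1% point concrete, here’s a rough sketch of how a constant per-century risk compounds (the specific risk levels are made-up illustrative numbers, not anyone’s actual estimates). This is the crux of the disagreement: a persistent 1% per-century risk still drives long-horizon survival probability toward zero, which is why the astronomical-value arguments need risk to drop by orders of magnitude, not merely to “low”.

```python
# Survival probability over n centuries with a constant per-century
# existential risk r, assuming risks are independent across centuries.
def survival_prob(r: float, n_centuries: int) -> float:
    return (1 - r) ** n_centuries

# At 1% per century, survival decays quickly over long horizons:
print(survival_prob(0.01, 100))        # ~0.366 after 10,000 years
print(survival_prob(0.01, 1000))       # ~4.3e-5 after 100,000 years

# Only at something like 0.0001% per century does a million-year
# future remain likely:
print(survival_prob(0.000001, 10_000)) # ~0.990
```

Of course, this assumes the risk stays constant; the whole debate is about whether it does.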
I don’t like that this “converting to lives” thing is being done on this kind of post and seemingly nowhere else?
Like, if we applied it to the Wytham Abbey purchase (I don’t know if the 15 million figure is accurate, but whatever), that’s 2,700 people EA let die in order to purchase a manor house. Or what about the fund that gave $28,000 to print out Harry Potter fanfiction and give it to math olympians? That’s 6 dead children sacrificed for printouts of freely available fiction!
I hope you see why I don’t like this type of rhetoric.
Instead it’s saying that it may be more natural to have the object-level conversations about transitions rather than about risk-per-century.
Hmm, I definitely think there’s an object level disagreement about the structure of risk here.
Take the invention of nuclear weapons for example. This was certainly a “transition” in society relevant to existential risk. But it doesn’t make sense to me to analogise it to a risk in putting up a sail. Instead, nuclear weapons are just now a permanent risk to humanity, which goes up or down depending on geopolitical strategy.
I don’t see why future developments wouldn’t work the same way. It seems that since early humanity the state risk has only been increasing further and further as technology develops. I know there are arguments for why it could suddenly drop, but I agree with the linked Thorstadt analysis that this seems unlikely.
I think this sail metaphor is more obfuscatory than revealing. If you think that the risk will drop orders of magnitude and stay there, then it’s fine to say so, and you should make your object-level arguments for that. Calling it a transition doesn’t really add anything: society has been “transitioning” between different states for its entire lifetime, so why is this one different?
Thorstadt has previously written a paper specifically addressing the time of perils hypothesis, summarised in seven parts here.
One of the points is that just being in a time of perils is not enough to debunk his arguments: it has to be a short time of perils, and the time of perils ending has to drop the risk by many orders of magnitude. These assumptions seem highly uncertain to me.
I’m actually finishing up an article on this exact topic!
I’ll explain more there, but I think the major reason is this: If Leif Weinar didn’t hate EA, he wouldn’t have bothered to write the article. You need a reason to do things, and hatred is one of the most motivating ones.
I think this is just naive. People pay money and spend their precious time to go to these conferences. If you invite a racist, the effect will be twofold:
More racists will come to your conference.
More minorities, and people sympathetic to minorities, will stay home.
When this second group stays home (as is their right), they take their bold and unusual ideas with them.
By inviting a racist, you are not selecting for “bold and unusual ideas”. You are selecting for racism.
And yes, a similar dynamic will play out with many controversial ideas. Which is why you need to exit the meta level, and make deliberate choices about which ideas you want to keep, and which groups of people you are okay with driving away. This also comes with a responsibility to treat said topics with appropriate levels of care and consideration, something that, for example, Bostrom failed horribly at.
I do not think the rise of Nazi Germany had much to do with social “shunning”. Rather, it was a case of the economy being in shambles, both the far left and the far right wanting to overthrow the government and fighting physical battles in the street over it, until the right wing won enough of the populace over. I guess there was left-wing infighting between the communists and the social democrats, but that was less about “shunning” than about murdering the other side’s leaders.
I think intent should be a factor when thinking about whether to shun, but it should not be the only factor. If you somehow convinced me that a holocaust denier genuinely bore no ill intent, I still wouldn’t want them in my community, because it would create a massively toxic atmosphere and hurt everybody else. I think it’s good to reach out and try to help well-intentioned people see the errors of their ways, but it’s not the responsibility of the EA movement to do so here.
I think this is an indication that the EA community may have had a hard time seeing through tech hype. I don’t think this is a good sign now that we’re dealing with AI companies who are also motivated to hype and spin.
The linked idea is very obviously unworkable. I am unsurprised that Elon rejected it and that no similar thing has taken off. First, as usual, it could be done cheaper and easier without a blockchain. Second, Twitter would be giving people a second place to see its content where they don’t see Twitter’s ads, thereby shooting itself in the foot financially for no reason. Third, while Facebook and Twitter could maybe cooperate here, there is no point in an interchange between sites like TikTok and Twitter, as they are fundamentally different formats. Fourth, there’s already a way for people to share tweets on other social media sites: it’s called “hyperlinks” and “screenshots”. Fifth, how do you delete the bad tweets that are ruining your life if they remain permanently on the blockchain?