An eccentric dreamer in search of truth and happiness for all. I formerly posted on Felicifia back in the day under the name Darklight and still use that name on Less Wrong. I’ve been loosely involved in Effective Altruism to varying degrees since roughly 2013.
Joseph_Chu
I’ll admit that a first strike is still probably in the calculus of any serious strategic considerations by China. I’m just suggesting there are political/cultural reasons why they might hesitate to take such a drastic action. There’s also the historical fact that the last time someone (the Imperial Japanese) tried this on the U.S. (Pearl Harbor), it ended up being disastrous for them.
Also, given that your own estimates put U.S. intervention at about 50%, assuming Chinese planners have similar estimates, they might be thinking about rolling the dice on this with something like a limited blockade to test the waters of a U.S. response, and try to avoid getting the U.S. actively involved (similar to how Ukraine is currently playing out). An outright first strike would remove this possibility, guaranteeing U.S. involvement from the get go.
I should clarify that I don’t think we should abandon deterrence entirely. My thoughts are more that we need to balance the tradeoffs and consider the strategic situation carefully. In the limit, having an obviously insurmountable defensive force to deter an attack would be ideal, but we realistically can’t get there without a massive effort that will alarm China and likely accelerate the schedule for an attack. What we probably would prefer is something along the lines of tit-for-tat increases in military strength that keep the gap from either closing or opening up more (and potentially offer the possibility of mutual reductions and de-escalation). This, I think, encourages China to wait for an opportunity that may (hopefully) never come.
I also think, given China’s industrial capacity, that trying to outpace them is unlikely to succeed anyway. China has been building ships far faster than the U.S., much less Taiwan, is able to. The U.S. especially has aging shipyards and ships that are getting older every day, with overbudget projects like the Zumwalt and the cancelled Littoral Combat Ship showing how problematic things have become.
There are things like the Porcupine Strategy proposal to abandon offensively capable weapons like F-16s and Abrams tanks for Taiwan in favour of more defensively oriented and far cheaper weapons like man portable Stinger and Javelin missiles and lots of drones. I do think there’s some merit to this idea, particularly since it wouldn’t require as much buildup time (you might even be able to smuggle them in before anyone realizes it), which makes it less likely that China will see a “window of opportunity” before the buildup is complete. (Edit: I double checked and realized you do mention this idea already, so apologies for not noticing that earlier.)
In general, regarding EA involvement in this, I think a lot of thought has already been put into these concerns by people within the NatSec establishment, such that I’m not sure what EAs can actually add to the equation. As you mentioned, it certainly isn’t a neglected cause area. The gain from adding some EA-affiliated money or people to this equation seems unlikely to me to be worth the potential alienation of China. Again, I’m thinking about wanting to coordinate on other risks, and also about EAs and their orgs in China, who already have a hard enough time as is. Encouraging some EAs to get involved in NatSec is already somewhat done (notably, past EA Global conferences have been held in Washington D.C.). The people who don’t like EA already think we’re too cozy with the establishment, and this would likely give the critics yet more fodder.
Edit:
Just wanted to add, I do appreciate that your analysis was very thorough and probably took a lot of work to put together. Thanks for putting this together! Even if I’m somewhat critical in some parts, I think, overall, it’s a thoughtful and well presented set of arguments.
Also, I mentioned the “good guys” thing in part because a lot of people I’ve debated about these issues with in the past (particularly on the sordid place that was Twitter), had a kind of caricatured view of China as this mindless dystopia of sorts. I also appreciate that you discussed things like the Chinese Civil War and historical causes and the KMT, which are often left out when talking about the Taiwan situation (which in news media and social media arguments is often framed in a way that makes it seem like China is just being expansionist).
I remember hearing from my dad, who served in the ROC army some time around the 1970s, that conscription used to be two years of service. According to Wikipedia, this got reduced because they were trying to switch to an all-volunteer force in the 2010s, and was only recently restored due to fears that an invasion was becoming imminent.
There is actually already a new TV drama that resembles this.
I’m not sure about it being equivalent to a boy scout camp, but there are some historical reasons why the Taiwan military has relatively low morale.
The main one is that the military has strong historical ties to the Chinese Nationalist Party aka the Kuomintang (KMT) that previously ruled China and then Taiwan as the Republic of China (ROC). Taiwan is still officially the ROC, and the military forces are still technically called the “Republic of China Armed Forces” and such. Generalissimo Chiang Kai-shek basically ran these forces as his personal army, or at least, the army of the KMT party (similar to how the PLA is the army of the CCP). As such, the leadership of the Taiwan military is often considered very pro-KMT, and have close ties with the party.
Given that the current government of Taiwan is the pro-independence Democratic Progressive Party (DPP), and the KMT are in opposition, this puts the generals who feel loyal to the KMT and the Chinese nationalism they stand for, in an awkward position. The DPP has apparently tried to purge some of these generals, but the historical ties would make this difficult to achieve completely.
The KMT today is probably friendlier to the CCP than they are to the DPP, to the point that a retired general previously encouraged reunification with the mainland and the overthrow of the DPP government. This kind of “fifth column” behaviour is unique to the historical circumstances of Taiwan, and probably contributes greatly to low morale within the armed forces.
Also, unlike Ukraine, where people speak Ukrainian, which is distinct from Russian, the people of Taiwan speak a mix of Mandarin and Taiwanese (a Chinese dialect), which arguably makes them more vulnerable to Mainland Chinese propaganda, as well as infiltration by Chinese spies.
While I agree that it’s not “elitist” in the sense that anyone can put forward ideas and have them considered by significant people in the community (which I think is great!), I would say there are still some expectations that need to be met: the “good idea” generally must accept several commonly agreed-upon premises that represent what I’d call the “orthodoxy” of EA / rationality / AI safety.
For instance, I noticed way back when I first joined Less Wrong that the Orthogonality Thesis and Instrumental Convergence are more or less doctrines, and challenging them tends to receive a negative or at best lukewarm reception, or at least demands that a very strong case be made.
In that sense, there is still a bit of elitism, in that some of the ideas of the sorta co-founders of the movements, like Eliezer Yudkowsky, Nick Bostrom, and Will MacAskill, are likely to be treated with notably more deference. It used to be the case, for instance, that on Less Wrong, posts that challenged these views would lead to somewhat dismissive “Read The Sequences” responses, although I admit I haven’t seen that particular refrain in quite a while, so the community might have improved in this regard.
And admittedly, I do think compared to say, most political ideological movements and religions, the EA / rationality / AI safety communities are MUCH more open to dissenting views and good productive debates. This is one thing I do like about us.
Regarding EAs already working on this kind of thing, there are several EAs or people associated with us at the RAND corporation, including the CEO, Jason Matheny. CSET is also associated with EA, as they received a lot of money from Open Philanthropy in the past.
Personally, I would be somewhat hesitant to have EA openly supporting the deterrence policy for Taiwan like you suggest. EA is already seen as somewhat biased towards western liberalism, and taking such a provocative move could hurt the few EA orgs and people that exist in China, not to mention make it harder for us to dialogue and cooperate with Chinese orgs that are sympathetic to things like AI safety and governance.
A while back, 80,000 Hours included China experts among its recommended career paths. I think cooperation and coordination with China on various other existential risks, particularly trying to avoid AI arms race dynamics, is very, very important. I don’t know how you’d weigh that against the risks associated with Taiwan, though.
(Full Disclosure: I have relatives living in Taiwan right now, my parents are from there, my paternal grandparents left China when the Communists took over, and I’m married to a Chinese national. I am almost certainly biased about this, but I also follow the situation very closely for obvious reasons.)
Edit:
Some further analysis and comment:
Right now, my impression from speaking to Chinese nationals is that China thinks of itself as on the rise and American power on the decline (whether this is actually true or propaganda is beside the point). For them, waiting longer increases their odds of success, so they are incentivised to wait and bide their time. If Taiwan/US power suddenly surged and showed signs of outpacing China, this would create alarm and possibly push the timetable for an invasion forward. A military buildup in Taiwan could, instead of ensuring deterrence, lead to an arms race.
Chinese nationalism has a narrative of the “Century of Humiliation” and wanting to restore former glory. But with regards to Taiwan, the Chinese see them as brothers and sisters who have fallen under American hegemony, not as an enemy to be destroyed. For Xi Jinping, the calculus is that a successful takeover of Taiwan needs to have a minimum of casualties. The one-child policy means many families have only one son, and many of them dying in battle could be quite destabilizing for Xi’s government. The military has not been tested in a very long time. A failed invasion would be another historical humiliation, so they have an incentive to be cautious and only act in the best possible circumstances for them.
The CCP also relies heavily on continued prosperity for its legitimacy among the populace. Even just a blockade could severely curtail global trade and disrupt the livelihoods of most Chinese citizens in a way not seen in generations. You may be cynical about the regime, but there are likely at least some people within the party and the system who earnestly believe in ensuring China prospers in the long run, and many of them could be doves who don’t want war with the west. Arguably, we should be giving those people more fodder rather than the hawks, who could spin a military buildup as a threat to China’s strategic interests.
The only really good solution to the Taiwan situation is probably for Chinese leadership to have a change of heart and de-escalate by finally signing a peace treaty with the Taiwan government (they are technically in a frozen conflict and the civil war never actually ended officially). This is currently extremely unlikely due to the politics involved. But some kind of agreement between Beijing and Taipei to accept the status quo would be the next best thing. This direction would involve rapprochement. The longer we can stall things out, the more chance for the old guard to be replaced by a younger generation who might be more open to closing the door on past grievances.
What China really wants is peaceful reunification. Making it seem like that might be possible would give the CCP a reason they can give their domestic audience to continue waiting while they negotiate. Admittedly, the Taiwanese, as polled, are currently not interested in any such arrangement. What could happen in the future, though, is that China could offer an ultimatum of peaceful or violent reunification in a way that makes the peaceful option at least somewhat enticing (admittedly hard after Hong Kong). A friendlier KMT Taiwanese government might even take that option. It would not be ideal for the liberal democracy there, but at least there wouldn’t be war (assuming no insurgency).
My preferred, but super unlikely to happen, option would be for China and Taiwan to agree to a referendum of the Taiwanese people to decide whether to join China or have true independence, respecting their Right to Self-Determination. In reality, an attempt to do this without Chinese support would almost certainly trigger a military response, so I’m not hopeful this will ever happen, at least with the current generation of leadership.
Edit 2:
A further point regarding the likelihood of a Chinese first strike on the U.S. I see this as very unlikely, as the Chinese do not see themselves as aggressors. The mentality of most Chinese I meet is that China is generally peaceful and mostly interested in their internal affairs. Taiwan, for them, has a casus belli in the unfinished business of the Chinese Civil War. It would be much harder to justify a pre-emptive strike on the U.S.
While there’s a tendency in the west to view the CCP as a monolithic power-seeking entity, my observation from afar is that there seems to be a mixture of ideological and pragmatic reasoning within the party, and many people still see themselves as part of the socialist “good guys” standing against western imperialism. It doesn’t fit their narrative to attack first. China has a nuclear no-first-use policy; even the U.S. doesn’t have that. If I had to predict the most likely action by China, it would be to start with a cautious blockade of Taiwan, designed carefully such that the Americans would have to “strike first” by attempting to break the blockade, giving China a casus belli of sorts.
Of course, the U.S. will see the blockade itself as a casus belli, so both powers will be satisfied they have the moral high ground. It’s my view that the former WWII allies all think of themselves as the “good guys”, so chances are neither side wants to fire the first shot, even if it might be strategically advantageous to do so or they see confrontation as inevitable.
I would be a bit hesitant to follow Less Wrong’s lead on this too closely. I find the EA Forum, for lack of a better term, feels much friendlier than Less Wrong, and I wouldn’t want that sense of friendliness to go away.
So, I have two possible projects for AI alignment work that I’m debating between focusing on. Am curious for input into how worthwhile they’d be to pursue or follow up on.
The first is a mechanistic interpretability project. I have previously explored things like truth probes by reproducing the Marks and Tegmark paper and extending it to test whether a cosine similarity based linear classifier works as well. It does, but not any better or worse than the difference of means method from that paper. Unlike difference of means, however, it can be extended to multi-class situations (though logistic regression can be as well). I was thinking of extending the idea to try to create an activation vector based “mind reader” that calculates the cosine similarity with various words embedded in the model’s activation space. This would, if it works, allow you to get a bag of words that the model is “thinking” about at any given time.
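To make the idea concrete, here's a minimal sketch of what the cosine-similarity "mind reader" could look like (this is an illustrative sketch, not code from my actual experiments: the function name is made up, and it assumes you've already extracted a hidden-state activation and have word vectors living in the same space):

```python
import numpy as np

def cosine_mind_reader(activation, vocab_embeddings, vocab_words, top_k=5):
    """Rank vocabulary words by cosine similarity to a model activation.

    activation: (d,) hidden-state vector from some layer of the model
    vocab_embeddings: (V, d) matrix of word vectors in the same space
    vocab_words: list of the V corresponding words
    Returns the top_k (word, similarity) pairs: a rough "bag of words"
    for what the model may be representing at that point.
    """
    # Normalize both sides so the dot product becomes cosine similarity
    act = activation / np.linalg.norm(activation)
    emb = vocab_embeddings / np.linalg.norm(vocab_embeddings, axis=1, keepdims=True)
    sims = emb @ act  # cosine similarity of the activation with each word
    top = np.argsort(sims)[::-1][:top_k]  # indices of the most similar words
    return [(vocab_words[i], float(sims[i])) for i in top]
```

The hard parts this glosses over are obtaining word vectors that actually live in the same space as the layer's activations (e.g. via the unembedding matrix or learned probes) and deciding which layer and token position to read from.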
The second project is a less common game theoretic approach. Earlier, I created a variant of the Iterated Prisoner’s Dilemma as a simulation that includes death, asymmetric power, and aggressor reputation. I found, interestingly, that cooperative “nice” strategies banding together against aggressive “nasty” strategies produced an equilibrium where the cooperative strategies win out in the long run, generally outnumbering the aggressive ones considerably by the end. Although this simulation probably requires more analysis and testing in more complex environments, it seems to point to the idea that being consistently nice to weaker nice agents acts as a signal to more powerful nice agents and allows coordination that increases the chance of survival of all the nice agents, whereas being nasty leads to a winner-takes-all highlander situation, which from an alignment perspective could be a kind of infoblessing that an AGI or ASI could be persuaded to spare humanity for these game theoretic reasons.
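For anyone curious what the setup roughly looks like, here's a toy sketch of the simulation's core loop (a simplified reconstruction for illustration, not my actual simulation code; the health, power, and reputation mechanics here are pared-down stand-ins for the real ones):

```python
import random

class Agent:
    def __init__(self, name, nice, power):
        self.name = name
        self.nice = nice        # cooperative ("nice") strategy?
        self.power = power      # asymmetric strength: damage dealt per attack
        self.health = 10        # an agent dies when this reaches zero
        self.aggressor = False  # public reputation: has attacked a non-aggressor

    @property
    def alive(self):
        return self.health > 0

def play_round(agents, rng):
    """Nice agents only strike known aggressors (coordinated retaliation);
    nasty agents attack anyone weaker, which flags them as aggressors."""
    living = [a for a in agents if a.alive]
    for attacker in living:
        if not attacker.alive:  # may have been killed earlier this round
            continue
        if attacker.nice:
            targets = [b for b in living
                       if b is not attacker and b.alive and b.aggressor]
        else:
            targets = [b for b in living
                       if b is not attacker and b.alive and b.power < attacker.power]
            if targets:
                attacker.aggressor = True  # preying on the weak costs reputation
        if targets:
            rng.choice(targets).health -= attacker.power

def simulate(n_nice=8, n_nasty=4, rounds=50, seed=0):
    rng = random.Random(seed)
    agents = [Agent(f"nice{i}", True, i % 5 + 1) for i in range(n_nice)]
    agents += [Agent(f"nasty{i}", False, i % 5 + 1) for i in range(n_nasty)]
    for _ in range(rounds):
        play_round(agents, rng)
    return (sum(a.alive for a in agents if a.nice),
            sum(a.alive for a in agents if not a.nice))
```

With these particular parameters, the coordinated nice agents end up outnumbering the surviving nasty ones, loosely mirroring the equilibrium described above: the nasty agents flag themselves as aggressors by attacking, and the nice agents' pooled retaliation eliminates them.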
Oh, woops, I totally confused the two. My bad.
If it’s anything like the book Going Infinite by Michael Lewis, it’ll probably be a relatively sympathetic portrayal. My initial impression from the announcement post is that it at least sounds like the angle they’re going for is misguided haphazard idealists (which Lewis also did), rather than mere criminal masterminds.
Graham Moore is best known for The Imitation Game, the movie about Alan Turing, and his portrayal there took a classic “misunderstood genius” angle. If he brings that kind of energy to a movie about SBF, we can hope he shows EA in a positive light as well.
Another possible comparison you could make would be with the movie The Social Network, which was inspired by real life but took a lot of liberties, and interestingly made Dustin Moskovitz (who funds a lot of EA stuff through Open Philanthropy) a very sympathetic character. (Edit: Confused him with Eduardo Saverin.)

I also think there’s lots of precedent for Hollywood to make dramas and movies that are sympathetic to apparent “villains” and “antiheroes”. Mindless caricatures are less interesting to watch than nuanced portrayals of complex characters with human motivations. Good fiction at least tries to have that kind of depth.
So, I’m cautiously optimistic. When you actually dive deeper into the story of SBF, you realize he’s more complex than yet another crypto grifter, and I think a nuanced portrayal could actually help EA recover a bit from the narrative that we’re just a TESCREAL techbro cult.
I do also agree in general that we should be louder about the good that EA has actually done in the world.
Hey, so I’m a game dev/writer with Twin Earth. The founder of our team is an EA and former moral philosophy lecturer, and coincidentally he actually asked me earlier to explore the possibility of a web-based card game that would be pretty much exactly the type of game you describe.
I.e. the player is the CEO of the AI company Endgame Inc / Race Condition Inc (we never decided which name to use), with various event cards based on both real-world and speculative events, project cards that you had to prioritize between (i.e. alignment or product), and many, many bad ends plus a few good ones where you get aligned AGI. We were also planning for things like Shareholder Support and Public Opinion to be stats that can drop too low and cause you to lose the game. Stuff like that.
The game, which is still in its very early stages, has been on hiatus for over a year due to my having a baby and the rest of the team being focused on another unrelated game (which recently went into Early Access but the team is still pretty busy with it). When I was still working on Endgame Inc (again, tentative title), it was voluntarily on the side, as we didn’t expect to sell the game, but rather release it for free to get as wide an audience as possible.
I’m not sure if making this game is still planned, but it might be something I can go back to working on when I have the time to spare.
Thank you for your years of service!
I’m sure a lot of regular and occasional posters like myself appreciate that building and maintaining something like this is a ton of often underappreciated work, the kind that often only gets noticed on the rare occasion when something actually goes wrong and needs to be fixed ASAP.
You gave us a place to be EAs and be a part of a community of like-minded folk in a way that’s hard to find anywhere else. For that I’m grateful, and I’m sure others are as well.
Again, thank you.
And, best of luck with wherever your future career takes you!
I agree it shouldn’t be decided by poll. I’d consider this poll more a gauge of how much interest or support the idea(s) could have within the EA community, and as a starting point for future discussion if sufficient support exists.
I mostly just wanted to put forward a relatively general form of democratization that people could debate the merits of and see with the poll what kind of support such ideas could have within the EA community, to gauge if this is something that merits further exploration.
I probably could have made it even more general, like “There Should Be More Democracy In EA”, but that statement seems too vague, and I wanted to include something at least a little more concrete in terms of a proposal.
I was primarily aiming at something in the core of EA leadership rather than yet another separate org. So, when I say new positions, I’m leaning towards them being within existing orgs, although I also mentioned earlier the parallel association idea, which I’ll admit has some problems after further consideration.
I actually wrote the question to be ambiguous as to whether the positions in leadership to be made elected already existed or not, as I wanted to be inclusive to the possibilities of either existing or new positions.
You could argue that Toby’s contribution is more what the commissioner of an artwork does than what an artist does.
On the question of harm, a human artist can compete with another human artist, but that’s just one artist, with limited time and resources. An AI art model could conceivably be copied extensively and used en masse to put all or many artists out of work, which seems like a much greater level of harm possible.
That link has to do with copyright. I will give you that pastiche isn’t a violation of copyright. Even outright forgeries don’t violate copyright. Forgeries are a type of fraud.
Again, pastiche in common parlance describes something that credits the original, usually by being an obvious homage. I consider AI art different from pastiche because it usually doesn’t credit the original in the same way. The Studio Ghibli example is an exception because it is very obvious, but AI art prompted with, for instance, Greg Rutkowski’s name is very often much harder to identify as such.
I admit this isn’t the same thing as a forgery, but it does seem like something unethical in the sense that you are not crediting the originator of the style. This may violate no laws, but it can still be wrong.
Can you cite a source for that? All I can find is that the First Amendment covers parody and to a lesser extent satire, which are different from pastiche.
Also, pastiche usually is an obvious homage and/or gives credit to the style’s origins. What AI art makers often do is use the name of a famous artist in the prompt to make an image in their style, and then not credit the artist when distributing the resulting image as their own. To me, even if this isn’t technically forgery (which would involve pretending this artwork was actually made by the famous artist), it’s still ethically questionable.
AGI by 2028 is more likely than not
My current analysis, as well as a lot of other analysis I’ve seen, suggests AGI is most likely to be possible around 2030.
I admit it’s possibly more about optics towards both domestic and foreign audiences than necessarily a principled moral position. No doubt there’s also the question of if they’d actually keep their word if faced with an existential situation.