An eccentric dreamer in search of truth and happiness for all. I formerly posted on Felicifia back in the day under the name Darklight and still use that name on Less Wrong. I’ve been loosely involved in Effective Altruism to varying degrees since roughly 2013.
Joseph_Chu
We tried earlier. Carrick Flynn received substantial support from EA, and the result was mediocre; criticisms of EA actually had a negative effect on his campaign, as people pointed out his connection to the “billionaires and techbros” who apparently fund EA and such.
Also, the head of RAND, Jason Matheny, is an EA, and there are some connections between EA and the American NatSec establishment. CSET, for instance, was funded partly by OpenPhil. There is a tendency among a lot of EAs to try not to be partisan and mostly support effective governance and policy kinds of things.
That being said, Dustin Moskovitz, the billionaire who is the main donor behind what was previously called Open Philanthropy and is now Coefficient Giving, has donated significantly and repeatedly to Democrats. OpenPhil has historically been by far the largest funder of EA stuff, particularly since SBF fell from grace, so Dustin’s contributions can be seen tacitly as EA support for the Dems.
So, I don’t think it’s accurate to say EAs have made absolutely no effort on this front. We have, and it has stupidly backfired before, and now we’re in a very awkward position politically, where the whole TESCREAL controversy has tarnished the EA brand on the Left, even though past surveys have shown that most rank-and-file EAs are centre-left to left. It’s a frustrating situation.
Oh man, I remember the days when Eliezer still called it Friendly and Unfriendly AI. I actually used one of those terms in a question at a Q&A after a tutorial by a then-less-famous Yoshua Bengio at the 27th Canadian Conference on AI in 2014. He jokingly replied by asking if I was a journalist, before giving a more serious answer saying we were so far away from having to worry about that kind of thing (AI models back then were much more primitive; it was hard to imagine an object recognizer being dangerous). Fun times.
Strong upvoted as that was possibly the most compelling rebuttal to the simulation argument I’ve seen in quite a while, which was refreshing for my peace of mind.
That being said, it mainly targets the idea of a large-scale simulation of our entire world. What about the possibility that the simulation is for a single entity and that the rest of the world is simulated at a lower fidelity? I had the thought that a way to potentially maximize future lives of good quality would be to contain each conscious life in a separate simulation where they live reasonably good lives catered to their preferences, with the apparent rest of the world being virtual. Granted, I doubt this conjecture because, in my own opinion, my life doesn’t seem that great, but it seems plausible at least?
Also, that line about the diamond statue of Hatsune Miku was very, very amusing to this former otaku.
If I recall correctly, the old Felicifia forums (archive) had a lot of debates between negative and other utilitarians about this exact thing. There are also lots of other thought-experiment-like “repugnant conclusions” that go with various forms of utilitarianism, including the “Hedonium Shockwave” idea, where you tile the universe with happiness-generating computronium as the most efficient way to maximize utility.
The reality is, it’s very hard to avoid weird hypothetical conclusions when you take as your ethics a simple rule like “minimize suffering” or “maximize happiness”. This is a known problem with consequentialist ethics, and it’s up to you if you want to bite the bullet or follow your moral intuitions.
I’ve thought about this a lot too. My general response is that it is very hard to see what one could do differently at a moment to moment level even if we were in a simulation. While it’s possible that you or I are alone in the simulation, we can’t, realistically, know this. We can’t know with much certainty that the apparently sentient beings who share our world aren’t actually sentient. And so, even if they are part of the simulation, we still have a moral duty to treat them well, on the chance they are capable of subjective experiences and can suffer or feel happiness (assuming you’re a Utilitarian), or have rights/autonomy to be respected, etc.
We also have no idea who the simulators are and what purpose they have for the simulation. For all we know, we are a petri dish for some aliens, or a sitcom for our descendants, or a way for people’s minds on colony ships travelling to distant galaxies to spend their time while in physical stasis. Odds are, if the simulators are real, they’ll just make us forget if we ever figure it out, so they can keep the simulation going for whatever reasons they have.
Given all this, I don’t see the point in trying to defy them or doing really anything differently than what you’d do if this was the ground truth reality. Trying to do something like attempting to escape the simulation would most likely fail AND risk getting you needlessly hurt in this world in the process.
If we’re alone in the sim, then it doesn’t matter what we do anyway, so I focus on the possibility that we aren’t alone, and everything we do does, in fact, matter. Give it the benefit of the doubt.
At least, that’s the way I see things right now. Your mileage may vary.
One thing we could do to help EA seem more cool without compromising at all on truth and intellectual integrity is to emphasize that what we’re doing is actually heroic. Like, we are literally saving lives (bednets) and protecting the helpless (animals) and trying to save the world from potential doom (AI safety).
That leans into our altruist angle. I think we could also lean into the effectiveness angle by comparing ourselves to heroic characters in fiction who use their intelligence to outwit the bad guys. I’m thinking BBC’s Sherlock, Spock from Star Trek, Lelouch from Code Geass, Tony Stark aka Iron Man, HPMOR, etc. In fact, EAs are kinda like combining Tony Stark’s genius with the sense of morality and decency of Steve Rogers aka Captain America.
We are like Lawful Good D&D Paladins in the sense of championing a righteous cause, and D&D Wizards in the sense of using our intelligence to solve the problems.
So, I think we should lean into the idea that being EA is heroic. We’re trying to save the world. Many of us make real sacrifices (10% to charity, veganism, career pivots, etc.) to make the world a better place.
As for villains, I mean, there are many we could point to other than just Altman. Elon Musk is basically a caricature at this point. Not only is he racing to ASI with the least safety of any of the frontier competitors, but as leader of DOGE he cut USAID and essentially killed, or at least abandoned, all the people depending on it. Another obvious choice would be an unaligned ASI itself.
But I think it’s actually more important to show us as the heroes we are than to name villains. People get mad at villains. People connect with heroes.
You might argue that AI safety in particular already sounds too sci-fi. I think we can’t avoid that, and we may as well take advantage of the tropes our culture has and make the connections that resonate with people. Heroes saving the world is a lot more exciting and cool a frame than “maximizing impact through targeted donations and direct work”, but in effect, in the real world, they are the same thing.
This is not PR or spinning facts. At the risk of sounding cheesy, our efforts really are heroic, and we deserve for our society and culture to appreciate that, and to recognize that they, too, can become heroes in our world.
Speaking of coolness, this may be a very obscure thing, but I remember there was a series of Japanese light novels called Durarara that got turned into an anime. In the story, there’s a group of online do-gooder vigilantes known as “The Dollars” who are basically a weaponized 4chan (sorta like Anonymous, but sillier and operating offline), except for good instead of evil. The Dollars would secretly help people and coordinate to fight the IRL gangs in the story, using their numbers and anonymity (unlike the other gangs with colours, The Dollars were “colourless”).
Interestingly, the relative success of the anime led several fans to create copycat websites, including this one (password is: baccano), based on the chat website in the story, and fans coalesced around some of them and attempted to mimic The Dollars for a while (mostly while the anime was still airing). Basically, this consisted of mostly idealistic, half-hearted, and not very effective attempts at anonymous acts of kindness called “missions” that would be posted on the Dollars forum. But the fact that this happened at all, and that the forum was frequented by fans from all over the world, was, to me at least, quite interesting.
I think, in some ways, the EA movement resembles this in the sense of being sorta united around a forum, and consisting of people all over the world trying to do good. The difference is that rather than being an emotional, fun thing based on a silly pop culture reference, EA is very, very serious and focused on real world effectiveness (and is also more top-down).
Perhaps, having some of the stylish fun of “The Dollars” group could help EA reach a crowd that we’d normally never touch. I don’t really know how we’d go about this, but it’s an idea anyway.
Like, I could imagine something along the lines of a work of fiction (e.g. a novel, a TV show, maybe a web serial?) featuring a bunch of EA characters doing cool things that save the world, which, if done well, could be a great recruitment tool of sorts.
Oh, thanks for the clarification! I totally missed that difference.
To my admittedly cursory knowledge, the “bottom half” of China’s population is mostly the poor rural farmers and migrant workers who have benefited a lot less from China’s recent economic growth, and who are likely a big reason why China’s GDP per capita is still a fair bit lower than in most western developed countries despite the shiny new city skylines. Given that, it makes sense that including that segment would make a big difference in the evaluation.
Thanks again! That actually makes me update on my earlier evaluation of the utilitarian impact of China a lot.
This post somewhat resonates with me, as I’m also sort of an old hand, albeit I’ve always been more on the periphery of EA, and sometimes consider myself EA-adjacent rather than full on EA (even though I’ve done a bunch of EA-ish things like donate to AMF/Give Directly and attend an EA Global).
I’ve been around long enough to see a bunch of the early EAs who were part of the old Felicifia forums become more or less leaders in the movement (e.g. Peter Wildeford), as well as some sorta fade into obscurity (e.g. Brian Tomasik?). It’s interesting to see, and I’m happy for the former, and a bit sad about the latter.
Weirdly, I’ve also moved a bit further leftish on the political spectrum in recent years, and this has led me to feel conflicted about EA, as it’s very much a western liberal movement, and my sympathy for socialism seems to be an awkward fit nowadays. Though, admittedly I tend to oscillate at times, so this may be temporary.
And yeah, as I’ve mentioned before in other comments, I do feel like the movement is more geared towards the young university elite as well.
Just some thoughts, I guess.
As I mentioned in another comment, while China ranks in the middle on the World Happiness Report, it actually ranked highest on the IPSOS Global Happiness Report from 2023, which was the last year that China was included in the survey.
I’m curious what you think of Geoffrey Hinton’s recent comments during his interview with Jon Stewart, where he said that on a recent trip to China he met with a member of the Politburo, found that this person was very serious about concerns over AI safety and AI takeover, and felt that China was more likely to do something about it than the U.S.
Also, while it’s definitely true that China hasn’t embraced most western liberal values like multiparty democracy, rule of law, and human rights, you can debate some of the finer points and argue that, for instance, the Marxist intellectual tradition is western in origin, and that China’s alternative to western liberalism is a strange mixture of Marxism and Confucianism.
And, it might be noted regarding ethnic minorities that while separatism is severely punished, minorities that conform to the existing system are often rewarded with, for instance, extra points on the university entrance examination system (Gaokao), as a form of affirmative action.
Back to moral philosophy, the nature of Chinese moral philosophy seems to be more practical than analytical. Probably the most analytical moral philosophy to come out of China was Mohism, which, considering how much it predates Utilitarianism, is very, very similar to it: an overall consequentialist framework with an emphasis on human equality and the greatest good. Interestingly, some CCP literature in the past has tried to emphasize Mohism as a kind of forerunner to modern Marxism.
In terms of the future going well, I think the strongest argument for a CCP-aligned AGI being beneficial is that some kind of post-scarcity communism is likely to achieve more human flourishing than the techno-feudalism that western capitalism could potentially devolve into, with the AGI company leaders owning everything and the rest of us surviving on a basic income that exists at the whim of these AGI owners.
The CCP, for all its faults, is nominally still a communist party, and so is more likely to, given an actual chance to succeed at it, introduce post-scarcity communism that spreads the benefits of AGI in a generally egalitarian way. Though, obviously a possible failure state is that the party instead monopolizes AGI’s benefits and we still get techno-feudalism, albeit state-run instead of private.
Also, while China ranks in the middle on the World Happiness Report, it actually ranked highest on the IPSOS Global Happiness Report from 2023, which was the last year that China was included in the survey.
As for the lack of charitable donations, there are probably a number of reasons for this. Certain scandals involving the Red Cross have in the past made people wary of donating. And, probably more significantly, Chinese cultural expectations mean that a lot of what would be charitable work in the west is expected to be done by either family or the government. I personally have tried to convince some Chinese nationals to donate to, for instance, AMF, and their response is usually along the lines of this being the local government’s responsibility. There is definitely a strain of collectivism in China that contrasts with the individualism of western liberal democracies.
So, I think, a CCP led AI future would probably be notably different than a western led one, but I’m unclear on whether this would actually be that much worse. At the end of the day, both would, ideally, be led by humans and human-aligned ASI.
As an EA and a Christian… I find that Thiel’s apparent views and actions resemble what the Bible says an Antichrist is, far more than EA does. He hypocritically calls EA totalitarian while simultaneously and deeply supporting what amounts to technofascism in the U.S.
It is bizarre to me how unchristian his version of libertarianism is, with what seems like a complete indifference, if not utter disdain, towards the poor and downtrodden who Jesus sought to help. Thiel seems to be so far from the spirit of Christian values (at least as I understand them) that I have a hard time imagining what could be further from it.
I could go on, but people like this, who call themselves Christian and yet appear to be the polar opposite of what a good Christian ought to be (again, in my opinion) infuriate me to the point that I have trouble expressing things without getting angry, so I’ll stop here.
The percentage of EAs earning to give is too low
I’m not very confident in this view, but I’m philosophically somewhat against encouraging Earning-To-Give, as it can justify working at what I see as unethical high-paying jobs (e.g. finance, the oil industry, AI capabilities, etc.) and pretending you can simply offset it with enough donations. I think actions like this condone the unethical, making it more socially acceptable and creating negative higher-order effects, and that we shouldn’t do this. It’s also a slippery slope that entails ends-justify-the-means thinking, like what SBF seems to have engaged in, and I think we should be cautious about following such an example.
I also, separately, think that we should respect the autonomy of people making decisions about their careers. Those who want to EtG and have the personal fit for it are likely already doing so, and suggesting that more people should is somewhat disrespectful of the autonomy of those who choose otherwise and their ability to make rational, moral decisions.
Quick question! What’s the best way to handle having long gaps on your resume?
So, I used to be a research scientist in AI/ML at Huawei Canada (circa 2017-2019), which on paper should make me a good candidate for AI technical safety work. However, in recent years I pivoted into game development, mostly because an EA friend and former moral philosophy lecturer pitched the idea of a Trolley Problem game to me and my interviews with big tech had gone nowhere (I now have a visceral disdain for Leetcode). Unfortunately, the burn rate of the company now means I can’t be paid anymore, so I’m looking around at other things again.
Back in 2022, I went to EA Global Washington DC and got some interviews with AI safety startups like FAR and Generally Intelligent, but couldn’t get past the technical interviews. As such, I’m not sure I’m actually qualified to be an AI safety technical researcher. I also left Huawei in part due to mental health issues making it difficult to work in such a high stress environment.
I’ve also considered doing independent AI safety research, and applied to the LTFF before and been rejected without feedback. I also applied to 80,000 Hours a while back and was also rejected.
Regularly reading the EA Forum and Less Wrong makes me continue to think AI safety work is the most important thing I could do, but at the same time, I worry that I’ll mess up and waste time and money that could go to more capable people and projects. I also have a family now, so I can’t just move to the Bay Area/London and burn my life for the cause either.
What should I do?
I should point out that, historically, the natural tendency for civilizations to fall appears to apply to subsets of human civilization rather than to humanity as a whole. While locally catastrophic, these collapses were not existential: humanity survived and recovered.
I’d also argue that the collapse of a civilization requires far more probabilities to go to zero and has greater and more complex causal effects than all time machines just failing to work when tried.
And, the reality is that at this time we do not know if the Non-Cancel Principle is true or false, and whether or not the universe will prevent time travel. Given this, we face the dilemma that if we precommit to not developing time travel and time travel turns out to be possible, then we have just limited ourselves and will probably be outcompeted by a civilization that develops time travel instead of us.
Ah, that makes sense! Thanks for the clarification.
Why would the only way to prevent timeline collapse be to prevent civilizations from achieving black hole-based time travel? Why not just have it so that whenever such time travel is attempted, any attempts to actually change the timeline simply fail mysteriously and events end up unfolding as they did regardless?
Like, you could still go back as a tourist and find out if Jesus was real, or scan people’s brains before they die and upload them into the future, but you’d be unable to make any changes to history, and anything you did would actually end up bringing about the events as they originally occurred.
I also don’t see how precommitting to anything will escape the “curse”. The universe isn’t an agent we can do acausal trade with. Applying the Anthropic Principle, we either are not the type of civilization that will ever develop time travel, or there is no “curse” that prevents civilizations like ours from developing time travel. Otherwise, we already shouldn’t exist as a civilization.
So, it seems like most of the existential risks from time travel are only if the Non-Cancel Principle you described is false? It also seems like the Non-Cancel Principle also prevents most time paradoxes, so that seems like strong evidence towards it being true?
It seems like the Non-Cancel Principle would leave only two possible ways time travel could work. Either everything “already happened” and time travel can only cause events to unfold as they did (e.g. Tenet), meaning no actual changes or new timelines are possible (no free will), or alternatively, time travel branches the timeline, creating new timelines in a multiverse of possible worlds (in which case, where did the energy for the new timeline come from, if Conservation of Energy holds?).
I find the latter option more interesting for science fiction, but I think the former probably makes more sense from a physics perspective. I would really like to be wrong on this though, because useful time travel would be really cool and possibly the most important and valuable technology that one could have (that or ASI).
Anyway, interesting write-up! I’ve personally spent a lot of time thinking about time travel and its possible mechanics, as it’s a fascinating concept to me.
P.S. This is Darklight from Less Wrong.
I’ve explored very similar ideas before in things like this simulation based on the Iterated Prisoner’s Dilemma but with Death, Asymmetric Power, and Aggressor Reputation. Long story short, the cooperative strategies do generally outlast the aggressive ones in the long run. It’s also an idea I’ve tried to discuss (albeit less rigorously) before as The Alpha Omega Theorem and Superrational Signalling. The first of those was from 2017 and got downvoted to oblivion, while the second was probably too long-winded and got mostly ignored.
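To give a flavour of what I mean (this isn’t the original simulation’s code, just a minimal toy sketch; the payoffs, starting scores, and reputation rules here are made-up illustrative assumptions), something along these lines captures the basic setup of an iterated Prisoner’s Dilemma with death and an aggressor reputation:

```python
# Toy sketch only: iterated Prisoner's Dilemma with "death" (agents removed
# when their score hits zero) and a public "aggressor" reputation.
# Payoffs, starting scores, and strategy rules are illustrative assumptions.
import random

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (-2, 5),
           ("D", "C"): (5, -2), ("D", "D"): (-1, -1)}

class Agent:
    def __init__(self, kind):
        self.kind = kind            # "cooperative" or "aggressive"
        self.score = 10             # dies when this falls to zero or below
        self.is_aggressor = False   # public reputation flag

    def move(self, opponent):
        if self.kind == "aggressive":
            return "D"              # always defect
        # Cooperative agents cooperate, but retaliate against known aggressors.
        return "D" if opponent.is_aggressor else "C"

def run(n_each=50, rounds=500, seed=0):
    random.seed(seed)
    agents = [Agent("cooperative") for _ in range(n_each)]
    agents += [Agent("aggressive") for _ in range(n_each)]
    for _ in range(rounds):
        random.shuffle(agents)
        for a, b in zip(agents[::2], agents[1::2]):
            a_was, b_was = a.is_aggressor, b.is_aggressor
            ma, mb = a.move(b), b.move(a)
            pa, pb = PAYOFFS[(ma, mb)]
            a.score += pa
            b.score += pb
            # Defecting against someone not already flagged earns the label.
            if ma == "D" and not b_was:
                a.is_aggressor = True
            if mb == "D" and not a_was:
                b.is_aggressor = True
        agents = [x for x in agents if x.score > 0]   # death removes agents
    counts = {"cooperative": 0, "aggressive": 0}
    for x in agents:
        counts[x.kind] += 1
    return counts

if __name__ == "__main__":
    # In toy runs like this one, cooperators typically outlast aggressors.
    print(run())
```

The key dynamic is that the public aggressor flag lets cooperators retaliate collectively, so first-strike defection stops paying off once everyone can see who did it, and the aggressive agents tend to get whittled down and die out first.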
There are a bunch of random people like James Miller and A.V. Turchin and Ryo who have had similar ideas that can broadly be categorized under Bostrom’s concept of Anthropic Capture, or Game Theoretic Alignment, or possibly a subset of Agent Foundations. The ideas are mostly not taken very seriously by the greater LW and EA communities, so I’d be prepared for a similar reception.