My feeling is that it went a bit like this: people who wanted to attack global poverty efficiently decided to call themselves effective altruists, and then a bunch of Less Wrongers came over and convinced (a lot of) them that ‘hey, going extinct is an even bigger deal’, but the name still stuck, because names are sticky things.
Hmmmm, that is weird in a way, but also as someone who has in the last year been talking with new EAs semi-frequently, my intuition is that they often will not think about things the way I expect them to.
Based on my memory of how people thought while growing up in the church, I don’t think increasing the number of saveable souls is something that makes sense for a Christian, or within any sort of long termist utilitarian framework at all.
Ultimately god is in control of everything. Your actions are fundamentally about your own soul, and your own eternal future, and not about other people. Their fate is between them and God, and he who knows when each sparrow falls will not forget them.
Summoning a benevolent AI god to remake the world for good is the real systemic change.
No, but seriously, I think a lot of the people who care about making processes that make the future good in important ways are actually focused on AI.
A very nitpicky comment, but maybe it does point towards something about something: “What if every person in low-income countries were cash-transferred one years’ wage?”
There is a lot of money in the EA space, but at most 5 percent of what would be required for doing that. A quick google of ‘how many people live in low income countries’ tells me there are about 700 million people in countries with a per capita income below roughly 1000 USD a year, so your suggestion would come with a roughly 700 billion dollar bill. No individual, including Elon Musk or Jeff Bezos, has more than a quarter of that amount of money, and while very rich, the big EA funders are nowhere near that rich. Also, of course, GiveDirectly actually is giving people in low income countries the equivalent of a year’s wages and letting them figure out what they want to do with the money, though they operate at a small enough scale that it is affordable within the funding constraints of the community.
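To make the scale concrete, here is the back-of-the-envelope arithmetic in a few lines of Python; the 700 million people and 1000 USD figures are the rough assumptions from above, not precise statistics:

```python
# Back-of-the-envelope cost of a one-year wage transfer to everyone
# in low income countries, using the rough figures quoted above.
people_in_low_income_countries = 700_000_000  # ballpark, not a precise statistic
annual_wage_usd = 1_000                       # rough per capita income assumption

total_cost = people_in_low_income_countries * annual_wage_usd
print(f"Total cost: ${total_cost / 1e9:.0f} billion")  # ~ $700 billion
```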
I don’t know; the on-topic thing that I would maybe say is that it is important to have a variety of people working in the community, people with a range of skills and experiences (i.e. we want some people who have an intuitive feel for big economic numbers and how they relate to each other, but it is not at all important for everyone, or even most people, to have that awareness). At the same time, not everyone is in a place to be part of the analytic, research-oriented part of the EA community, and I simply don’t think that decision making will become better at achieving the values I care about if the decision making process is spread out. (But of course the counterpoint, that decision makers who ignore the voices of the people they are claiming to help often do more harm than good, and usually end up maximizing something they themselves care about, is true.)
Also, and I’m not sure how relevant this is, but I think it is likely that part of the reason why X-risk is the area of the community that is closest to being fully funded is that it is the cause area people can care about for purely selfish reasons; i.e. spending enough on X-risk reduction is more of a coordination problem than an altruism problem.
I’m writing a novel to promote EA
The main thing, I think, is to keep trying lots of different things (probably even if something is working really well relative to expectations). The big fact about trying to get traction with a popular audience is that you simply cannot tell ahead of time what is good.
I don’t think the technical context is the only, or even the most important, context where AI risk mitigation can happen. My interpretation of Yudkowsky’s gloom view is that it is mainly a sociological problem (i.e. someone else will do the cool, super profitable thing if the first company or research group hesitates) rather than a fundamentally technical problem (i.e. that it would be impossible to figure out how to do it safely even if everyone involved moved super slowly).
“Being on an island, Impact Island is naturally a safer location in case of a large scale pandemic. In addition, as part of the program, we plan to host talks and discussions about the most creative and deadly potential bioweapons and biological information hazards on live TV, helping to raise awareness of this very important cause area.”
I am really looking forward to those episodes.
Perhaps this is a bit tangential to the essay, but we ought to make an effort to actually test the assumptions underlying different public relations strategies. Perhaps the EA community ought to either build relations with marketing companies that work on focus grouping ideas, or develop its own expertise in this area, in order to test out the relative success of various public facing strategies (always keeping in mind that having just one public facing strategy is a really bad idea, because there is more than one type of person in ‘the public’).
This is nice, but I feel like it is trying to have good production values so that normal people will be impressed, without justifying caring about the septillions of future humans in a way that will actually appeal to normal people. Perhaps putting that sort of number, and the distant future as an issue, at the back of the video rather than the front would work better. I really like that this was produced, and it seems to me that working on this sort of project is potentially really important and valuable, but the group doing it should be looking for ways to get feedback from people outside of the community (maybe recruiting through some sort of survey website, reddit, facebook groups, whatever), testing metrics, and systematically experimenting with other styles of videos and rhetoric. At the same time, of course, keep in mind that the goal is to make videos that convince people to act for the sake of the long term future, and that making videos people actually watch and listen to is only useful to the extent that it actually leads them to help the long term future.
But a good job.
“I have received many heartwarming emails from my readers who tell me they are also choosing to be part of making this world a better, safer and healthier place for everyone. “
Thanks, I particularly like this line.
I think if it leads to a shift in altruistic spending away from local charities, or really away from 95% of international charities, to DWB, I don’t see that as a bad outcome, but the goal is more to increase total altruistic giving.
What were the assumptions that were challenged about DWB for you?
I think I’ll add a line with a link to both OFTW and GWWC, and also I’ve removed the $100 and the $5.
“The nice thing here is that you don’t need to worry about driving people away with a big pitch (as long as you’re nice about it), since they’ve already bought and finished your book.”
I actually got negative reviews on my first two books about the donation appeal, which had more guilt-based / ‘let me describe the suffering’ arguments, and since then I’ve systematically tried to make them very positive.
It definitely is possible. And perhaps more than 1 percent, but I don’t think I’d put my credence at more than 2-3 percent.
Also, I think a lot of people confuse the Chinese elites not supporting democracy with them not wanting to create good lives for the average person in their country, and I think this is a dangerous error.
In both the US and China the average member of the equivalent of congress has a soulless power seizing machine in their brain, but they also have normal human drives and hopes.
I suppose I just don’t think that with infinite power Xi would create a world that is less pleasant to live in than Nancy Pelosi would, and he’d probably make a much pleasanter one than Paul Ryan would.
My real point in saying this is that while I’d modestly prefer a well aligned American democratic AI singleton to a well aligned communist Chinese one, both are pretty good outcomes in my view relative to an unaligned singleton, and we basically should do nothing that increases the odds of an aligned American singleton relative to a Chinese one at the cost of increasing the odds of an unaligned singleton relative to an aligned one.
Even if the person or group who controls the drone security forces can never be forcibly pushed out of power from below, that doesn’t mean that there won’t be value drift over long periods of time.
I don’t know if a system that stops all relevant value drift amongst its elites forever is actually more than 1 percent likely.
Also, my possibly irrelevant flamethrower comment is that China today really looks pretty good in terms of values and ways of life, on a scale that includes extinction and S-risks. I don’t think the current Chinese system (as of the end of 2021) being locked in everywhere, forever, would qualify in any sense as an existential risk, or as destroying most value (though that would be worse from the point of view of my values than the outcomes I actually want).
This isn’t really a reply to the article, but where are you making the little causal diagrams with the arrows? I suddenly have a desire to use little tools like that to think about my own problems.
I agree with you that ‘good now’ gives us in general no reason to think it increases P(Utopia), and I’m hoping someone who disagrees with you replies.
As a possible example that may or may not have reduced P(Utopia), I have a pet theory, which may be totally wrong, that the Black Death, by making capital far more valuable in Europe for a century and a half, was an important part of triggering the shifts that put Europe clearly ahead of the rest of the world on the tech tree leading to industrialization by 1500 (claiming that Europe was clearly ahead by 1500 is itself a disputed claim).
Assuming we think an earlier industrialization is a good thing big enough to outweigh the badness of the Black Death, then the Black Death was a good thing under this model.
That line of thinking is how I learned to be highly skeptical of ‘good now’ = ‘good in the distant future’.
Maybe. I mean, I’ve been thinking about this a lot lately in the context of Phil Torres’s argument about messianic tendencies in long termism, and I think he’s basically right that it can push people towards ideas that don’t have any guard rails.
A total utilitarian long termist would prefer a 99 percent chance of human extinction with a 1 percent chance of a glorious transhuman future stretching across the lightcone to a 100 percent chance of humanity surviving for 5 billion years on earth.
That, after all, is what shutting up and multiplying tells you. So the idea that long termism makes luddite solutions to X-risk (which, to be clear, would also be incredibly difficult to implement and maintain) extra unappealing, relative to how a short termist might feel about them, seems right to me.
Of course there is also the other direction: if there were a 1/1 trillion chance that activating this AI would kill us all, and otherwise it would be awesome, but waiting a hundred years would get you an AI with only a 1/1 quadrillion chance of killing us all, then a short termist pulls the switch while the long termist waits.
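A toy version of the arithmetic behind these two thought experiments, sketched in Python; all the utility numbers are made-up assumptions purely for illustration, not claims about actual values:

```python
# Toy expected value comparison for the two examples above.
# All utility numbers are arbitrary, made-up "value units".

# Example 1: 1% chance of a lightcone-spanning future vs. a guaranteed
# 5 billion more years of humanity on Earth.
value_lightcone_future = 1e30  # assumed value of a glorious transhuman future
value_earth_5b_years = 1e15    # assumed value of 5 billion years on Earth

ev_gamble = 0.01 * value_lightcone_future  # 99% extinction (worth ~0), 1% lightcone
ev_safe = 1.00 * value_earth_5b_years      # certain survival on Earth
print(ev_gamble > ev_safe)  # True: shutting up and multiplying favors the gamble

# Example 2: activate a risky AI now vs. wait 100 years for a safer one.
p_doom_now = 1e-12         # 1 in 1 trillion
p_doom_later = 1e-15       # 1 in 1 quadrillion
value_future = 1e30        # assumed value of the long term future
value_of_100_years = 1e10  # assumed near-term value the short termist cares about

# The short termist mostly weighs the next 100 years; waiting forgoes them:
ev_now_short = (1 - p_doom_now) * value_of_100_years
ev_wait_short = 0.0
print(ev_now_short > ev_wait_short)  # True: the short termist pulls the switch

# The long termist compares expected long term value and prefers to wait:
ev_now_long = (1 - p_doom_now) * value_future
ev_wait_long = (1 - p_doom_later) * value_future
print(ev_wait_long > ev_now_long)  # True: the long termist waits
```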
Also, of course, there is model error: any estimate where someone actually uses numbers like ‘1/1 trillion’ for the chance that something even slightly interesting will happen in the real world is a nonsense calculation.