I think, though, that the purpose of this exercise is better understood as characterizing a utopia, not as trying to explain how to solve alignment in a world where a singularity is in the cards.
I think creating a system to correct misunderstandings is the important and difficult question (which I will do nothing to solve at this moment). I read the essay sampling the research papers, so I’ve known at least since then that actual ‘bio-ethicists’ are not the group we are talking about. But in my head, angry rants about bioethicists would still sometimes pop up. And the general discourse in the community certainly didn’t digest that result.
I’d very much like to see a system that helps us call out these sorts of issues.
An idea I encountered in a different discussion recently that might get at this is encouraging funding groups to pay for research into the Devil’s advocate case against ideas popular in the community. That would obviously not be sufficient, but it could be a good step in the right direction.
Avoid catastrophic industrial/research accidents?
Assuming that some people respond to these memetic tools by reducing the number of children they have more than other people do, the next generation of the population will have an increased proportion of people who ignore these memetic tools. And then, amongst that group, those who are most inclined to have larger numbers of children will make up the biggest part of the following generation, and so on.
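As a minimal sketch of that selection dynamic (every number here is a made-up illustration; the fertility rates and the starting share of meme-resistant people are assumptions, not estimates):

```python
# Toy model of the selection dynamic described above: two subpopulations,
# one responsive to fertility-reducing memes and one resistant to them.
# All numbers are illustrative assumptions.
def project_shares(generations=10, resistant_share=0.05,
                   resistant_fertility=2.5, responsive_fertility=1.3):
    """Track the population share of meme-resistant people over generations."""
    resistant, responsive = resistant_share, 1.0 - resistant_share
    for g in range(generations):
        # Each group's next-generation size scales with its fertility
        # relative to replacement (two children per person).
        resistant *= resistant_fertility / 2.0
        responsive *= responsive_fertility / 2.0
        total = resistant + responsive
        resistant, responsive = resistant / total, responsive / total
        print(f"generation {g + 1}: resistant share = {resistant:.1%}")

project_shares()
```

Even starting at 5 percent, the resistant group passes half the population within six generations under these assumed numbers.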
The current pattern of low fertility for cultural reasons seems to me very unlikely to be stable. Note: there are people who think it can be stable, and even if I’m right that it is intrinsically unstable, there might be ways to plan out the population decline that make it stable without substantial use of harsh coercive measures. But really, the view that fewer people is a really, really bad thing is core to my value structure, and promoting any sort of anti-natalism is something I’d only do if I was convinced there was no other path to the hoped-for good things.
The Phil Torres essay in Aeon attacking longtermism might be good.
The really big con is that people are awesome, and 1/70th of the current population is way, way less awesome than the current number of people. Far, far fewer people reading fan fiction, falling in love, watching sports, creating weird contests, arguing with each other, etc. is a really, really big loss.
Assuming it could be done, and that it would be an efficient way (in utility loss/gain terms) to improve coordination, I think it probably happens way too slowly to be relevant to the current risks from rapid technological change. It seems semi-tractable, but in the long run I think you’d end up with the population evolving resistance to any memetic tools used to encourage population decline.
I feel like trying to be charitable here is missing the point.
It mostly is Moloch operating inside the brains of people who are unaware that Moloch is a thing, so in a Hansonian sense they end up adopting lots of positions that pretend to be about helping the world but are actually about jockeying for status in their peer groups.
EA people are obviously doing this too, but the community is somewhat consciously trying to create an incentive dynamic where we get good status and belonging feelings from conspicuously burning resources in ways that are designed to do the most good for people distant in either time or space.
Possibly the solution should be to not try to integrate everything you are interested in.
By analogy, both sex and cheesecake are good, but it is not troubling that for most people there isn’t much overlap between sex and cheesecake. EA isn’t trying to be a political movement; it is trying to be something else, and I don’t think this is a problem.
I think the survey is fairly strong evidence that EA has a comparative advantage in recruiting left and center-left people, and should lean into that.
The other side, though, is that the numbers show there are a lot of libertarians (around 8 percent), and more ‘center left’ people responded to the survey than ‘left’ people. There are substantial parts of SJ politics that are extremely disliked amongst most libertarians and lots of ‘center left’ people. So while it might be okay from a recruiting and community-stability pov to not really pay attention to right wing ideas, it is likely essential for avoiding community breakdown to maintain the current situation where this isn’t a politicized space vis-à-vis left vs. center-left arguments.
Probably the ideal approach is some sort of marketing segmentation, where the people in Yale or Harvard EA communities use a different recruiting pitch and message, one that emphasizes the way EA fulfills the broader aim of attacking global oppression, inequity, and systemic issues, while people talking to Silicon Valley inspired earn-to-give tech bros keep the current messages that seem to strongly resonate with them.
More succinctly: Scott Alexander shouldn’t change what he’s saying, but a guy trying to convince Yale Law students to join up shouldn’t sound exactly like Scott.
Epistemologically this suggests we should spend more time engaging with the ideas of people who identify as being on the right, since this is very likely to be a bigger blind spot than ideas popular with people who are ‘left wing’.
I feel like this would end up like microloans: interesting, inspiring, and useful for some people, but a dead end from the pov of solving the systemic issue. The obvious question is: why doesn’t this already exist? And the answer presumably is that it cannot be done profitably.
Still, it is the sort of thing where, if someone with the skills and resources is directly trying to set up specific systems like this, their efforts have a very high probability of being way more useful than anything else they could do.
Thanks for the links, which definitely include things I wish I’d managed to find earlier. Also I loved the special containment procedures framing of the story objects.
I wonder if there is any information on whether very many people’s minds actually are changed by The Ones Who Walk Away from Omelas. My experience of reading it was very much what I claimed the standard response is for people exposed to fiction they already strongly disagree with: not getting convinced. I did think about it a bunch, and I realized that I have this weird non-utilitarian argument inside my head for why it is legitimate to subject someone to that sort of suffering, whether or not they volunteer, ‘for the greater good’. But on the whole I thought the same after reading the story as before.
When Can Writing Fiction Change the World?
Okay, I suppose that’s vaguely legit. They are in broadly the same space. And also the new name is definitely better.
Does anyone know about research on the influence of fiction on changing elite/public behaviors and opinions?
The context of the question is that I’m a self published novelist, and I’ve decided I want to spend the half of my time that goes to less commercial projects on writing books that might be directly useful in EA terms, probably by making certain ideas about AI more widely known. At some point I decided it might be a good idea to learn more about examples of literature actually making an important difference, beyond the examples that immediately came to mind: Uncle Tom’s Cabin, Atlas Shrugged, Methods of Rationality, and the way the LGBTQ movement probably gained a lot of its present acceptance through fictional representation.
I’ve found some stuff through academia.edu searches (like this journal article describing the results of a survey of readers of climate change fiction), but it seems like there is a good chance that the community might be able to point me in useful directions that I won’t quickly find on my own.
timunderwood’s Quick takes
I think the standard assumption is that for any task you can create an expert system that is cheaper to power and run than it is to feed humans. Though I was talking with someone during EAG Virtual who was worried that humans might be one of the most efficient tools if you are only thinking about the cost of feeding them, in which case it would be efficient for a malevolent AI to enslave them.
I think the basic issue with the argument is that we are dealing with a case where Tiger Woods can just create a new copy of himself to mow the lawn while another copy is filming a commercial. So the question is whether creating the processors and then feeding them electricity to get the compute to run the process is cheaper than paying a human, and the most a human could be worth paying is the amount it costs to build compute that could replicate the human’s performance.
My intuition has always been that humans are unlikely to be at the actual optimum for energy efficiency of compute, but even if we are, I highly doubt that we’d be worth much more in the long run working for the AGI than it costs to feed us.
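A back-of-the-envelope version of that comparison, where every number is an assumption picked for illustration rather than a researched estimate:

```python
# Compare the cost of running a human on food with the cost of running
# compute on electricity. All figures are illustrative assumptions.
KCAL_TO_KWH = 0.001162                   # 1 kcal is about 0.001162 kWh

human_kwh_per_day = 2000 * KCAL_TO_KWH   # ~2.3 kWh/day of food energy
food_cost_per_day = 5.00                 # assumed $/day to buy 2000 kcal
human_cost_per_kwh = food_cost_per_day / human_kwh_per_day

electricity_cost_per_kwh = 0.12          # assumed grid price, $/kWh

print(f"human: ~{human_kwh_per_day:.1f} kWh/day, "
      f"~${human_cost_per_kwh:.2f} per kWh delivered as food")
print(f"grid electricity: ${electricity_cost_per_kwh:.2f} per kWh")
```

A human runs on very little raw energy, but energy delivered as food costs an order of magnitude more per kWh than electricity, so the human only stays competitive if the replacement compute needs vastly more energy to do the same work.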
The solution to technological unemployment following AGI is to set things up so that moving to a world in which there are no jobs is a good thing, not to try to keep jobs by figuring out a way to compete with tools that can do literally everything better than we can.
A post-employment society, where everyone has a right to their fraction of mankind’s resources.
Again asking for more clarification on what dignity means.
I do think, though, that things which intuitively seem to me similar to what you are probably talking about with dignity could be important considerations, though I suspect they are unlikely to be cost competitive with mosquito nets and vaccines if you are making direct benefit calculations.
Perhaps we mean something like: being respected by your community, being treated with respect by the system as a whole, and having direct control over your life and what you do day to day (ie being able to meaningfully choose) are important components of the good life, and an intervention ideally should support rather than oppose them.
At the same time, if I was an individual who was both disrespected by the people around him and dying of malaria, I’d probably strongly prefer to get anti-malarial drugs rather than respect, so unless respect is much cheaper to provide than DALYs, focusing on DALYs probably makes more sense.
I suspect a large part of the value of large direct cash transfers is that the person who receives one, because they have more resources, automatically becomes more respected in their community and feels more in control of their own choices. So in that sense we might already be pushing interventions that support dignity.
The dignity of the poor being better protected on a large scale is the sort of thing which would require actual systemic change (as opposed to systemic change just being a synonym for ‘boo capitalism’), and we don’t know of robust ways to achieve most types of systemic change that don’t have a high chance of backfiring and causing more problems than they fix.
I do think this is an important thing to think about, and it is at least plausible that if you could improve access to and respect for dignity it could lead to a large improvement in well-being, comparable possibly to a large increase in income (though probably not comparable to a substantial increase in life expectancy).
“They’re effectiveness-minded and with $60 billion behind them. 80,000 Hours has already noted that they’ve probably saved over 6 million lives with their vaccine programs alone—given that they’ve spent a relatively small part of their endowment, they must be getting a much better exchange rate than our current best guesses.”
What I’ve read of this essay so far is very good; however, I’d note the foundation has spent almost 30 billion, a large fraction of it on vaccines (I can’t find how much with a simple search). The numbers suggest the cost per life saved is in the 1-2k range, or at least the high three digits, which is in the same range as the AMF estimates.
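To make the arithmetic explicit (the $30 billion and 6 million figures come from the quote and my note above; the vaccine share of spending is the unknown, so I sweep over assumed values):

```python
# Rough cost-per-life check. Total spend and lives saved are from the
# discussion above; the vaccine share of spending is an assumed unknown.
TOTAL_SPEND = 30e9      # dollars spent by the foundation so far
LIVES_SAVED = 6e6       # lives attributed to the vaccine programs

for vaccine_share in (0.2, 1 / 3, 0.5, 1.0):
    cost_per_life = TOTAL_SPEND * vaccine_share / LIVES_SAVED
    print(f"if {vaccine_share:.0%} of spending went to vaccines: "
          f"${cost_per_life:,.0f} per life saved")
```

Unless nearly all of the spending went to vaccines, this lands in the high hundreds to low thousands of dollars per life, the same ballpark as AMF.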
“First, the approach of multiplying many parameter intervals with an upper bound at one, but no corresponding lower bound, predisposes the resulting distribution of the number of alien civilisations to exhibit a very long negative tail, which drives the reported result.”
I sort of thought this was the logical structure underlying why the paradox was dissolved—specifically that given what we know, it is totally plausible that one of the factors has a really, really low value.
There is only a paradox if we can confidently lower-bound all of the parameters in the equation. But if, given what we know, there is nothing weird (ie the odds of it happening are at least 1/1000) about one of the parameters being sufficiently close to zero to make it likely that there is nothing else in the visible universe, then we should not be surprised that we are living in such a world.
Or, alternatively, the description I once saw of the paper: if God throws dice a bunch of times in creating the universe, it isn’t surprising that one of the rolls came up one.
What would actually resurrect the paradox is if we could establish lower bounds for more of the parameters, rather than simply pointing out that there isn’t very good evidence that the probability is really, really low for any given one of them (which of course there isn’t).
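A toy Monte Carlo in the spirit of the paper’s argument (the specific factor ranges below are invented for illustration; only the structure, multiplying log-uniform factors where some have very deep lower bounds, matches the argument):

```python
# Multiply Drake-style factors sampled with wide log-uniform uncertainty
# instead of using point estimates. The ranges are invented; the one with
# a 1e-30 lower bound stands in for something like abiogenesis, where we
# genuinely can't rule out astronomically small values.
import math
import random

FACTOR_RANGES = [(1, 100), (0.1, 1), (0.1, 1), (1e-30, 1),
                 (1e-3, 1), (1e-2, 1), (1e2, 1e10)]

def sample_n():
    n = 1.0
    for lo, hi in FACTOR_RANGES:
        n *= math.exp(random.uniform(math.log(lo), math.log(hi)))
    return n

samples = [sample_n() for _ in range(100_000)]
empty = sum(s < 1 for s in samples) / len(samples)
print(f"mean civilizations per galaxy (tail-driven): "
      f"{sum(samples) / len(samples):,.0f}")
print(f"fraction of draws with an effectively empty galaxy: {empty:.0%}")
```

The mean stays large because of the long upper tail, which is roughly what point estimates capture, while a large majority of draws put fewer than one civilization in the galaxy, so observing an empty sky is unsurprising.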