Have you heard of Harry Potter and the Methods of Rationality (http://www.hpmor.com/) and/or Unsong (http://unsongbook.com)? I think they serve some of this role for the community already.
It’s interesting that they’re both long-form web fiction; we don’t have EA TV shows or rock bands that I know of.
Thanks for posting about this! The experiences I’ve had with art feel like a big part of what motivates my altruism.
One of the ways art can encourage altruism is by rendering real the life of another person, making you experience their suffering or joy as your own. Many pieces of art have this effect on me, too many to name; indeed I think of it as a defining quality of good art.
Another way art can encourage altruism is by taking a zoomed-out perspective and engaging with moral ideals in the abstract. This second kind you might call “humanistic”. The list below is mostly of this kind, since art of the first kind is too plentiful to name.
- The Dispossessed by Ursula K. Le Guin is very meaningful to me as a vision of what a society where we cared “sufficiently” about others might look like.
- Anything by Kurt Vonnegut, a deeply humanistic writer. God Bless You, Mr. Rosewater is explicitly about a philosophically-minded billionaire who decides to give his wealth away to the poor, and the consequences of that decision.
- George Saunders, another very humanistic writer. Tenth of December is excellent, and “The Semplica-Girl Diaries” (https://www.newyorker.com/magazine/2012/10/15/the-semplica-girl-diaries) is a great story of his about the banality of evil.
- https://www.newyorker.com/magazine/2008/08/11/trouble-poem-matthew-dickman (Content warning: suicide)
- https://en.wikipedia.org/wiki/In_Jackson_Heights (a long, quiet, slice-of-life documentary that jumps between people)
- https://en.wikipedia.org/wiki/Death_by_Hanging (the Japanese police botch an execution, causing the criminal to lose all his memories of the crime; the police, panicking, try to jog his memory so they can execute him like they’re supposed to)
You write “I don’t know how much of our time this is worth”, but to me it seems clear that this is worth a *lot* of our time.
I have a model of human motivation. One aspect of my model is that it is very hard for most people (very much myself included) to remain motivated to do something that earns them no social rewards from the people around them.
Others on this forum have written about “values drift” (https://forum.effectivealtruism.org/posts/eRo5A7scsxdArxMCt/concrete-ways-to-reduce-risks-of-value-drift) and the role community plays in it.
I like the idea of using food scares as a proxy! Very cool.
It sounds like you are saying that knowing “how will kg of chicken sold change given a change in price” will let you answer “how will kg of chicken sold change given that I don’t buy chicken.” I don’t quite see how to do this; could you give me a pointer? (For concreteness: what does the paper’s estimate of 0.68 for the elasticity of poultry imply for “kg of chicken sold given that I don’t buy the chicken”?)
Perhaps more importantly, it sounds like you might disagree that one person abstaining from eating chicken has a meaningful impact on the number of chickens raised and killed. If so I’m quite interested, as sources like https://reducing-suffering.org/does-vegetarianism-make-a-difference/ have convinced me of the opposite.
My current model is that if I buy the meat of one chicken at a supermarket, that *in expectation* causes about one chicken to be raised + killed.
Thanks for finding this paper. But I think they are answering the question “If I change price, what happens to demand?”, while I am asking “If demand drops (me not buying any chicken), what happens to total quantity sold?”
It doesn’t seem consistent to me to say “I’m too small of an actor to affect price, but not to affect quantity sold.”
Thank you for the brief economics lesson on consideration 2, though. I’ve read the Wikipedia article and found it helpful, although I have further questions. Are there goods that economists think do work the way my friend is describing? Is there a name for goods like this?
Thanks, Samara. I found the paper you’re talking about here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2804646/pdf/216.pdf
I’m out of my depth here, but it looks like the paper is answering the question: “if the price of chicken changes from $X/kg to $(X + Y)/kg, how will kg of chicken sold change?” While the question I’m asking is “if I don’t buy chicken, how will kg of chicken sold change?”.
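For what it’s worth, there is a standard linearized supply-and-demand result that connects the two questions: if one buyer permanently removes delta units of demand, the new equilibrium quantity falls by roughly delta × ε_s / (ε_s + |ε_d|), where ε_s and ε_d are the supply and demand elasticities. A minimal sketch, using the 0.68 demand elasticity mentioned in this thread and a purely illustrative assumed supply elasticity (the real number would have to come from the literature):

```python
def expected_quantity_change(delta_kg, supply_elasticity, demand_elasticity):
    """Expected long-run fall in total kg sold when one buyer permanently
    removes delta_kg of demand, under the standard linearized
    partial-equilibrium approximation: delta * e_s / (e_s + |e_d|)."""
    e_s = supply_elasticity
    e_d = abs(demand_elasticity)
    return delta_kg * e_s / (e_s + e_d)

# Example: I stop buying 1 kg of chicken.
# 0.68 is the demand elasticity from the thread; a supply elasticity of
# 2.0 is an illustrative assumption, not a figure from the paper.
print(expected_quantity_change(1.0, 2.0, 0.68))  # ~0.75 kg
```

Under these (assumed) numbers, forgoing 1 kg of chicken reduces total chicken sold by about 0.75 kg in expectation, which is the same style of calculation the reducing-suffering.org piece works through.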
This is how it feels to me to be mentally fatigued.
I’ve done management (of software engineers in a startup) and decided to move away from it for now, but can see a future in which I do more of it.
I am quite interested but am in Boston. Do you know of similar events in my area or in the US?
I love “What sets us against one another...” and feel it is the best expression of an idea that is powerful to me. I had not encountered so concise an expression of it before. Thank you for it.
Thanks, this and particularly the Medium post was helpful.
So to restate what I think your model around this is: “the efficiency gap determines how tractable social solutions will be (if it’s under 10%, they seem much more tractable), and technical safety work can change the efficiency gap.”
Thanks for the link. So I guess I should amend what Paul and OpenAI’s goal seems like to me, to “create AGI, make sure it’s aligned, and make sure it’s competitive enough to become widespread.”
OK, this is what I modeled AI alignment folks as believing. But doesn’t the idea that whoever builds AGI first wins decisively rely on a “hard takeoff” scenario? This is a view I associate with Eliezer. But Paul in the podcast says that he thinks a gradual takeoff is more likely, and envisions a smooth gradient of AI capability such that human-level AI comes into existence in a world where slightly stupider AIs already exist.
The relevant passage:
> and in particular, when someone develops human level AI, it’s not going to emerge in a world like the world of today where we can say that indeed, having human level AI today would give you a decisive strategic advantage. Instead, it will emerge in a world which is already much, much crazier than the world of today, where having a human AI gives you some more modest advantage.
So I get why you would drop everything and race to be the first to build an aligned AGI if you’re Eliezer. But if you’re Paul, I’m not sure why you would do this, since you think it will only give you a modest advantage.
(Also, if the idea is to build your AGI first and then use it to stop everyone else from building their AGIs, I feel like that second part of the plan should be fronted a bit more! “I’m doing research to ensure AI does what we tell it to” is quite a different proposition from “I’m doing research to ensure AI does what we tell it to, so that I can build an AI and tell it to conquer the world for me.”)
Thanks Ozzie, this is helpful!
The former. To your other comment—yes, I’ve gotten a number of emails! :)
Thanks very much for the comment Ozzie.
I share the idea that U.S. educational issues are not the most efficient ones to be working on, all else equal. My question arises because it’s not obvious to me that all else is equal in my case. (Though I think the burden of proof should be on me here.) For example, I have a pretty senior role in the organization, and therefore presumably have higher leverage. How should I factor considerations like that in? (Or is it misguided to do so?)
I’m curious also about your statement that it’s hard to have much counterfactual impact in the for-profit world. I’ve been struggling with similar questions. Why do you think so?