This seems reasonable. I changed it to say “ethical”.
I was given a student loan by an EA, which I think was likely a major factor in me being able to work on the things I am working on now.
We have basically all of the technology to do that on the EA Forum as soon as CEA activates the sequences and recommendations features, which I expect to happen at some point in the next few weeks.
Hmm, I don’t think so. Though I am not fully sure. Might depend on the precise definition.
It feels metaethical because I am responding to a perceived confusion of “what defines moral value?”, and not “what things are moral?”.
I think “adding up people’s experience over the course of their life determines whether an act has good consequences or not” is a confused approach to ethics, which feels more like a metaethical instead of an ethical disagreement.
However, happy to use either term if anyone feels strongly, or happy to learn that this kind of disagreement falls clearly into either “ethics” or “metaethics”.
I used the word ‘relegate’, because that appears to be how promotions to the Frontpage on LessWrong work, and because I was under the impression the EA Forum had similar administration norms to LessWrong.
Also not how it is intended to work on LessWrong. There is some loss in average visibility (around 30%), but there are many important posts that remain personal blogposts on LessWrong. The distinction is more nuanced, and being left as a personal blogpost is definitely not primarily a signifier of quality.
I was responding to this section, which immediately follows your quote:
While we think measures of emotional states are closer to an ideal measure of happiness, far fewer data of this type is available.
I think emotional states are quite a bad metric to optimize for, and that life satisfaction is a much better measure because it actually measures something closer to people’s values being fulfilled. Valuing emotional states feels like a map-territory confusion in a way that I think Nate Soares tried to get at in his stamp collector post:
Ahh! No! Let’s be very clear about this: the robot is predicting which outcomes would follow from which actions, and it’s ranking them, and it’s taking the actions that lead to the best outcomes. Actions are rated according to what they achieve. Actions do not themselves have intrinsic worth!
Do you see where these naïve philosophers went confused? They have postulated an agent which treats actions like ends, and tries to steer towards whatever action it most prefers — as if actions were ends unto themselves.
You can’t explain why the agent takes an action by saying that it ranks actions according to whether or not taking them is good. That begs the question of which actions are good!
This agent rates actions as “good” if they lead to outcomes where the agent has lots of stamps in its inventory. Actions are rated according to what they achieve; they do not themselves have intrinsic worth.
The robot program doesn’t contain reality, but it doesn’t need to. It still gets to affect reality. If its model of the world is correlated with the world, and it takes actions that it predicts leads to more actual stamps, then it will tend to accumulate stamps.
It’s not trying to steer the future towards places where it happens to have selected the most micro-stampy actions; it’s just steering the future towards worlds where it predicts it will actually have more stamps.
Now, let me tell you my second story:
Once upon a time, a group of naïve philosophers encountered a group of human beings. The humans seemed to keep selecting the actions that gave them pleasure. Sometimes they ate good food, sometimes they had sex, sometimes they made money to spend on pleasurable things later, but always (for the first few weeks) they took actions that led to pleasure.
But then one day, one of the humans gave lots of money to a charity.
“How can this be?” the philosophers asked, “Humans are pleasure-maximizers!” They thought for a few minutes, and then said, “Ah, it must be that their pleasure from giving the money to charity outweighed the pleasure they would have gotten from spending the money.”
Then a mother jumped in front of a car to save her child.
The naïve philosophers were stunned, until suddenly one of their number said “I get it! The immediate micro-pleasure of choosing that action must have outweighed —
People will tell you that humans always and only ever do what brings them pleasure. People will tell you that there is no such thing as altruism, that people only ever do what they want to.
People will tell you that, because we’re trapped inside our heads, we only ever get to care about things inside our heads, such as our own wants and desires.
But I have a message for you: You can, in fact, care about the outer world.
And you can steer it, too. If you want to.
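The agent design the quoted passage describes (rank actions by the outcomes the agent’s model predicts they lead to, not by any intrinsic worth of the actions themselves) can be sketched in a few lines. This is purely an illustrative toy; all names here are hypothetical:

```python
# Illustrative sketch of the outcome-based agent from the stamp collector
# story: actions are scored only by the outcomes the agent's world-model
# predicts they lead to, never by intrinsic action-worth.

def make_agent(world_model, utility):
    """world_model(state, action) -> predicted next state.
    utility(state) -> how good that state is (e.g. predicted stamp count)."""
    def choose(state, actions):
        # Rank actions by the value of their predicted outcomes.
        return max(actions, key=lambda a: utility(world_model(state, a)))
    return choose

# Toy world: the state is just a stamp count, and each action adds stamps.
choose = make_agent(
    world_model=lambda stamps, action: stamps + action,
    utility=lambda stamps: stamps,
)
print(choose(0, [1, 5, 3]))  # picks the action predicted to yield the most stamps: 5
```

The point of the sketch is that `utility` only ever sees predicted world-states, so there is no place for an action to be “good in itself”.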
For whatever it’s worth, my ethical intuitions suggest that optimizing for happiness is not a particularly sensible goal. I personally care relatively little about my self-reported happiness levels, and wouldn’t be very excited about someone optimizing for them.
Kahneman has done some research on this, and if I remember correctly he publicly changed his mind a few years ago from his previous position in Thinking, Fast and Slow to a position that values life satisfaction a lot more than happiness (and life satisfaction tends to trade off against happiness in many situations).
This was the random article I remember reading about this. Take it with all the grains of salt of normal popular science reporting. Here are some quotes (note that I disagree with the “reducing suffering” part as an alternative focus):
At about the same time as these studies were being conducted, the Gallup polling company (which has a relationship with Princeton) began surveying various indicators among the global population. Kahneman was appointed as a consultant to the project.
“I suggested including measures of happiness, as I understand it – happiness in real time. To these were added data from Bhutan, a country that measures its citizens’ happiness as an indicator of the government’s success. And gradually, what we know today as Gallup’s World Happiness Report developed. It has also been adopted by the UN and OECD countries, and is published as an annual report on the state of global happiness.
“A third development, which is very important in my view, was a series of lectures I gave at the London School of Economics in which I presented my findings about happiness. The audience included Prof. Richard Layard – a teacher at the school, a British economist and a member of the House of Lords – who was interested in the subject. Eventually, he wrote a book about the factors that influence happiness, which became a hit in Britain,” Kahneman said, referring to “Happiness: Lessons from a New Science.”
“Layard did important work on community issues, on improving mental health services – and his driving motivation was promoting happiness. He instilled the idea of happiness as a factor in the British government’s economic considerations.
“The involvement of economists like Layard and Deaton made this issue more respectable,” Kahneman added with a smile. “Psychologists aren’t listened to so much. But when economists get involved, everything becomes more serious, and research on happiness gradually caught the attention of policy-making organizations.
“At the same time,” said Kahneman, “a movement has also developed in psychology – positive psychology – that focuses on happiness and attributes great importance to internal questions like meaning. I’m less certain of that.
Kahneman studied happiness for over two decades, gave rousing lectures and, thanks to his status, contributed to putting the issue on the agenda of both countries and organizations, principally the UN and the OECD. Five years ago, though, he abandoned this line of research.
“I gradually became convinced that people don’t want to be happy,” he explained. “They want to be satisfied with their life.”
A bit stunned, I asked him to repeat that statement. “People don’t want to be happy the way I’ve defined the term – what I experience here and now. In my view, it’s much more important for them to be satisfied, to experience life satisfaction, from the perspective of ‘What I remember,’ of the story they tell about their lives. I furthered the development of tools for understanding and advancing an asset that I think is important but most people aren’t interested in.
“Meanwhile, awareness of happiness has progressed in the world, including annual happiness indexes. It seems to me that on this basis, what can confidently be advanced is a reduction of suffering. The question of whether society should intervene so that people will be happier is very controversial, but whether society should strive for people to suffer less – that’s widely accepted.
I don’t fully agree with all of the above, but a lot of the gist seems correct and important.
[Made this into a top-level comment]
Hacker News has downvotes, though they are locked behind a karma threshold; overall I see more comments downvoted on HN than on LW or the EA Forum (you can identify them by the text being greyer and harder to read).
The problem is that if your post gets downvoted while displayed in chronological order, this often means you will get even more downvotes (partly because chronological ordering makes people vote more harshly, since downvoting is the main way to discourage bad content, and partly because your visibility doesn’t decrease, which means more people have the opportunity to downvote you).
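The visibility difference between the two orderings can be illustrated with a toy feed (the posts and scores here are invented for illustration):

```python
# Hypothetical illustration of the dynamic described above: under score
# ranking a downvoted post sinks (so fewer people see it and can downvote
# it further), while under chronological ordering it stays on top.

posts = [
    {"title": "A", "age_hours": 1, "score": -4},  # heavily downvoted but new
    {"title": "B", "age_hours": 3, "score": 12},
    {"title": "C", "age_hours": 8, "score": 5},
]

chronological = sorted(posts, key=lambda p: p["age_hours"])
by_score = sorted(posts, key=lambda p: p["score"], reverse=True)

print([p["title"] for p in chronological])  # ['A', 'B', 'C'] -- downvoted post stays visible
print([p["title"] for p in by_score])       # ['B', 'C', 'A'] -- downvoted post sinks
```

Real forums use blended time-decay/score formulas rather than a pure sort, but the feedback loop is the same: score ranking reduces exposure of downvoted content, chronological ordering does not.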
Huh, that’s particularly weird, because I don’t have any of that problem with LessWrong.com, which runs on the same codebase. So it must be something unique to the EA Forum situation.
Hmm, good point. We should generally clean up that user edit page.
(Note: I am currently more time-constrained than I had hoped to be when writing these responses, so the above was written a good bit faster and with less reflection than my other pieces of feedback. This means errors and miscommunication are more likely than usual. I apologize for that.)
I ended up writing some feedback to Jeffrey Ladish, which covered a lot of my thoughts on ALLFED.
My response to Jeffrey
Building off of that comment, here are some additional thoughts:
As I mentioned in the response linked above, I currently feel relatively hesitant about civilizational collapse scenarios, and so find the general cause area of most of ALLFED’s work to be of comparatively lower importance than the other areas I tend to recommend grants in.
Most of ALLFED’s work does not seem to help me resolve the confusions I listed in the response linked above, or provide much additional evidence for any of my cruxes, but instead seems to assume that the intersection of civilizational collapse and food shortages is the key path to optimize for. At this point, I would be much more excited about work that tries to analyze civilizational collapse much more broadly, instead of assuming such a specific path.
I have some hesitations about the structure of ALLFED as an organization. I’ve had relatively bad experiences interacting with some parts of your team, and have heard similar concerns from others. The team also appears to be partially remote, which I think is a major cost for research teams, and its primary location is in Alaska, where I expect it will be hard for you to attract talent and to engage with other researchers on this topic (some of these models are based on conversations I’ve had with Finan, who used to work at ALLFED but left because of its location in Alaska).
I generally think ALLFED’s work is of decent quality, helpful to many, and made with well-aligned intentions; I just don’t find its core value proposition compelling enough to be excited about grants to it.
While I agree with a lot of the critiques in this comment, I do think it isn’t really engaging with the core point of Ben’s post, which I do think is actually an interesting one.
The question that Ben is trying to answer is “how large is the funding gap for interventions that can save lives for around $5000?”. For that, the relevant question is not “how much money would it take to eliminate all communicable diseases?”, but rather “how much money do we have to spend until the price of saving a life via preventing communicable diseases becomes significantly higher than $5k?”. The answer to the second question is upper-bounded by the answer to the first, which is why Ben tries to answer the first one, but it only serves to estimate the $5k/life funding gap.
And I think he does have a reasonable point there, in that I think the funding gap on interventions at that level of cost-effectiveness does seem to me to be much lower than the available funding in the space, making the impact of a counterfactual donation likely a lot lower than that (though the game theory here is complicated and counterfactuals are a bit hard to evaluate, making this a non-obvious point).
My guess, though I have very high uncertainty bounds around all of this, is that the true number is closer to something in the range of $20k–$30k in terms of donations that would have a counterfactual impact of saving a life. I don’t think this invalidates the core EA principles in the way Ben seems to think it does, but it does make me unhappy with some of the marketing around EA health interventions.
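The upper-bound reasoning above can be made concrete with a toy marginal cost curve. All the numbers here are invented purely for illustration, not estimates of the actual space:

```python
# Toy illustration (all numbers invented) of the funding-gap reasoning:
# assume the cost of saving one more life rises as the cheapest
# opportunities get filled, and ask how much funding is absorbed before
# the marginal cost exceeds a threshold like $5k/life.

def funding_gap(cost_curve, threshold):
    """Sum funding over successive opportunities while each one still
    saves a life for at most `threshold` dollars."""
    total = 0
    for cost_per_life in cost_curve:
        if cost_per_life > threshold:
            break
        total += cost_per_life
    return total

# Hypothetical marginal cost curve: each entry is the cost of one more life.
curve = [3000, 4000, 4500, 5000, 7000, 12000]
print(funding_gap(curve, threshold=5000))  # 16500: funding absorbed at <= $5k/life
```

The gap at a given cost-effectiveness threshold is thus always bounded above by the cost of the full problem, but can be much smaller once the cheap opportunities run out.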
I have a bunch of complicated thoughts here. Overall I have been quite happy with the reception to this, and think the outcomes of the conversations on the post have been quite good.
I am a bit more time-strapped than usual, so I will probably wait on writing a longer retrospective until I set aside a bunch of time to answer questions on the next set of writeups.
Feedback that I sent to Jeffrey Ladish about his application:
Excerpts from the application
I would like to spend five months conducting a feasibility analysis for a new project that has the potential to be built into an organization. The goal of the project would be to increase civilizational resilience to collapse in the event of a major catastrophe—that is, to preserve essential knowledge, skills, and social technology necessary for functional human civilization.
The concrete results of this work would include an argument for why or why not a project aimed at rebuilding after collapse would be feasible, and at what scale.
Several scholars and EAs have investigated this question before, so I plan to build off existing work to avoid reinventing the wheel. In particular, [Beckstead 2014](https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328714001888-main.pdf) investigates whether bunkers or shelters might help civilization recover from a major catastrophe. He enumerates many scenarios in which shelters would *not* be helpful, but concludes with two scenarios worthy of deeper analysis: “global food crisis” and “social collapse”. I plan to focus on “social collapse”, noting that a global food crisis may also lead to social collapse.
I expect my feasibility investigation to cover the following questions:
- Impact: what would it take for such a project to actually impact the far future?
- Tractability: what (if any) scope and scale of project might be both feasible *and* useful?
- Neglectedness: what similar projects already exist?
- How fragile is the global supply chain? For example, how might humans lose the ability to manufacture semiconductors?
- What old manufacturing technologies and skills (agricultural insights? steam engine-powered factories?) would be most essential to rebuilding key capacities?
- What social structures would facilitate both survival through major catastrophes and coordination through rebuilding efforts?
- What efforts exist to preserve knowledge into the future (seed banks, book archives)? Human lives (private & public bunkers, civil defense efforts)?
- What funding might be available for projects aimed at civilizational resilience?
- Are there skilled people who would commit to working on such a project? Would people be willing to relocate to a remote location if needed?
- What are the benefits of starting a non profit vs. other project structures?
I believe the best feedback for measuring the impact of this research will be to solicit personal feedback on the quality of the feasibility argument I produce. I would like to present my findings to Anders Sandberg, Carl Shulman, Nick Beckstead, & other experts.
If I can present a case for a civilizational resilience project which those experts find compelling, I would hope to launch a project with that goal. Conversely, if I can present a strong case that such a project would not be effective, my work could deter others from pursuing an ineffective project.
I feel broadly confused about the value of working on improving the recovery from civilizational collapse, but overall feel more hesitant than enthusiastic. I have so far not heard of a civilization collapse scenario that seems likely to me and in which we have concrete precautions we can take to increase the likelihood of recovery.
Since I initially read your application, I have had multiple in-person conversations with both you and Finan Adamson, who used to work at ALLFED, and you both have much better models of the considerations around civilizational collapse than I do. This has made me understand your models a lot more, but has so far not updated me much towards civilizational collapse being both likely and tractable. However, I have updated upwards my estimate of the value of looking into this cause area in more depth and writing up the considerations around it, since I think there is enough uncertainty and potential value in this domain that getting more clarity would be worth quite a bit.
I think at the moment, I would not be that enthusiastic about someone building a whole organization around efforts to improve recovery chances from civilizational collapse, but do think that there is potentially a lot of value in individual researchers making a better case for that kind of work and mapping out the problem space more.
I think my biggest cruxes in this space are something like the following:
Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?
Can we build any reasonable models about what our bottlenecks will be for recovery after a significant global catastrophe? (This is likely dependent on an analysis of what specific catastrophes are most likely and what state they leave humanity in)
Are there major risks that have a chance of wiping out more than 90% of the population, but not all of it? My models of biorisk suggest it’s quite hard to get to 90% mortality, and I think most nuclear winter scenarios also have less than a 90% food reduction impact.
Are there non-population-level dependent ways in which modern civilization is fragile that might cause widespread collapse and the end of scientific progress? If so, are there any ways to prepare for them?
Are there strong reasons to expect the existential risk profile of a recovered civilization to be significantly better than for our current civilization? (E.g. maybe a bad experience with nuclear weapons would make the world much more aware of the dangers of technology)
I think answering any mixture of these affirmatively could convince me that it is worth investing significantly more resources into this, and that it might make sense to divert resources from catastrophic (and existential) risk prevention to working on improved recovery from catastrophic events, which I think is the tradeoff I am facing with my recommendations.
I do think that a serious investigation into the question of recovery from catastrophic events is an important part of something like “covering all the bases” in efforts to improve the long-term future. However, the field is currently still resource-constrained enough that I don’t think that is sufficient for me to recommend funding to it.
Overall, I am more positive on making a grant like this than when I first read the application, though not necessarily much more. I have, however, updated positively on you in particular, and think that if we want someone to write up and perform research in this space, you are a decent candidate for it. This was partially a result of talking to you, reading some of your unpublished writing, and having some people I trust vouch for you, though I still haven’t really investigated this whole area enough to be confident that the kind of research you are planning to do is really what is needed.
Yeah, it’s definitely unchecked by default. We are currently working on an editor rework that should get rid of this annoyance. We currently need to allow users to switch to markdown to make it possible for mobile users to properly edit stuff, but that shouldn’t be a problem anymore after we are done with the rework.
On the question of whether we should have an iterative process: I do view this publishing of the LTF-responses as part of an iterative process. Given that we are planning to review applications every few months, you responding to what I wrote allows us to update on your responses for next round, which will be relatively soon.
As someone who is quite familiar with what drives traffic to EA- and rationality-related websites: 2015 marks the end of Harry Potter and the Methods of Rationality, which (whatever you might think of it) was probably the single biggest recruitment device in at least the rationality community’s history (and I think it was also a major driver to the EA community). It is also around the time Eliezer broadly stopped posting online, and he obviously had a very outsized effect on recruitment.
I also know that during 2015 (which is when I started working at CEA), CEA was investing very heavily in trying to grow the community, which included efforts to get people like Elon Musk to speak at EA Global 2015, which I do think was a major draw to the community. A lot of the staff responsible for that focus on growth left over the following years, and CEA stopped thinking as much in terms of growth (whether that was good or bad is complicated, though I mostly think that shift was good).