Economist specializing in cost-benefit analysis of complex public health and biorisk policies. https://www.centerforhealthsecurity.org/our-people/bruns/ https://allegedwisdom.blogspot.com/
Richard Bruns
Mortality Cost of Taxation
Sadly no, although many EA-style shallow dives use a similar approach. The target audience for this is people like industrial hygienists or nurse managers who want to do an analysis of an operational change. I posted this because a couple of people at a recent conference asked me for a guide like this and there was nothing like it available.
The official framing is that a DALY is valued at 2 to 4 times GDP per capita, so given uncertainty, it’s probably good if you’re buying a DALY for less than GDP per capita and probably bad if you’re paying 5x.
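To make that threshold arithmetic concrete, here is a minimal Python sketch of the check; the program cost, DALY count, and GDP figure are made-up placeholders, not numbers from any real analysis.

```python
# Minimal sketch of the cost-per-DALY threshold check described above.
# All numbers are hypothetical placeholders.

gdp_per_capita = 65_000   # local GDP per capita, USD
program_cost = 900_000    # total cost of the intervention, USD
dalys_averted = 20        # estimated DALYs averted

cost_per_daly = program_cost / dalys_averted

if cost_per_daly < gdp_per_capita:
    verdict = "probably good: buying a DALY for less than GDP per capita"
elif cost_per_daly > 5 * gdp_per_capita:
    verdict = "probably bad: paying more than 5x GDP per capita per DALY"
else:
    verdict = "ambiguous: inside the gray zone, so dig deeper"

print(f"${cost_per_daly:,.0f} per DALY -> {verdict}")
```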
My framing is that the disutility of working a job, holding income constant, is probably between 0.2 and 1 DALY.
Doing a Basic Life-Focused Cost-Benefit Analysis
Your interpretation of Standard 241 is not misguided. I was one of the people who developed the standard; we designed it to be technology-agnostic, with UV in mind. It will incentivize UV once the standards have been developed that say how much equivalent clean air a UV fixture provides. I suspect that in a few years, when the tech matures, it will be one of the most cost-effective ways of meeting 241’s requirements.
This is not a dumb question; you have just described upper-room UV, which is an established, successful, and worthwhile tech. But it requires skilled labor to install (to avoid accidentally getting the angle wrong and blasting people in the head), and it does not have as much potential upside as newer tech.
“Imagine a new technology that allowed subsystems to report their conscious states! But we don’t have that evidence and, unfortunately, may forever lack it.”
We already have this technology. It is called Internal Family Systems therapy. Mindfulness meditation also produces awareness of brain processes that were formerly shielded from conscious awareness, and of the fact that they have valenced experiences of their own, separate from and often in conflict with the valence that the conscious mind reports.
Denying the existence of conscious subsystems in the human brain is like denying the existence of jhana. The lived experience of thousands of people is that they exist. We have watched them, and talked to them, and watched them talk to each other. We have seen that the ‘I’ of our assumed personal identity is actually a process that results from ‘passing a microphone’ from one subsystem to another as they take turns reacting to various stimuli.
Humans are vast; we contain multitudes. If you hurt me, you are hurting a lot of things. Theories of a singular consciousness are based on a narrow and limited sense of identity that anyone with meditation attainment will tell you is a delusion.
I got on a different train long ago. I am not a utilitarian; I am a contractualist who wants ‘maximize utility’ to be a larger part of the social contract (but certainly not the only one).
I strongly agree with Derek’s point about measuring the nonmonetary costs to the recipients and their families. If your benefits are driven mainly by the differences in costs, then omitting potentially relevant costs can invalidate the entire analysis. You absolutely must account for the time that recipients spent in the program, and traveling to and from the program, and any other money or time costs that they or their families incurred as a result of program participation. At minimum, this time should be valued at the local wage rate. Until this is addressed, I will assume that your analysis is junk, and say so to anyone who asks me about it.
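For concreteness, here is a hedged sketch of what folding participant time into the cost side looks like; every number below is invented for illustration, with time valued at the local wage as the floor suggested above.

```python
# Sketch: adding recipients' time and money costs to a program CBA.
# All inputs are illustrative, not figures from any real program.

n_participants = 500
hours_in_program = 12      # hours each recipient spends in the program
hours_traveling = 4        # round-trip travel time over the program
out_of_pocket = 15.0       # fares, childcare, etc., per recipient, USD
local_wage = 9.0           # local hourly wage used to value time, USD/hr

time_cost_per_person = (hours_in_program + hours_traveling) * local_wage
participant_costs = n_participants * (time_cost_per_person + out_of_pocket)

program_budget = 60_000    # what the implementer spends
total_cost = program_budget + participant_costs

print(f"Participant time and money costs: ${participant_costs:,.0f}")
print(f"Total economic cost (budget + participant costs): ${total_cost:,.0f}")
```

If the net benefit flips sign once participant_costs is included, the omission was doing all the work.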
Given that technical AI alignment is impossible, we should focus on political solutions, even though they seem impractical. Running any sufficiently powerful computer system should be treated as launching a nuclear weapon. Major military powers can, and should, coordinate to not do this and destroy any private actor who attempts to do it.
This may seem like an unworkable fantasy now, but if takeoff is slow, there will be a ‘Thalidomide moment’ when an unaligned but not super-intelligent AI does something very bad and scary but is ultimately stopped. We should be ready to capitalize on that moment and ride the public wave of techno-phobia to put in sensible ‘AI arms control’ policies.
Also, in my preferred specification, I do not assume that average and marginal values are the same. An average value of $70 (relative to nonexistence) is perfectly compatible with the marginal value of the last hour of leisure (relative to working) to be equal to take-home pay. Assuming equality was just an extreme estimate to set a lower bound on things.
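A toy numerical example of how those two values can coexist (the declining schedule below is invented purely for illustration, with a hypothetical $25/hr take-home wage):

```python
# Toy example: an average leisure value near $70/hr is compatible with the
# marginal value of the last hour equaling take-home pay, as long as earlier
# hours of leisure are worth much more. The schedule is invented.

take_home_pay = 25.0  # USD/hr, hypothetical

# Hypothetical marginal value of each successive hour of daily leisure,
# declining until the last hour is worth exactly the take-home wage.
marginal_values = [160, 120, 90, 65, 45, 35, 30, take_home_pay]

average_value = sum(marginal_values) / len(marginal_values)
print(f"Marginal value of the last hour: ${marginal_values[-1]:.0f}/hr")
print(f"Average value across all leisure hours: ${average_value:.0f}/hr")  # ~$71
```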
Worldview Diversity and Welfare Analysis
I sense that there is some kind of deep confusion or miscommunication here that may take a while to resolve. Have you read the Life-Valuation Variability post? In it, I explain why “The Value of a Statistical Life in a country” should be understood very narrowly and specifically as “The exchange rate between lives and money when taking money away from, or giving money to, people in that country”.
This post is not meant to tell individuals how to live their lives. There is a huge variation in individual preferences for leisure vs buying nice things. However, I do observe that most of the smart people are either FIREing or working at high-status jobs that give them utility. And there are reasons to believe that social pressures and monkey-brain instincts cause people to value consumption much more than they should if they were actually optimizing for happiness.
I think that it would be useful for people to think of their leisure as valued at $60 an hour, and adjust things accordingly. This is most useful for giving yourself permission to say no to unpleasant, time-consuming obligations that generate less than $60/hr of value for the world. And if you have the ability to do so, you should experiment with working less and consuming less, and enjoying more leisure, to see what that does for you.
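As a sketch, that rule of thumb reduces to a one-line comparison; the meeting example is hypothetical:

```python
# Sketch of the "$60/hr leisure" rule of thumb applied to an obligation.
LEISURE_VALUE = 60.0  # USD per hour of your leisure time

def worth_doing(hours_required: float, value_to_world: float) -> bool:
    """True if the obligation generates more value than the leisure it consumes."""
    return value_to_world >= hours_required * LEISURE_VALUE

# Hypothetical example: a 3-hour committee meeting worth about $100 to the world.
print(worth_doing(hours_required=3, value_to_world=100))  # False -> say no
```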
I do believe that, after we have already maxed out all giving opportunities with a better payoff (i.e. we have solved all x-risk problems, ended global poverty, and put humanity on a path to filling the universe with flourishing life), and if there is enough automation of the production of basic needs to support it, then it would be optimal to pay everyone enough of a basic income so that nobody who earns less than $60 an hour ever has to work for a living if they don’t want to.
Analytical EA types often tie themselves into knots trying to make a Grand Unified Theory to base all decisions on. This does not and will not work. All models are wrong, but some models are useful. You can, and should, use different heuristics in different situations. I am not trying to program an AI that I put in charge of the world. I am merely justifying treating all people’s time the same for the purpose of EA cause prioritization with donor money.
Clearly it would break the economy to base all government policy on the assumption that consumption has no social value, and optimize hard on that assumption. Although yes, I do believe that a world where only 10% of people are operating critical infrastructure in exchange for high social status, and the rest get a basic income and (maybe) do ‘hobby jobs’, is both possible and desirable. That flows not from the leisure time valuation, but from a rather strong intuition that most current GDP goes to things that are either positional or an addiction.
Valuing Leisure Time
Questions in order:
I never meant to make a statement that a year is better than other time units. I said year because it is the existing standard in the field. The statement was about using a life/health measurement rather than money. As the 102 post hints at, my goal is not to create ‘the best’ system ex nihilo; it is to build off of the precedent set in the field. So whenever an arbitrary choice has already become the standard, and it is not obviously worse than something else, I stick with it.
This will inevitably be handwavey, fuzzy, and based on surveys. I imagine something like the WELBY, where we set the value of an ideal life to 1, and ask people how bad it would be for various things to happen to them, and assign ‘disability weights’ to everything based on their responses.
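For concreteness, a hedged sketch of how such survey responses might be turned into weights; the survey format, conditions, and ratings below are all invented:

```python
# Sketch: converting survey ratings into WELBY-style disability weights.
# Respondents rate each condition from 0 ("no worse than an ideal life")
# to 100 ("as bad as not being alive at all"). Everything here is invented.

from statistics import mean

survey_responses = {            # condition -> respondent ratings (0-100)
    "chronic back pain": [30, 45, 25, 40, 35],
    "moderate anxiety":  [20, 35, 30, 25, 15],
    "total blindness":   [60, 55, 70, 50, 65],
}

# An ideal life-year is worth 1; a condition's weight is the average fraction
# of that value respondents say the condition destroys.
disability_weights = {
    condition: mean(ratings) / 100
    for condition, ratings in survey_responses.items()
}

for condition, weight in disability_weights.items():
    print(f"{condition}: weight {weight:.2f}, life-year valued at {1 - weight:.2f}")
```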
Because it is easy for everyone to understand intuitively. See the 102 post; anything we use will need to be very approachable, so we have society-wide buy-in for the metric.
I agree with this; thank you for replying. (I thought I would get email alerts if anyone commented, but I guess I didn’t set that up right.)
There are standard approaches for valuing the loss of consumer surplus from price changes. Traditionally, moving money from one entity to another is just a transfer, not a cost, but there is a deadweight loss associated with price changes, and we measure that as a cost. But you have to have an estimate for how many trades will not happen as a result of the price change.
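A minimal sketch of that standard calculation (a Harberger-triangle approximation with made-up demand numbers):

```python
# Deadweight loss from a price change, treating the price increase on
# surviving trades as a transfer and only the lost trades as a social cost.
# All quantities and prices are hypothetical.

old_price, new_price = 10.0, 12.0        # USD per unit
old_quantity, new_quantity = 1_000, 850  # trades before / after the price change

price_change = new_price - old_price
lost_trades = old_quantity - new_quantity   # the estimate you need

transfer = price_change * new_quantity              # not counted as a cost
deadweight_loss = 0.5 * price_change * lost_trades  # triangle approximation

print(f"Transfer (not a cost): ${transfer:,.0f}")
print(f"Deadweight loss (counted as a cost): ${deadweight_loss:,.0f}")
```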
There are no existing metrics for valuing loss of freedom in DALY terms. You’d basically have to do a proper survey, using a methodology similar to the one that generates the disability weights for various health states.