See also The person-affecting value of existential risk reduction by Gregory Lewis.
MichaelStJules
Liberty in North Korea, quick cost-effectiveness estimate
Biases in our estimates of Scale, Neglectedness and Solvability?
Hedging against deep and moral uncertainty
Welfare Footprint Project—a blueprint for quantifying animal pain
Solution to the two envelopes problem for moral weights
Project for Awesome 2021: Early coordination
My own skepticism of longtermism stems from a few main considerations:
I often can’t tell longtermist interventions apart from Play Pumps or Scared Straight (an intervention that actually backfired). At least for these two interventions, we measured the outcomes of interest and found that they didn’t work or were actively harmful. By the nature of many proposed longtermist interventions, we often can’t get good enough feedback to know we’re doing more good than harm, or much of anything at all.
Many specific proposed longtermist interventions don’t look robustly good to me, either (i.e. their expected value is either negative, or it’s a case of complex cluelessness and I don’t know the sign). Some of this may be due to my asymmetric population ethics. If you aren’t sure about your population ethics, check out the conclusion of this paper (although you might need to read more of it or watch the talk for definitions), which indicates quite a lot of sensitivity to population ethics.
I’m not convinced that we can ever identify robustly positive longtermist interventions, essentially due to 1, or that what I could do would actually support robustly positive longtermist interventions according to my views (or views I’d endorse upon reflection). GPI’s research is insightful and impressive, and it has been useful to me, but I don’t know that supporting it further is robustly positive: I’m not the only one who can benefit from it, and others may use it to pursue interventions that don’t look robustly positive to me.
Tentatively, I’m hopeful we can hedge with a portfolio of interventions, shorttermist, longtermist or both (see the toy sketch below). If you’re worried about the population effects of AMF, you could pair it with a family planning charity. If you’re also worried about economic effects, I don’t know what to pair it with to offset those. I don’t know that it’s always possible to come up with a portfolio that manages side effects and all these different considerations well enough that you should be confident it’s robustly positive. I wrote a post about this here.
A portfolio containing animal advocacy, s-risk work and research on and advocacy for suffering-focused views seems like it would be my best bet.
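To make the hedging idea concrete, here’s a minimal toy sketch in Python. All the names and numbers are invented purely for illustration (they are not estimates of any real charity or moral view): each intervention looks negative under one of two moral views, but a 50/50 portfolio comes out positive under both.

```python
# Toy sketch of hedging under moral uncertainty. All values are
# invented for illustration; they are not estimates of real charities.

# Value of each intervention under two moral views (arbitrary units).
# Each intervention is negative under exactly one view.
values = {
    "global_health_charity": {"totalist": 10.0, "suffering_focused": -4.0},
    "family_planning_charity": {"totalist": -3.0, "suffering_focused": 6.0},
}

def portfolio_value(weights, view):
    """Total value of a weighted portfolio under a given moral view."""
    return sum(w * values[name][view] for name, w in weights.items())

# A 50/50 split across the two interventions.
portfolio = {"global_health_charity": 0.5, "family_planning_charity": 0.5}

for view in ("totalist", "suffering_focused"):
    print(f"Under the {view} view:")
    for name, view_values in values.items():
        print(f"  {name} alone: {view_values[view]:+.1f}")
    print(f"  50/50 portfolio: {portfolio_value(portfolio, view):+.1f}")
```

The point is just that a portfolio can be non-negative across views even when no single intervention is; whether real interventions’ side effects actually offset each other like this is exactly what I’m unsure about.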
[Question] Why does (any particular) AI safety work reduce s-risks more than it increases them?
The responsiveness of aquatic animal supply
One of my main high-level hesitations with AI doom and futility arguments is something like this, from Katja Grace:
My weak guess is that there’s a kind of bias at play in AI risk thinking in general, where any force that isn’t zero is taken to be arbitrarily intense. Like, if there is pressure for agents to exist, there will arbitrarily quickly be arbitrarily agentic things. If there is a feedback loop, it will be arbitrarily strong. Here, if stalling AI can’t be forever, then it’s essentially zero time. If a regulation won’t obstruct every dangerous project, then it is worthless. Any finite economic disincentive for dangerous AI is nothing in the face of the omnipotent economic incentives for AI. I think this is a bad mental habit: things in the real world often come down to actual finite quantities. This is very possibly an unfair diagnosis. (I’m not going to discuss this later; this is pretty much what I have to say.)
“Omnipotent” is the impression I get from a lot of the characterization of AGI.
Another recent specific example here.
Similarly, I’ve had the impression that specific AI takeover scenarios don’t engage enough with the ways they could fail from the AI’s perspective. Some are based primarily on nanotech or engineered pathogens, but from what I remember of the presentations and discussions I saw, they don’t typically directly address enough of the practical challenges an AI would face in actually pulling them off: access to the materials and a sufficiently sophisticated lab/facility with which to produce these things; the need for any humans running the designs to do little or no verification of them; attempts by humans to defend ourselves (e.g. the military) or hide; ways humans can disrupt power supplies and electronics; and so on. Even if AI takeover scenarios are disjunctive, so are the ways humans can defend ourselves and the ways such takeover attempts could fail, and we have a huge advantage through our access to and control over stuff in the outside world, including whatever the AI would “live” on and whatever powers it. Some of the reasons an AI could fail may be common across significant shares of otherwise promising takeover plans, potentially limiting how far an AI can get by considering or trying more and more such plans, or more complex plans.
I’ve seen it argued that it would be futile to try to make an AI more risk-averse (e.g. with sharply decreasing marginal returns), but this argument didn’t engage with the fact that the more risk-averse an AI is, the more it would be disincentivized from taking extreme action by the risks of human detection and possible shutdown, by threats from humans, and by the opportunity to cooperate/trade with humans instead.
I’ve also heard an argument (made in private, and not by anyone working at an AI org or otherwise well-known in the community) that an AI could take over personal computers and use them, but distributing computation that way seems extremely impractical for computations that run very deep (long chains of serial steps, where communication latency dominates), so there could be important limits on what an AI could do this way.
That being said, I also haven’t personally engaged deeply with these arguments or read a lot on the topic, so I may have missed where these issues are addressed. But that’s partly because I haven’t been impressed by what I have read, among other reasons (like concerns about backfire risks, suffering-focused views, and very low probabilities of the typical EA, or me in particular, making any difference at all).
Have you guys considered rebranding, like the Effective Altruism Foundation did to the Center on Long-Term Risk, or just updating the organization’s description to better reflect your priorities and key ideas?
I look at 80,000 Hours’ front page, and I see
You have 80,000 hours in your career.
How can you best use them to help solve the world’s most pressing problems?
We’re a nonprofit that does research to answer this question. We provide free advice and support to help you have a greater impact with your career.
But this doesn’t mention anything about longtermism, which seems to be one of 80,000 Hours’ major commitments, one that people coming to 80,000 Hours will often disagree with, and probably the one most responsible for the “bait-and-switch” perception. Possibly also population ethics, although I’m not sure how committed 80,000 Hours is to particular views or to ruling out certain views, or how important this is to 80,000 Hours’ recommendations anyway. It seemed to have a big impact on the problem quiz (which I really like, by the way!).
I’d imagine rebranding has significant costs, and of course 80,000 Hours still provides significant value to non-longtermist causes and to the EA community as a whole, so I expect a full rebrand not to make sense. Even updating the description to refer to longtermism might turn away people who could otherwise benefit from 80,000 Hours.
EDIT: Looks like this was mentioned by NunoSempere.
It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]
A single user with a decent amount of karma can unilaterally decide to censor a post and hide it from the front page with a strong downvote. Giving people unilateral and anonymous censorship power like this seems bad.
Thirdly, each of these are (broadly) free market firms, who exist only because they are able to persuade people to continue using their services. It’s always possible that they are systematically mistaken, and that CEA really does understand social network advertising, management consulting, trading and banking better than these customers… but I think our prior should be a little more modest than this. Usually when people want to buy something it is because they want that thing and think it will be useful for them.
I consider this to be a pretty weak argument, so it doesn’t contribute much to my priors, which, although weak (so the particulars of a company matter much more), are probably centered near neutral on net welfare effects (in the short to medium term). I think a large share of the goods people buy and the things they do are harmful to themselves or others, even before considering the income/time they cost, or are worse for them than the alternatives they compete with. That’s enough that I wouldn’t have a prior strongly in favour of profitable companies’ activities being good for us. Here are reasons pushing towards neutral or negative impacts:
A lot of goods are mostly for signaling, especially signaling wealth, which often has negative externalities and, I’d guess, little positive value for the individual: brand-name versions of things, clothing, jewelry, cars.
Many modern ways people spend their time (enabled by profitable companies) have probably made us less active, more indoor-bound, less close with others, and less likely to pursue meaning and meaningful goals, which may conflict with people’s reflective preferences, as well as generally being bad for health, mental health and other measures of wellbeing. Basically, a lot of the things we do on our computers and phones.
Many things are stimulating and addictive, and companies are optimizing for want, not welfare. Want and welfare can come apart when we optimize for want. So we get cigarettes, addictive video games, junk food, algorithms optimizing for clicks when we’d be better off stepping away from the internet or doing more substantial things online, and lots of salt, sugar and calories in our foods.
Media companies may optimize for revenue over accurate reporting, which includes optimizing for outrage, playing to our fears, demonizing, and polarization.
Some companies make us want their stuff through fear of missing out or social pressure, so what they offer can be closer to coercion than to a valuable opportunity.
I’d guess relatively little is spent advertising things we have good evidence improve our welfare, because most of those things are hard to profit from: basic healthy foods; exercise (there are certainly exercise products and programs that get advertised, but less so just gym memberships, joining sports leagues, or running outside); spending more time with friends and family in cheap ways (although travel and amusement parks are advertised); pursuing meaning or meaningful goals; helping others (even charity ads are relatively rare). So advertising seems to push us towards things that are worse for us than the alternatives we’d have gone with. To capitalize on the things that do make us substantially better off, companies may sell us more expensive versions that aren’t (much) better, or accessories that don’t substantially help.
I’d expect a lot of hedonic adaptation for many goods and services, but not for mental health (almost by definition), physical pain, or, to a lesser extent, general health and mobility, all of which are worsened by a lot of the things companies provide, whether directly or indirectly by competing with the things that are better for health.
Company valuations don’t usually substantially reflect their externalities, and shorting companies is riskier and more costly than buying and holding shares, so this biases markets towards positively valuing companies even if their overall value for the world is negative.
There are often negative externalities on nonhuman animals in particular, although the overall effects on nonhuman animals may be complicated when you also consider the effects on wild animals.
I do think it’s plausible McKinsey and Goldman have done and do more good than harm for humans in the short term, based on the arguments you give, but I don’t have a strong view either way. It could depend largely on whether (and how much) raising people’s consumption levels makes them better off overall in the places where people are most affected by these companies. Measures of well-being do seem to correlate positively with income/wealth/consumption at the individual level, and I’d guess also at the aggregate level for developing countries, but I’d guess not for developed countries, or at best weakly. There are negative externalities of increasing an individual’s income on others’ life satisfaction, although it’s possible a large share of this is due to rescaling (people adjusting how they use the response scale), not people actually judging their lives to be worse in absolute terms than otherwise. See:
Haushofer, J., Reisinger, J., & Shapiro, J. (2019). Is your gain my pain? Effects of relative income and inequality on psychological well-being.
Based on GiveDirectly in Kenya. They had multiple measures of wellbeing, but negative effects were only observed for life satisfaction for non-recipient households of cash transfers in the same village. See Table A5.
This table from Veenhoven, R. (2019). The Origins of Happiness: The Science of Well-Being over the Life Course, reproduced in this post.
This graph, reproduced in this post.
Other writing on the Easterlin Paradox.
Some companies may also contribute to relative inequality or even counterfactually make the median or poor person absolutely poorer through their political activities.
The categories of things I’m optimistic about for human welfare in the short to medium term are:
Things that save us time, so we can spend more time on things that actually make us better off.
Things that improve or protect our health (including mental health).
Things that make us (feel) safer/more secure (physically, financially, etc.).
Things that make us more confident, but without substantial net negative externalities (negative externalities may come from positional goods, costly signaling, peer pressure).
Things that help us make better decisions, without important negative effects.
I’m neutral to optimistic about these (possibly neutral because they may just replace cheaper alternatives that would have been just as good):
In-person activities with friends/family.
Things for hobbies or projects.
Restaurants.
I’m about neutral and pretty uncertain about screen-based entertainment (TV, movies, video games), and recreational substances that aren’t extremely addictive or harmful (alcohol, marijuana).
I’m pessimistic about:
Social media.
Status-signaling goods/positional goods/luxuries.
Processed foods.
Cigarettes.
I also wish all the EA Funds and Open Phil would do this/make their numbers more accessible.
[Summary] Impacts of Animal Well-Being and Welfare Media on Meat Demand
Types of subjective welfare
This is my first year donating; I had been earning to give until now. I welcome feedback.
My general plan is to support animal welfare, specifically intervention and (sub-)cause prioritization research, international movement growth and the current best-looking interventions, filtered through the judgment of full-time researchers/grantmakers.
I donated $7K (Canadian) to the EA Animal Welfare Fund about a month ago. I think they’re the best positioned to identify otherwise neglected animal welfare funding opportunities when evidence is relatively scarce, given that their grantmakers work at several different animal protection orgs and Lewis Bollard has years of grantmaking experience.
I’m looking at donating another $30-40K (Canadian), to be split primarily among the following groups, roughly in decreasing order of the proportion of funding, although I haven’t decided on the exact amounts:
1. ACE’s Recommended Charity Fund. I think the EAA community’s research supporting corporate campaigns, and ACE’s research specifically, have improved considerably in recent years, so I’m pretty confident in their chosen charities working on these. I’m also happy to see expansion into countries previously neglected by EAA funding, and support for further research.
2. Rethink Priorities. I’ve been consistently impressed by their research for animals so far, and I’m keen to see further research, especially on ballot initiatives, for which I’m pretty optimistic. Also, it looks like they’ve got a lot of room for funding, and it would be pretty cool if they hired Peter Hurford full-time. Btw, they have an AMA going on now.
3. Charity Entrepreneurship. I’ve also been very impressed by their research for animals so far, both exploratory and in-depth, including their cluster-thinking approach. I hope to see more of it, and any new animal welfare charities they might start.
4. Possibly the EA Animal Welfare Fund again.
5. RC Forward. Both for my own donations and as a public good for EAs in Canada, since they allow Canadians to get tax credits for donations to EA charities. More here and here.
It’s worth noting that Rethink Priorities and Charity Entrepreneurship have each received funding from Open Philanthropy Project (Farm Animal Welfare) and EA Funds recently; RP from the Animal Welfare Fund and CE most recently from the Meta Fund (and previously from the Animal Welfare Fund).
I have a few other research orgs in mind, and I might also donate to Sentience Politics, for their campaign to support the referendum to end factory farming in Switzerland (some discussion here on Facebook). I’m also wondering about Veganuary, but I’m not in a good position to judge their counterfactual impact from the numbers they present.
What are your main takeaways and ways forward from the pretty pessimistic report on cultivated meat Open Phil commissioned?