I happen to think that relative utility is very clustered at the tails, whereas expected value is more spread out. This intuition comes from the startup world.
However, it’s important to note that I’ve also developed a motivation system that keeps me from finding this discouraging! Once I started thinking of opportunities for doing good in expected-value terms, and of concrete examples of my contributions in absolute rather than relative terms, neither of these facts was upsetting or discouraging.
Some relevant articles:
But if it took on average 50,000 events for one such key introduction to happen, then we might as well give up on having events, or find a better way to do it. Otherwise we are just wasting everyone’s time.
But all the other events were impactful, just not compared to those one or two events. The goal of having all the events is the hope that one of them turns out to be the 1-in-50,000 event with ridiculously outsized impact. It’s high expected value even if, comparatively, all the other events have low impact. And again, that’s comparatively. Compared to, say, most other events, an event on AI safety is ridiculously high impact.
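To make that concrete, here is a toy expected-value calculation; the numbers are purely illustrative and not from the thread:

```python
# Toy numbers, purely illustrative: 50,000 events, one of which has wildly
# outsized impact (e.g. it seeds a key idea or a key introduction).
n_events = 50_000
normal_impact = 1            # arbitrary impact units for an ordinary event
outlier_impact = 1_000_000   # the single outlier event

expected_impact = ((n_events - 1) * normal_impact + outlier_impact) / n_events
print(expected_impact)  # ~21 units per event: the average is dominated by the one
                        # outlier, even though 49,999 of 50,000 events look "low impact"
```

On these made-up numbers, whether the portfolio of events is worth running turns on whether ~21 units per event beats the cost of holding one, not on whether a typical event looks impressive on its own.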
It can’t take more than ~50 events for every AI safety researcher to get to know each other.
This is true; much of the networking impact of events is frontloaded.
Nope, 1/50,000 seems like a realistic ratio of very-high-impact events to normal-impact events.
Would you say that events are low impact?
I think most events will be low impact compared to the highest-impact events. Let’s say you have 100,000 AI safety events. I think most of them will be comparatively low impact, but one in particular ends up creating the seed of a key idea in AI safety, and another ends up introducing a key pair of researchers who go on to do great things together.
Now, if I want to pay those two highest-impact events in proportion to their impact relative to all the other events, I have a few options (see the toy sketch after this list):
1. Pay all of the events based on their expected impact prior to the events, so the money evens out.
2. Pay a very small amount of money to the other events, so I can afford to pay the two events that had many orders of magnitude higher impact.
3. Only buy a small fraction of the impact of the very-high-impact events, so I have money left over to pay the small events and can reward them all at the same rate per unit of impact.
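Here is a sketch of how these three options differ in practice, using made-up numbers (100,000 events, two outliers, a $1M purchase budget; none of these figures come from the discussion):

```python
# Toy comparison of the three payout schemes. All numbers are made up.
n_events = 100_000
normal_impact = 1            # impact units for an ordinary event
outlier_impact = 1_000_000   # each of the two outlier events
budget = 1_000_000           # total money available for impact purchases

# Option 1: pay everyone on (equal) expected impact before the event.
ex_ante_payout = budget / n_events  # $10 per event

# Option 2: pay in proportion to realized impact; the outliers absorb nearly everything.
total_impact = (n_events - 2) * normal_impact + 2 * outlier_impact
price_per_unit = budget / total_impact
normal_payout = normal_impact * price_per_unit    # ~$0.48 per ordinary event
outlier_payout = outlier_impact * price_per_unit  # ~$476,000 per outlier

# Option 3: buy only a small fraction of each outlier's impact, at the same
# price per unit of impact as everyone else.
fraction_bought = 0.01
purchased_impact = (n_events - 2) * normal_impact + 2 * fraction_bought * outlier_impact
price_per_unit_3 = budget / purchased_impact
normal_payout_3 = normal_impact * price_per_unit_3                      # ~$8.33
outlier_payout_3 = fraction_bought * outlier_impact * price_per_unit_3  # ~$83,000

print(ex_ante_payout, normal_payout, outlier_payout, normal_payout_3, outlier_payout_3)
```

On these assumed numbers, option 2 leaves the ordinary events with pennies, while option 3 keeps them meaningfully rewarded at the cost of capping what the outliers can receive.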
Since there will be a limited amount of money, what is your motivation for giving the low-impact projects anything at all?
I’m not sure. The vibe I got from the original post was that it would be good to have small rewards for small impact projects?
I think the high-impact projects are often very risky and will most likely turn out to have low impact. Perhaps it makes sense to compensate people for taking the hit for society, so that the 1 in 1,000,000 of the people who start such projects can go on to have high impact?
For an impact purchase, the amount of money is decided based on how good the impact of the project was.
I’m curious about how exactly this would work. My prior is that impact is clustered at the tails.
This means that there will frequently be small-impact projects, and very occasionally large-impact projects. My guess is that if you want to be able to incentivize the frequent small-impact projects at all, you won’t be able to afford the large-impact projects, because they are many orders of magnitude larger in impact. You could just purchase part of their impact, but in practice this means there’s a cap on how much you can receive from an impact purchase.
Maybe a cap is fine, and you know that all you’ll ever get from an impact purchase is, for instance, $50,000, and the prestige comes from what % of the impact they bought at that price.
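As a rough illustration of that cap (the valuations below are assumed, not from the original post): with a fixed $50,000 ceiling, the prestige signal becomes the fraction of a project’s assessed impact that $50,000 actually bought.

```python
# Illustrative only: how a fixed cap turns into a "% of impact purchased" signal.
cap = 50_000
assessed_values = [60_000, 500_000, 5_000_000]  # hypothetical impact valuations
for value in assessed_values:
    fraction = min(cap / value, 1.0)
    print(f"impact valued at ${value:,}: the cap buys {fraction:.1%} of it")
```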
Perhaps Dereke Bruce had the right of it here:
“In order to keep a true perspective of one’s importance, everyone should have a dog that will worship him and a cat that will ignore him.”
I propose that the best thing we can do for the long term future is to create positive flow-through effects now. Specifically, if we increase people’s overall sense of well-being and altruistic tendencies, this will lead to more altruistic policies and organizations, which will lead to a better future.
Therefore, I propose a new top EA cause for 2020: Distributing Puppies
Puppies decrease individual loneliness, allowing a more global worldview.
Puppies model unconditional love and altruism, creating a flowthrough to their owners.
Puppies with good owners are, on their own, just sources of positive utility, increasing global welfare.
You might be interested in this same question that was asked last June:
Something else in the vein of “things EAs and rationalists should be paying attention to in regards to Corona.”
There’s a common failure mode in large human systems where one outlier causes us to create a rule that produces a worse equilibrium. In The Personal MBA, Josh Kaufman talks about someone taking advantage of a company’s “buy any book you want” rule, and the company responding by making it so that no one can get any free books.
This same pattern has happened before in the US after 9/11: we created a whole bunch of security theater that caused more suffering for everyone, and gave the government far more power and far less oversight than is safe, because we overreacted to prevent one bad event, not considering the counterfactual, invisible things we would be losing.
This will happen again with Corona: things will be put in place that are maybe good at preventing pandemics (or worse, at making people think they’re safe from pandemics), but that create a million trivial inconveniences every day, which add up to more strife than they’re worth.
These types of rules are very hard to repeal after the fact because of absence blindness. Someone needs to do the work of calculating the cost/benefit ratio BEFORE they get implemented, and then build a narrative convincing enough to counter what seem like obvious, common-sense measures given the current climate and devastation.
Curious about what you think is weird in the framing?
The problem framing is basically spot on, talking about how our institutions drive our lives. Like I said, basically all of the points get it right and apply to broader systemic change like RadX, DAOs, etc.
Then, even though the problem is framed perfectly, the solution section almost universally talks about narrow interventions related to individual decision-making, like improving calibration.
No, I actually think the post is ignoring x-risk as a cause area to focus on now. That makes sense under certain assumptions and heuristics (e.g., if you think near-term x-risk is highly unlikely, or you’re using absurdity heuristics). I think I was more giving my argument for how this post could be compatible with Bostrom.
the post focuses on human welfare,
It seems to me that there’s a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.
I’m also very interested in how increased economic growth impacts existential risk.
At one point I was focused on accelerating innovation, but I have come to be more worried that doing so increases x-risk (I have a question somewhere else on the post that gets at this).
I’ve since added a constraint into my innovation acceleration efforts, and now am basically focused on “asymmetric, wisdom-constrained innovation.”
Let’s say you believe two things:
1. Growth will have flowthrough effects on existential risk.
2. You have a comparative advantage in working on growth rather than on x-risk directly.
You can agree with Bostrom that x-risk is important, and also think that you should be working on growth. This is something very close to my personal view on what I’m working on.
I think the framing is weird because of EA’s allergy to systemic change, but I think in practice all of the points in that cause profile apply to broader change.
It’s been pointed out to me on LessWrong that depressions actually save lives, which makes the “two curves” narrative much harder to make.
This argument has the same problem as recommending that people not wear masks, though: if you go from “save lives, save lives, don’t worry about economic impacts” to “worry about economic impacts, it’s as important as quarantine,” you lose credibility.
You have to find a way to make nuance emotional and sticky enough to hit, rather than forgoing nuance as an information hazard; otherwise you lose the ability to influence at all.
This was the source of my “two curves” narrative, and I assume it would be the approach that others would take if that were the reason for their reluctance to discuss it.
Here’s an analysis by 80k. https://80000hours.org/problem-profiles/improving-institutional-decision-making/
I was thinking a bit about how to make it real for people that the quarantine depressing the economy kills people just like the coronavirus does.
I was thinking about finding a simple, good-enough correlation between economic depression and death, then creating a “flattening the curve” graphic that shows how many deaths we would save by stopping the economic freefall at different points. Combining this with clear narratives about recession could be quite effective.
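A minimal sketch of what that graphic could look like, assuming a made-up linear relationship between GDP decline and excess deaths; the `deaths_per_point` figure is a placeholder that would need to be replaced with a real estimate:

```python
# Sketch of a "flatten the economic curve" graphic. All numbers are placeholders.
import numpy as np
import matplotlib.pyplot as plt

gdp_decline = np.linspace(0, 20, 200)   # percentage points of GDP decline
deaths_per_point = 10_000               # PLACEHOLDER: excess deaths per point of decline
excess_deaths = gdp_decline * deaths_per_point

plt.plot(gdp_decline, excess_deaths)
plt.axvline(5, linestyle="--", label="freefall stopped early (hypothetical)")
plt.axvline(15, linestyle=":", label="freefall stopped late (hypothetical)")
plt.xlabel("GDP decline (percentage points)")
plt.ylabel("Estimated excess deaths from the downturn")
plt.legend()
plt.show()
```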
On the other hand, I think it’s quite plausible that this particular problem will take care of itself. When people begin to experience the depression, will the young people who are the economic engine of the country really continue to stay home and quarantine themselves? It seems quite likely that we’ll simply become stratified for a while, with young, healthy people breaking quarantine while the older and immunocompromised stay home.
But getting the timing of this right is everything. Striking the right balance between “deaths from economic freefall” and “deaths from an overloaded medical system” is a balancing act; going too far in either direction results in hundreds of thousands of unnecessary deaths.
Then I got to thinking about the effect of a depressed economy on x-risks from AI. Because the funding for AI safety is
1. Mostly in non-profits
2. Orders of magnitude smaller than funding for AI capabilities
It’s quite likely that the funding for AI safety is more inelastic in depressions than the funding for AI capabilities. This may answer the puzzle of why more EAs and rationalists aren’t speaking cogently about the tradeoffs between depression and lives saved from Corona: they have gone through this same train of thought, and decided that arguing for preventing a depression is an information hazard.
I think this is actually quite a complex question. I think it’s clear that there’s always a chance of value drift, so you can never put the chance of “giving up” at 0. If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.
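One way to see the front-loading argument is with a toy model under an assumed constant annual drift probability (certainly an oversimplification, and the probabilities below are mine, not from the data linked):

```python
# Toy model: if value drift happens with a constant annual probability p,
# how much of a long giving plan do you actually expect to complete?
def expected_years_completed(p, horizon=40):
    # Year k of giving only happens if you haven't drifted yet: probability (1 - p)**k.
    return sum((1 - p) ** k for k in range(horizon))

for p in (0.02, 0.05, 0.10):
    print(f"annual drift probability {p:.0%}: "
          f"expect ~{expected_years_completed(p):.1f} of 40 planned years of giving")
```

The higher the drift probability, the smaller the fraction of a 40-year plan you expect to complete (roughly 28, 17, and 10 years in the three cases above), which is exactly the intuition behind moving donations earlier.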
If we take the data from here with 0 grains of salt, you’re actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than at 10% (~63.64% chance of value drift). There are many reasons this might be, such as consistency and justification effects, but the point is that the object-level question is complicated :).