Because of this heavy-tailed distribution of interventions
Is it actually heavy-tailed? It looks like an ordered bar chart, not a histogram, so it’s hard to tell what the tails are like.
What do you think of the Bayesian solution, where you shrink your EV estimate towards a prior (thereby avoiding the fanatical outcomes)?
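Roughly the kind of thing I have in mind (a toy normal-normal shrinkage model, with numbers I made up purely for illustration):

```python
# Toy normal-normal shrinkage: a noisy EV estimate gets pulled toward the prior,
# and the noisier the estimate, the harder it gets pulled back.
def shrunk_ev(estimate, se, prior_mean, prior_sd):
    prior_precision = 1 / prior_sd**2
    data_precision = 1 / se**2
    return (prior_precision * prior_mean + data_precision * estimate) / (prior_precision + data_precision)

# A modest, well-measured intervention barely moves:
print(shrunk_ev(estimate=10, se=2, prior_mean=5, prior_sd=10))     # ~9.8

# A 'fanatical' claim with huge uncertainty collapses back to the prior:
print(shrunk_ev(estimate=1e6, se=1e6, prior_mean=5, prior_sd=10))  # ~5.0
```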
The three groups have completely converged by the end of the 180 day period
I find this surprising. Why don’t the treated individuals stay on a permanently higher trajectory? Do they have a social reference point, and since they’re ahead of their peers, they stop trying as hard?
Is the difference between actualism and necessitarianism that actualism cares about both (1) people who exist as a result of our choices, and (2) people who exist regardless of our choices; whereas necessitarianism cares only about (2)?
I wonder if we can back out what assumptions the ‘peace pact’ approach is making about these exchange rates. They are making allocations across cause areas, so they are implicitly using an exchange rate.
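As a toy example (numbers entirely hypothetical): if the allocator is content with the current split, the marginal dollar in each cause area should look equally good to them, which pins down an implied exchange rate.

```python
# Back out the implied exchange rate from marginal cost-effectiveness at the current allocation.
# Both figures below are made up, purely for illustration.
marginal_cost_per_present_life = 5000   # $ per life saved at the global health margin
marginal_cost_per_future_life = 0.25    # $ per expected future life at the x-risk margin

# Indifference between the two marginal dollars implies:
implied_rate = marginal_cost_per_present_life / marginal_cost_per_future_life
print(f"1 present life is valued like {implied_rate:,.0f} future lives")  # 20,000
```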
I get the weak impression that worldview diversification (partially) started as an approximation to expected value, and ended up being more of a peace pact between different cause areas. This peace pact disincentivizes comparisons between giving in different cause areas, which then leads to their marginal values getting out of sync.
Do you think there’s an optimal ‘exchange rate’ between causes (eg. present vs future lives, animal vs human lives), and that we should just do our best to approximate it?
Have you seen this?
If we don’t kill ourselves in the next few centuries or millennia, almost all humans that will ever exist will live in the future.
The idea is that, after a few millennia, we’ll have spread out enough to reduce extinction risks to ~0?
Nice work! Sounds like movement building is very important.
Do you disagree with FTX funding lead elimination instead of marginal x-risk interventions?
I happen to disagree that possible interventions that greatly improve the expectation of the long-term future will soon all be taken.
What do you think about MacAskill’s claim that “there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear.”?
Do you think FTX funding lead elimination is a mistake, and that they should do patient philanthropy instead?
Also, how are you defining “longtermist” here? You seem to be using it to mean “focused on x-risk”.
I think that these factors might be making it socially harder to be a non-longtermist who engages with the EA community, and that this is an important but missing part of the ongoing discussion about how EA community norms are changing.
Although note that Will MacAskill supports lead elimination from a broad longtermist perspective:
Well, it’s because there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear. Whereas something in this broad longtermist area — like reducing people’s exposure to lead, improving brain and other health development — especially if it’s like, “We’re actually making real concrete progress on this, on really quite a small budget as well,” that just looks really good. We can just fund this and it’s no downside as well. And I think that’s something that people might not appreciate: just how much that sort of work is valued, even by the most hardcore longtermists.
But again, whether non-extinction catastrophe or extinction catastrophe, if the probabilities are high enough, then both NTs and LTs will be maxing out their budgets, and will agree on policy. It’s only when the probabilities are tiny that you get differences in optimal policy.
Appreciate your support!
Using $x = K^\alpha L^\beta$ with $\alpha + \beta = 1$ in $\text{risk} = \Phi(-x)$ is assuming constant returns to scale. If you have $\alpha + \beta < 1$, you get diminishing returns.
Messing around with some Python code:
from scipy.stats import norm

def risk_reduction(K, L, alpha, beta):
    # Risk is modelled as Phi(-K^alpha * L^beta); "expected value" here is its reciprocal.
    risk = norm.cdf(-(K**alpha) * (L**beta))
    risk_2x = norm.cdf(-((2 * K)**alpha) * (L**beta))  # same model with capital K doubled
    print('risk:', risk)
    print('expected value:', 1 / risk)
    print('risk (2x):', risk_2x)
    print('expected value (2x):', 1 / risk_2x)
    print('ratio:', (1 / risk_2x) / (1 / risk))

# Constant returns to scale (alpha + beta = 1)
K, L = 0.5, 0.5
alpha, beta = 0.5, 0.5
risk_reduction(K, L, alpha, beta)

# Diminishing returns to scale (alpha + beta < 1)
K, L = 0.5, 0.5
alpha, beta = 0.2, 0.2
risk_reduction(K, L, alpha, beta)

# More labour, diminishing returns
K, L = 0.5, 20
alpha, beta = 0.2, 0.2
risk_reduction(K, L, alpha, beta)

# More labour, constant returns
K, L = 0.5, 20
alpha, beta = 0.5, 0.5
risk_reduction(K, L, alpha, beta)
Are you using $\text{risk} = \Phi(-K^\alpha L^\beta)$?
I don’t find this framing very useful. The importance-tractability-crowdedness framework gives us a sophisticated method for evaluating causes (allocate resources according to marginal utility per dollar), which is flexible enough to account for diminishing returns as funding increases.
But the longtermist framework collapses this down to a binary: is this the best intervention or not?
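To illustrate what I mean by flexible: here’s a toy allocation under diminishing (log) returns, with parameters I made up. Nothing here is a binary “best cause or not” call; money just flows to whichever cause has the highest marginal utility per dollar at its current funding level.

```python
import numpy as np

def allocate(budget, scale, step=1.0):
    """Greedy allocation under u_i(x) = scale_i * log(1 + x_i): each unit of budget
    goes to the cause with the highest marginal utility per dollar right now."""
    x = np.zeros(len(scale))
    for _ in range(int(budget / step)):
        marginal = scale / (1 + x)   # derivative of scale_i * log(1 + x_i)
        x[np.argmax(marginal)] += step
    return x

# Three causes with very different impact scales (made-up numbers).
print(allocate(budget=100, scale=np.array([10.0, 5.0, 1.0])))
# The best cause gets most of the budget but not all of it: once its returns have
# diminished enough, the marginal dollar is better spent elsewhere.
```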