Even if you’re in an Anglophone country, you’ll need to be “bilingual” between local and tech-startup norms. At Wave, our internal culture emphasizes honesty, transparency and autonomy, which is very different from a typical, say, Senegalese work environment.
I’m curious to hear more about this. Can you give some examples of how the norms differ?
More generally, how feasible is it to export Silicon Valley’s high product standards?
This China scholar is pessimistic about the recent pivot to more state intervention.
I don’t see that IMR poses any challenge to the standard EA cause prioritization method. IMR can be easily modeled as a tractability function that is increasing for some part of its domain. Depending on funding levels, causes with IMR can have the highest marginal utility per dollar, and hence would be prioritized according to the standard framework.
Yes, the difficult part is applying the ITC framework in practice; I don’t have any special insight there. But the goal is to estimate importance and the tractability function for different causes.
You can see how 80k tries to rank causes here.
The Google Docs method worked, but you can't control the image size.
I'm now using Imgur, which should be recommended to authors somewhere here.
Okay, photos uploaded to Dropbox instead of Google Photos.
For future reference, this is what worked for me, using Dropbox:
1. Share → Create link
2. Open the link in an incognito window (a regular browser window doesn't work)
3. Copy the image address
4. Load it into the post
Note that 80k sometimes takes a softer tone, eg here:
An individual can only focus on one or two areas at a time, but a large group of people working together should most likely spread out over several.
When this happens, there are additional factors to consider when choosing a problem area. Instead of aiming to identify the single most pressing issue at the margin, the aim is to work out:
1. The ideal allocation of people over issues, and which direction that allocation should move in.
2. Where your comparative advantage lies compared to others in the group.
We call this the ‘portfolio approach’.
Yes, you’re right that altruists have a more encompassing utility function, since they focus on social instead of individual welfare. But even if altruists will invest more in elections than self-interested individuals, it doesn’t follow that it’s a good investment overall.
Sorry for being harsh, but my honest first impression was “this makes EAs look bad to outsiders”.
To add to Ben’s argument, uncertainty about which cause is the best will rationalize diversifying across multiple causes. If we use confidence intervals instead of point estimates, it’s plausible that the top causes will have overlapping confidence intervals.
From an AI policy standpoint, having the leader of the free world on board would be big.
Can you elaborate on this?
This opportunity is potentially one that makes AI policy money constrained rather than talent constrained for the moment.
Is your claim that AI policy is currently talent-constrained, and having Yang as president would lead to more people working on it, thereby making it money-constrained?
It also seems surprisingly easy to have an outsize influence in the money-in-politics landscape. Peter Thiel’s early investment in Trump looks brilliant today (at accomplishing the terrible goal of installing a protectionist).
This is naive. The low amount of money in politics is presumably an equilibrium outcome, and not because everyone has failed to consider the option of buying elections. And the reasonable conclusion is that Thiel got lucky, given how close the election was, not that he single-handedly caused Trump’s victory.
Oops, I was wrong. I had skipped the intro section and was looking at the definitions later in the article.
Importance = good done / % of problem solved
Neglectedness = % increase in resources / extra $
I don’t see how you get this from the 80k article. On my reading, their definition of importance is just the amount of good done (rather than good done per % of problem solved), and their definition of neglectedness is just the level of resources (rather than the percentage change per dollar). You should be clear that you’re giving an interpretation of their model, and not just copying it.
This is how I think about the ITN framework:
What we ultimately care about is marginal utility per dollar, MU/$ (or marginal cost-effectiveness). ITN is a way of proxying MU/$ when we can’t easily estimate it directly.
Importance = utility gained from solving the entire problem.
Tractability = percent of problem solved per dollar.
Neglectedness = amount of resources allocated to the problem.
Note that tractability can be a function of neglectedness: the percentage of the problem solved per dollar will likely vary with how many resources are already allocated. This captures diminishing returns, since we expect the first dollar spent on a problem to do more to solve it than the millionth dollar.
Then to get MU/$ as a function of neglectedness, we multiply importance and tractability:
MU/$ = utility(total problem) × (% solved/$), where the second factor is a function f(resources). This gives MU/$ as a function of resources, so to figure out where we are on the MU/$ curve, we plug in the current level of resources (neglectedness).
Here’s an example without diminishing returns: suppose solving an entire problem increases utility by 100 utils, so importance = 100 utils. And suppose tractability is 1% of the problem solved per dollar. Note that this doesn’t vary with resources spent, so there aren’t diminishing returns. Then MU/$ = 100 utils * 0.01/$ = 1 util/$. Here, neglectedness (defined as resources spent) doesn’t matter, except when spending hits $100 and the problem is fully solved.
Now let’s introduce diminishing returns. Let’s denote resources spent by x. As before, importance = 100 utils. But now, suppose tractability is (1/x)% of the problem solved per dollar. Now we have diminishing returns: the first dollar solves 1% of the problem, but the tenth dollar solves 0.1%. Here MU/$ = 100 utils * (1/x)%/$ = 1/x utils/$. To evaluate the MU/$ of this problem, we need to know how neglected it is, captured by how many resources, x, have already been spent.
Hence, importance and tractability define MU/$ as a function of neglectedness, and neglectedness determines the specific value of MU/$.
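The two worked examples above can be put into a few lines of code. This is just my own sketch of the framework as described here, with the function and variable names (`marginal_utility_per_dollar`, `IMPORTANCE`) made up for illustration:

```python
def marginal_utility_per_dollar(importance, tractability, resources):
    """MU/$ = importance × tractability(resources).

    importance:   utils gained from solving the entire problem.
    tractability: function mapping resources already spent to the
                  fraction of the problem solved per additional dollar.
    resources:    neglectedness, i.e. dollars already spent.
    """
    return importance * tractability(resources)

IMPORTANCE = 100  # utils from solving the whole problem

# Example 1: constant returns -- 1% of the problem solved per dollar,
# regardless of spending. MU/$ is 100 × 0.01 = 1 util/$ everywhere.
constant = lambda x: 0.01
print(marginal_utility_per_dollar(IMPORTANCE, constant, resources=1))
print(marginal_utility_per_dollar(IMPORTANCE, constant, resources=50))

# Example 2: diminishing returns -- (1/x)% of the problem solved per
# dollar. The first dollar yields 1 util/$; the tenth yields 0.1 util/$.
diminishing = lambda x: 0.01 / x
print(marginal_utility_per_dollar(IMPORTANCE, diminishing, resources=1))
print(marginal_utility_per_dollar(IMPORTANCE, diminishing, resources=10))
```

The point of the last two calls is that, under diminishing returns, you can't evaluate MU/$ without plugging in neglectedness.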
The intuition behind the cost effectiveness of charter cities is that economic growth compounds, improving standards of living. Therefore, over a sufficiently long time horizon, any growth change will dwarf a level change, like those attributable to deworming or anti-malaria efforts.
I think this framing is misleading. A “growth change” just is repeated (increasing) level changes. The figure on p.14 says that constant 6.5% growth over 50 years will increase GDP per capita to $90k. This is an accounting identity—there’s no new information in “6.5% growth over 50 years” that’s not in “GDP per capita increased from $4k to $90k over 50 years”.
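As a sanity check on that arithmetic (my own calculation, using the $4k starting figure from the chart):

```python
# Compounding $4k GDP per capita at 6.5% annually for 50 years
# lands in the ballpark of the report's ~$90k figure.
start = 4_000
growth = 0.065
years = 50
end = start * (1 + growth) ** years
print(f"${end:,.0f}")
```

which is exactly why I say the growth framing adds no information beyond the endpoint levels.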
I’d prefer to have the discussion purely in levels, with much more detail on what specifically is increasing GDP. For example: “GDP will increase by $X million over the first five years, driven by increases of $A, $B, $C in sectors 1, 2, 3; there will be N1 new firms and N2 new residents...” If you can assume a growth rate, you can fill in these details. Also, I think the assumption of a constant growth rate over fifty years is too strong.
A 35 percent marginal contribution by CCI to the success of a charter city project is also a conservative estimate. CCI is uniquely positioned to bring together government officials, developers, and other interested parties and offer the expertise to plan a charter city and implement a new legal system.
I’d like to see a lot more discussion of what CCI’s contribution is. This sounds like a political slogan.
The 100% P(success) is especially unreasonable given the failed attempts by Paul Romer in Honduras and Madagascar.
Some quick points:
The scatterplots would look nicer with hollow circles.
we see how the output depends on a particular input even in the face of variations in all the other inputs—we don’t hold everything else constant. In other words, this is a global sensitivity analysis.
I’m a bit confused. In the GiveDirectly case for ‘value of increasing consumption’, you’re still holding the discount rate constant, right?
To address the recurring caveat, I wonder if we could plot the posterior mode/stdev against the length of the input confidence intervals. Basically, taking GiveWell's point estimate as the prior mean, how do the cost-effectiveness estimates (and their uncertainty) change as we vary our uncertainty over the input parameters?
More to come!
Yes, how does the posterior mode differ from GiveWell’s point estimates, and how does this vary as a function of the input uncertainty (confidence interval length)?
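To make the suggestion concrete, here's a minimal sketch of the kind of exercise I have in mind. The model and all names (`toy_cost_effectiveness`, `output_spread`, the 1.0 point estimate, the 4% discount rate) are made up for illustration; this is not GiveWell's actual CEA:

```python
import random
import statistics

def toy_cost_effectiveness(consumption_value, discount_rate):
    # Stand-in model: benefit scales with the value of increased
    # consumption and shrinks with the discount rate.
    return consumption_value / (1 + discount_rate)

def output_spread(ci_width, n=10_000, seed=0):
    """Sample inputs around a point estimate with the given 95% CI
    width and summarize the resulting cost-effectiveness spread."""
    rng = random.Random(seed)
    point = 1.0  # stand-in for GiveWell's point estimate (prior mean)
    samples = []
    for _ in range(n):
        # Normal input centered on the point estimate;
        # a 95% CI of width w corresponds to sd = w / 3.92.
        cv = rng.gauss(point, ci_width / 3.92)
        dr = rng.gauss(0.04, 0.01)  # discount rate, held modestly uncertain
        samples.append(toy_cost_effectiveness(cv, dr))
    return statistics.mean(samples), statistics.stdev(samples)

# Wider input confidence intervals should yield a wider output spread.
for width in (0.1, 0.5, 1.0):
    mean, sd = output_spread(width)
    print(f"input 95% CI width {width}: output mean {mean:.3f}, sd {sd:.3f}")
```

The interesting plot would then be output sd (and how far the mode drifts from the point estimate) as a function of input CI length, per charity.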
Yes, some symbolic activities will turn out to be high-impact, but we have to beware survivorship bias (ie, think of all the symbolic activities that went nowhere).