Building bespoke quantitative models to support decision-makers in AI and bio. Right now that means: forecasting capabilities gains due to post-training enhancements on top of frontier foundation models, and estimating the annual burden of airborne disease in the US.
Joel Becker
Nice post!
One possibly obvious implication that I think is missing: when processes are multiplicative rather than additive, it is much more important to avoid zeros at some point in the process. There’s an analogy here with (e.g.) the O-ring theory of economic development.
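To make the zero-avoidance point concrete, here is a minimal simulation (my own illustration, with made-up stage parameters): when stages combine multiplicatively, a single zero wipes out the whole output, so expected value collapses far more than in the additive case.

```python
import random

random.seed(0)

def stage_quality():
    # Each stage fails outright 10% of the time; otherwise quality ~ Uniform(0.5, 1).
    return 0.0 if random.random() < 0.1 else random.uniform(0.5, 1.0)

def product(xs):
    p = 1.0
    for x in xs:
        p *= x
    return p

n_stages, n_trials = 5, 100_000

# Additive process: a zero at one stage only removes that stage's contribution.
additive = sum(sum(stage_quality() for _ in range(n_stages))
               for _ in range(n_trials)) / n_trials

# Multiplicative (O-ring-style) process: a single zero kills the whole product.
multiplicative = sum(product(stage_quality() for _ in range(n_stages))
                     for _ in range(n_trials)) / n_trials

print(additive)        # ≈ 5 × 0.675 ≈ 3.4
print(multiplicative)  # ≈ 0.675**5 ≈ 0.14, dominated by zero-avoidance
```

The additive process loses only one stage's contribution per failure; the multiplicative one pays the full cost of any single zero, which is why avoiding zeros matters so much more there.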
Apply to join SHELTER Weekend this August
Sorry about that Ankush! Could you possibly email the form entries + CV to joelhbkr@gmail.com?
More detail to come! We’re expecting to run from the evening of the 4th until the evening of the 8th; the 5th-7th would be mandatory (except for exceptional circumstances, which we can discuss), the 4th and 8th encouraged but optional.
The google doc link doesn’t seem to be working for me.
Ah! Thank you! Didn’t think you needed to copy literal text, rather than “Copy Link Address.”
Reslab Request for Information: EA hardware projects
Thank you, changed!
Sorry about that! Changed in TL;DR to “physical engineering projects.”
(Note that these prototypes could plausibly use electronics etc., so it might not make sense to rule out computer hardware entirely, although of course we want to be clear that the scope is broader.)
Quantifying the impact of grantmaking career paths
Thank you! On your points:
Probability of getting any Open Philanthropy-equivalent job
probability_get_any_job depends a lot on the person.
Totally agree! For reference, here’s what impact looks like if you set probability_get_any_job to 0.01 or 0.3.
Broader value due to trying for grantmaking jobs
note that if you attempt to do grantmaking but don’t get a job, you still get most of the impact of your default career. So the “try to do grantmaking” action is worth a lot more than P(become a grantmaker)*EV(grantmaking).
Yes! This is what I am trying to get at by: “There might be over- or under-counting issues. For instance, I guess that the average number of years employed will be 11. This corresponds to me excluding the impact of the grantmaker’s post-grantmaking career from my definition of career path.” But modeling out the value of career paths attempted instead and their respective probabilities sounds awful :P
Should be fine as long as you don’t (1) compare different career paths using very dissimilar career path lengths, or (2) think exit options have different EV ex ante, conditional on the person’s characteristics. (2) looks fishy—I’m sure there’s much room for improvement there.
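The point that failed attempts mostly fall back to the default career can be made explicit with a two-branch expected-value calculation (all numbers here are invented for illustration, not the post’s estimates):

```python
# Hedged illustration with made-up numbers.
p_get_job = 0.07        # probability of landing a grantmaking job
ev_grantmaking = 100.0  # value of the grantmaking career (arbitrary units)
ev_default = 10.0       # value of the default career

# Naive: only count the success branch.
naive = p_get_job * ev_grantmaking

# Two-branch: failed attempts mostly retain the default career's value.
ev_attempt = p_get_job * ev_grantmaking + (1 - p_get_job) * ev_default

print(naive)       # 7.0
print(ev_attempt)  # 16.3
```

Under these toy numbers the attempt is worth more than twice the naive P(become a grantmaker) × EV(grantmaking) figure.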
Sensitivity to $/x-risk basis point
I suspect that usd_per_basis_point_xrisk is doing a lot of work, and different people will have pretty different beliefs about it
Agree again! Although I am more pessimistic! I think that we can’t disagree about this parameter by much more than 1 order of magnitude, whereas there might be other parameters that have even wider scope for disagreement (or error).
For reference, here’s what impact looks like if you set usd_per_basis_point_xrisk to 100M or 1B.
Thank you for flagging!
For interested onlookers, note that my use of Linch’s numbers leads to:
~6x increase in grantmaker impact (due to non-grantmaking activities). Probably tonnes of room for disagreement.
~0.13x multiplier due to the proportion of grants where grantmakers can make a counterfactual difference. Probably not much more than 0.5-1 orders of magnitude available for disagreement.
~$300M/x-risk basis point estimate. Probably not much more than 1-2 orders of magnitude available for disagreement.
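Loosely, these adjustments combine multiplicatively. A rough point-estimate sketch (the model structure and the per-year grant volume are my assumptions, not the original Squiggle model):

```python
# Rough sketch of how the adjustments above combine (structure and the
# grants_moved figure are illustrative assumptions).
usd_per_basis_point = 300e6        # ~$300M per x-risk basis point (from above)
counterfactual_share = 0.13        # share of grants where the grantmaker matters
nongrantmaking_multiplier = 6      # boost from non-grantmaking activities
grants_moved_usd_per_year = 10e6   # assumed $ influenced per year (made up)

basis_points_per_year = (grants_moved_usd_per_year / usd_per_basis_point
                         * counterfactual_share * nongrantmaking_multiplier)
print(basis_points_per_year)  # ≈ 0.026 basis points/year under these assumptions
```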
Makes sense! Would add another 0.5 orders of magnitude vs. the unconditional estimate, so ~2 x-risk basis points.
A basis point is 0.01% absolute risk, so 0.12 basis points corresponds to a 0.0012% change in absolute risk, or a 1 in (1 / (0.0012/100)) ≈ 1 in 83,000 change in absolute risk.
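The conversion above is just unit arithmetic; spelled out:

```python
# Convert a basis-point risk change into "1 in N" odds.
basis_points = 0.12
absolute_risk = basis_points * 0.0001  # 1 basis point = 0.01% = 1e-4
one_in_n = 1 / absolute_risk
print(absolute_risk)    # 1.2e-05, i.e. 0.0012%
print(round(one_in_n))  # 83333, i.e. roughly "1 in 83,000"
```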
Answering in turn:
A 7% probability of getting an OP-equivalent grantmaking job seems high to me
It comes from this number of relevant organizations
this level of Open Philanthropy-ness of those organizations
and this probability of succeeding in any particular job application
I’m sure all of these are wrong/up for debate!
How many (impact-weighted?) roles are there, and at what orgs?
Impact weighting is done at the org-level, above. The number of applicants is the reciprocal of the fraction of grants the grantmaker is responsible for (so, on average, 1/0.082 ≈ 12, although this is driven by 25% of Open Philanthropy-equivalent organizations with smaller effective staffs).
Lots to debate here too! You can look into this yourself by copy-pasting the code into the Squiggle playground.
I’d also worry about self-selection and invitations (to apply) to become a grantmaker skewing things.
“Skewing” undersells the issue! It would totally change the calculations. You can see more in the first part of this comment.
Please submit! :)
Yes, interested in taking the over!
Removed earlier today. Well done!
I’m sure it’s going to be a challenging time for community health. Thank you so much for all the amazing work you guys do. (Evergreen, but especially pertinent this week.)
What prevents researchers from prioritising x-risk?
Questions
This proposal aims to answer the following questions:
What are researchers’ existing beliefs about existential risks?
How are their actions concerning existential risk mitigation dependent on their beliefs?
What factors might explain researchers not prioritising existential risk?
Background
Most longtermist EA-inspired organisations advocate for and support research on existential risk (among other topics). Presumably, they do so in the hope that providing information and resources to researchers will shift researchers’ views and efforts towards more impactful topics.
Yet, little is known about several factors that appear critical to this theory of change. I am not aware of work concerning:
Researchers’ existing beliefs about the prevalence of and relative concern caused by existential risks.
The extent to which providing information and research support related to existential risk affects researchers’ beliefs and downstream actions.
The barriers to researchers’ prioritising work related to existential risk.
(That said, I am sure that research exists on questions broadly analogous to those above.)
Methods
I first need a context that gives me access to many researchers. I do not yet have a clear idea of what form this context might take. Academic conferences seem most fitting, but are not practical in the short term. The obvious alternative is to conduct surveys over the internet following personal emails; in this case, one concern is that selection into and out of the data could be a significant issue.
Regardless of the exact context chosen, I will first design surveys to characterise beliefs about existential risk. In eliciting beliefs, some or all participants could be incentivised to answer with estimates as close to expert opinion as possible. (This is standard procedure in economics, although usually one is trying to elicit a participant’s best guess of something closer to ‘ground truth’.) The survey might also ask participants to estimate small but mundane risks—e.g. the probability that a randomly chosen person might be struck by lightning that day—so that survey responses can be more easily compared, and perhaps even filtered or reweighted.
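One way the mundane-risk question could feed into reweighting is sketched below. This is a hypothetical scheme of my own: the reference value for the calibration question and the log-space weighting kernel are both illustrative assumptions, not part of the proposal.

```python
import math

# Assumed "true" daily lightning-strike probability used as a calibration
# reference; illustrative only.
REFERENCE = 3e-9

def calibration_weight(answer, scale=2.0):
    # Down-weight answers that miss the reference by many orders of magnitude.
    if answer <= 0:
        return 0.0
    log_error = abs(math.log10(answer) - math.log10(REFERENCE))
    return math.exp(-(log_error / scale) ** 2)

def reweighted_mean(xrisk_answers, calibration_answers):
    # Weight each respondent's x-risk estimate by their calibration quality.
    weights = [calibration_weight(a) for a in calibration_answers]
    total = sum(weights)
    if total == 0:
        return float("nan")
    return sum(w * x for w, x in zip(weights, xrisk_answers)) / total

# Example: the third respondent's wildly off calibration answer (0.5/day)
# receives near-zero weight, pulling the mean toward calibrated respondents.
xrisk = [0.01, 0.05, 0.90]
calib = [1e-9, 1e-8, 0.5]
print(reweighted_mean(xrisk, calib))
```

Under these toy inputs the reweighted mean sits near 0.03, well below the unweighted mean of 0.32, because the poorly calibrated respondent is almost entirely discounted.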
I will also estimate participants’ preferences over trade-offs related to existential risk. This part of the survey might ask, for example: how many statistical lives saved with certainty is equivalently good to a 1% reduction in existential risk next year? (See slides 20-27 of Alsan et al. (2020).)
Next, I envision a field experiment with researchers. I will test the degree to which researchers (1) value information about existential risk and (2) respond to evidence and support in their beliefs, preferences, and actions relating to existential risk. This “evidence and support” might take the form of risk estimates from Ord (2020), and suggestions from research priorities organisations on how to reorient research towards existential risk reduction.
In the experiment, I will first measure participants’ willingness to pay (WTP) for evidence and support. Second, I will evaluate participants’ response to receiving evidence and support on (a) survey outcomes, immediately after receiving information and again some years in the future, and (b) measurable non-survey outcomes. I have yet to settle on which outcomes to include in (b), but candidates include: quantity of articles produced relating to existential risk, donations to charities relevant to existential risk, and so on.
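The experiment’s core comparison is a randomized difference in means between treated and control researchers. A toy simulation (all numbers invented; the outcome, effect size, and noise are illustrative assumptions):

```python
import random
import statistics

random.seed(1)

# Toy simulation of the experiment's core comparison: randomly assign
# researchers to receive evidence + support or not, then compare a
# downstream outcome (e.g. x-risk-related articles produced).
def simulate_outcome(treated):
    base = random.gauss(2.0, 1.0)     # baseline outcome level (arbitrary)
    effect = 0.5 if treated else 0.0  # assumed true treatment effect
    return base + effect

n = 2000
treatment = [simulate_outcome(True) for _ in range(n)]
control = [simulate_outcome(False) for _ in range(n)]

# Difference in means with a simple standard error for the difference.
ate = statistics.mean(treatment) - statistics.mean(control)
se = (statistics.variance(treatment) / n + statistics.variance(control) / n) ** 0.5
print(f"estimated effect: {ate:.2f} ± {1.96 * se:.2f}")
```

With randomization, the difference in means recovers the assumed effect (here 0.5) up to sampling noise; the long-run follow-up outcomes in (a) would be analysed the same way.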
Finally, upon analysing the data, I will explore mechanisms which might play a role in successfully shifting researchers’ focus towards existential risk. This process may include conducting follow-up experiments.
Uncertainties
I welcome feedback on anything/everything above, but some uncertainties that immediately stand out to me:
Which contexts might be most appropriate and amenable for experiments?
Which mechanisms seem most likely to be important ahead of time? (Such that I could include considerations for them in the main experiment.)
Which other trade-off questions might I include? I want this section to get at: if existential risks are made more salient and/or made to appear more tractable, how do perceptions of trade-offs (related to research inputs) change? But maybe there is a clearer question to be asked here.
How might my big-picture questions be misspecified? Relatedly, which big-picture questions might be more interesting and might therefore lead to different experiments/analysis?
Session
EDIT: Happy with either session!