Software engineer in Boston, parent, musician. Switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise.
Full list of EA posts: jefftk.com/news/ea
I think it depends what sort of risks we are talking about. The more likely Dustin is to turn out to be perpetrating a fraud (which I think is very unlikely!) the more the marginal person should be earning to give. And the more projects should be taking approaches that conserve runway at the cost of making slower progress toward their goals.
Are the high numbers of deaths in the 1500s from Old World diseases spreading in the New World? If so, that seems to overestimate natural risk: the world's current population isn't separated from a larger population that has lots of highly human-adapted diseases.
In the other direction, this kind of analysis doesn’t capture what I personally see as a larger worry: human-created pandemics. I know you’re extrapolating from the past, and it’s only very recently that these would even have been possible, but this seems at least worth noting.
other cities across the U.S. (like Boston) … regularly build subway lines for less than $360 million per kilometer
Huh? Boston hasn’t built a subway line in decades, let alone regularly builds them.
It did recently finish a light rail extension in an existing right of way, expanding a trench with retaining walls, but (a) that's naturally much cheaper than digging a subway and (b) it took 12 years longer than planned.
The NAO ran a pilot where we worked with the CDC and Ginkgo to collect and sequence pooled airplane toilet waste. We haven’t sequenced these samples as deeply as we would like to yet, but initial results look very promising.
Militaries are generally interested in this kind of thing, but primarily as biodefense: protecting the population and service members.
As I tried to communicate in my previous comment, I’m not convinced there is anyone who “will have their plans changed for the better by seeing OpenAI safety positions on 80k’s board”, and am not arguing for including them on the board.
EDIT: after a bit of offline messaging I realize I misunderstood Elizabeth; I thought the parent comment was pushing me to answer the question posed in the great-grandcomment, but actually it was accepting my request to bring this up a level of generality and not be specific to OpenAI. Sorry!
I think the board should generally list jobs that, under some combinations of values and world models that the job board runners think are plausible, are plausibly one of the highest impact opportunities for the right person. In cases like OpenAI's safety roles, where anyone who is the "right person" almost certainly already knows about the role, there's not much value in listing it but also not much harm.
I think this mostly comes down to a disagreement over how sophisticated we think job board participants are, and I’d change my view on this if it turned out that a lot of people reading the board are new-to-EA folks who don’t pay much attention to disclaimers and interpret listing a role as saying “someone who takes this role will have a large positive impact in expectation”.
If there did turn out to be a lot of people in that category, I'd recommend splitting the board into a visible-by-default section with jobs where, conditional on getting the role, you'll have a high positive impact in expectation (I'd biasedly put the NAO's current openings in this category) and a you-need-to-click-show-more section with jobs where you need to think carefully about whether the combination of you and the role is a good one.
Possibly! That would certainly be a convenient finding (from my perspective) if it did end up working out that way.
[I] am slightly confused about what this post is trying to get at. I think your question is: will NYC hit 1% cumulative incidence after global 1% cumulative incidence?
That’s one of the main questions, yes.
The core idea is that our efficacy simulations are in terms of cumulative incidence in a monitored population, but what people generally care about is cumulative incidence in the global (or a specific country’s) population.
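To make that concrete, here's a toy model (my construction for this comment, not our actual simulation code): each city's cumulative incidence grows exponentially from a small seed after a city-specific introduction delay, and "global" incidence is the population-weighted average across cities. All the numbers are illustrative:

```python
# Toy model only -- not the NAO's simulation code.
import math

R = 0.2        # assumed daily exponential growth rate
SEED = 1e-6    # assumed cumulative incidence at introduction
TARGET = 0.01  # 1% cumulative incidence

# (population, introduction delay in days) -- all illustrative
cities = {
    "monitored_city": (8.5e6, 0.0),    # seeded first in this scenario
    "everywhere_else": (330e6, 10.0),  # seeded ten days later
}

def incidence(t, delay):
    """Cumulative incidence in one city at day t, capped at 100%."""
    return 0.0 if t < delay else min(1.0, SEED * math.exp(R * (t - delay)))

def global_incidence(t):
    """Population-weighted average incidence across all cities."""
    total = sum(pop for pop, _ in cities.values())
    return sum(pop * incidence(t, d) for pop, d in cities.values()) / total

def crossing_time(f, target=TARGET, t_max=365.0, dt=0.01):
    """First day at which f(t) reaches target, or None within t_max."""
    t = 0.0
    while t < t_max:
        if f(t) >= target:
            return t
        t += dt
    return None

t_city = crossing_time(lambda t: incidence(t, cities["monitored_city"][1]))
t_global = crossing_time(global_incidence)
print(f"monitored city at 1%: day {t_city:.1f}; global at 1%: day {t_global:.1f}")
# With these assumptions the monitored city leads the global population;
# flip the delays to model a pandemic that starts somewhere unmonitored.
```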
online tool
Thanks! The tool is neat, and it’s close to the approach I’d want to see.
I think this is almost never … would surprise me
I don’t see how you can say both that it will “almost never” be the case that NYC will “hit 1% cumulative incidence after global 1% cumulative incidence” and that it would surprise you if you can get to where your monitored cities lead global prevalence.
I haven’t done or seen any modeling on this, but intuitively I would expect the variance due to superspreading to have most of its impact in the very early days, when single superspreading events can meaningfully accelerate the progress of the pandemic in a specific location, and to be minimal by the time you get to ~1% cumulative incidence?
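If someone wanted to check that intuition, a small branching-process simulation would do it. Here's a sketch (mine, not based on any NAO modeling) using negative-binomial offspring counts, where a small dispersion k means heavy superspreading:

```python
# Sketch only: branching process with negative-binomial offspring counts.
# Small dispersion k = heavy superspreading; large k approaches Poisson.
import numpy as np

rng = np.random.default_rng(0)

def generations_to_1pct(r0=2.0, k=0.1, pop=1_000_000):
    """Generations until cumulative infections reach 1% of pop,
    or None if the outbreak goes extinct first."""
    current, cumulative, gen = 1, 1, 0
    while cumulative < 0.01 * pop:
        if current == 0:
            return None  # stochastic extinction
        # Sum of `current` NB(k, p) draws is NB(k * current, p), with p
        # chosen so each infection causes r0 new ones on average.
        current = rng.negative_binomial(k * current, k / (k + r0))
        cumulative += current
        gen += 1
    return gen

for k in (0.1, 1000.0):  # heavy superspreading vs. nearly Poisson
    runs = [g for g in (generations_to_1pct(k=k) for _ in range(500))
            if g is not None]
    print(f"k={k}: mean generations to 1% = {np.mean(runs):.1f}, "
          f"sd = {np.std(runs):.2f}")
# If the intuition above is right, the sd should be driven almost
# entirely by the first few generations: once an outbreak is
# established, growth is close to deterministic.
```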
I think this is probably far along you’re fine
I’m not sure what you mean by this?
(Yes, 1% cumulative incidence is high—I wish the NAO were funded to the point that we could be talking about whether 0.01% or 0.001% was achievable.)
I don’t object to dropping OpenAI safety positions from the 80k job board on the grounds that the people who would be highly impactful in those roles don’t need the job board to learn about them, especially when combined with the other factors we’ve been discussing.
In this subthread I’m pushing back on your broader “I think a job board shouldn’t host companies that have taken already-earned compensation hostage”.
the bigger issue is that OpenAI can’t be trusted to hold to any deal
I agree that’s a big issue and it’s definitely a mark against it, but I don’t think that should firmly rule out working there or listing it as a place EAs might consider working.
Thanks!
Expanded (b) into a full post: Sample Prevalence vs Global Prevalence
I agree that was pretty terrible behavior, but there are lots of anti-employee things an organization could do which are orthogonal to whether the work is impactful (especially if you know this going in, which OpenAI employees previously didn't, but we're talking about new ones here). There are lots of hard lines that seem like they would make sense, but I'm not in favor of them: at some point there will be a job worth listing where it really is very impactful despite serious downsides.
For example, I think good employers pay you enough for a reasonably comfortable life, but if, say, some key government role is extremely poorly paid, it may still make sense to take it if you have savings you're willing to spend down to support yourself.
Or, I think graduate school is often pretty bad for people, where PIs have far more power than corporate-world bosses, but while you should certainly think hard about this before going to grad school, it's not determinative.
I don’t read those two quotes as in tension? The job board isn’t endorsing organizations, it’s endorsing roles. An organization can be highly net harmful while the right person joining to work on the right thing can be highly positive.
I also think “endorsement” is a bit too strong: the bar for listing a job shouldn’t be “anyone reading this who takes this job will have significant positive impact” but instead more like “under some combinations of values and world models that the job board runners think are plausible, this job is plausibly one of the highest impact opportunities for the right person”.
(About 85% confident, not a tax professional)
At least in the US, I'm pretty sure this has very poor tax treatment: the company match portion would be taxable to the employee while also not qualifying for the charitable tax deduction. The idea is that a company can offer a consistent match as policy, but if it's offering a higher match for a specific employee, that's taxable compensation. And the employee can only deduct contributions they make themselves, which this isn't, quite.
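To illustrate with made-up numbers (again, ~85% confident and not a tax professional):

```python
# Illustrative arithmetic only; the rate is made up.
rate = 0.35          # assumed marginal income tax rate
donation = 10_000    # employee's own donation
match = 10_000       # employer's ad-hoc "match" for this employee

# Per the treatment above: the match is taxable compensation to the
# employee, but only the employee's own donation is deductible.
extra_tax_on_match = rate * match     # $3,500 owed
deduction_saving = rate * donation    # $3,500 saved (if itemizing)

net_cost = donation + extra_tax_on_match - deduction_saving
print(f"charity receives ${donation + match:,}; "
      f"employee is out ${net_cost:,.0f}")
# => charity receives $20,000; employee is out $10,000. Under a normal
# policy-level match (not taxable to the employee) the employee would
# be out only $6,500 for the same $20,000.
```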
I do think this is a useful comparison, but if you want to be able to detect something before ~0.05% of the people in any region are infected you need to scale up by a lot more than a factor of 20 ;) The issue is that (a) you'll get up to 0.05% in some region far before you get to 0.05% globally and (b) the detection system samples only some sewersheds, so in the likely futures where the pandemic does not start in a monitored sewershed the global incidence is higher than the incidence you can measure.
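Here's a rough sketch of (a) and (b) in numbers; the regional lead factor is something I'm making up for illustration:

```python
# Rough sketch of why the naive 20x understates the required scale-up.
baseline_detect = 0.01  # baseline: flag at 1% in monitored sewersheds
target = 0.0005         # goal: flag before 0.05% in any region

naive_factor = baseline_detect / target  # = 20x

# (a) When the worst-hit region reaches 0.05%, incidence everywhere
# else is lower; assume the worst region leads the rest of the world
# by 10x (my illustrative guess).
regional_lead = 10

# (b) If the pandemic likely didn't start in a monitored sewershed,
# what you can actually measure is roughly the background level:
measurable_at_target = target / regional_lead  # 0.005%

required_factor = baseline_detect / measurable_at_target
print(f"naive: {naive_factor:.0f}x; accounting for regional lead: "
      f"{required_factor:.0f}x")  # 20x vs 200x -- cost scales with it
```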
Personally, I'm skeptical that with current or near-future technology and costs we will see sufficiently widespread monitoring to provide the initial identification of a non-stealth pathogen: BOTECing it, you need a truly huge system.
EDIT: rereading the post, I see the initial version wasn't clear enough that this was an estimate of what it would cost to flag a pandemic before a specific fraction of people in the monitored sewersheds had been infected. I've edited the post to bring this limitation up into the summary.
We have been talking to people in the defense space, and this is something they’ve publicly expressed interest in. For example, in January the US Defense Innovation Unit put out a solicitation in this direction:
… The system must provide the capability to monitor known threats as well as new variants and unknown pathogens via untargeted methods (e.g., shotgun metagenomic and metatranscriptomic sequencing). Companies are encouraged to identify and characterize evidence of genetic engineering. …
We applied, but were not selected. I think they were looking for something more developed. But we’re continuing to talk to people!
I don’t think ‘responsible’ is the right word, but the consequences to the effective altruism project of not catching on earlier were enormous, far larger than to other economic actors exposed to FTX. And I do think we ought to have realized how unusual our situation was with respect to FTX.