Boston-based, NAO Co-Lead, GWWC board member, parent, musician. Switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise. Full list of EA posts: jefftk.com/news/ea
Jeff Kaufman
As I tried to communicate in my previous comment, I’m not convinced there is anyone who “will have their plans changed for the better by seeing OpenAI safety positions on 80k’s board”, and am not arguing for including them on the board.
EDIT: after a bit of offline messaging I realize I misunderstood Elizabeth; I thought the parent comment was pushing me to answer the question posed in the great-grandcomment, but actually it was accepting my request to bring this up a level of generality and not be specific to OpenAI. Sorry!
I think the board should generally list jobs that, under some combinations of values and world models that the job board runners think are plausible, are plausibly one of the highest impact opportunities for the right person. In cases like OpenAI’s safety roles, where anyone who is the “right person” almost certainly already knows about the role, there’s not much value in listing it but also not much harm.
I think this mostly comes down to a disagreement over how sophisticated we think job board participants are, and I’d change my view on this if it turned out that a lot of people reading the board are new-to-EA folks who don’t pay much attention to disclaimers and interpret listing a role as saying “someone who takes this role will have a large positive impact in expectation”.
If there did turn out to be a lot of people in that category, I’d recommend splitting the board into a visible-by-default section with jobs where, conditional on getting the role, you’ll have high positive impact in expectation (I’d biasedly put the NAO’s current openings in this category) and a you-need-to-click-show-more section with jobs where you need to think carefully about whether the combination of you and the role is a good one.
Possibly! That would certainly be a convenient finding (from my perspective) if it did end up working out that way.
[I] am slightly confused what this post is trying to get at. I think your question is: will NYC hit 1% cumulative incidence after global 1% cumulative incidence?
That’s one of the main questions, yes.
The core idea is that our efficacy simulations are in terms of cumulative incidence in a monitored population, but what people generally care about is cumulative incidence in the global (or a specific country’s) population.
online tool
Thanks! The tool is neat, and it’s close to the approach I’d want to see.
I think this is almost never … would surprise me
I don’t see how you can say both that it will “almost never” be the case that NYC will “hit 1% cumulative incidence after global 1% cumulative incidence” and also that it would surprise you if you could get to where your monitored cities lead global prevalence?
I haven’t done or seen any modeling on this, but intuitively I would expect the variance due to superspreading to have most of its impact in the very early days, when single superspreading events can meaningfully accelerate the progress of the pandemic in a specific location, and to be minimal by the time you get to ~1% cumulative incidence?
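Here’s a toy branching-process check of that intuition. All the parameters are made up (R = 2, negative-binomial offspring with dispersion k = 0.1 for heavy superspreading); it asks how variable the number of generations to grow 100x is, starting from outbreaks of different sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
R, k = 2.0, 0.1  # mean offspring; small k means heavy superspreading

def generations_to_grow_100x(start, trials=2000):
    """Generations for an outbreak of `start` cases to reach 100x, ignoring extinctions."""
    out = []
    while len(out) < trials:
        cases, gen = start, 0
        while 0 < cases < 100 * start:
            # sum of `cases` iid NegBin(k, p) draws is NegBin(k * cases, p)
            cases = rng.negative_binomial(n=k * cases, p=k / (k + R))
            gen += 1
        if cases > 0:  # drop lineages that went extinct
            out.append(gen)
    return np.array(out)

for start in (1, 100, 10_000):
    g = generations_to_grow_100x(start)
    print(f"start={start:>6}: mean {g.mean():.1f} generations to 100x, std {g.std():.2f}")
```

The spread in timing is large when you start from a single case and nearly vanishes once the outbreak is big, which is what you’d expect if superspreading variance mostly matters very early.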
I think this is probably far along you’re fine
I’m not sure what you mean by this?
(Yes, 1% cumulative incidence is high; I wish the NAO were funded to the point that we could be talking about whether 0.01% or 0.001% was achievable.)
I don’t object to dropping OpenAI safety positions from the 80k job board on the grounds that the people who would be highly impactful in those roles don’t need the job board to learn about them, especially when combined with the other factors we’ve been discussing.
In this subthread I’m pushing back on your broader “I think a job board shouldn’t host companies that have taken already-earned compensation hostage”.
the bigger issue is that OpenAI can’t be trusted to hold to any deal
I agree that’s a big issue and it’s definitely a mark against it, but I don’t think that should firmly rule out working there or listing it as a place EAs might consider working.
Thanks!
Expanded (b) into a full post: Sample Prevalence vs Global Prevalence
I agree that was pretty terrible behavior, but there are lots of anti-employee things an organization could do that are orthogonal to whether the work is impactful (especially if you know about them going in, which previous OpenAI employees didn’t, but we’re talking about new ones here). There are lots of hard lines that seem like they would make sense, but I’m not in favor of them: at some point there will be a job worth listing where it really is very impactful despite serious downsides.
For example, I think good employers pay you enough for a reasonably comfortable life, but if, say, some key government role is extremely poorly paid it may still make sense to take it if you have savings you’re willing to spend down to support yourself.
Or, I think graduate school is often pretty bad for people, since PIs have far more power than corporate-world bosses, but while you should certainly think hard about this before going to grad school, it’s not determinative.
I don’t read those two quotes as in tension? The job board isn’t endorsing organizations, it’s endorsing roles. An organization can be highly net harmful while the right person joining to work on the right thing can be highly positive.
I also think “endorsement” is a bit too strong: the bar for listing a job shouldn’t be “anyone reading this who takes this job will have significant positive impact” but instead more like “under some combinations of values and world models that the job board runners think are plausible, this job is plausibly one of the highest impact opportunities for the right person”.
(About 85% confident, not a tax professional)
At least in the US I’m pretty sure this has very poor tax treatment: the company-match portion would be taxable to the employee while also not qualifying for the charitable tax deduction. The idea is that a company can offer a consistent match as policy, but if it offers a higher match for a specific employee, that’s taxable compensation. And the employee can only deduct contributions they make themselves, which this isn’t quite.
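To make the asymmetry concrete, here’s toy arithmetic with made-up numbers and an assumed 35% marginal rate (again: about 85% confident, not tax advice):

```python
rate = 0.35             # assumed marginal tax rate (illustrative)
own_gift = 10_000       # employee's own donation: deductible if they itemize
special_match = 10_000  # individually negotiated match: taxable compensation,
                        # and not deductible by the employee (they didn't give it)

print(f"tax saved by deducting own gift: ${own_gift * rate:,.0f}")
print(f"extra tax owed on the match:     ${special_match * rate:,.0f}")
```

So the charity gets the extra $10k, but the employee owes $3,500 of tax on money they never saw, with no offsetting deduction.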
I do think this is a useful comparison, but if you want to be able to detect something before ~0.05% of the people in any region are infected you need to scale up by a lot more than a factor of 20 ;) The issue is that (a) you’ll get up to 0.05% in some region far before you get to 0.05% globally, and (b) the detection system samples only some sewersheds, so in the likely futures where the pandemic does not start in a monitored sewershed the global incidence is higher than the incidence you can measure.
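Here’s a toy model of (a) and (b): ten equal-sized regions seeded five days apart, each growing exponentially, with three mid-pack regions monitored. All the numbers (growth rate, seeding schedule, which regions are monitored) are illustrative assumptions, not NAO estimates:

```python
import numpy as np

growth = 0.2                 # per-day exponential growth rate (assumption)
lags = np.arange(10) * 5.0   # region i is seeded on day 5*i
monitored = [4, 5, 6]        # pandemic starts in region 0, which is unmonitored

def incidence(t):
    """Cumulative incidence per region at day t (seeded at 1e-6, capped at 1)."""
    inc = np.where(t >= lags, 1e-6 * np.exp(growth * (t - lags)), 0.0)
    return np.minimum(inc, 1.0)

day_region = next(t for t in range(1000) if incidence(t).max() >= 5e-4)
day_global = next(t for t in range(1000) if incidence(t).mean() >= 5e-4)
print(f"(a) origin region hits 0.05% on day {day_region}; "
      f"global doesn't until day {day_global}")

inc = incidence(day_region)
print(f"(b) on day {day_region}: measurable (monitored) incidence "
      f"{inc[monitored].mean():.5%}, global {inc.mean():.5%}, "
      f"origin {inc.max():.5%}")
```

In this toy setup the origin region crosses 0.05% about nine days (roughly 2.5 doublings) before the world does, and the monitored signal understates global incidence by more than an order of magnitude.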
Personally, I’m skeptical that with current or near-future technology and costs we will see sufficiently widespread monitoring to provide the initial identification of a non-stealth pathogen: BOTECing it, you need a truly huge system.
EDIT: rereading the post, the initial version wasn’t clear enough that this was an estimate of what it would cost to flag a pandemic before a specific fraction of people in the monitored sewersheds had been infected. Edited the post to bring this limitation up into the summary.
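To give the flavor of that BOTEC, anchored to the numbers in this thread ($1.5M/yr of sequencing flags a pandemic around 0.2% cumulative incidence in the monitored sewersheds, scaling roughly linearly), with a site count that is purely my illustrative guess:

```python
base_cost = 1.5e6          # $/yr of sequencing (from the post)
base_sensitivity = 0.002   # alert by ~0.2% cumulative incidence in monitored sheds

target_sensitivity = 1e-4  # want an alert by 0.01% instead
depth_factor = base_sensitivity / target_sensitivity  # 20x more depth (near-linear)

site_factor = 25           # hypothetical: many more sewersheds for broad coverage

print(f"~${base_cost * depth_factor * site_factor / 1e6:,.0f}M/yr")  # ~$750M/yr
```

Hundreds of millions per year is what I mean by a truly huge system.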
We have been talking to people in the defense space, and this is something they’ve publicly expressed interest in. For example, in January the US Defense Innovation Unit put out a solicitation in this direction:
… The system must provide the capability to monitor known threats as well as new variants and unknown pathogens via untargeted methods (e.g., shotgun metagenomic and metatranscriptomic sequencing). Companies are encouraged to identify and characterize evidence of genetic engineering. …
We applied, but were not selected. I think they were looking for something more developed. But we’re continuing to talk to people!
If we were spending $10M a year instead of $1.5M on sequencing, how much less than 0.2% of people would have to be infected before an alert was raised?
It’s pretty close to linear: do 10x more sequencing and it goes from 0.2% to 0.02%. You can play with our simulator here: https://data.securebio.org/simulator/
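Tabulating that near-linear rule of thumb (use the simulator for real numbers; this is just the back-of-the-envelope version):

```python
base_spend_musd, base_incidence_pct = 1.5, 0.2  # from this thread
for factor in (1, 2, 10, 100):
    print(f"${base_spend_musd * factor:>6.1f}M/yr -> alert by "
          f"~{base_incidence_pct / factor:.3f}% cumulative incidence")
```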
How should I feel about 0.2%? Where is 0.2% on the value spectrum between no alert system and an alert system that triggered on a single infection?
That’s an important question that I don’t have the answer to, sorry!
How many people’s worth of wastewater can be tested with $1.5M of sequencing?
This isn’t a question of limits, but of diminishing returns to sampling from additional sewersheds, which also depends a lot on how different the sewersheds are from each other.
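As a toy illustration of the diminishing-returns side: suppose sewershed populations follow a Zipf-like power law (a made-up distribution), you monitor the largest ones first, and a new pandemic surfaces in a sewershed with probability proportional to its population, so the chance it shows up in a monitored sewershed is just the population covered:

```python
import numpy as np

pop = 1.0 / np.arange(1, 201) ** 0.8   # 200 sewersheds, Zipf-ish sizes (assumption)
coverage = np.cumsum(pop) / pop.sum()  # fraction of population monitored

for n in (1, 5, 10, 25, 50, 100, 200):
    print(f"top {n:>3} sewersheds -> {coverage[n - 1]:.0%} chance the outbreak "
          "surfaces in a monitored one")
```

The first few sites buy a lot of coverage and each addition after that buys less, while also spreading a fixed sequencing budget thinner; how much less depends on how correlated the sewersheds are, which is the “how different are they from each other” question.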
Here you go: Detecting Genetically Engineered Viruses With Metagenomic Sequencing
But this was already something I was going to put on the Forum ;)
Very little biorisk content here, perhaps because of info-hazards.
When I write biorisk-related things publicly I’m usually pretty unsure of whether the Forum is a good place for them. Not because of info-hazards, since that would gate things at an earlier stage, but because they feel like they’re of interest to too small a fraction of people. For example, I could plausibly have posted Quick Thoughts on Our First Sampling Run or some of my other posts from https://data.securebio.org/jefftk-notebook/ here, but that felt a bit noisy?
It also doesn’t help that detailed technical content gets much less attention than meta or community content. For example, three days ago I wrote a comment on @Conrad K.’s thoughtful Three Reasons Early Detection Interventions Are Not Obviously Cost-Effective, and while I feel like it’s a solid contribution, only four people have voted on it. On the other hand, if you look over my recent post history, my comments on Manifest are far less objectively important but have ~10x the karma. Similarly, the top-level post was sitting at +41 until Mike bumped it last week, which wasn’t even high enough that I saw it when it came out (before I changed my personal settings to boost biosecurity-tagged posts). I see why this happens: there are a lot more people with the background to engage on a community topic or even a general “good news” post. But it still doesn’t make me as excited to contribute on technical things here.
The NAO ran a pilot where we worked with the CDC and Ginkgo to collect and sequence pooled airplane toilet waste. We haven’t sequenced these samples as deeply as we would like yet, but initial results look very promising.
Militaries are generally interested in this kind of thing, but primarily as biodefense: protecting the population and service members.