Good catch, thanks.
Lifeextension cites this https://pubmed.ncbi.nlm.nih.gov/24715076/ claiming “The results showed that when the proper dose of zinc is used within 24 hours of first symptoms, the duration of cold miseries is cut by about 50%”. I’d be interested if you dig through the citation chain. The lifeextension page has a number of further links.
QALY/$ for promoting zinc as a common cold intervention
Epistemic status: Fun speculation. I know nothing about public health, and grabbed numbers from the first source I could find for every step of the below. I link to the sources which informed my point estimates.
Here’s my calculation broken down into steps:
- Health-related quality of life effect for one year of common cold: −0.2
- Common cold prevalence in the USA: 1.2/yr
- Modally, 7 days of symptoms at the −0.2 decrement
- ~1.5 million QALY burden per year when aggregated across the US population
  - This is the average of the estimate from the above (1e6) and what I get (2e6) when deriving the US slice of the total DALY burden from global burden of disease data showing 3% of global DALYs come from URI
  - There’s probably a direct estimate out there somewhere
- 50% probability that the right zinc lozenges with proper dosing can prevent >90% of colds. This comes from here, here, and my personal experience of taking zinc lozenges on ~10 occasions.
- 15% best-case adoption scenario, from taking a log-space mean of:
  - Masks: 5%
  - General compliance rates: 10-90%
100,000 QALYs/year is my estimate for the expected value of taking some all-or-nothing action to promote zinc lozenges (without the possibility of cheaply confirming whether they work) which successfully changes public knowledge and medical advice to promote our best-guess protocol for taking zinc.
$35 million is my estimate for how much we should be willing to spend to remain competitive with Givewell’s roughly 1 QALY/$71. This assumes a 5 year effect duration. I have no idea how much such a thing would cost but I’d guess at most 1 OOM of value is being left on the table here, so I’m a bit less bullish on Zinc than I was before calculating.
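Putting the point estimates above together, here is a minimal sketch of the arithmetic in Python. The US population figure (~330M) and reading “prevent >90% of colds” as a 90% reduction are assumptions I’ve filled in; everything else comes from the steps listed above.

```python
# Back-of-envelope QALY/$ estimate for promoting zinc lozenges (illustrative only).

US_POPULATION = 330e6           # assumption: ~330M people in the US
COLDS_PER_PERSON_YEAR = 1.2     # common cold prevalence in the USA
SYMPTOM_DAYS = 7                # modal symptomatic duration
QOL_DECREMENT = 0.2             # health-related quality of life effect while symptomatic

# Annual QALY burden of the common cold in the US (lands near the ~1.5M/yr aggregate above).
burden_qalys_per_year = (US_POPULATION * COLDS_PER_PERSON_YEAR
                         * (SYMPTOM_DAYS / 365) * QOL_DECREMENT)

P_ZINC_WORKS = 0.5              # probability the right lozenges with proper dosing work
PREVENTION_FRACTION = 0.9       # assumption: treat ">90% of colds prevented" as 90%
ADOPTION = 0.15                 # best-case adoption scenario

expected_qalys_per_year = (burden_qalys_per_year * P_ZINC_WORKS
                           * PREVENTION_FRACTION * ADOPTION)

EFFECT_YEARS = 5                # assumed effect duration
GIVEWELL_DOLLARS_PER_QALY = 71  # GiveWell bar: roughly 1 QALY per $71

willingness_to_pay = expected_qalys_per_year * EFFECT_YEARS * GIVEWELL_DOLLARS_PER_QALY

print(f"US cold burden: ~{burden_qalys_per_year:.1e} QALYs/year")                  # ~1.5e6
print(f"Expected value of promotion: ~{expected_qalys_per_year:,.0f} QALYs/year")  # ~100,000
print(f"Spend to stay at the GiveWell bar: ~${willingness_to_pay / 1e6:.0f}M")     # ~$36M
```

With these inputs the script reproduces the ~100,000 QALYs/year and roughly $35 million figures above to within rounding.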
EDIT: I calculated the cost of supplying the lozenges themselves. Going off these prices per lozenge, a 5-year US supply of lozenges costs ~$35 million alone. Presumably this doesn’t need to hit the GiveWell spending bar, just the bar for US government spending on healthcare.
Disagree. The natural, no-Anthropic, counterfactual is one in which Amazon invests billions into an alignment-agnostic AI company. On this view, Anthropic is levying a tax on AI-interest where the tax pays for alignment. I’d put this tax at 50% (rough order of magnitude number).
If Anthropic were solely funded by EA money, and didn’t capture unaligned tech funds, this would be worse. Potentially far worse, since Anthropic’s impact would have to be measured against the best alternative altruistic use of the money.
I suppose you see this Amazon investment as evidence that Anthropic is profit motivated, or likely to become so. This is possible, but you’d need to explain what further factors outweigh the above. My vague impression is that outside investment rarely accidentally costs existing stakeholders control of privately held companies. Is there evidence on this point?
My impression, which I find deeply concerning, is that OpenPhil (and the average funder) has timelines 2-3x longer than those of the median safety researcher. Daniel has his AGI training requirements set to 3e29, and I believe the 15th-85th percentiles among safety researchers would span 1e31 +/- 2 OOMs. On that view, Tom’s default values are off in the tails.
My suspicion is that funders write off this discrepancy, if noticed, as inside-view bias, i.e. thinking safety researchers self-select for scaling optimism. My, admittedly very crude, mental model of an OpenPhil funder makes two further mistakes in this vein: (1) mistakenly taking the Cotra report’s biological anchors weighting as a justified default setting of parameters, rather than an arbitrary choice which should be updated given recent evidence; (2) far overweighting the semi-informative priors report despite semi-informative priors having abjectly failed to predict Turing-test-level AI progress. Semi-informative priors apply to large-scale engineering efforts, which for the AI domain has meant AGI and the Turing test. Insofar as funders admit that the engineering challenges involved in passing the Turing test have been solved, they should discard semi-informative priors as failing to be predictive of AI progress.
To be clear, I see my empirical claim about disagreement between the funding and safety communities as most important—independently of my diagnosis of this disagreement. If this empirical claim is true, OpenPhil should investigate cruxes separating them from safety researchers, and at least allocate some of their budget on the hypothesis that the safety community is correct.
In my opinion, the applications of prediction markets are much more general than these. I have a bunch of AI safety inspired markets up on Manifold and Metaculus. I’d say the main purpose of these markets is to direct future research and study; I’d phrase this use of markets as “a sub-field prioritization tool”. The hope is that markets would help me integrate information such as (1) a methodology’s scalability, e.g. in terms of data, compute, and generalizability; (2) a research direction’s rate of progress; (3) the diffusion of a given research direction through the rest of academia and applications.
Here are a few more markets to give a sense of what other AI research-related markets are out there: Google Chatbot, $100M open-source model, retrieval in gpt-4
Seems to me safety timeline estimation should be grounded in a cross-disciplinary research-timeline prior. Such a prior would be determined by identifying a class of research proposals similar to AI alignment in terms of how applied/conceptual/mathematical/funded/etc. they are, and then collecting data on how long they took.
I’m not familiar with meta-science work, but this would probably involve doing something like finding an NSF (or DARPA) grant category where grants were made public historically and then tracking down what became of those lines of research. Grant-based timelines are likely more analogous to individual sub-questions of AI alignment than the field as a whole; e.g. the prospects for a DARPA project might be comparable to the prospects for working out the details of debate. Converting such data into a safety timelines prior would probably involve estimating how correlated progress is on grants within subfields.
Curating such data and constructing such a prior would be useful both for informing the above estimates and for identifying factors of variation which might be intervened on, e.g. how many research teams should be funded to work on the same project in theoretical areas? This timelines-prior problem seems like a good fit for a prize, where entries would look like recent progress studies reports (cf. here and here).
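To make this concrete, here is a minimal sketch of what converting such data into a prior could look like: take a (hypothetical) table of years-from-grant-to-result and fit a simple lognormal to it. The numbers and the choice of distribution below are purely illustrative assumptions, not real data; the actual exercise would require curating historical NSF/DARPA grant outcomes.

```python
# Illustrative sketch: turn a (hypothetical) dataset of historical grant outcomes
# into a crude research-timeline prior by fitting a lognormal distribution to
# years-from-grant-to-result. All numbers below are made up for illustration.
import numpy as np
from scipy.stats import lognorm

# Hypothetical data: years from grant award until the funded line of research reached its goal.
years_to_result = np.array([3.0, 4.0, 5.5, 7.0, 8.0, 9.5, 12.0, 20.0])

log_years = np.log(years_to_result)
mu, sigma = log_years.mean(), log_years.std(ddof=1)
prior = lognorm(s=sigma, scale=np.exp(mu))

# Prior probability that an analogous research goal (e.g. working out the details
# of debate) is reached within t years.
for t in (5, 10, 20):
    print(f"P(done within {t} years) ≈ {prior.cdf(t):.2f}")
```

A real version would also need to handle grants whose goals were never reached (censoring) and, as noted above, estimate how correlated progress is across grants within a subfield.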
Do you have a sense of which argument(s) were most prevalent, and which were most frequently the interviewees’ crux?
It would also be useful to get a sense of which arguments are only common among those with minimal ML/safety engagement. If basic AI safety engagement reduces the appeal of a certain argument, then there’s little need for further work on messaging in that area.
A few thoughts on ML/AI safety which may or may not generalize:
You should read successful candidates’ SOPs to get a sense of style, level of detail, and content, cf. 1, 2, 3. Ask current EA PhDs for feedback on your statement. Probably avoid writing a statement focused on an AI safety/EA idea which is not in the ML mainstream, e.g. IDA, mesa-optimization, etc. If you have multiple research ideas, consider writing more than one (i.e. tailored) SOP and submitting the SOP which is most relevant to faculty at each university.
Look at groups’ pages to get a sense of the qualification distribution for successful applicants; this is a better way to calibrate where to apply than looking at rankings IMO. It is also a good way to calibrate how much experience you’re expected to have pre-PhD. My impression is that in many ML programs it is very difficult to get in straight out of undergrad if you do not have an exceptional track record, e.g. top publications or high Putnam scores.
For interviews, bringing up concrete ideas on next steps for a professor’s paper is probably very helpful.
My vague impression is that financial security and depression are less relevant here than in other fields, as you can probably find job opportunities partway through if either becomes problematic. Would be interested to hear disagreement.
On-demand Software Engineering Support for Academic AI Safety Labs
AI safety work, e.g. in RL and NLP, involves both theoretical and engineering work, but academic training and infrastructure do not optimize for engineering. An independent non-profit could cover this shortcoming by providing software engineers (SWEs) to academics working on AI safety as contractors, code reviewers, and mentors. AI safety research is often well funded, but even grant-rich professors are bottlenecked by university salary rules and professor hours, which makes hiring competent SWEs at market rates challenging. An FTX Foundation funded organization could get around these bottlenecks by independently vetting SWEs, offering industry-competitive salaries, and having the hired SWEs collaborate with academic safety researchers at no cost to the lab. If successful, academic AI safety work ends up faster in terms of researcher hours and higher impact, because papers are accompanied by more legible and standardized code bases, i.e. AI safety work ends up looking more like Distill. The potential impact of this proposal could be estimated by soliciting input from researchers who moved from academic labs to private AI safety organizations.
EDIT: This seems to already exist at https://alignmentfund.org/
Re: feasibility of AI alignment research, Metaculus already has “Control Problem solved before AGI invented”. Do you have a sense of what further questions would be valuable?
Ok, seems like this might have been more a terminological misunderstanding on my end. I think I agree with what you say here, ‘What if the “Inner As AGI” criterion does not apply? Then the outer algorithm is an essential part of the AGI’s operating algorithm’.
Ok, interesting. I suspect the programmers will not be able to easily inspect the inner algorithm, because the inner/outer distinction will not be as clear cut as in the human case. The programmers may avoid sitting around by fiddling with more observable inefficiencies e.g. coming up with batch-norm v10.
Good clarification. Determining which kinds of factoring are the ones which reduce valence is more subtle than I had thought. I agree with you that the DeepMind set-up seems more analogous to neural nociception (e.g. high heat detection). My proposed set-up (Figure 5) seems significantly different from the DM/nociception case, because it factors the step where nociceptive signals affect decision making and motivation. I’ll edit my post to clarify.
Your new setup seems less likely to have morally relevant valence. Essentially the more the setup factors out valence-relevant computation (e.g. by separating out a module, or by accessing an oracle as in your example) the less likely it is for valenced processing to happen within the agent.
Just to be explicit here, I’m assuming estimates of goal achievement are valence-relevant. How generally this is true is not clear to me.
Thanks for the link. I’ll have to do a thorough read-through of your post in the future. From scanning it, I do disagree with much of it, though many of those points of disagreement were laid out by previous commenters. One point I didn’t see brought up: IIRC the biological anchors paper suggests we will have enough compute to do evolution-type optimization before the end of the century. So even if we grant your claim that learning to learn is much harder to directly optimize for, I think it’s still a feasible path to AGI. Or perhaps you think evolution-like optimization takes more compute than the biological anchors paper claims?
Certainly valenced processing could emerge outside of this mesa-optimization context. I agree that for “hand-crafted” (i.e. no base-optimizer) systems this terminology isn’t helpful. To make sure I understand your point, let me try to describe such a scenario in more detail: Imagine a human programmer who is working with a bunch of DL modules and interpretability tools and programming heuristics which feed into these modules in different ways (in a sense, the opposite end of the spectrum from monolithic language models). This person might program some noxiousness heuristics that feed into a language module. Those might correspond to a Phenumb-like phenomenology. They might program some other noxiousness heuristics that feed into all modules as scalars. Those might end up being valenced or might not; it’s hard to say. Without having thought about this in detail, my mesa-optimization framing doesn’t seem very helpful for understanding this scenario.
Ideally we’d want a method for identifying valence which is more mechanistic than mine, in the sense that it lets you identify valence in a system just by looking inside the system, without looking at how it was made. All that said, most contemporary progress on AI happens by running base-optimizers which could support mesa-optimization, so I think it’s quite useful to develop criteria which apply to this context.
Hopefully this answers your question and the broader concern, but if I’m misunderstanding let me know.
Your interpretation is a good summary!
Re comment 1: Yes, sorry this was just meant to point at a potential parallel not to work out the parallel in detail. I think it’d be valuable to work out the potential parallel between the DM agent’s predicate predictor module (Fig12/pg14) with my factored-noxiousness-object-detector idea. I just took a brief look at the paper to refresh my memory, but if I’m understanding this correctly, it seems to me that this module predicts which parts of the state prevent goal realization.
Re comment 2: Yes, this should read “(positively/negatively)”. Thanks for pointing this out.
Re EDIT: Mesa-optimizers may or may not represent a reward signal—perhaps there’s a connection here with Demski’s distinction between search and control. But for the purposes of my point in the text, I don’t think this much matters. All I’m trying to say is that VPG-type-optimizers have external reward signals, whereas mesa-optimizers can have internal reward signals.
The Google form link seems not to work.
I would be particularly interested to know if ‘technical AI academic’ meant just professors, or included post-docs/PhDs.
Also, are we to assume that any question not annotated with 1 person*year meant bringing into existence an entirely new career-up-to-doom/TAI worth of work?