I lead a small think tank dedicated to accelerating the pace of scientific advancement by improving the conditions of science funding. I'm also a senior advisor to the Social Science Research Council. Prior to these roles, I spent some nine years at Arnold Ventures (formerly the Laura and John Arnold Foundation) as VP of Research.
Stuart Buck
OK, fair enough, what I said was perhaps a bit overstated. It is even more overstated to refer to a “conference filled with racist speakers” etc.
The main debate here is whether people who have ever said controversial things should be allowed to attend an event at all, and/or to give a talk about unrelated issues.
Ah, that is a fair point!
Great comment and overview of the event, which I very much enjoyed.
Was anyone there who had ever uttered a previous phrase or sentence with which I might disagree, even firmly so? Almost certainly.
I mean, Eliezer was there, and he has suggested that human infants might be susceptible to being killed up to 18 months of age (https://x.com/antoniogm/status/1632162012229693440), which I regard as unbelievably monstrous.
But even if someone said something monstrous, I’m still willing to hear them out, to attend a conference with them, and to attempt to persuade them otherwise (if it comes up). And who knows, maybe some belief of mine might turn out to seem monstrous to other people. I should hope they’d try to engage with me.
Trying to cancel folks because they spoke at an event where another speaker said a bad thing 15 years ago—that’s an absurd level of guilt by association.
Is the consensus currently that the investment in Twitter has paid off or is ever likely to do so?
I could imagine making that case, but what’s the point of all the GiveWell-style analysis of evidence, or all the detailed attempts to predict and value the future, if in the end, what would have been the single biggest allocation of EA funds for all time was being proposed based on vibes?
I did think Harris could have been slightly more aggressive in his questioning (as in, some level above zero). E.g., why would MacAskill even suggest that SBF might have been altruistic in his motivations, even though we now know about the profligate and indulgent lifestyle that SBF led? MacAskill had to have known about that behavior at the time (why didn’t it make him suspicious?).
And why was MacAskill trying to ingratiate himself with Elon Musk so that SBF could put several billion dollars (not even his in the first place) towards buying Twitter? Contributing towards Musk’s purchase of Twitter was the best EA use of several billion dollars? That was going to save more lives than any other philanthropic opportunity? Based on what analysis?
This statement cracked me up for some reason: “At this point, we are not actively making grants to further investigate these questions. It is possible we may do so in the future, though, so if you plan to research any of these, please email us.”
I.e., this isn’t an RFP (request for proposals). Instead, it’s more like an RFINP, BMITFWDEK? Request for Information Not Proposals, But Maybe In The Future (We Don’t Even Know)?
OK, mostly joking—in all seriousness, I haven’t seen wealthy philanthropies release lists of ideas that they hope will be funded elsewhere, but maybe that actually makes sense! No philanthropy can fund everything in the world that might be interesting/useful. So maybe all philanthropies should release lists of “things we’re not funding but that we hope to see or learn from.”
I don’t have personal references, but a ton of great journalists have gotten laid off in 2023, and they never were paid that much in the first place (not being TV or celebrity journalists).
https://www.poynter.org/business-work/2023/buzzfeed-news-closed-180-staffers-laid-off/
https://www.sfgate.com/tech/article/wired-layoffs-conde-nast-magazine-18550381.php
https://www.washingtonpost.com/style/media/2023/10/10/washington-post-staff-buyouts/
Great journalists are getting laid off all the time these days. You could find any number of professional and highly accomplished journalists for a tiny fraction of $800k per year.
$800k per year? For one person to do investigative journalism? What would all that money be spent on?
“Completely remove efficacy testing requirements, making the FDA a non-binding consumer protection and labeling agency.”
That seems to be how the supplement market works under DSHEA. Is there any evidence that stuff marketed as supplements is any better at curing disease, preventing disease, improving health, etc., than products that go through the FDA approval pathway? Indeed, is there any evidence that the supplement market works well at all? We’ve known for decades that antioxidants don’t actually improve health, yet they are still a huge market. https://pubmed.ncbi.nlm.nih.gov/17327526/
So let me put it this way:
If there is a future bioterrorist attack involving, say, smallpox, we can disaggregate quite a few elements in the causal chain leading up to that:
1. The NIH published the entire genetic sequence of smallpox for the world to see.
2. Google indexed that webpage and made it trivially easy to find.
3. Thanks to electricity and internet providers, folks can use Google.
4. They now need access to a laboratory and all the right equipment. Either they need to have enough resources to create their own laboratory from scratch, or else they need to access someone’s lab (in which case they run a significant risk of being discovered).
5. They need a huge amount of tacit knowledge in order to be able to actually use the lab—knowledge that simply can’t be captured in text or replicated from text (no matter how detailed). Someone has to give them a ton of hands-on training.
6. An LLM could theoretically speed up the process by giving them a detailed step-by-step set of instructions.
7. They are therefore able to actually engineer smallpox in the real world (not just generate a set of textual instructions).
The question for me is: How much of the outcome here depends on step 6 as the key element, without which the end outcome wouldn’t occur?
Maybe a future LLM would provide a useful step 6, but anyone other than a pre-existing expert would always fail at step 4 or 5. Alternatively, maybe all the other steps already let someone do this in reality, and an accurate and complete LLM (in the future) would just make it 1% faster.
I don’t think the current study sheds any light whatsoever on those questions (it has no control group, and it has no step at which subjects are asked to do anything in the real world).
In a way, the sarin story confirms what I’ve been trying to say: a list of instructions, no matter how complete, does not mean that people can literally execute the instructions in the real world. Indeed, having tried to teach my kids to cook, even making something as simple as scrambled eggs requires lots of experience and tacit knowledge.
I guess the overall point for me is that if the goal is just to speculate about what much more capable and accurate LLMs might enable, then what’s the point of doing a small, uncontrolled, empirical study demonstrating that current LLMs are not, in fact, that kind of risk?
Just saw this piece, which is strongly worded but seems defensible: https://1a3orn.com/sub/essays-propaganda-or-science.html
Thanks for your thoughtful replies!
Do you think that future LLMs will enable bioterrorists to a greater degree than traditional tools like search engines or print text?
I can imagine future AIs that might do this, but LLMs (strictly speaking) are just outputting strings of text. As I said in another comment: If a bioterrorist is already capable of understanding and actually carrying out the detailed instructions in an article like this, then I’m not sure that an LLM would add that much to his capacities. Conversely, handing a detailed set of instructions like that to the average person poses virtually no risk, because they wouldn’t have the knowledge or ability to actually do anything with it.
As well, if a wannabe terrorist actually wants to do harm, there are much easier and simpler ways that are already widely discoverable: 1) Make toxic chlorine or chloramine gas by mixing bleach with vinegar or ammonia; 2) Make sarin gas via instructions that were easily findable in this 1995 article:
How easy is it to make sarin, the nerve gas that Japanese authorities believe was used to kill eight and injure thousands in the Tokyo subways during the Monday-morning rush hour?
“Wait a minute, I’ll look it up,” University of Toronto chemistry professor Ronald Kluger said over the phone. This was followed by the sound of pages flipping as he skimmed through the Merck Index, the bible of chemical preparations. Five seconds later, Kluger announced, “Here it is,” and proceeded to read not only the chemical formula but also the references that describe the step-by-step preparation of sarin, a gas that cripples the nervous system and can kill in minutes.
“This stuff is so trivial and so open,” he said of both the theory and the procedure required to make a substance so potent that less than a milligram can kill you.
And so forth. Put another way, if we aren’t already seeing attacks like that on a daily basis, it isn’t for lack of GPT-5--it’s because hardly anyone actually wants to carry out such attacks.
If yes, do you think the difference will be significant enough to warrant regulations that incentivize developers of future models to only release them once properly safeguarded (or not at all)?
I guess it depends on what we mean by regulation. If we’re talking about liability and related insurance, I would need to see a much more detailed argument drawing on 50+ years of the law and economics literature. For example, why would we hold AI companies liable when we don’t hold Google or the NIH (or my wifi provider, for that matter) liable for the fact that right now, it is trivially easy to look up the entire genetic sequences for smallpox and Ebola?
Do you think that there are specific areas of knowledge around engineering and releasing exponentially growing biology that should be restricted?
If we are worried about someone releasing smallpox and the like, or genetically engineering something new, LLMs are much less of an issue than the fact that so much information (e.g., the smallpox sequence, the CRISPR techniques, etc.) is already out there.
“future model could successfully walk an unskilled person through the process without the person needing to understand it at all.”
Seems very doubtful. Could an unskilled person be “walked through” a process like this one (https://www.nature.com/articles/nprot.2007.135) just by slightly more elaborate instructions? The real barriers to something as complex as synthesizing a virus seem to be 1) lack of training/skill/tacit knowledge, and 2) lack of equipment or supplies. Detailed instructions are already out there.
Also, if you’re worried about low-IQ people being able to create mayhem, I think the least of our worries should be that they’d get their hands on a detailed protocol for creating a virus or anything similar (see, e.g., https://www.nature.com/articles/nprot.2007.135) -- hardly anyone would be able to understand it anyway, let alone have the real-world skills or equipment to do any of it.
I thought this piece was going to be about the firm Anthropic . . . anyway, interesting subject, carry on!