I lead a small think tank dedicated to accelerating the pace of scientific advancement by improving the conditions of science funding. I'm also a senior advisor to the Social Science Research Council. Before these roles, I spent some nine years at Arnold Ventures (formerly the Laura and John Arnold Foundation) as VP of Research.
Stuart Buck
Why Effective Altruists Should Put a Higher Priority on Funding Academic Research
Metascience Since 2012: A Personal History
What would have been really interesting is if someone had written a piece critiquing the EA movement for showing little to no interest in scrutinizing the ethics and morality of Sam Bankman-Fried’s wealth.
To put a fine point on it, has any of his wealth come from taking fees from the many scams, Ponzi schemes, securities fraud, money laundering, drug trafficking, etc. in the crypto markets? FTX has been affiliated with some shady actors (such as Binance), and seems to be buying up more of them (such as BlockFi, known for securities fraud). Why isn’t there more curiosity on the part of EA, and more transparency on the part of FTX? Maybe there’s a perfectly good explanation (and if so, I’ll certainly retract and apologize), but it seems like that explanation ought to be more widely known.
Also from the Sequoia profile: “After SBF quit Jane Street, he moved back home to the Bay Area, where Will MacAskill had offered him a job as director of business development at the Centre for Effective Altruism.” It was precisely at this time that SBF launched Alameda Research, with Tara Mac Aulay (then the president of CEA) as a co-founder ( https://www.bloomberg.com/news/articles/2022-07-14/celsius-bankruptcy-filing-shows-long-reach-of-sam-bankman-fried).
To what extent was Will or any other CEA figure involved with launching Alameda and/or advising it?
If You Were Hoping Musk Would Donate to EA . . .
I’m not sure what to make of this kind of paper. They specifically trained the model on openly available sources that you can easily google, and the paper notes that “there is sufficient information in online resources and in scientific publications to map out several feasible ways to obtain infectious 1918 influenza.”
So, all of this is already openly available in numerous ways. What do LLMs add compared to Google?
What’s not clear: when participants “failed to access information key to navigating a particular path, we directly tested the Spicy model to determine whether it is capable of generating the information.” In other words, participants did get stumped at various points, and the researchers then jumped in to see whether the LLM would return a good answer IF the prompter already knew the answer and exactly what to ask for.
Then, they note that “the inability of current models to accurately provide specific citations and scientific facts and their tendency to ‘hallucinate’ caused participants to waste considerable time . . . ” I’ll bet. LLMs are notoriously bad at this sort of thing, at least currently.
Bottom line in their own words: “According to our own tests, the Spicy model can skillfully walk a user along the most accessible path in just 30 minutes if that user can recognize and ignore inaccurate responses.”
What an “if”! The LLM can tell a user all this harmful info … IF the user is already enough of an expert to know the answer!
Bottom line for me: Seems mostly to be scaremongering, and the paper concludes with a completely unsupported policy recommendation about legal liability. Seems odd to talk about legal liability for an inefficient, expensive, hallucinatory way to access information freely available via Google and textbooks.
As a Bloomberg article put it in September: https://www.pymnts.com/cryptocurrency/2022/bankman-frieds-stake-in-quant-trading-firm-raises-conflict-questions/
Alameda’s position as a major market maker on FTX, profiting on the spread between buying and selling prices, puts it in a position to have a potential conflict of interest with FTX, which gets its revenue from transaction fees and margin loans to traders.
And while the firm’s executives, and Bankman-Fried, say there is a strong firewall between the two — something that, Bloomberg correctly notes, no one has actually said or even suggested has been breached — concerns about the potential for such conflicts of interest are growing as the size and activity of Alameda gains more attention.
Are you trying to suggest that when two firms need to be at arms-length because of the potential for an enormous conflict of interest, it wouldn’t matter if the two firms’ chief executives were dating each other?
All of which means it’s not a real “exchange.” The New York Stock Exchange could never go bankrupt like this; it makes money just from small transaction fees and membership fees. But an “exchange” that gambles with customers’ assets by engaging in self-serving loans to sister corporations—well, that’s crazy, and it’s part of why humans invented financial regulation in the first place. Looks like Bankman-Fried has just been using new technology to reinvent old forms of fraud.
I did think Harris could have been slightly more aggressive in his questioning (as in, some level above zero). E.g., why would MacAskill even suggest that SBF might have been altruistic in his motivations, even though we now know about the profligate and indulgent lifestyle that SBF led? MacAskill had to have known about that behavior at the time (why didn’t it make him suspicious?).
And why was MacAskill trying to ingratiate himself with Elon Musk so that SBF could put several billion dollars (not even his in the first place) towards buying Twitter? Contributing towards Musk’s purchase of Twitter was the best EA use of several billion dollars? That was going to save more lives than any other philanthropic opportunity? Based on what analysis?
“I think all of these considerations in-aggregate make me worried that a lot of current work in AI Alignment field-building and EA-community building is net-negative for the world, and that a lot of my work over the past few years has been bad for the world”
This admirably honest statement deserves more emphasis. As we know from medicine and international development and anywhere that does RCTs, it is really, really hard—even when the results of your actions are right in front of you—to know whether you have helped someone or harmed them. There are just too many confounding factors, selection bias, etc.
The long-termist AGI stuff has always struck me as even worse off in this respect. How is anyone supposed to know that the actions they take today will have a beneficial impact on the world decades from now, rather than making things worse? And given the premises of AGI alignment, making things worse would be utterly catastrophic for humanity.
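To illustrate how easily confounding and selection bias can flip the apparent sign of an effect, here is a minimal simulation (a toy sketch with purely invented numbers, not drawn from any real study):

```python
import random

random.seed(0)

# Toy simulation: an intervention that slightly HARMS recipients can still
# look beneficial if healthier people are more likely to receive it.
# ALL NUMBERS ARE INVENTED for illustration only.
TRUE_EFFECT = -1.0  # the intervention actually lowers the outcome

baselines = [random.gauss(50, 10) for _ in range(100_000)]

treated, control = [], []
for b in baselines:
    # Selection bias: people above the median are far more likely to enroll.
    if random.random() < (0.9 if b > 50 else 0.1):
        treated.append(b + TRUE_EFFECT)
    else:
        control.append(b)

naive = sum(treated) / len(treated) - sum(control) / len(control)
print(f"true effect: {TRUE_EFFECT:+.1f}, naive estimate: {naive:+.1f}")
# The naive comparison reports a large positive "benefit" even though the
# true effect is negative -- which is exactly why RCTs randomize assignment.
```

Nothing in this simulation is subtle, and yet without randomization the naive comparison gets even the sign of the effect wrong; long-run forecasts have no equivalent of randomization to fall back on.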
Great journalists are getting laid off all the time these days. You could find any number of professional and highly accomplished journalists for a tiny fraction of $800k per year.
I’ve been a grantmaker (at Arnold Ventures, a $2 billion philanthropy), and I couldn’t agree more. Those kinds of questions are good if the aim is to reward and positively select for people who are good at bullshitting. And I also worry about a broader paradox—sometimes the highest impact comes from people who weren’t thinking about impact, had no idea where their plans would lead, and serendipitously stumbled into something like penicillin while doing something else.
My post from 4 months ago linked to a story about the lawyer, which is why I said I merely hinted at this point. The post from 2 months ago didn’t expressly mention it, but a followup post definitely did in detail (I deleted the post soon thereafter because I got a few downvotes and I got nervous that maybe it was over the line).
Thanks for your thoughtful replies!
Do you think that future LLMs will enable bioterrorists to a greater degree than traditional tools like search engines or print text?
I can imagine future AIs that might do this, but LLMs (strictly speaking) are just outputting strings of text. As I said in another comment: If a bioterrorist is already capable of understanding and actually carrying out the detailed instructions in an article like this, then I’m not sure that an LLM would add that much to his capacities. Conversely, handing a detailed set of instructions like that to the average person poses virtually no risk, because they wouldn’t have the knowledge or ability to actually do anything with it.
Besides, if a wannabe terrorist actually wants to do harm, there are much easier and simpler ways that are already widely discoverable: 1) Make chlorine gas by mixing bleach and ammonia (or vinegar); 2) Make sarin gas via instructions that were easily findable in this 1995 article:
How easy is it to make sarin, the nerve gas that Japanese authorities believe was used to kill eight and injure thousands in the Tokyo subways during the Monday-morning rush hour?
“Wait a minute, I’ll look it up,” University of Toronto chemistry professor Ronald Kluger said over the phone. This was followed by the sound of pages flipping as he skimmed through the Merck Index, the bible of chemical preparations. Five seconds later, Kluger announced, “Here it is,” and proceeded to read not only the chemical formula but also the references that describe the step-by-step preparation of sarin, a gas that cripples the nervous system and can kill in minutes.
“This stuff is so trivial and so open,” he said of both the theory and the procedure required to make a substance so potent that less than a milligram can kill you.
And so forth. Put another way, if we aren’t already seeing attacks like that on a daily basis, it isn’t for lack of GPT-5; it’s because hardly anyone actually wants to carry out such attacks.
If yes, do you think the difference will be significant enough to warrant regulations that incentivize developers of future models to only release them once properly safeguarded (or not at all)?
I guess it depends on what we mean by regulation. If we’re talking about liability and related insurance, I would need to see a much more detailed argument drawing on 50+ years of the law and economics literature. For example, why would we hold AI companies liable when we don’t hold Google or the NIH (or my wifi provider, for that matter) liable for the fact that right now, it is trivially easy to look up the entire genetic sequences for smallpox and Ebola?
Do you think that there are specific areas of knowledge around engineering and releasing exponentially growing biology that should be restricted?
If we are worried about someone releasing smallpox and the like, or genetically engineering something new, LLMs are much less of an issue than the fact that so much information (e.g., the smallpox sequence, the CRISPR techniques, etc.) is already out there.
Over a year ago, I thought it was completely inexplicable that SBF hired a Chief Regulatory Officer who was a lawyer known only for involvement in online fraud. There is no legitimate reason to hire such a person. And even apart from the fraud, his résumé was not the typical profile of someone whom a legitimate multi-billionaire would hire at a high level. An actual multi-billionaire could have poached away the general counsel from a Fortune 100 company. Why settle for someone like this? https://coingeek.com/tether-links-to-questionable-market-makers-yet-another-cause-for-concern/
I finally worked up the courage to hint at this point 4 months ago ( https://forum.effectivealtruism.org/posts/KBw6wKDbvmqacbB5M/crypto-markets-ea-funding-and-optics?commentId=wcvYZtw7b4xvrdetL ), and then was a little more direct 2 months ago ( https://forum.effectivealtruism.org/posts/YgbpxJmEdFhFGpqci/winners-of-the-ea-criticism-and-red-teaming-contest?commentId=odHG4hhM2FSGiXXFQ).
Lo and behold, that guy was probably in on the whole thing—he also served as general counsel to Alameda! https://www.nbcnews.com/news/ftxs-regulatory-chief-4-job-titles-2-years-was-really-rcna57965
One issue for me is just that EA has radically different standards for what constitutes “impact.” If near-term: lots of rigorous RCTs showing positive effect sizes.
If long-term: literally zero evidence that any long-termist efforts have been positive rather than negative in value, which is a hard enough question to settle even for current-day interventions where we see the results immediately . . . BUT if you take the enormous liberty of assuming a positive impact (even just slightly above zero), and then assume lots of people in the future, everything has a huge positive impact.
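To make that arithmetic concrete, here is a toy sketch in Python (a minimal illustration; every number below is invented and not drawn from any actual analysis):

```python
# Toy version of the long-termist expected-value arithmetic.
# ALL NUMBERS ARE INVENTED for illustration only.

near_term_lives_saved = 1_000_000      # a well-evidenced, RCT-backed figure
assumed_benefit_per_person = 1e-10     # "just slightly above zero" per future person
assumed_future_population = 1e18       # people assumed to exist over the long run

long_term_value = assumed_benefit_per_person * assumed_future_population
print(f"{long_term_value:.0e}")                  # 1e+08
print(long_term_value > near_term_lives_saved)   # True

# Flip the sign of the assumed per-person benefit and the identical arithmetic
# says the effort is catastrophically negative; the evidence constrains neither.
```

The conclusion is driven entirely by the two assumed inputs, which is the point: under these standards, “impact” is whatever the assumptions say it is.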
FTX and Alameda definitely needed more bureaucracy—as in, doing stuff in a way that doesn’t resemble a scene from Idiocracy. https://docs.house.gov/meetings/BA/BA00/20221213/115246/HHRG-117-BA00-Wstate-RayJ-20221213.pdf “Although our investigation is ongoing and detailed findings will have to await its conclusion, the FTX Group’s collapse appears to stem from the absolute concentration of control in the hands of a very small group of grossly inexperienced and unsophisticated individuals who failed to implement virtually any of the systems or controls that are necessary for a company that is entrusted with other people’s money or assets.”
I wouldn’t be very confident in the level of due diligence undertaken by supposedly sophisticated investors:
https://twitter.com/zebulgar/status/1590394857474109441
With Asana’s stock down 82% in the past six months, Meta down 43%, and SBF’s net worth cut in half in the past month, maybe the bigger worry should be a period of austerity and cutbacks?
I wrote that comment from over a month ago. And I actually followed it up with a more scathing comment that got downvoted a lot, and that I deleted out of a bit of cowardice, I suppose. But here’s the text:
Consider this bit from the origin story of FTX:
Binance, you say? This Binance?
Or consider FTX’s hiring of Daniel Friedberg as a chief compliance officer. This article claims that he had been involved in previous cheating/fraud at other businesses:
Then there are all the recent examples of FTX trying to buy up other crypto players. For example, in July, FTX signed a deal to buy BlockFi for up to $240 million, and to give it $400 million in revolving credit. BlockFi is most famous for having agreed to pay $100 million in penalties for its securities fraud. It’s not clear why FTX would want to spend this amount of money on buying a fraudulent firm.
Just last week, there was a story that FTX is thinking about buying Celsius, another fraudulent firm.
Another story from July had the remarkable claim that SBF is even thinking of putting his own cash into bailing out other crypto firms:
Why are FTX, and perhaps SBF himself, putting so much money into buying up other people’s scams? I would hope it’s because they intend to reform the crypto industry and put it on more of a moral footing, although that would reduce the market size by an order of magnitude or two.
***
At least, SBF and FTX ought to provide more transparency into where exactly all the wealth came from, and what (if anything) they are actively doing to prevent crypto frauds/scams. And one might argue that FTX Foundation has a particular moral duty to establish a fund to help out all of the people whose lives were ruined by falling for crypto’s many Ponzi schemes and other assorted scams.