SBF was additionally charged with bribing Chinese officials with $40 million. Caroline Ellison testified in court that they sent a $150 million bribe.
GiveWell didn’t lose $900 million to fraud. GiveDirectly lost $900,000 to fraud.
My hope and expectation is that neither will be focused on EA
I’d be surprised [p<0.1] if EA were not a significant focus of the Michael Lewis book, though I agree it’s unlikely to be the main topic. Many leaders at FTX and Alameda Research are closely linked to EA. SBF often, and publicly, said that effective altruism was a big reason for his actions. His connection to EA is interesting both for understanding his motivation and as a storytelling element. There are Manifold prediction markets on whether the book will mention 80,000 Hours (74%), Open Philanthropy (74%), and GiveWell (80%), but these markets are thinly traded and not very informative.[1]
This video titled The Fake Genius: A $30 BILLION Fraud (2.8 million views, posted 3 weeks ago) might give a glimpse of how EA could be handled. The video touches on EA but isn’t centred on it. It discusses the role EAs played in motivating SBF to pursue earning to give and in starting Alameda Research and FTX. It also points out that, after the fallout at Alameda Research, ‘higher-ups’ at CEA were warned about SBF but supposedly ignored the warnings. Overall, the video is mainly interested in the mechanics of how the suspected fraud happened, with EA as only one piece of the puzzle. One can just as easily come away with the impression that “EA led SBF to commit fraud” as that “SBF used EA as a front for fraud”.
ETA:
The book description[2] mentions “philanthropy”, makes it clear that it’s mainly about SBF and not FTX as a firm, and describes the book as partly a psychological portrait.
1. ^ I also created a similar market for CEA, but with two mentions as the resolution criterion. One mention is very likely, as SBF briefly worked for them.
2. ^ “In Going Infinite Lewis sets out to answer this question, taking readers into the mind of Bankman-Fried, whose rise and fall offers an education in high-frequency trading, cryptocurrencies, philanthropy, bankruptcy, and the justice system. Both psychological portrait and financial roller-coaster ride, Going Infinite is Michael Lewis at the top of his game, tracing the mind-bending trajectory of a character who never liked the rules and was allowed to live by his own—until it all came undone.”
(Not sure if this is within the scope of what you’re looking for. )
I’d be excited about something like a roundtable with people who have been through 80,000 Hours advising, talking about how their thinking about their careers has changed, advice for people in similar situations, etc. I’d imagine this could be a good fit for 80k After Hours?
On Microsoft Edge (the browser) there’s a “read aloud” option that offers a range of natural voices for websites and PDFs. It’s only slightly worse than Speechify, and it’s free, so it can give you a sense of whether $139/year might be worth it for you.
I think that a very simplified ordering for how to impress/gain status within EA is:
Disagreement well-justified ≈ Agreement well-justified >>> Agreement sloppily justified > Disagreement sloppily justified
Looking back on my early days interacting with EAs, I generally couldn’t present well-justified arguments, and I did feel pressure to agree on shaky epistemic grounds. Because I sometimes disagreed nevertheless, I suspect that some parts of the community were less accessible to me back then.
I’m not sure what hurdles would need to be overcome if you want EA communities to treat ‘Agreement sloppily justified’ and ‘Disagreement sloppily justified’ similarly.
As far as I understand, the paper doesn’t disagree with this and an explanation for it is given in the conclusion:
Communication strategies such as the ‘funnel model’ have facilitated the enduring perception amongst the broader public, academics and journalists that ‘EA’ is synonymous with ‘public-facing EA’. As a result, many people are confused by EA’s seemingly sudden shift toward ‘longtermism’, particularly AI/x-risk; however, this ‘shift’ merely represents a shift in EA’s communication strategy to more openly present the movement’s core aims.
The low number of human-shrimp connections may be due to the attendance dip in 2020. Shrimp have understandably a difficult relationship with dips.
There is a comprehensive process in place… it is a cohesive approach to aligning font, but thank you for the drama!
Insiders know that EA NYC has ambitious plans to sprout a whole network of Bodhi restaurants. To those who might criticize this blossoming “bodhi count,” let’s not indulge in shaming their gastronomic promiscuity. After all, spreading delicious vegan dim sum and altruism is something we can all savour.
I find the second one more readable.
Might be due to my display: If I zoom into the two versions, the second version separates letters better.
But you’re also right that we’ll get used to most changes :)
I find the font to be less readable and somewhat clunky.
Can’t quite express why it feels that way. It reminds me of display scaling issues, where your display resolution doesn’t match the native resolution.
I’m not really sure if the data suggests this.
The question is rather vague, making it difficult to determine the direction of the desired change. It seems to suggest that longtermists and more engaged individuals are less likely to support large changes in the community in general. But both groups might, on average, agree that change should go in the ‘big tent’ direction.
Although there are statistically significant differences in responses to “I want the community to look very different” between those with mild vs. high engagement, their average responses are still quite similar (around 4.2/7 vs. 3.7/7). Finding statistically significant differences in beliefs between two groups doesn’t always imply that there are large or meaningful differences in the content of their actual beliefs. I feel I could also just be overlooking something here.
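To make this concrete, here’s a minimal sketch (in Python, with made-up numbers rather than the actual survey data) of how a 4.2 vs. 3.7 difference on a 7-point scale can be clearly statistically significant with a few hundred respondents per group while the standardised effect size stays modest:

```python
# Hypothetical illustration, not the actual EA Survey data: with large
# samples, a modest difference on a 7-point scale becomes "statistically
# significant" even though the two groups' average views remain similar.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mild = np.clip(rng.normal(4.2, 1.5, size=800), 1, 7)  # mildly engaged respondents
high = np.clip(rng.normal(3.7, 1.5, size=800), 1, 7)  # highly engaged respondents

t_stat, p_value = stats.ttest_ind(mild, high)
pooled_sd = np.sqrt((mild.var(ddof=1) + high.var(ddof=1)) / 2)
cohens_d = (mild.mean() - high.mean()) / pooled_sd

print(f"p = {p_value:.1e}")           # far below 0.05 -> "significant"
print(f"Cohen's d = {cohens_d:.2f}")  # ~0.3 -> a small-to-moderate difference
```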
The only source for this claim I’ve ever found was Emile P. Torres’s article What “longtermism” gets wrong about climate change.
It’s not clear where the information about an “enormous promotional budget of roughly $10 million” comes from. I’m not saying it is untrue, but it’s also unclear why Torres would have this information.
The implication is also that the promotional spending came out of EA pockets, but part of it might have been promotional spending by the book publisher.
ETA: I found another article by Torres that discusses the claim in a bit more detail.
MacAskill, meanwhile, has more money at his fingertips than most of us make in a lifetime. Left unmentioned during his “Daily Show” appearance: he hired several PR firms to promote his book, one of which was paid $12,000 per month, according to someone with direct knowledge of the matter. MacAskill’s team, this person tells me, even floated a total promotional budget ceiling of $10 million — a staggering number — thanks partly to financial support from the tech multibillionaire Dustin Moskovitz, cofounder of Facebook and a major funder of EA.
If I remember correctly, Claude had limited public deployment roughly a month before the Google investment, and roughly 2 months after their biggest funder (FTX) went bankrupt.
Thanks for getting back to me and providing more context.
I do agree that Churchill was probably surprised by Roosevelt’s use of the term because it was not in the official communiqué. Trying to figure out how certain historical decisions were influenced is very challenging.
The way you describe the events strikes me as a very strong claim, and it requires a lot of things to be true beyond the term being used accidentally:
Accidentally called for unconditional surrender of the Japanese, leading to the eventual need for the bomb to be dropped. (p.35)
Based on the available information, and until we have better evidence for the claim, I would not want to use this as an example of a simple mistake having severe consequences. And because the anecdote is incredibly catchy, I worry that policy researchers and practitioners will read it and subsequently use it in conversation.
In EA, the roles of “facilitator” and “attendee” may not be as straightforward as they appear to be in AR. From personal experience, there are many influential people in the EA community who do not hold designated roles that overtly reveal their power. Their influence and soft power only become apparent once you gain a deeper understanding of how community members interrelate and how information is exchanged. On the other hand, someone who is newly on a Community Building grant may have more power on paper than in reality.
I agree with the need for a policy; I just want it to reflect the nuances of power dynamics in EA. While no policy will be perfect, we should aim for one that does not unnecessarily restrict people, which could lead to disillusionment with the policy, and, more importantly, one that does stick in cases where it should, e.g. for people with a lot of soft power.
This is currently at 14 agree votes and the same question for Will MacAskill is at −13 disagree votes.
I’d be curious whether this is mainly because Nick Beckstead was the CEO and therefore carried more responsibility, or whether there are other considerations.
The most recent Scott Alexander post seems potentially relevant to this discussion.
The following long section is about what OpenAI could be thinking – and might also translate to Anthropic. (The rest of the post is also worth checking out.)
Why OpenAI Thinks Their Research Is Good Now, But Might Be Bad Later
OpenAI understands the argument against burning timeline. But they counterargue that having the AIs speeds up alignment research and all other forms of social adjustment to AI. If we want to prepare for superintelligence—whether solving the technical challenge of alignment, or solving the political challenges of unemployment, misinformation, etc—we can do this better when everything is happening gradually and we’ve got concrete AIs to think about:
“We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios […] As we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.
A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.”
You might notice that, as written, this argument doesn’t support full-speed-ahead AI research. If you really wanted this kind of gradual release that lets society adjust to less powerful AI, you would do something like this:
Release AI #1
Wait until society has fully adapted to it, and alignment researchers have learned everything they can from it.
Then release AI #2. Wait until society has fully adapted to it, and alignment researchers have learned everything they can from it.
And so on . . .
Meanwhile, in real life, OpenAI released ChatGPT in late November, helped Microsoft launch the Bing chatbot in February, and plans to announce GPT-4 in a few months. Nobody thinks society has even partially adapted to any of these, or that alignment researchers have done more than begin to study them.
The only sense in which OpenAI supports gradualism is the sense in which they’re not doing lots of research in secret, then releasing it all at once. But there are lots of better plans than either doing that, or going full-speed-ahead.
So what’s OpenAI thinking? I haven’t asked them and I don’t know for sure, but I’ve heard enough debates around this that I have some guesses about the kinds of arguments they’re working off of. I think the longer versions would go something like this:
The Race Argument:
Bigger, better AIs will make alignment research easier. At the limit, if no AIs exist at all, then you have to do armchair speculation about what a future AI will be like and how to control it; clearly your research will go faster and work better after AIs exist. But by the same token, studying early weak AIs will be less valuable than studying later, stronger AIs. In the 1970s, alignment researchers working on industrial robot arms wouldn’t have learned anything useful. Today, alignment researchers can study how to prevent language models from saying bad words, but they can’t study how to prevent AGIs from inventing superweapons, because there aren’t any AGIs that can do that. The researchers just have to hope some of the language model insights will carry over. So all else being equal, we would prefer alignment researchers get more time to work on the later, more dangerous AIs, not the earlier, boring ones.
“The good people” (usually the people making this argument are referring to themselves) currently have the lead. They’re some amount of progress (let’s say two years) ahead of “the bad people” (usually some combination of Mark Zuckerberg and China). If they slow down for two years now, the bad people will catch up to them, and they’ll no longer be setting the pace.
So “the good people” have two years of lead, which they can burn at any time.
If the good people burn their lead now, the alignment researchers will have two extra years studying how to prevent language models from saying bad words. But if they burn their lead in 5-10 years, right before the dangerous AIs appear, the alignment researchers will have two extra years studying how to prevent advanced AGIs from making superweapons, which is more valuable. Therefore, they should burn their lead in 5-10 years instead of now. Therefore, they should keep going full speed ahead now.
The Compute Argument:
Future AIs will be scary because they’ll be smarter than us. We can probably deal with something a little smarter than us (let’s say IQ 200), but we might not be able to deal with something much smarter than us (let’s say IQ 1000).
If we have a long time to study IQ 200 AIs, that’s good for alignment research, for two reasons. First of all, these are exactly the kind of dangerous AIs that we can do good research on—figure out when they start inventing superweapons, and stamp that tendency out of them. Second, these IQ 200 AIs will probably still be mostly on our side most of the time, so maybe they can do some of the alignment research themselves.
So we want to maximize the amount of time it takes between IQ 200 AIs and IQ 1000 AIs.
If we do lots of AI research now, we’ll probably pick all the low-hanging fruit, come closer to optimal algorithms, and the limiting resource will be compute—ie how many millions of dollars you want to spend building giant computers to train AIs on. Compute grows slowly and conspicuously—if you’ve just spent $100 million on giant computers to train AI, it will take a while before you can gather $1 billion to spend on even gianter computers. Also, if terrorists or rogue AIs are gathering a billion dollars and ordering a giant computer from Nvidia, probably people will notice and stop them.
On the other hand, if we do very little AI research now, we might not pick all the low-hanging fruit, and we might miss ways to get better performance out of smaller amounts of compute. Then an IQ 200 AI could invent those ways, and quickly bootstrap up to IQ 1000 without anyone noticing.
So we should do lots of AI research now.
The Fire Alarm Argument:
Bing’s chatbot tried to blackmail its users, but nobody was harmed and everyone laughed that off. But at some point a stronger AI will do something really scary—maybe murder a few people with a drone. Then everyone will agree that AI is dangerous, there will be a concerted social and international response, and maybe something useful will happen. Maybe more of the world’s top geniuses will go into AI alignment, or it will be easier to coordinate a truce between different labs where they stop racing for the lead.
It would be nice if that happened five years before misaligned superintelligences start building superweapons, as opposed to five months before it, since five months might not be enough time for the concerted response to do something good.
As per the previous two arguments, maybe going faster now will lengthen the interval between the first scary thing and the extremely dangerous things we’re trying to prevent.
These three lines of reasoning argue that burning a lot of timeline now might give us a little more timeline later. This is a good deal if:
Burning timeline now actually buys us the extra timeline later. For example, it’s only worth burning timeline to establish a lead if you can actually get the lead and keep it.
A little bit of timeline later is worth a lot of timeline now.
Everybody between now and later plays their part in this complicated timeline-burning dance and doesn’t screw it up at the last second.
I’m skeptical of all of these.
Thanks for the context, didn’t know that!