“Profits for investors in this venture [ETA: OpenAI] were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025).”
I stumbled upon this quote in a recent Economist article [archived] about OpenAI. I couldn’t find any additional source supporting the claim, so it might not be accurate. The earliest mention I could find is from January 17th, 2023, although it only describes OpenAI as “proposing” the rule change.
If true, this would make the profit cap much less meaningful, especially under longer AI timelines. For example, a $1 billion investment made in 2023 would be capped at roughly 1,540 times its value by 2040 (100 × 1.2¹⁵).
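The arithmetic above can be sketched as follows. The 100× base cap, the 2025 start year, and the 20% annual growth are from the quoted article; the function name and the assumption that growth compounds once per year are mine:

```python
# Sketch of the cap arithmetic, assuming the cap stays at 100x through
# 2025 and then compounds at 20% per year (per the quoted rule change).
def profit_cap_multiple(year: int, base: float = 100.0, growth: float = 1.2) -> float:
    years_of_growth = max(0, year - 2025)
    return base * growth ** years_of_growth

# A 2023 investment evaluated in 2040: 100 * 1.2**15, roughly 1540x
print(round(profit_cap_multiple(2040)))
```

Under these assumptions the cap keeps compounding indefinitely, so on long timelines it approaches an uncapped investment.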
Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.
Would the information in this quote fall under any of the Freedom of Information Act (FOIA) exemptions, particularly those concerning national security or confidential commercial information/trade secrets? Or would there be other reasons why it wouldn’t become public knowledge through FOIA requests?
As far as I understand, the plan is for it to be a (sort of?) national/governmental institute. The UK government has quite a few scientific institutes. It might be the first of its kind in the world.
In this article from early October, the phrasing implies that it would be tied to the UK government:
Sunak will use the second day of Britain’s upcoming two-day AI summit to gather “like-minded countries” and executives from the leading AI companies to set out a roadmap for an AI Safety Institute, according to five people familiar with the government’s plans.
The body would assist governments in evaluating national security risks associated with frontier models, which are the most advanced forms of the technology.
The idea is that the institute could emerge from what is now the United Kingdom’s government’s Frontier AI Taskforce[...].
Thanks for the context, didn’t know that!
SBF was additionally charged with bribing Chinese officials with $40 million. Caroline Ellison testified in court that they sent a $150 million bribe.
GiveWell didn’t lose $900 million to fraud. GiveDirectly lost $900,000 to fraud.
My hope and expectation is that neither will be focused on EA
I’d be surprised [p<0.1] if EA was not a significant focus of the Michael Lewis book – but I agree that it’s unlikely to be the major topic. Many leaders at FTX and Alameda Research were closely linked to EA. SBF often, and publicly, said that effective altruism was a big reason for his actions. His connection to EA is interesting both for understanding his motivation and as a story-telling element. There are Manifold prediction markets on whether the book will mention 80,000 Hours (74%), Open Philanthropy (74%), and GiveWell (80%), but these markets are thinly traded and not very informative.
This video titled The Fake Genius: A $30 BILLION Fraud (2.8 million views, posted 3 weeks ago) might give a glimpse of how EA could be handled. The video touches on EA but isn’t centred on it. It discusses the role EAs played in motivating SBF to pursue earning to give, and in starting Alameda Research and FTX. It also points out that, after the fallout at Alameda Research, ‘higher-ups’ at CEA were warned about SBF but supposedly ignored the warnings. Overall, the video is mainly interested in the mechanisms of how the suspected fraud happened, with EA as only one piece of the puzzle. One can equally come away with “EA led SBF to commit fraud” as with “SBF used EA as a front for fraud”.

ETA: The book description mentions “philanthropy”, makes it clear that the book is mainly about SBF and not FTX as a firm, and describes it as partly a psychological portrait.
I also created a similar market for CEA, but with 2 mentions as the resolving criteria. One mention is very likely as SBF worked briefly for them.
“In Going Infinite Lewis sets out to answer this question, taking readers into the mind of Bankman-Fried, whose rise and fall offers an education in high-frequency trading, cryptocurrencies, philanthropy, bankruptcy, and the justice system. Both psychological portrait and financial roller-coaster ride, Going Infinite is Michael Lewis at the top of his game, tracing the mind-bending trajectory of a character who never liked the rules and was allowed to live by his own—until it all came undone.”
(Not sure if this is within the scope of what you’re looking for.) I’d be excited about something like a roundtable with people who have been through 80,000 Hours advising – talking about how their thinking about their careers has changed, advice for people in similar situations, etc. I’d imagine this could be a good fit for 80k After Hours?
On Microsoft Edge (the browser) there’s a “read aloud” option that offers a range of natural voices for websites and PDFs. It’s only slightly worse than Speechify and free – and can give a glimpse of whether $139/year might be worth it for you.
I think that a very simplified ordering for how to impress/gain status within EA is:
Disagreement well-justified ≈ Agreement well-justified >>> Agreement sloppily justified > Disagreement sloppily justified
Looking back on my early days interacting with EAs, I generally couldn’t present well-justified arguments. As a result, I felt pressure to agree on shaky epistemic grounds. Because I sometimes disagreed anyway, I suspect that some parts of the community were less accessible to me back then.
I’m not sure about what hurdles to overcome if you want EA communities to push towards ‘Agreement sloppily justified’ and ‘Disagreement sloppily justified’ being treated similarly.
As far as I understand, the paper doesn’t disagree with this and an explanation for it is given in the conclusion:
Communication strategies such as the ‘funnel model’ have facilitated the enduring perception amongst the broader public, academics and journalists that ‘EA’ is synonymous with ‘public-facing EA’. As a result, many people are confused by EA’s seemingly sudden shift toward ‘longtermism’, particularly AI/x-risk; however, this ‘shift’ merely represents a shift in EA’s communication strategy to more openly present the movement’s core aims.
The low number of human-shrimp connections may be due to the attendance dip in 2020. Shrimp have understandably a difficult relationship with dips.
There is a comprehensive process in place… it is a cohesive approach to aligning font, but thank you for the drama!
Insiders know that EA NYC has ambitious plans to sprout a whole network of Bodhi restaurants. To those who might criticize this blossoming “bodhi count,” let’s not indulge in shaming their gastronomic promiscuity. After all, spreading delicious vegan dim sum and altruism is something we can all savour.
I find the second one more readable.
Might be due to my display: If I zoom into the two versions, the second version separates letters better.
But you’re also right, that we’ll get used to most changes :)
I find the font to be less readable and somewhat clunky. Can’t quite express why it feels that way. It reminds me of display scaling issues, where your display resolution doesn’t match the native resolution.
I’m not really sure if the data suggests this.
The question is rather vague, making it difficult to determine the direction of the desired change. It seems to suggest that longtermists and more engaged individuals are less likely to support large changes in the community in general. But both groups might, on average, agree that change should go in the ‘big tent’ direction.
Although there are statistically significant differences in responses to “I want the community to look very different” between those with mild vs. high engagement, their average responses are still quite similar (around 4.2/7 vs. 3.7/7). Finding statistically significant differences in beliefs between two groups doesn’t always imply that there are large or meaningful differences in the content of their actual beliefs. I feel I could also just be overlooking something here.