Man this is one of the best posts I’ve ever read on the forum. Extremely educational while remaining very engaging (rare to find both). Thank you for writing this, I hope you’ll do similar write-ups for other research you do!
What age range are you intending for the book to be for? I look forward to reading it with my niece when she is old enough :)
But why exactly should I help those in the community who believe that the moral thing to do when someone is on their knees is to curb stomp them while yelling “I should have been admitted to EAG 2016!”? Why should I expose myself further by doing ambitious things (No I don’t mean fraud- that’s not an ambitious thing that’s a—criminal—thing) when if I fail people are going to make everything worse by screaming “I told you so” to signal that they never would have been such a newb? Yeah. No. The circle I’m drawing around who is and is not in my community is getting dramatically redrawn. This is not because one person or company made a series of very bad decisions, it’s because so many of your actions are those of people I will not invest in further and who I don’t want anywhere near my life or life’s work.
This paragraph really resonated with me. I suspect many people whom we would benefit greatly from having in our community are turned off because they got the same feeling you articulated here.
I’m finding it difficult to articulate why I think this is, but let me attempt:
When I’ve been at my least productive, I find myself falling into a terrible zero-sum mindset of actively searching for things that are unjust or unfair. My thoughts often take the shape of something like:
Why do influential EAs only care about <thing they think is important> and not <thing I think is important>?
‘If EA were less <elitist/nepotistic> and more <democratic/open/whatever>, then my pet cause would get the attention it deserves!’
On the other hand, when I’m at my most productive and fully immersed in projects that matter to me, I don’t ever find myself thinking those thoughts. I’m too focused on actually getting things done and producing surplus to care about how others spend their time and resources.
In this mindset I’m incredibly optimistic, and I intuitively feel that any problem is solvable if I put my mind to it. In the former mindset, everything seems doomed to fail and I want to sneer at anyone who thinks otherwise.

These mindsets feel very distinct, and it’s very clear the latter is highly conducive to success while the former is actively harmful. If somebody with the latter mindset gets their first impression of EA from people with the former, I don’t blame them for bailing.
Thank you for writing this. It’s barely been a week; take your time.
There’s been a ton of posts on the forum about various failures, preventative measures, and more. As much as we all want to get to the bottom of this and ensure nothing like this ever happens again, I don’t think our community benefits from hasty overcorrections. While many of the points made are undoubtedly good, I don’t think it will hurt the EA community much to wait a month or two before demanding any drastic measures.
EAs should probably still be ambitious. Adopting rigorous governance and oversight mechanisms sometimes does more harm than good. Let’s not throw out the baby with the bathwater.
I’m still reflecting and am far from having fully formed beliefs yet, so I’m confused by just how many strong views have already been expressed on the forum. Even correctly recalling my own thoughts and feelings about FTX before the event is difficult. I’m noticing a lot of finger-pointing and not a lot of introspection.
I don’t know about everyone else, but I’m pretty horrified at just how similar my thinking seems to have been to SBF’s. If a person who seemingly agreed with me on so many moral priorities was capable of doing something so horrible, how can I be sure that I am different?

I’m going to sit with that thought for a while, and think about what type of person I want to strive to be.
Good question, I’ve created a Manifold market for this:
I wouldn’t conclude much from the Future Fund withholding funds for now. Even if they are likely in the clear, freezing payments until they have made absolutely sure of it strikes me as a very reasonable thing to do.
My only worry is that more things will be posted in a short time than anyone has time to read. I’m still working my way through all the cause area reports. Some system to space the posts out to prevent fatigue might be warranted for events like these and for future writing contests.
You can only spend your resources once. Unless you argue that there is a free lunch somewhere, any money and time spent by the UN inevitably has to come from somewhere else. Arguing that longtermist concerns should be prioritized necessarily requires arguing that other concerns should be de-prioritized.
If EAs or the UN argue that longtermism should be a priority, it’s reasonable for the authors to question where those resources are going to come from.
For what it’s worth, I think it’s a no-brainer that the UN should spend more energy on ensuring the future goes well, but we shouldn’t pretend that it’s not at the expense of those who currently exist.
In the early 2000s, when climate change started seriously getting onto the multilateral agenda, there were economists like Bjørn Lomborg arguing that we should instead spend our resources on cost-effective poverty alleviation.
He was widely criticized for this, for example by Michael Grubb, an economist and lead author for several IPCC reports, who argued:
To try and define climate policy as a trade-off against foreign aid is thus a forced choice that bears no relationship to reality. No government is proposing that the marginal costs associated with, for example, an emissions trading system, should be deducted from its foreign aid budget. This way of posing the question is both morally inappropriate and irrelevant to the determination of real climate mitigation policy.
Yet today, much (if not most) multilateral climate mitigation is funded by countries’ foreign aid budgets. The authors of this article, like Lomborg almost two decades ago, are reasonable to worry that multilateral organizations adopting new priorities comes at the expense of existing ones.
I believe we should spend much more time and money ensuring the future goes well, but we shouldn’t pretend that this isn’t at the expense of other priorities.
To me it seems they understood longtermism just fine and just so happen to disagree with strong longtermism’s conclusions. We have limited resources, and if you are a longtermist you think some or all of those resources should be spent ensuring the far future goes well. That means not spending those resources on pressing neartermist issues.

If EAs, or in this case the UN, push for more government spending on the future, the question everyone should ask is where that spending should come from. If it’s from our development aid budgets, that potentially means removing funding from humanitarian projects that benefit the world’s poorest.
This might be the correct call, but I think it’s a reasonable thing to disagree with.
Thank you, this is an excellent post. This style of transparent writing can often come across as very ‘EA’ and get made fun of for its idiosyncrasies, but I think it’s a tremendous strength of our community.
I would advise you to shorten the application to around one fourth of its current length. Focus on your strong points (running a growing business, a strong animal welfare profile) and leave out the rest. The weaker parts of your application water down the strongest ones.
Admissions are always a messy process, and good people get rejected often. A friend of mine, who I’m sure will go on to become a top-tier AI safety engineer, got rejected from EAG because there isn’t a great way to convey this information through an application form. Vetting people at scale is just really difficult.
Thanks for writing this, Jonas. As someone much below the LessWrong average at math, I would be grateful for a clarification of this sentence:
Provided $p_j, d_k$; $p_j, p_k$; and $d_j, d_k$ are independent when $j \neq k$
What do $j$ and $k$ refer to here? Moreover, is it a reasonable assumption that the uncertainties of existential risks are independent? It seems to me that many uncertainties cut across risk types, such as the chance of recovery after civilisational collapse.
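To make my worry concrete, here is a minimal Monte Carlo sketch. It is entirely my own illustration, not from the post: the two-risk setup, the Beta(2, 18) priors, and the perfectly shared uncertainty in the second case are all assumptions I've picked for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical per-risk probabilities with uncertainty: Beta(2, 18), mean 0.1.
# Case 1: our uncertainty about each risk is independent.
p1_ind = rng.beta(2, 18, n)
p2_ind = rng.beta(2, 18, n)

# Case 2: fully shared uncertainty, e.g. both estimates hinge on the same
# judgement about the chance of recovery after civilisational collapse.
shared = rng.beta(2, 18, n)
p1_cor = p2_cor = shared

def total_risk(p1, p2):
    # P(at least one of the two risks occurs), treating the events
    # themselves as independent given the probabilities.
    return 1 - (1 - p1) * (1 - p2)

for label, (p1, p2) in [("independent", (p1_ind, p2_ind)),
                        ("correlated", (p1_cor, p2_cor))]:
    t = total_risk(p1, p2)
    print(f"{label:12s} mean = {t.mean():.3f}, sd = {t.std():.3f}")
```

In this toy version the mean of the total risk barely moves, but its variance roughly doubles when the uncertainties are shared, which is why I’d like to understand how sensitive your results are to the independence assumption.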
For anyone interested in pursuing this further, Charity Entrepreneurship is looking to incubate a charity working on road traffic safety.
Their report on the topic can be found here: https://www.charityentrepreneurship.com/research
Thanks for giving everyone the opportunity to provide feedback!
I’m unsure how I feel about the section on global poverty and wellbeing. As it stands, the section mostly repeats the claim that some charities are more effective than others, without much rigorous discussion of why that might be. There’s a ton of great material under the final ‘differences in impact’ post that I would love to see as part of the main sequence.

Right now, I’m worried that people new to global health and development will leave this section feeling way overconfident about how sure we are about all of this charity stuff. If I were a person with experience working in the aid sector and went through the curriculum as it is, I think I would be left thinking that EAs are way overconfident despite barely knowing a thing about global poverty.
Here is an example of a potential exercise you could include that I think might go a long way towards conveying just how difficult it is to gain certainty about this stuff: read and evaluate two RCTs on vaccine distribution in two southern Indian states. What might these RCTs tell us about vaccine distribution in India as a whole? Have the reader try to assess which aspects of these RCTs will generalise to the rest of India and which won’t. They could, for example, make predictions (practicing another relevant EA skill!) on the results of an RCT in a northern Indian state.
You only have to do one deep dive on a topic to gain an appreciation for how little we know.
Words cannot express how much I appreciate your presence Nuno.
Sorry for being off-topic, but I just can’t help myself. This comment is such a perfect example of the attitude that made me fall in with this community.
Yeah, you’re right.
That puts EA in an even better light! “While the rest of the global health community imposes its values on how trade-offs should be made, the most prominent global health organisation in EA actually surveys and asks what the recipients prefer.”
I think the meta-point might be the crux of our disagreement.
I mostly agree with your inside view that other catastrophic risks struggle to be existential the way AI would, and I’m often a bit perplexed at how quickly people jump from ‘nearly everyone dies’ to ‘literally everyone dies’. Similarly, I’m sympathetic to the point that it’s difficult to imagine particularly compelling scenarios where AI doesn’t radically alter the world in some way.

But we should be immensely uncertain about the assumptions we make, and I would argue that by far the most likely first-order determinant of future value is something our inside-view models didn’t predict. My issue is not with your reasoning, but with how much trust to place in our models in general. My critique is absolutely not that you shouldn’t have an inside view, but that a well-developed inside view is one of many tools we use to gather evidence. Over-reliance on a single type of evidence leads to worse decision making.
intelligence peaks close to the human level, and superintelligence doesn’t yield significant increases in growth.
superintelligence in one domain doesn’t yield superintelligence in others, leading to some, but limited, growth, like most other technologies.
we develop EMs (whole-brain emulations), which radically change the world, including growth trajectories, before we develop superintelligence.
AI is directly relevant to both longterm survival and longterm growth. When we create a superintelligence, there are three possibilities. Either:
The superintelligence is misaligned and it kills us all
The superintelligence is misaligned with our own objectives but is benign
The superintelligence is aligned, and therefore can help us increase the growth rate of whatever we care about.
I think there are many more options than this, and every argument that follows banks entirely on your logical models being correct. Engineers can barely design a bike that works on the first try; what possibly makes you think you can create an accurate theoretical model of a topic that is so much more complex?

I think you are massively overconfident, considering that your only source of evidence is abstract models with zero feedback loops. There is nothing wrong with creating such models, but be aware of just how difficult it is to get even something much simpler right.
It’s great that you spent a year thinking about this, but many have spent decades and feel MUCH less confident about all of this than you.