I created this account because I wanted to have a much lower bar for participating in the Forum, and if I don't do so pseudonymously, I am afraid of looking dumb.
I also feel like my job places some constraints on the things I can say in public.
I believe the division of areas for the event is quite decent. However, I think EAGx events also allow for the introduction of new ideas into the EA community. What cause areas do others believe we should prioritize but currently do not? Personally, I am considering areas like protecting liberal democracy, improving decision-making (individual and institutional), and addressing great power conflicts (broader than AI and nuclear issues). There are likely many other areas, and the causes I've listed here are already somewhat related to EA. Perhaps there are topics that are further outside the box.
I am also somewhat uncertain about the term "entrepreneurship skills." Could someone clarify what is meant by this exactly?
Thank you! Seems like a valuable tool to learn!
I enjoyed reading this post!
My question is on a small topic though: what is the BIRD decision-making tool? A Google search turned up very few useful links.
Yes, I totally agree it is important not to hide our mistakes. I just wish SBF was presented in the context I see it in: as an unbelievable fuck-up / disaster / crime in a community that is at least trying very hard to do good.
The sad fact is that this book might be the main way people in the Netherlands learn about the link between SBF and EA. But I guess there is little we can do about it now.
You should also tell us how well you feel your experiment with explicitly asking a large group for feedback is going! This seems like a much more interesting approach than just having the form available somewhere, which in my experience results in very little input.
This story is interesting; however, I must admit, I am most surprised by the "up to $10 million" figure. I had assumed the US would allocate significantly more funds for this. For comparison:
The UK AI Safety Institute is expected to spend over £100 million ($128m+) during the first two years, if I understand correctly.
The EU AI Office (while serving a slightly different role, yet with considerable overlap) will likely spend €46.5 million ($50m+) annually.
What am I overlooking?
These all seem good topics to flesh out further! Is 1 still a "hot take" though? I thought this was pretty much the consensus here at this point?
I'm not OP, obviously, and I am only speaking from experience here, so I have no data to back this up, but:
My feeling is that foresight projects have a tendency to become political very quickly, and they are much more about stakeholder engagement than they are about finding the truth, whereas forecasting can remain relatively objective for longer.
That being said: I am very excited about combining these approaches.
I'm considering elaborating on this in a full post, but I will do so quickly here as well: It appears to me that there's potentially a misunderstanding here, leading to unnecessary disagreement.
I think that the nature of forecasting in the context of decision-making within governments and other large institutions is very different from what is typically seen on platforms like Manifold, PolyMarket, or even Metaculus. I agree that these platforms often treat forecasting more as a game or hobby, which is fine, but very different from the kind of questions policymakers want to see answered.
I (and I hope this aligns with OP's vision) would want to see a greater emphasis on advancing forecasting specifically tailored for decision-makers. This focus diverges significantly from the casual or hobbyist approach observed on these platforms. The questions you ask should probably not be public, and they are usually far more boring. In practice, it looks more like an advanced Delphi method than like Manifold Markets. I'm somewhat surprised to see interpretations of this post suggesting a need for more funding in the more recreational type of forecasting, which, in my view, is not and should not be a priority.
Edit: One obvious exception to the dichotomy I describe above is that the more fun forecasting platforms can be a good way of identifying good forecasters.
I feel like many of the other comments here better express my view. But I just wanted to chip in and say that I think your EA friends are wrong and that this post makes me respect you much more, not less. I also really appreciate that you are being so open about your thinking and internal conflicts, even though you are yourself a leader in the space I really look up to.
Recent announcements from Meta had me thinking about "open source" AI systems more, and I am wondering whether it would be worthwhile to reframe open source models and start referring to them as "models with publicly available model weights" or "free-weight models".
This is not just more accurate, but also a better political frame for those (like me) who think that releasing model weights publicly is probably not going to lead to safer AI development.
Ozy is consistently both underread and incredibly insightful; I wish these kinds of posts got crossposted to the Forum more often.
Out of curiosity, why do you think this is the case? Isn't the Berlin and Nordics conference (and the London EAG) much more accessible for most EAs in Western Europe?
(Also, personally I assumed that the 35% was not a goal but a maximum, to make sure that too many of the speakers aren't from the Netherlands.)