I created this account because I wanted to have a much lower bar for participating in the Forum, and if I don’t do so pseudonymously, I am afraid of looking dumb.
I also feel like my job places some constraints on the things I can say in public.
Hi James, I feel quite guilty for prompting you to write such a long, detailed, and persuasive response! Striving to find a balance between prudence and appeal seems to be the ideal goal. Using the NHS’s spending habits as a heuristic to avoid extravagance seems smart (although I would not say that this should apply to other events!). Most importantly, I am relieved to learn that this year’s budget per person will likely be significantly lower.
I totally agree that these events are invaluable. EAGs and EAGxs have been crucial in expanding my network and enhancing my impact and agency. However, as mentioned, I am concerned about perceptions. Having heard this I feel reassured, and I will see who I can invite! Thank you!
Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. My experience at the last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly grounding it locally) and boosted the community’s growth. I think this event also contributed to the growth of the 10% club and the founding of the School for Moral Ambition this year, highlighting the Netherlands as fertile ground for EA principles.
However, I’m less inclined to view the upcoming event as an opportunity to introduce proto-EAs. Recalling the previous Rotterdam edition’s perceived expense, I’m concerned that the cost may deter potential newcomers, especially given the feedback I’ve heard regarding its perceived extravagance. I think we all understand why these events are worth our charitable Euros, but I have a hard time explaining that to newcomers who are attracted to EA for its (perceived) efficiency/effectiveness.
While the funding landscape may have changed (and this problem may have solved itself through that), I think it remains crucial to consider the aesthetics of events like these where the goal is in part to welcome new members into our community.
Out of curiosity, why do you think this is the case? Isn’t the Berlin and Nordics conference (and the London EAG) much more accessible for most EAs in Western Europe?
(Also, personally I assumed that the 35% was not a goal but a maximum to make sure the speakers are not from the Netherlands too much.)
I believe the division of areas for the event is quite decent. However, I think EAGx events also allow for the introduction of new ideas into the EA community. What cause areas do others believe we should prioritize but currently do not? Personally, I am considering areas like protecting liberal democracy, improving decision-making (individual and institutional), and addressing great power conflicts (broader than AI and nuclear issues). There are likely many other areas, and the causes I’ve listed here are already somewhat related to EA. Perhaps there are topics that are further outside the box.
I am also somewhat uncertain about the term “Entrepreneurship skills.” Could someone clarify what is meant by this exactly?
Thank you! Seems like a valuable tool to learn!
I enjoyed reading this post!
My question is on a small topic, though: what is a BIRD decision-making tool? A Google search turned up very few useful links.
Yes, I totally agree it is important not to hide our mistakes. I just wish SBF was presented in the context I see it in: as an unbelievable fuck-up / disaster / crime in a community that is at least trying very hard to do good.
The sad fact is that this book might be the main way people in the Netherlands learn about the link between SBF and EA. But I guess there is little we can do about it now.
You should also tell us how well you feel your experiment with explicitly asking a large group for feedback is going! This seems like a much more interesting approach than just having the form available somewhere, which in my experience results in very little input.
This story is interesting; however, I must admit, I am most surprised by the “up to $10 million” figure. I had assumed the US would allocate significantly more funds for this. For comparison:
The UK AI Safety Institute is expected to spend over £100 million (128m+ USD) during its first two years, if I understand correctly.
The EU AI Office (while serving a slightly different role, yet with considerable overlap) will likely spend €46.5 million (50m+ USD) annually.
What am I overlooking?
These all seem good topics to flesh out further! Is 1 still a “hot take” though? I thought this was pretty much the consensus here at this point?
I’m not OP, obviously, and I am only speaking from experience here, so I have no data to back this up, but:
My feeling is that foresight projects have a tendency to become political very quickly, and they are much more about stakeholder engagement than they are about finding the truth, whereas forecasting can remain relatively objective for longer.
That being said: I am very excited about combining these approaches.
I’m considering elaborating on this in a full post, but I’ll give a quick version here as well: it appears to me that there’s a misunderstanding here, leading to unnecessary disagreement.
I think that the nature of forecasting in the context of decision-making within governments and other large institutions is very different from what is typically seen on platforms like Manifold, PolyMarket, or even Metaculus. I agree that these platforms often treat forecasting more as a game or hobby, which is fine, but very different from the kind of questions policymakers want to see answered.
I (and I hope this aligns with OP’s vision) would want to see a greater emphasis on advancing forecasting specifically tailored for decision-makers. This focus diverges significantly from the casual or hobbyist approach observed on these platforms. The questions you ask should probably not be public, and they are usually far more boring. In practice, it looks more like an advanced Delphi method than like Manifold Markets. I’m somewhat surprised to see interpretations of this post suggesting a need for more funding in the more recreational type of forecasting, which, in my view, is not and should not be a priority.
Edit: One obvious exception to the dichotomy I describe above is that the more fun forecasting platforms can be a good way of identifying ~~Superforecasters~~ good forecasters.
I feel like many of the other comments here better express my view. But I just wanted to chip in and say that I think your EA friends are wrong and that this post makes me respect you much more, not less. I also really appreciate that you are being so open about your thinking and internal conflicts, even though you yourself are a leader in this space whom I really look up to.
Recent announcements from Meta have had me thinking about “open source” AI systems more, and I am wondering whether it would be worthwhile to reframe open source models and start referring to them as “models with publicly available model weights” or “free-weight models”.
This is not just more accurate, but also a better political frame for those (like me) who think that releasing model weights publicly is probably not going to lead to safer AI development.
Ozy is consistently both underread and incredibly insightful, I wish these kinds of posts got crossposted to the forum more often.
This is a quick response; I find it an interesting question but do not have time to respond in detail:
I think, to be honest, there is very little evidence that would make me change my mind about capitalism, because it is such a broad term that people disagree quite radically about what exactly it refers to.
If, for example, it is referring to problems related to poverty, racism, sexism, imbalances in power, I think I don’t need to be convinced, because I already think that those are big problems that we need to face, and that we need structural change to address those. If, on the other hand, opposing capitalism would mean the same as supporting violent overthrow of the systems that we currently have, I think there is evidence that would persuade me that this is necessary, but it would take a very different form than convincing me that the problems mentioned above are important and worth addressing.
Opposing capitalism also seems like an easy applause light that doesn’t come with any costs, such as thinking through what kind of alternative system would need to be presented and what kind of power structures it would be built on.
TL;DR: I am just not sure capitalism is a useful concept, and I would need to be convinced of that before I could be convinced to “oppose it”.