I created this account because I wanted to have a much lower bar for participating in the Forum, and if I don't do so pseudonymously, I am afraid of looking dumb.
I also feel like my job places some constraints on the things I can say in public.
They seem really good! I feel like an idiot for asking this, but where on their website can I subscribe to the newsletter?
Thank you for writing this and for your kind words about the Dutch EA community!
I am curious whether you feel that an organisation that doubles down on a single country could be more effective. At least in the political realm, it should be possible to build good relationships with the relevant political actors, though obviously you would trade away a lot of the expertise that comes from having a more international perspective.
This seems incredibly exciting! I know several organizations that are looking to spin up their own internal forecasting systems, but can't find a good open-source system to use as a basis. Metaculus is definitely the most advanced forecasting system available, and I am super excited to see whether it will be possible to host local versions of it!
Are you here to win, or to win the race?
I've been reflecting on the various perspectives within AI governance discussions, particularly among those concerned about AI safety.
One noticeable dividing line separates two camps. The first group is concerned primarily about the risks posed by advanced AI systems themselves. This group advocates for regulating AI as it exists today and increasing oversight of AI labs. Their reasoning is that slowing down AI development would provide more time to address technical challenges and allow society to adapt to AI's future capabilities. They are generally cautiously optimistic about international cooperation. I think FLI falls into this camp.
On the other hand, there is a group increasingly focused not only on developing safe AI but also on winning the race, often against China. This group believes that the US currently has an advantage and that maintaining this lead will provide more time to ensure AI safety. They likely think the US embodies better values compared to China, or at least prefer US leadership over Chinese leadership. Many EA organizations, possibly including OP, IAPS, and those collaborating with the US government, may belong to this group.
I've found myself increasingly wary of the second group, tending to discount their views, trust them less, and question the wisdom of cooperating with them. My concern is that their primary focus on winning the AI race might overshadow the broader goal of ensuring AI safety. I am not really sure what to do about this, but I wanted to share my concern and hope to think more about what can be done to prevent a rift from emerging, especially since I expect the policy stakes to get higher and higher in the coming years.
I don't disagree with your final paragraph, and I think this is worth pursuing generally.
However, I do think we must consider the long-term implications of replacing long-established structures with AI. These structures have evolved over decades or centuries, and their dismantling carries significant risks.
Regarding startups: to me, their decline in efficiency as they scale seems like a form of regression to the mean. Startups that succeed do so because of their high-quality decision-making and leadership. As they grow, the decision-making pool expands, often to include individuals who haven't undergone the same rigorous selection process. This dilution can reduce the overall alignment of decisions with those the founders would have made (a group already selected for decent decision-making quality, at least on the limited metrics that drive startup survival).
Governments, unlike startups, do not emerge from such a competitive environment. They inherit established organizations with built-in checks and balances designed to enhance decision-making. These checks and balances, although contributing to larger bureaucracies, are probably useful for maintaining accountability and preventing poor decisions, even though they also prevent more drastic change when this is necessary. They also force the decision-maker to take into account another large group of stakeholders within the bureaucracy.
I guess part of my point is that there is a big difference between alignment with the decision-maker and the quality of decision-making.
I think this post misses one of the concerns I have in the back of my mind about AI: How much of our current pluralism, liberalism, and democracy rests on the fact that governance can't be automated yet?
Currently, policymakers need the backing of thousands of bureaucrats to execute policy, and this same bureaucracy provides most of the information the policymaker receives. I am fairly sure that this makes the policymaker more accountable and ensures that some truly horrible ideas do not get implemented. If we create AI specifically to help with governance and automate a large amount of this kind of labor, we will find out how important this dynamic is…
I think this dynamic was better explained in this post.
Thank you for all the hard work!
I wasn't really surprised by anything here, except for the heavy emphasis on EAGxNetherlands 2024. Is this based on the EAGxRotterdam results? Did that lead to significant community growth? That would slightly surprise me because my sense of these events was that they mostly attract a group that is already pretty active in EA.
I am fairly sure that JWS means to say that these subgroups are about to / should lose some of their dominance in the EA movement.
Like Karthik, I don't really understand what is so terrible about this, but I agree that the California edition is at least strange. It's interesting how many of the ideas central to EA originate from California. While exploring the origin stories of these ideas is intriguing, I would be much more interested in an issue that explores ideas from far outside that comfort zone and sees what they can teach us.
However, I'm not an editor and don't think I'd make a good one either 🙂
In the past few weeks, I spoke with several people interested in EA and wondered: what media do others recommend consuming first in this situation (books, blog posts, podcasts)?
Isn't it time we had a comprehensive guide on which introductory EA books or media to recommend to different people, backed by data?
Such a resource could consider factors like background, interests, and learning preferences, ensuring the most impactful material is suggested for each individual. Wouldnât this tailored approach make promoting EA among friends and acquaintances more effective and engaging?
Thank you! And Bullet Journal seems like a great new addition, congratulations!
I will take this post as an opportunity to ask a quick question about the company pledge: I got the feeling that it has been placed a little on the back burner. Or at least, I have never seen it promoted and only found out it existed when I was looking at the list of pledge takers. Is this still a product that is actively receiving attention? If not, why not?
What do you mean by being on their side of the fence? It is quite hard for me to discern the underlying disagreement here. I feel like I am one of the most engaged EAs in my local community, but the beliefs Torres ascribes to EA are so far removed from my own that it is difficult to determine whether there is any actual substantial disagreement underlying all this nastiness in the first place.
This was very helpful, thank you!
I can't find a better place to ask this, but I was wondering whether/where there is a good explanation of the scepticism of leading rationalists about animal consciousness/moral patienthood. I am thinking in particular of Zvi and Yudkowsky. In the recent podcast with Zvi Mowshowitz on 80K, the question came up a bit, and I know he is also very sceptical of interventions for non-human animals on his blog, but I had a hard time finding a clear explanation of where this belief comes from.
I really like Zvi's work, and he has been right about a lot of things I was initially on the other side of, so I would be curious to read more of his or similar people's thoughts on this.
This seems like a place where there is potentially a motivation gap: non-animal-welfare people have little incentive to explain to me why they think the things I work on are not that useful.
Hi James, I feel quite guilty for prompting you to write such a long, detailed, and persuasive response! Striving to find a balance between prudence and appeal seems to be the ideal goal. Using the NHS's spending habits as a heuristic to avoid extravagance seems smart (although I would not say that this should apply to other events!). Most importantly, I am relieved to learn that this year's budget per person will likely be significantly lower.
I totally agree that these events are invaluable. EAGs and EAGxs have been crucial in expanding my network and enhancing my impact and agency. However, as mentioned, I was concerned about perceptions. Having heard this, I feel reassured, and I will see who I can invite! Thank you!
Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. The last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly grounding it locally) and boosted the community's growth. I think that event also contributed to the growth of the 10% club and the founding of the School for Moral Ambition this year, highlighting the Netherlands as fertile ground for EA principles.
However, I'm less inclined to view the upcoming event as an opportunity to introduce proto-EAs. Recalling the previous Rotterdam edition's perceived expense, I'm concerned that the cost may deter potential newcomers, especially given the feedback I've heard regarding its perceived extravagance. I think we all understand why these events are worth our charitable Euros, but I have a hard time explaining that to newcomers who are attracted to EA for its (perceived) efficiency/effectiveness.
While the funding landscape may have changed (and this problem may have solved itself through that), I think it remains crucial to consider the aesthetics of events like these where the goal is in part to welcome new members into our community.
Out of curiosity, why do you think this is the case? Aren't the Berlin and Nordics conferences (and the London EAG) much more accessible for most EAs in Western Europe?
(Also, I personally assumed that the 35% was not a goal but a cap, to make sure that not too many of the speakers are from the Netherlands.)
I am aware of Metaforecast, but from what I understood, it is no longer maintained. Last time I checked, it did not work with Metaculus anymore. It is also not very easy to use, to be honest.