I created this account because I wanted to have a much lower bar for participating in the Forum, and if I don't do so pseudonymously, I am afraid of looking dumb.
I also feel like my job places some constraints on the things I can say in public.
Thank you for fixing this!
I also don't think it's a good use of time, which is why I'm asking the question.
However, I believe attending is worth significantly more than three hours. That's why I've invested a lot of time in this previously, though I'd still prefer to allocate that time elsewhere if possible.
Edit: It's very helpful to know that the acceptance rate is much higher than I had thought. It already makes me feel like I can spend less time on this task this year.
Hi, I hope this is a good time to ask a question regarding the application process. Is it correct that it is possible to apply a second time after an initial application has been rejected?
I understand that the bar for acceptance might be higher on a second attempt. However, I feel this would allow me to save considerable time on the application process. Since I was accepted last year and a few times before, I might be able to reuse an old application with minimal editing. This could help me (and potentially many others) avoid spending three or more hours crafting an entirely new application from scratch.
Looking forward to your response!
Does anyone have thoughts on whether it's still worthwhile to attend EAGxVirtual in this case?
I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't:
1. I would only be able to attend on Sunday afternoon CET, and it seems like it might be a waste to apply if I'm only available for that time slot, as this is something I would never do for an in-person conference.
2. I can't find the schedule anywhere. You probably only have access to it if you are on Swapcard, but this makes it difficult to decide ahead of time whether it is worth attending, especially if I can only attend a small portion of the conference.
Hi Lauren!
Thank you for another excellent post! I'm becoming a big fan of the Substack and have been recommending it.
A quick question that you may have come across in the literature, though I didn't see it addressed in your article: not all peacekeeping missions are UN missions; there are also missions from ECOWAS, the AU, the EU, and NATO.
Does the data you presented hold only for UN missions, or does it apply to other peacekeeping operations as well?
I'd be curious to know, since those institutions seem more flexible and less entangled in geopolitical conflicts than the UN. However, I can imagine they may be seen as less neutral than the UN and therefore be less effective.
Could you say a bit more about your uncertainty regarding this?
After reading this, it sounds to me like shifting some government spending to peacekeeping would be money much better spent than on other themes.
Or do you mean it more from an outsider/activist perspective: that the work of running an organization focused on convincing policymakers to do this would be very costly and might make it much less effective than other interventions?
Thank you for the response! I should have been a bit clearer: this is what inspired me to write this, but I still need 3-5 sentences to explain to a policymaker what they are looking at when shown this kind of calibration graph. I am looking for something even shorter than that.
Simple Forecasting Metrics?
I've been thinking about the simplicity of explaining certain forecasting concepts versus the complexity of others. Take calibration, for instance: it's simple to explain. If someone says something is 80% likely, it should happen about 80% of the time. But other metrics, like the Brier score, are harder to convey: What exactly does it measure? How well does it reflect a forecaster's accuracy? How do you interpret it? All of this requires a lot of explanation for anyone not interested in the science of forecasting.
What if we had an easily interpretable metric that could tell you, at a glance, whether a forecaster is accurate? A metric so simple that it could fit within a tweet or catch the attention of someone skimming a report, someone who might be interested in platforms like Metaculus. Imagine if we could say, "When Metaculus predicts something with 80% certainty, it happens between X and Y% of the time," or "On average, Metaculus forecasts are off by X%." This kind of clarity could make comparing forecasting sources and platforms far easier.
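To make this concrete, here is a minimal sketch (in Python, using made-up placeholder forecasts rather than real Metaculus data) of how such one-line statements could be computed from a set of resolved binary forecasts:

```python
# Minimal sketch: turning resolved binary forecasts into one-line accuracy claims.
# The forecasts below are made-up placeholders, not real platform data.

forecasts = [  # (predicted probability, outcome: 1 = happened, 0 = did not)
    (0.8, 1), (0.8, 1), (0.8, 0), (0.8, 1), (0.8, 1),
    (0.3, 0), (0.3, 0), (0.3, 1), (0.6, 1), (0.6, 0),
]

# "When the platform says ~80%, how often does it actually happen?"
near_80 = [outcome for p, outcome in forecasts if 0.75 <= p <= 0.85]
if near_80:
    hit_rate = sum(near_80) / len(near_80)
    print(f"When predictions were around 80%, the event happened {hit_rate:.0%} of the time.")

# "On average, forecasts are off by X percentage points."
mean_abs_error = sum(abs(p - o) for p, o in forecasts) / len(forecasts)
print(f"On average, forecasts were {mean_abs_error:.0%} away from the actual outcome.")
```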
I'm curious whether anyone has explored creating such a concise metric, one that simplifies these ideas for newcomers while still being informative. It could be a valuable way to persuade others to trust and use forecasting platforms or prediction markets as reliable sources. I'm interested in hearing any thoughts or seeing any work that has been done in this direction.
Hi there!
I really enjoy the curated EA Forum podcast and appreciate all the effort that goes into it. However, I wanted to flag a small issue: my podcast app cannot handle emojis in filenames. With the increasing use of the 🔸 emoji in forum usernames, this has been causing some problems.
Would it be possible to remove emojis from the filenames?
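For what it's worth, here is a minimal sketch of the kind of sanitisation I have in mind, assuming the filenames can be post-processed in Python; the function name and example filename are purely illustrative:

```python
import unicodedata

def strip_emoji(filename: str) -> str:
    """Drop symbol/emoji characters (Unicode category 'So') from a filename.
    Illustrative only; I don't know how the real filenames are generated."""
    return "".join(ch for ch in filename if unicodedata.category(ch) != "So")

print(strip_emoji("Some author\U0001F538 - example episode.mp3"))
# -> "Some author - example episode.mp3"
```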
Thanks for considering this!
This is a very non-EA opinion, but personally I quite like this on, for lack of a better word, aesthetic grounds: charities should be accountable to someone, in the same way that companies are to shareholders and politicians are to electorates. Membership models are a good way of achieving that. I am a little sad that my local EA group is not organized in the same way.
Just to clarify, I assume that our distributions will not be made public / associated with our names?
What surprises me about this work is that it does not seem to include the more aggressive (for lack of a better word) alternatives I have heard being thrown around, like "suffering-free", "clean", or "cruelty-free".
I am aware of Metaforecast, but from what I understood, it is no longer maintained. Last time I checked, it did not work with Metaculus anymore. It is also not very easy to use, to be honest.
They seem really good! I feel like an idiot for asking this, but where on their website can I subscribe to the newsletter?
Thank you for writing this and for your kind words about the Dutch EA community!
I am curious whether you think an organisation that doubles down on a single country could be more effective. At least in the political realm, it should be possible to build good relationships with the relevant political actors, though obviously you would trade away a lot of the expertise that comes from having a more international perspective.
This seems incredibly exciting! I know several organizations that are looking to spin up their own internal forecasting systems but can't find a good open-source system to use as a basis. Metaculus is definitely the most advanced forecasting system available, and I am super excited to see whether it will be possible to host local versions of it!
Are you here to win or to win the race?
I've been reflecting on the various perspectives within AI governance discussions, particularly among those concerned about AI safety.
One noticeable dividing line separates two camps. The first is primarily concerned about the risks posed by advanced AI systems themselves. This group advocates for regulating AI as it exists today and increasing oversight of AI labs. Their reasoning is that slowing down AI development would provide more time to address technical challenges and allow society to adapt to AI's future capabilities. They are generally cautiously optimistic about international cooperation. I think FLI falls into this camp.
On the other hand, there is a group increasingly focused not only on developing safe AI but also on winning the race, often against China. This group believes that the US currently has an advantage and that maintaining this lead will provide more time to ensure AI safety. They likely think the US embodies better values compared to China, or at least prefer US leadership over Chinese leadership. Many EA organizations, possibly including OP, IAPS, and those collaborating with the US government, may belong to this group.
I've found myself increasingly wary of the second group, tending to discount their views, trust them less, and question the wisdom of cooperating with them. My concern is that their primary focus on winning the AI race might overshadow the broader goal of ensuring AI safety. I am not really sure what to do about this, but I wanted to share my concern and hope to think more about what can be done to prevent a rift from emerging, especially since I expect the policy stakes to become more and more important in the coming years.
I don't disagree with your final paragraph, and I think this is worth pursuing generally.
However, I do think we must consider the long-term implications of replacing long-established structures with AI. These structures have evolved over decades or centuries, and their dismantling carries significant risks.
Regarding startups: to me, it seems like their decline in efficiency as they scale is a form of regression to the mean. Startups that succeed do so because of their high-quality decision-making and leadership. As they grow, the decision-making pool expands, often including individuals who haven't undergone the same rigorous selection process. This dilution can reduce how well decisions align with those the founders would have made (and the founders are a group already selected for decent decision-making quality, at least on the limited metrics that determine startup survival).
Governments, unlike startups, do not emerge from such a competitive environment. They inherit established organizations with built-in checks and balances designed to enhance decision-making. These checks and balances, although contributing to larger bureaucracies, are probably useful for maintaining accountability and preventing poor decisions, even though they also prevent more drastic change when this is necessary. They also force the decision-maker to take into account another large group of stakeholders within the bureaucracy.
I guess part of my point is that there is a big difference between alignment with the decision-maker and the quality of decision-making.
I think this post misses one of the concerns I have in the back of my mind about AI: how much of current pluralism, liberalism, and democracy rests on the fact that governance can't be automated yet?
Currently, policymakers need the backing of thousands of bureaucrats to execute policy, and this same bureaucracy provides most of the information the policymaker receives. I am fairly sure that this makes the policymaker more accountable and ensures that some truly horrible ideas do not get implemented. If we create AI specifically to help with governance and automate a large amount of this kind of labor, we will find out how important this dynamic is...
I think this dynamic was better explained in this post.
As a bit of a lurker, let me echo all of this, particularly the appreciation of @Vasco Grilo🔸. I don't always agree with him, but adding some numbers makes every discussion better!