Unable to work. Was community director of EA Netherlands, had to quit due to long covid.
I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research.
I want to share a concern that hasn’t been raised yet: this seems like a huge conflict of interest.
From the Power for Democracies website:
Power for Democracies was founded in 2023 by Markus N. Beeko (the former Secretary General of Amnesty International in Germany) together with Stefan Shaw and Stephan Schwahlen, the founders of the philanthropy advisory legacies.now and co-founders of effektiv-spenden.org. Power for Democracies is funded by small family foundations and individuals from Germany and Switzerland who wish to make an effective contribution in support of liberal democracies worldwide.
And then the "Defend Democracy" fund is listed right alongside the other charity options on the Spenden (Giving) page, with only a small Beta label and this explanation under 'more info':
With our donation fund “Defend Democracy,” we support organizations that we consider particularly suitable to bolster the resilience of democracy. In doing so, we are initially focusing exclusively on Germany. Due to the less robust research in this area, the measures funded here are still associated with more uncertainty.
I understand why you want to start a democracy charity evaluator. I think it's an interesting experiment in trying to create more leverage in other movements and achieve a large impact. I also understand the desire to use your platform to support this project.
However, I find the current situation inappropriate. Your role is to recommend independently-vetted highly effective projects. This is a large responsibility and requires outstanding integrity. You cannot use this reputation to support your own spin-off in this way without making a much clearer distinction between it and the other options.
I can see another situation working, where the Defend Democracy fund is clearly separated from the other options (i.e. not in the same list) and where your conflicts of interest are clearly stated (e.g. in a disclaimer at the top).
I also wonder whether you have a conflict of interest policy. If not, I recommend developing one.
Do you think EA has a problem of "hero worship" (i.e. where the opinions of certain people, you included, automatically get much more support instead of people thinking for themselves)? If yes, what can the "worshipped" people do about it?
Can you give some examples of work that you'd find exciting enough to accept, and your selection criteria/heuristics?
I think it's premature to judge things based on the little information that's currently available. I would be surprised if there weren't reasons for the board's unconventional choices. (Though I'm not ruling out that what you say ends up being right.)
Worth noting that of the 4 remaining board members, 2 are associated with EA: Helen Toner (CSET) and Tasha McCauley (EV UK board member)
seemed like a genuine attempt at argument and reasoning and thinking about stuff
I think any genuine attempt needs to acknowledge that Trump tried to overturn the election he lost.
I'm all for discussing the policies, but here it's linked to "EAs should vote for Trump", and that demands an assessment of all the important consequences. (Also, arguing for a political candidate is against Forum norms. I wouldn't like a pro-Harris case either.)
I have not been very closely connected to the EA community the last couple of years, but based on communications, I was expecting:
an independent and broad investigation
reflections by key players who "approved" of and collaborated with SBF on EA endeavors, such as Will MacAskill, Nick Beckstead, and 80K.
For example, Will posted in his Quick Takes 9 months ago:
I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly delayed, and I’m not sure when it’ll be able to come out. https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=yxK8NCxrZQBjAxpCL
It now turns out that this has changed into podcasts, which is better than nothing but doesn't leave room for conversation or accountability.
I think 80K has been most open in reflecting on their mistakes and taking responsibility.
I was also implicitly expecting:
a broader conversation in the community (on the Forum and/or at conferences) where everyone could ask questions and some kind of plan of improvement would be made
It is disappointing that so little has happened. It feels kind of like a relationship where something bad happened, the immediate fallout was addressed, but the issue was never quite aired out. I think it would be very healthy for the community to take these steps and reflect on and learn from the SBF affair as well as the mismanaged aftermath, and then hopefully we can all move forward.
What are your top 3 "existential risks" to EA? (i.e. risks that would permanently destroy or curtail the potential of Effective Altruism, both the community and the ideas)
What has been the biggest benefit to your well-being since getting into EA? What would you advise the many EAs who struggle with staying happy and not burning out? (Our community seems to have a higher-than-average rate of mental illness.)
The karma of this post seems quite disproportionate to its value. It doesn't contain that much information, or am I missing something?
What is the relevance of “the link between biodiversity and economic growth” to existential risk? It is not immediately obvious to me.
This seems like a very long expected causal chain, and therefore—unless each link is specifically supported by evidence—unlikely to produce much effect compared to other approaches. It seems to assume:
1) Climate change is a relatively large x-risk factor (I interpreted the presentation I saw of your forthcoming article as claiming that “climate change is a non-negligible risk factor, but not a relatively large one”).
2) Improving the sustainability of businesses and business leaders is a relatively effective way of addressing climate change (possibly, but there are many alternatives)
3) Increasing the amount of sustainability content in business school programs will improve the sustainability of businesses and business leaders (there seem to be more direct ways of influencing business leaders; for example, what about corporate campaigns, but focused on sustainability? What about carbon taxes?)
4) Affecting business rankings will affect the curriculum (Yes, this seems to happen)
It might be the case that this was an opportunity that came Ellen Quigley's way and was low-effort to give input on. But I'm afraid this was not a great use of time, and furthermore that it validates the, for lack of a better term, "good-by-association fallacy":
Cause Y is important.
Intervention A addresses cause Y.
Therefore, intervention A is a good use of resources.
I think this fallacy is a harmful meme that poses a risk to the EA and x-risk brand, because it’s very bad prioritization.
The New York Times suggests a more nuanced picture: https://archive.li/lrLzK
Altman was critical of Toner's recent paper, discussed ousting her, and wanted to expand the board. The board disagreed on which people to add, leading to a stalemate. Ilya suddenly changed position, and the board took abrupt action.
They don't offer an explanation of what the 'dishonesty' would have been about.
This is the paper in question, which I think will be getting a lot of attention now: https://cset.georgetown.edu/publication/decoding-intentions/
How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? AI technologies are evolving rapidly and enable a wide range of civilian and military applications. Private sector companies lead much of the innovation in AI, but their motivations and incentives may diverge from those of the state in which they are headquartered. As governments and companies compete to deploy evermore capable systems, the risks of miscalculation and inadvertent escalation will grow. Understanding the full complement of policy tools to prevent misperceptions and communicate clearly is essential for the safe and responsible development of these systems at a time of intensifying geopolitical competition.
In this brief, we explore a crucial policy lever that has not received much attention in the public debate: costly signals.
Thank you Chris, that’s understandable.
How about public feedback on just the top 4 though? Or even just the #1. I find it odd that, in a competition of this scale, no specific reasons are provided for why you picked these winners.
A lot of people put a lot of effort into these reports. Providing reasons for picking certain winners seems like a basic part of running a competition in a way that's respectful to participants. It helps participants compare their own submissions and learn from that. (I think the reward for good-faith submissions is a nice contribution to that, and I'm grateful for it, but I don't think it's a replacement.)
I think there is a conversation to be had about EA funders' inside-the-companies strategy, and whether that creates issues. For example, Holly Elmore writes that many EA institutions don't want to fund AI Pause advocacy to the public, because doing so could jeopardize their influence with those companies. (https://twitter.com/ilex_ulmus/status/1690097834123755521?t=FSq1OWL406KkdF74IN0T3w&s=19) That seems like an unhealthy dynamic.
However, Politico has a history of bad-faith EA coverage (https://forum.effectivealtruism.org/posts/Q4TJ2vPQnD5Zw2aiz/james-herbert-s-shortform?commentId=wXZzJgJqZ8TxArEN6), and also here I find that they’re using a much lower standard for EA criticism* than for the defense.
I do wonder if there are any particular ways in which those working on catastrophic risk should visibly signal cooperativeness with those focused on current-model risk.
* Especially this part: "There's a push being made that the only thing we should care about is long-term risk because 'It's going to take over the world, Terminator, blah blah blah,'" Venkatasubramanian said.
Not that we can do much about it, but I find the idea of Trump being president in a time that we’re getting closer and closer to AGI pretty terrifying.
A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it to have significant effects on the global trajectory of AI.
There’s a time and place to discuss exceptions to ethics and when goals might justify the means, but this post clearly isn’t it.
I agree that the more inquisitive posts are more interesting, but the goal of this post is clearly not to reflect deeply on what to learn from the situation. It's RP giving an update/statement that is legally robust and shares the most important details relevant to RP's functioning.
Common prevalence estimates are often wrong. Example: snakebites and my experience reading Long Covid literature.
Both institutions like the WHO and the academic literature appear to be incentivized to exaggerate. I think the Global Burden of Disease might be a more reliable source, but I have not looked into it.
I advise everyone using prevalence estimates to treat them with some skepticism and look up the source.
What would be the proper response of the EA/AI safety community, given that Altman is increasingly diverging from good governance/showing his true colors? Should there be any strategic changes?
Ray Dalio is giving out free $50 donation vouchers: tisbest.org/rg/ray-dalio/
It still worked just a few minutes ago.