Currently in Berkeley working with MATS
Juan Gil
I agree, most of my uncertainty / hedging was on parts of the post that were removed within a few hours of posting. Thanks for checking.
[this comment references the first version of this post, which has since been edited substantially such that this qualification no longer feels necessary]
Just want to note that my main contribution to this post was listing out questions I wanted answered to inform what EAs or the EA community should do. I have a lot of uncertainty about which assets belong to whom (compared to previous expectations) and what this implies about the EA funding landscape.
I don't have high confidence in the empirical claims made in this post, and I think there should be a more obvious qualifier at the beginning indicating that it was put together quickly with some crowdsourcing (and that it will be updated as inaccuracies are spotted).
For 2), you might be interested in the EA Coworking Discord: https://discord.gg/zpCVDBGE (link valid for 7 days)
I’ve heard and used “aligned EA” to refer to someone in category 2 (that is, someone deeply committed to overarching EA principles).
I don’t think arrangement 1 (the investor buys a house and rents it out only to EAs) is better than arrangement 2 (the investor invests in whatever has the highest returns, and the EAs rent whichever house is most convenient), since the coordination required and the inflexibility might be too much of a downside.
If the goal is to reduce costs of living together for EAs, the investor could subsidize the rent for the group of EAs while investing in something completely different with higher returns.
One possible benefit of arrangement 1 is that the cohabiting EAs could actively make the house a better investment through e.g. maintenance. In other words, they would have a stake in the investment making good returns, and so would treat the house differently from average tenants.
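A back-of-the-envelope sketch of the comparison, with entirely made-up numbers (the capital amount, returns, and subsidy below are hypothetical, not from the post): as long as the rent subsidy is smaller than the gap between the two returns, arrangement 2 plus a subsidy leaves the investor better off while still lowering the EAs' housing costs.

```python
# Back-of-envelope comparison of the two arrangements.
# All numbers are hypothetical and only illustrate the structure of the argument.

capital = 1_000_000   # amount the investor would otherwise spend on the house
house_return = 0.04   # assumed annual return on the house (rent + appreciation)
alt_return = 0.07     # assumed annual return on the investor's best alternative
rent_subsidy = 20_000 # annual rent subsidy paid to the EA group under arrangement 2

arrangement_1 = capital * house_return                # investor's annual return: 40,000
arrangement_2 = capital * alt_return - rent_subsidy   # investor's annual return: 50,000

print(arrangement_1, arrangement_2)  # 40000.0 50000.0
# The investor earns more AND the EAs' rent falls, because the subsidy (20,000)
# is less than the return gap (30,000) in this hypothetical.
```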
Some thoughts I have:
I agree that making connections and learning more about the World Federalism movement seems valuable, especially for people working in global governance from a longtermist perspective.
I agree that achieving a federalist world government could potentially solve many of the coordination problems that contribute to x-risk, while being less susceptible to especially bad lock-in problems than other types of world government.
That said, I think I’m a bit pessimistic about the tractability of the end goal of achieving a federalist world government, since it seems politically impossible without a massive change in global dynamics (which could happen in response to a large war, a global catastrophe, an x-risk evident enough to create political will, etc.).
I should note that WF orgs are working on relatively more tractable subgoals for now that seem like they might be valuable by themselves, like trying to set up a (consultative) UN Parliament or improving representation/democracy/transparency in other global institutions.
I don’t think the World Federalist Movement is larger or more attractive now than it was after WWII, and it’s not clear to me that there’s reason to believe it will grow beyond that post-WWII peak. (Really unsure about this point though.)
Quantum randomness seems aleatory, so anything that depends on it to a large extent (everything depends on it to some extent) would probably also fit the term.
Winter solstice / summer solstice? Popular secular holiday in EA circles (though not strictly EA per se)
In case someone has capacity to do this right now, I’m under the impression that Open Phil does want their own page (based on a conversation I had with someone who does research there).
I think “fast takeoff” and “intelligence explosion” mean approximately the same thing as FOOM (notably, “catastrophic AI” refers to a broader category of scenarios), and these terms are often used, especially in more formal contexts.
I’m not concerned about this being a big problem, but do think this post is a good nudge for people who don’t typically think about the effect their language has on getting buy-in for their ideas.