And what I’m describing isn’t an individual project full of people who live together; it’s coordinating a bunch of people who work on many different projects to move to the same general area. And even if I were describing an individual project full of people who live together, every single failure of such a project within EA is a rounding error compared to the Manhattan Project, for better or worse.
Peter Berggren
I thought the whole point of EA was that we based our grantmaking decisions on rigorous analyses rather than hunches and anecdotes.
Seems like the kind of thing that should have at least one FTE on it. Is there a reason no one has really put a lot of time into it (e.g. a specific compelling argument that this isn’t the right call), or is it just that no one has gotten to it?
How many FTEs are working on this problem?
Additionally, I wonder why there hasn’t been an effort to start a more “intense” EA hub somewhere outside the Bay to save on rent and office costs. Seems like we’ve been writing about coordination problems for quite some time; let’s go and solve one.
It is serious, and in my time zone, it wasn’t April 1.
Thanks for the advice. To be clear, I’m not certain that a hardcore environment would be the best environment for me either, but it seems worth a shot. And judging by how people tend to change in their involvement in EA as they get older, I’ll probably only be as hardcore as this for like ten years.
Thanks for the reflection.
I’ve read about Leverage, and it seems like people are unfairly hard on it. They’re the ones who basically started EA Global, and people don’t give them enough credit for that. And honestly, even after what I’ve read about them, their work environment still sounds better to me than a supposedly “normal” one.
Thanks for the advice. I was more wondering if there was some specific organization that was known to give that sort of environment and was fairly universally recognized as e.g. “the Navy SEALs of EA” in terms of intensity, but this broader advice sounds good too.
This was semi-serious, and maybe “totalizing” was the wrong word for what I was trying to say. Maybe the word I more meant was “intense” or “serious.”
CLARIFICATION: My broader sentiment was serious, but my phrasing was somewhat exaggerated to get my point across.
I think there are two meanings of “distraction” here. The first, more “serious” meaning, which the media probably intends, is the generic sense of “something which distracts people.” The second, which a lot of people in the “AI ethics” community like to use, is the sense in which this was deliberately thought up as a diversion by tech companies to distract the public from their own misconduct.
A problem I see is people equivocating between these two meanings, and thus inadvertently arguing against the media’s weird steel-man version of the AI ethicists’ core arguments, instead of the real arguments they are making.
No one should ever move to the Bay Area, for many reasons. Seattle is fine.
Don’t really think there is any; in fact, there’s plenty of evidence to the contrary, from the polls I’ve seen.
Last I checked, the whole point of the Overton window is that you can only shift it by advocating for ideas outside of it.
Would be interested in seeing the cause of the general problem here, and some possible solutions.
I know some people in this category, mostly because they are extremely uncertain about what the best work on AI risk is.
I amend my previous comment to replace the phrase “seriously considered” with “considered.” Also, there are many state laws against human reproductive cloning, but many states have no such laws:
https://www.thenewatlantis.com/publications/appendix-state-laws-on-human-cloning
I think that it’s good that this proposal was seriously considered. I don’t think it currently beats other megaprojects on impact/solvability/neglectedness, especially since quite a bit of genetic engineering research is already legal in the US (I am once again reminding everyone that human reproductive cloning is legal in many US states, and that it seems unlikely for blue states to enact new laws against reproductive autonomy in a post-Roe era). Still, there should be, on the margin, more proposals like it (in terms of large scale, outside-the-box thinking, potential “weirdness,” etc.).
Last I checked, Tetlock’s result on the efficacy of superforecasters vs. domain experts wasn’t apples-to-apples: it compared individual domain experts’ forecasts against superforecasters’ forecasts that had been aggregated.
And one more thing: if some people are nervous, wouldn’t it be possible to get funding from people who are enthusiastic?