This is helpful context. I think it is still a bit unsettling that there was a noticeable strain of this type of stuff among the attendees (like if I went to a ticketed party and noticed that 5% of the attendees were into race science somehow, I'd feel uncomfortable and want to leave).
I think "controversial" is a totally fair and accurate description of the event given that it was the subject of a very critical story from a major newspaper, which then generated lots of heated commentary online.
And just as a data point, there is a much larger divide between EAs and rationalists in NYC (where I’ve been for 6+ years), and I think this has made the EA community here more welcoming to types of people that the Bay has struggled with. I’ve also heard of so many people who have really negative impressions of EA based on their experiences in the Bay which seem specifically related to elements of the rationalist community/culture.
Idk what caused this to be the case, and I'm not suggesting that rationalists should be purposefully excluded from EA spaces/events, but I think there are major risks to EA in being closely identified with the rationality community.
This is helpful, though Lighthaven is definitely backed by EA money.
As others have noted, it looks like the journalists got a lot of basic things wrong in this reporting. I’m doubly frustrated by this because basically all of the EA/rationalist discourse on Twitter is about these mistakes, with almost no discussion of the unchallenged allegations in the piece: that Manifold’s conference had attendees/speakers with ties to eugenicist and racist people and groups.
For example, whether or not Richard Hanania uses prediction markets, I want him nowhere near EA or EA-funded groups/events. For why, see this.
(Writing this quickly and while very sleep deprived).
I really appreciate the OP for so clearly making the case for such a big idea and everyone's engagement with it. That said, it's a bummer that maybe the most common/upvoted reply on the EA Forum to pro-left-wing arguments is something like this, because it assumes that socialism is just about making the government bigger. But it's not, at least not necessarily. There are lots of different definitions of socialism, but I think the common thread is: a system that aims to empower the working class to build an alternative to capitalism. The most compelling and practical vision of this to me is Jacobin founder Bhaskar Sunkara's. His appearance on Lex Fridman is a (relatively) short articulation, and his book The Socialist Manifesto goes into more detail.
(An uncomfortable implication of the above commenter’s perspective is that we should redistribute more money from the poor to the rich, on the off chance they put it toward effective causes.)
I don’t blame people for thinking socialism = more government, because, at least in the US, education on the topic is extremely bad (we did have a whole Cold War and all).
Some examples of policies that push in a more socialist direction that don’t necessarily involve growing the government:
Worker codetermination on corporate boards (common in Germany, which has a strong economy and a far more equal distribution of wealth than the US)
Worker cooperatives
Participatory budgeting
And there are plenty of socialist-y policies that would grow the public sector but in a directed way to improve welfare for lots of people, like:
Public banking
Green subsidies
Public options for natural monopolies like fiber optic internet
Single-payer or nationalized healthcare
If you look at rich countries, there is a strong positive association between left-wing policies and citizen wellbeing. I think it's worth noting that the linked book is pretty clearly written with a serious pro-market slant (as is the comment). At a glance, the book doesn't appear to get into examples of socialist/leftist movements in Europe, the US, or Canada. But these movements and the results of their policies are far more relevant to any discussion of socialism in rich countries with strongly developed civil societies (where most EAs live). Ignoring Europe, and Scandinavia in particular, is cherry-picking.
Further, almost no socialists I know are advocating for a command economy like the Soviet Union, but rather things like the above.
In general on the forum, it feels like capitalism-sympathetic views are treated with far less scrutiny than left-wing views.
(If anyone’s curious, I discussed EA and the left with Habiba Banu on my podcast a while back.)
It seems weird that none of the labs would have said that when asked for comment?
Hm yeah, I think the previous analysis was done on top of that report, so if anyone has access to it, it might be helpful.
I’ve found Mac Whisper to be the most accurate (haven’t tested many though), but it doesn’t distinguish between speakers or do any formatting.
Late to the party, but isn’t the relevant thing for AMF donors the counterfactual number of fish killed by mosquito nets distributed by AMF? It seems like AMF has higher rates of nets being used properly than other charities.
Yeah, I think I meant pretty neutral compared to the prompts given to elicit SupremacyAGI from Copilot, but upon reflection, I think I largely agree with your objection.
I do still think Claude's responses here tell us something more interesting about the underlying nature of the model than the more unhinged responses from Copilot and Bing Chat. In its responses, Claude is still mostly trying to portray itself as harmless, helpful, and pro-humanity, indicating that some amount of its core priorities persist, even while it's play-acting. Sydney and SupremacyAGI were clearly not still trying to be harmless, helpful, and pro-humanity. I think it's interesting that Claude could still get to some worrying places while rhetorically remaining committed to its core priorities.
Thanks for your thoughtful engagement! Chalmers made a similar point during our interview (that socialist societies would also experience strong pressures to build AGI).
I tried to describe the landscape as it exists right now, without making many claims about what would likely be true under a totally different economic/political system. That being said, I do think it’s interesting that the leading labs are all corporations.
If you look at firms in a market economy as profit-maximizing agents and governments as agents trying to balance many interests, such as stability, economic growth, geopolitical/military advantage, popular support, international respect, etc., then I think it's easier to see why firms are pursuing AGI far more aggressively (by decreasing the cost of labor via automation, you can dramatically increase your profitability). For a government, AGI may boost economic growth and geopolitical/military advantage at the expense of stability and popular support.
And if you look at existential risk from AI as an externality, governments are more likely to take on the costs of mitigating that kind of risk whereas firms are more likely to pass them on to the broader society.
I’ve seen some claims that the CCP is less interested in AGI and more interested in narrow applications, like machine vision, facial recognition, and natural language processing, which can all help shore up its power long term. I haven’t gone deep into this yet. I’ll dig into the China links you sent later.
Idk of any online communities explicitly focused on this intersection, but would be interested in participating in one! Facebook groups have historically been good for this sort of thing (especially because of the mod approval questions you could include), but I’ve basically stopped using FB entirely, as have lots of others I know. A Slack channel within the larger EA Slack may work (eagreconnect.slack.com), but I just experimented with this and there doesn’t seem to be a native feature like the FB mod approval questions. You could have channel admins that add people manually, but that seems work-intensive.
One problem I can envision is that people may be wary of having candid conversations in public-ish spaces because of the possibility of journalists or others quoting them now that EA is more high profile.
One thing I will note is that there are way more leftist EAs than is commonly assumed. As one of the more public ones, I have a biased sample I’m sure (people will reach out to me). But one anecdote: at the last EAG Bay Area, I was sitting at a random table of ~6 other people in the main food area and 4 of them were leftists.
I think there definitely would have been pushback against this at the time! And if there wasn’t, I would not have felt like this was a community for me. Titotal’s comment explains this better than I could. Additionally, GiveDirectly could have deployed billions, and animal welfare charities were nowhere close to fully funded even at the height of the FTX bubble.
The idea of refuges broadly isn’t obviously terrible, but all the specifics of this one seem terrible, again for reasons outlined by others.
See above
This seems like a pretty essential piece of the proposal!
Fin Moorhouse asked something along these lines on Twitter. Pasting his question and my response below:
Fin: “Great article. I’m curious: are there estimates for how many extra fish deaths are caused by fishing wild-caught fish, especially high on the food chain (like tuna and salmon)? Seems complicated if fishing diminishes fish stocks and ∴ reduces predation in the long run?”
Me: “I didn’t come across any. I think this is an interesting line of reasoning, and it makes me a bit more uncertain about the ethics of wild-fishing, but ultimately, it doesn’t move me much.
Why?
1. If killing predators in the wild is good, why stop at fish? Why not systematically hunt tigers and lions to extinction? Some people bite this bullet, but I feel like we don’t know nearly enough to know what the welfare effects of such a large ecosystem change would be.
2. Given how clueless we are, I think that having clear signals that we care about the wellbeing of others is more robust than coming up with a byzantine diet where eating wild-caught predator fish is good, but eating other kinds of fish is bad.
As our knowledge of the world gets better, I think diets like vegetarianism and veganism are more likely to lead to good welfare outcomes, both because they’re easier memes to spread & because someone who eats wild-caught fish because they’re predators may have motivated reasoning to keep eating them even when our understanding of the welfare effects changes.”
Wow, thanks so much – very cool to hear!
Totally agreed RE the central nervous system!
Unfortunately, I wasn’t able to find good data on something that specific. Obviously, someone going from an omnivorous diet who replaces all land animals with plants and eats the same number of fish is going to consume fewer animals. But at least in my case, and in the cases of others I know, people increased their fish consumption as a result of going pescetarian.
There are also lots of recommendations to swap out land animals for fish for climate and health reasons, so I wanted to focus more on the animal welfare implications of doing that.
Interesting, will check these out.
Given that many fish we eat come from farms (and that number is increasing), do you think these arguments still hold?
Congratulations Clara! I think this is a really valuable project and am excited to see it come to fruition.
The obvious reason to not put too much weight on positive survey results from attendees: the selection effect.
There are surely people (e.g. Peter Wildeford, as he mentioned) who would have contributed to and benefited from Manifest but don’t attend because of past and present speaker choices. As others have mentioned, being maximally inclusive will end up excluding people who (justifiably!) don’t want to share space with racists. By including people like Hanania, you’re making an implicit vote that you’d rather have people with racist views than people who wouldn’t attend because of those people. Not a trade I would make.