I have yet to see anyone in the EA/rat world make a bet for sums that matter, so I really don’t take these bets very seriously. They also aren’t a great way to uncover people’s true probabilities, because if you are betting money that matters you are obviously incentivized to negotiate the worst odds you think the other side might be dumb enough to accept.
But he did not, in fact, disclose the conflict of interest. “My wife is President of Anthropic” means nothing in and of itself without some good idea of what stake she actually owns.
Conflict of interest disclosure: my wife is co-founder and President of Anthropic. Please don’t assume things about my takes on specific AI labs due to this.
This is really amazing. How much of Anthropic does Daniela own? How much does your brother-in-law own? If my family were in line to become billionaires many times over due to a certain AI lab becoming successful, this would certainly affect my takes.
We have a large universe of “technologies we make for economic benefit”: nearly all of them are “somewhat fine” to “very good”. Famous exceptions of course exist, like leaded petrol, but they are relatively rare. I don’t count nuclear bombs in this comparison class given that they were explicitly invented to kill large numbers of people. Given the massive commercial incentive to make AI useful, we should plausibly expect it to be safe. This is IMO the base-rate-thinking case. Purely from the outside view, we should expect AI to be fine.
Well, if we’re judging people negatively based on bad vibes, then I can think of a few people who might be higher up in the firing line (like 95% of “EA leadership”).
The view seems to be fairly common in EA circles that global development is intractable, or at least very low in tractability. On this view, the best that can be done is global health work (although the idea that this actually leads to faster growth somewhere down the line is IMO very poorly evidenced) or mass immigration to the West.
Personally, I think this view is mostly wrong, although bits of it are clearly true. Obviously there’s nothing that EA can realistically do for development in South Africa or Argentina, countries with very different histories but completely screwed-up, utterly hopeless politics. That said, most developing countries have institutions that are quite a bit more functional than this, and there’s a lot more to work with (even in somewhere like Nigeria, vide https://kenopalo.substack.com/p/on-why-i-remain-bullish-on-nigeria). That sort of work ultimately takes long-term cadre building in developing countries themselves, since trying to import governance wholesale rarely works outside of (occasionally) authoritarian regimes, but the potential payoffs are of course absolutely enormous.
I’m confused as to why CEA is doing a “governance reform” project when it is probably badly in need of some governance reform itself, and in any case it does not control the supply of funds to the EA community. OpenPhil does, and is more or less the sole funder (RIP FTX, sadly). If OpenPhil wants to see change it has all the tools it needs to make it happen.
How can you do a multi-week work trial for a CEO role??? And how is this remotely compatible with attracting experienced top-level executive candidates?
No, the joke is in thinking that you can combine a very hierarchical funding structure, a heavily male demographic, and widespread polyamory, and have it all not end with torrid #MeToo stories strewn across the mainstream press. Eating babies, on the other hand, would actually have solved Ireland’s overpopulation problem, provided they were eaten in sufficient quantities.
Thank you. I agree in part and disagree in part with the last paragraph in particular. It is very true that we do not live in an ideal world, and therefore should opt for pragmatic, sensible solutions. But virtually every other social movement manages to do just fine without widespread polyamory. Perhaps they’re better off for it? If poly is essentially confined to some cults and EA, surely the default hypothesis should be that it’s clearly societally maladaptive.
An ounce of prevention is better than a pound of cure, after all, so while the Community Health team clearly needs a complete replacement of its existing personnel, perhaps it would be better if such a team either wasn’t needed or could be far smaller with far less to do. Procedures fail and bureaucracies are clumsy things. No one likes the HR department, for a reason: surely everyone would be far better off if there was less need for the EA version of it?
With regards to your last sentence, the problem here is that I’m not really talking about “safeguarding and misconduct” issues per se. Women can still have a miserable time in a social scene where there are no real safeguarding problems and no obvious misconduct beyond financial COIs (which polyamory does make inevitable, but hey). Monogamy is not infallible of course, but as a default it just makes everything massively easier when it comes to navigating social dynamics between men and women.
(before someone tells me that no one is poly in their uni EA club, yes I know and that’s also not what I’m talking about here)
Tom, bro, as much as I love you, I have a few very respectful points of dissent:
I don’t understand what I, as a British taxpayer, am getting out of this. I don’t understand why the UK in particular benefits from trying to take the regulatory lead on AI. In fact, it seems likely that we would be paying substantial costs and get no benefits, by actively making ourselves a worse place to do AI research as opposed to more permissive jurisdictions. We already have a very severe economic growth problem in the UK and should be careful about doing things to make it even worse.
I don’t understand what the world is supposed to be getting out of this, given the incredible dysfunctionality of virtually every major British public institution. Nothing works, as brutally exposed during the pandemic and since. The state cannot even provide very basic public goods like effective anti-pandemic institutions, A&E waiting times below 10 hours, and an Army capable of generating more than a single armoured brigade. I humbly suggest that it would be better for the state to learn to walk again before it tries something as complicated as regulation for cutting-edge AI. The Online Safety Bill is on the verge of banning WhatsApp, for crying out loud, which does not fill me with confidence about the state’s ability to regulate tech.
As you yourself rightly point out, AI regulation is likely to be especially difficult given the unprecedented nature of the technology, which makes it even more likely that a premature rush to regulation winds up doing incredible harm by cutting off access to a technology that could be a tremendous source of benefit. You cite the precedent of nuclear regulation. From where I stand this has been a disaster. Virtually nowhere in the West has access to cost-effective nuclear power generation at the moment, with South Korea seemingly the last remaining friendly country that can still build nuclear power plants and operate them at coal-competitive prices. Meanwhile North Korea and Pakistan have nuclear bombs (not sure which is worse). Something in nuclear regulation has gone horribly wrong, and it makes me shiver when AI safety people cite it as some kind of useful precedent.