See this explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
I post here about preventing unsafe AI.
Note that I’m no longer part of EA, because of overreaches I saw during my time in the community (core people leading technocratic projects with ruinous downside risks, a philosophy based around influencing consequences over enabling collective choice-making, and a culture that’s bent on proselytising both while not listening deeply enough to integrate other perspectives).
My preference was for the former metric (based on the PitchBook-NVCA Venture Monitor's AI figures), and another metric based on some threshold for the absolute amount of investment Anthropic or OpenAI raised in a next round (which Marcus reasonably pointed out could be triggered if the company simply decided to do some extra top-up round).
I was okay with using Marcus’ Anthropic valuation metric with the threshold set higher and combined with another possible metric. My worry was that Anthropic execs would not allow their valuation to be lowered unless they were absolutely forced to offer shares at a lower price, a bit like homeowners holding on to their house during a downturn unless their mortgage forces them to sell.
I kinda liked the YCombinator option in principle, but I guessed that applicants for the summer 2025 program would already start to get selected around now, so that metric would not pick up on a later crash. Also, YC feels like the center of the AI hype to me, so I worried that they’d be the last to give way (Marcus thought YC staff have their finger on the pulse and could change decisions fast, which would make YC more of a leading indicator).