Misha Yagudin
I’ve preregistered a bunch of soft expectations about the next generation of LLMs and encouraged others in the group to do the same. But I don’t intend to share mine on the Forum. I haven’t written down my year-by-year expectations with a reasonable amount of detail yet.
Update to Samotsvety AGI timelines
The person in charge of the program should be unusually productive/work long hours/etc., because otherwise they would lack the mindset, tacit knowledge, and intuitions that go into creating an environment optimized for productivity. E.g., most people undervalue their own time and the time of others and hence significantly underinvest in time-saving/convenience/etc. things at work.
(Sorry if mentioned above; haven’t read the post.)
The point was that there is a non-negligible probability that EA will end up net negative.
If you think that movement building is valuable because it supports the EA movement, you need to think that the EA movement is net positive. I honestly can’t see how you can be very confident in the latter. Screwing things up is easy; unintentionally messing up AI/LTF work seems easy, and given the high stakes, causing massive amounts of harm is a real possibility (it’s not an uncommon belief that FLI’s Puerto Rico conferences turned out negatively, for example).
I read it not as a list of good actors doing bad things, but as a list of idealistic actors [at least in public perception] not living up to their own standards [standards the public ascribes to them].
Looking back on my upvotes, there were surprisingly few great posts this year (<10, maybe only ~5). I don’t have a sense of how things were last year.
Thanks, I wasn’t aware of some of these outside my cause areas/focus/scope of concern. Very nice to see others succeeding/progressing!
Given how many things are going on in EA these days (I can’t even keep up with the Forum), it might be good to have this as a quarterly thread/post and maybe invite others to celebrate their successes in the comments.
If “Global Health Emergency” is meant to mean a public health emergency of international concern (PHEIC), then the base rate is roughly 45%/year (≈ 7/15.5): PHEICs have been declared 7 times, while the relevant regulations came into force in mid-2007, i.e., roughly 15.5 years ago.
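A quick sanity check of that arithmetic in Python (the Poisson conversion at the end is my own addition, not part of the original claim):

```python
from math import exp

# Rough base-rate arithmetic for PHEIC declarations (illustrative sketch).
# The IHR (2005) came into force in mid-2007, giving roughly 15.5 years of data.
declarations = 7
years = 15.5

rate_per_year = declarations / years      # ~0.45 declarations per year
# If declarations were Poisson-distributed, the probability of seeing at
# least one in a given year is somewhat lower than the raw rate:
p_at_least_one = 1 - exp(-rate_per_year)  # ~0.36

print(f"Rate per year: {rate_per_year:.2f}")
print(f"P(>=1 declaration in a year), Poisson assumption: {p_at_least_one:.2f}")
```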
Consider suggesting it to https://forum.effectivealtruism.org/posts/H7xWzvwvkyywDAEkL/creating-a-database-for-base-rates
Well, yeah, I struggle with interpreting that:
Prescriptive statements have no truth value — hence I have trouble understanding how they might be more likely to be true.
Comparing “what’s more likely to be true” is also confusing as, naively, you are comparing two probabilities (your best guesses) of X being true conditional on “T” and on “not T”; and one is normally very confident in their arithmetic abilities.
There are less naive ways of interpreting this that would make sense, but they should be specified.
Lastly, and probably most importantly, a “probability of being more likely under a condition” is not illuminating (in these cases, e.g., how much larger the expected returns to community building are is actually the interesting question).
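One way to make the naive reading above concrete (my framing, offered only to pin down the complaint):

```latex
% Naive reading of "X is more likely to be true under T":
\[
  P(X \mid T) \;>\; P(X \mid \lnot T)
\]
% Once both credences are stated, checking the inequality is trivial arithmetic;
% the informative quantities are the credences themselves (or their gap,
% e.g. P(X | T) - P(X | not T)). And for prescriptive statements like
% "build the AI safety community in China" there is no truth-valued X to
% condition on in the first place.
```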
Sorry for the lack of clarity: I meant that despite my inability to interpret the probabilities, I could sense their vibes, and I hold different vibes. And disagreeing with vibes is kinda difficult because you are unsure if you are interpreting them correctly. Typical forecasting questions aim to specify the question and produce probabilities to make the underlying vibes more tangible and concrete, maybe allowing for a more productive discussion. I am generally very sympathetic to the use of these where appropriate.
I am quite confused about what probabilities here mean, especially with prescriptive sentences like “Build the AI safety community in China” and “Beware of large-scale coordination efforts.”
I also disagree with the “vibes” of the probability assignments for a bunch of these, and the lack of clarity about what these probabilities entail makes it hard to verbalize that disagreement.
Apologies if this sounds harsh, but I think this is plausibly quite wrong and non-substantive. I am also somewhat upset that such an important topic is explored in a context where substantial personal incentives are involved.
One reason is that a post that does justice to the topic should explore possible return curves, and this post doesn’t even contextualize the bet with how much money EA had at the time (~$60B) or has now (~$20B) until midway through, where it mentions it in passing: “so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold.” Arguing that some degree of risk aversion is indeed implied by diminishing returns is trivial and has few practical implications.
I wish I had time to write about why I think altruistic actors probably should take a 10% chance of $15B over a 100% chance of $1B. The reverse being true would imply a very roughly ≥3x drop in marginal cost-effectiveness upon adding $15B of funding. But I basically think there would be ways to spend money scalably and at current “last dollar” margins.
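A minimal sketch of that comparison, assuming (purely for illustration) logarithmic returns to total altruistic resources and the ~$20B starting stock mentioned above:

```python
from math import log

# Illustrative only: compare a certain +$1B against a 10% shot at +$15B,
# under a hypothetical logarithmic returns-to-resources curve.
baseline = 20.0  # current resources in $B (rough figure mentioned above)
U = log          # hypothetical diminishing-returns curve

certain_1b = U(baseline + 1)                             # 100% chance of +$1B
gamble_ev  = 0.1 * U(baseline + 15) + 0.9 * U(baseline)  # 10% chance of +$15B

print(f"U with certain +$1B : {certain_1b:.4f}")
print(f"EV of 10% at +$15B  : {gamble_ev:.4f}")
# Under these assumptions the gamble comes out slightly ahead; with much
# sharper diminishing returns (very roughly a >=3x drop in marginal
# cost-effectiveness over the extra $15B, as suggested above) the certain
# $1B would win instead.
```

The point of the sketch is only that the verdict depends on the actual shape of the returns curve and on the starting stock of resources, which is exactly the contextualization the post skips.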
In GH, this sorta follows from how OP’s bar didn’t change that drastically in response to a substantial change in OP’s funds (short of $15B, but still), and I think OP’s GH last-dollar cost-effectiveness changed even less.
In longtermism, it’s more difficult to argue. But a bunch of grants that pass the current bar are “meh,” and I think we can probably have some large investments that are better than the current ones in the future. If we had much more money in longtermism, buying a big stake in ~TSMC might be a good thing to do (and it preserves option value, among other things). And it’s not unimaginable that labs like Anthropic might want to spend $10Bs in the next decade(s) to match the potential AI R&D expenses of other corporate actors (I wouldn’t say it’s clearly good, but having the option to do so seems beneficial).
I don’t think the analysis above is conclusive or anything. I just want to illustrate what I see as a big methodological flaw of the post (not looking at actual return curves when talking about diminishing returns) and make a somewhat grounded-in-reality case for taking substantial bets with positive EV.
Interesting thread on early RAND culture: https://twitter.com/jordanschnyc/status/1593294746725756929
Yes, more broadly, I think that we should think about governance more… I guess there is a bunch of low-hanging fruit we can import from the broader world; e.g., someone doing internal-to-EA investigative journalism could have unraveled risks related to FTX/Alameda leadership, or just done an independent risk analysis (e.g., this forecasting question put the risk of FTX defaulting at roughly 8%/yr; I am not sure the bettors had any private information, and I think base rates alone give a probability of around 10%).
Great! I think you missed a few of the newer ones from https://ftxfuturefund.org/all-grants/?_area_of_interest=epistemic-institutions
I think the value of information is really high for the Future Fund. If p(doom) is really high (e.g., the largest prize is claimed), they might decide to almost exclusively focus on AI stuff — this would be a major organizational change that (potentially/hopefully) would help with AI risk reduction quite a bit.
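A toy sketch of that value-of-information argument (all numbers are hypothetical placeholders, not estimates from the comment or the Future Fund):

```python
# Toy value-of-information calculation with made-up numbers.
# Rough heuristic: VoI ~ P(the new information changes the decision)
#                        * (expected gain from the changed decision).
p_decisive_high_doom_case = 0.2    # hypothetical: chance the prize surfaces a case that triggers the pivot
value_current_allocation  = 100.0  # hypothetical value of the fund's current, broad allocation
value_ai_focus_given_doom = 160.0  # hypothetical value of an AI-focused allocation in that world

voi = p_decisive_high_doom_case * (value_ai_focus_given_doom - value_current_allocation)
print(f"Rough expected value of the information: {voi:.0f} (arbitrary units)")
```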
Another follow-up forecast from Swift: https://www.swiftcentre.org/what-would-be-the-consequences-of-a-nuclear-weapon-being-used-in-the-russia-ukraine-war/
I don’t think your argument reflects much on the importance of forecasting. E.g., it might be the case that forecasting is much more important than whatever experts are doing (in absolute terms), but nonetheless experts should do their thing because no one else can substitute for them. (To be clear, this is a hypothetical against the structure of the argument.)
I think it’s best to assess the value of information you can get from forecasting directly.
Hopefully, we can make forecasts credible and communicate them to sympathetic experts on such teams.
Thank you! We agree and [...], so hopefully it’s more informative and not about edge cases of passing the Turing Test.