Impact Markets link: https://app.impactmarkets.io/profile/clfljvejd0012oppubuwne2k2
Writer
I think his answer is here:
Some hope for some sort of international treaty on safety. This seems fanciful to me. The world where both the CCP and USG are AGI-pilled enough to take safety risk seriously is also the world in which both realize that international economic and military predominance is at stake, that being months behind on AGI could mean being permanently left behind. If the race is tight, any arms control equilibrium, at least in the early phase around superintelligence, seems extremely unstable. In short, ”breakout” is too easy: the incentive (and the fear that others will act on this incentive) to race ahead with an intelligence explosion, to reach superintelligence and the decisive advantage, too great.
At the very least, the odds we get something good-enough here seem slim. (How have those climate treaties gone? That seems like a dramatically easier problem compared to this.)
I think we still see really good engagement with the videos themselves. The average view duration for the AI video is currently 58.7% of the video, and 25% of viewers watched the whole video.
This average percentage relates to organic traffic only, right? The paid traffic APV must look much lower, something like 5%?
No, for now, we aren’t committing to any specific type of niche!
Thumbs up to this summary. My only nitpick is that I wouldn’t call Mana “virtual currency” since it could be confused with cryptocurrency, when it’s really just internet points.
There is a single winner so far, and it will be announced with the corresponding video release. The contest is still open, though!
Edit: another person claimed a bonus prize, too.
Easy fix: let the user pick a discounted sum of future income. It could also be calculated using some average over past daily income, if that data is available.
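To make the idea concrete, here’s a minimal sketch of the calculation I have in mind; the function name, horizon, and discount rate are illustrative assumptions on my part, not anything the site actually implements:

```python
def discounted_future_income(past_daily_income, horizon_days=365, annual_discount_rate=0.05):
    """Rough estimate of a discounted sum of future income.

    past_daily_income: list of the user's past daily income figures.
    horizon_days: how many future days to project over (an assumed default).
    annual_discount_rate: annual rate, converted to a per-day factor below.
    """
    # Project each future day's income as the average of past daily income.
    avg_daily = sum(past_daily_income) / len(past_daily_income)
    # Per-day discount factor corresponding to the annual rate.
    daily_factor = (1 + annual_discount_rate) ** (-1 / 365)
    # Discounted sum over the chosen horizon.
    return sum(avg_daily * daily_factor ** t for t in range(1, horizon_days + 1))

# Example: averaging ~100/day over the last week, summed over one year.
# discounted_future_income([90, 110, 100, 95, 105, 100, 100])
```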
There’s a perhaps-naive way of seeing their plan that leads to this objection:
“Once we have AIs that are human-level AI alignment researchers, it’s already too late. That’s already very powerful and goal-directed general AI, and we’ll be screwed soon after we develop it, either because it’s dangerous in itself or because it zips past that capability level fast since it’s an AI researcher, after all.”
What do you make of it?
No, but we’ll need more than one voice actor for some videos. We’ll consider you for those occasions if you send us your portfolio.
Can I promote your courses without restraint on Rational Animations? I think it would be a good idea since people can go through the readings by themselves. My calls to action would be similar to this post I made on the Rational Animations subreddit: https://www.reddit.com/r/RationalAnimations/comments/146p13h/the_ai_safety_fundamentals_courses_are_great_you/
Rational Animations has a subreddit: https://www.reddit.com/r/RationalAnimations/
I hadn’t advertised it until now because I had to find someone to help moderate it.
I want people here to be among the first to join since I expect having EA Forum users early on would help foster a good epistemic culture.
I think the photo of the Yoruba folks might be a bit misleading in the context of this post, and I wouldn’t include it.
I’m not entirely sure if I agree, but I removed them out of an abundance of caution.
Edit: yeah, you are correct actually.
k
I wonder why performance on AP English Literature and AP English Language stalled.
I was considering downvoting, but after looking at that page, maybe it’s good not to have it copy-pasted.
This article is evidence that Elon Musk will focus on the “wokeness” of ChatGPT rather than do something useful about AI alignment. Still, we should keep in mind that news reports are often incomplete or just plain false.
Also, I can’t access the article.
Related: I’ve recently created a prediction market about whether Elon Musk is going to do something positive for AI risk (or at least not do something counterproductive) according to Eliezer Yudkowsky’s judgment: https://manifold.markets/Writer/if-elon-musk-does-something-as-a-re?r=V3JpdGVy
Hard agree, the shoggoth memes are great.
It would probably be really valuable if people could forecast the ability to build/deploy AGI to within roughly 1 year, as it could inform many people’s career planning and policy analysis (e.g., when to clamp down on export controls). In this regard, an error/uncertainty of 3 years could potentially have a huge impact.
Yeah, being able to have such forecasting precision would be amazing. It’s too bad it’s unrealistic (what forecasting process would enable such magic?). It would mean we could see exactly when it’s coming and make extremely tailored plans that could be super high-leverage.
This post was an excellent read, and I think you should publish it on LessWrong too.
I have the intuition that, at the moment, getting an answer to “how fast is AI takeoff going to be?” has the most strategic leverage, and that this topic, together with timelines, most influences the probability that we go extinct due to AI (although it seems to me that we’re less uncertain about timelines than about takeoff speeds). I also think that a big part of why the other AI forecasting questions are important is that they inform takeoff speeds (and timelines). Do you agree with these intuitions?
Relatedly: If you had to rank AI-forecasting questions according to their strategic importance and influence on P(doom), what would those rankings look like?
One class of examples could be when there’s an adversarial or “dangerous” environment. For example:
Bots generating low-quality content.
Voting rings.
Many newcomers entering at once, outnumbering the locals by a lot. Example: I wouldn’t be comfortable directing many people from Rational Animations to the EA Forum and LW, but a karma system based on EigenKarma might make this much less dangerous.
Another class of examples could be when a given topic requires some complex technical understanding. In that case, a community might want to see only posts put forward by people who have demonstrated a certain level of technical knowledge, and it could use EigenKarma to filter for them. Of course, there must be some way to enable the discovery of new users, but how much of a problem this is depends on implementation details. For example, you could have an unfiltered tab and a filtered one, or you could give higher visibility to new users. There could be many potential solutions.
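To illustrate the filtered-tab idea, here’s a minimal sketch in Python; the function, field names, threshold, and new-user boost are placeholders I’m making up for illustration, not how EigenKarma or any forum actually works:

```python
def filtered_feed(posts, viewer, trust_score, threshold=0.5, new_user_boost=0.2):
    """Return the posts whose authors the viewer's trust graph endorses.

    posts: list of dicts like {"author": ..., "is_new": bool, "text": ...}.
    trust_score: callable (viewer, author) -> float in [0, 1], standing in for
        whatever EigenKarma-style trust propagation the real system computes.
    new_user_boost: small bonus so brand-new authors aren't invisible.
    """
    visible = []
    for post in posts:
        score = trust_score(viewer, post["author"])
        if post.get("is_new"):
            score += new_user_boost  # helps with the discovery of new users
        if score >= threshold:
            visible.append(post)
    return visible

# The unfiltered tab would simply show `posts` as-is.
```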
For me, perhaps the biggest takeaway from Aschenbrenner’s manifesto is that even if we solve alignment, we still face an incredibly thorny coordination problem between the US and China, in which each is massively incentivized to race ahead and develop military power using superintelligence, putting both of them and the rest of the world at immense risk. And I wonder whether, having seen this in advance, we can sit down and solve this coordination problem in ways that have a higher chance of a good outcome than the “race ahead” strategy and that don’t risk a short period of incredibly volatile geopolitical instability in which both nations develop, and possibly use, never-before-seen weapons of mass destruction.
Edit: although I can see how attempts at intervening in any way and raising the salience of the issue risk making the situation worse.