Impact Markets link: https://app.impactmarkets.io/profile/clfljvejd0012oppubuwne2k2
Writer
No; for now, we aren’t committing to any specific niche!
Thumbs up to this summary. My only nitpick is that I wouldn’t call Mana “virtual currency,” since that could be confused with cryptocurrency; it’s merely internet points.
There is a single winner so far, and it will be announced with the corresponding video release. The contest is still open, though!
Edit: another person claimed a bonus prize, too.
Easy fix: let the user pick a discounted sum of future income. It could also be calculated using some average over past daily income, if that’s available.
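The calculation I have in mind could be sketched roughly like this (a minimal illustration, not a concrete implementation proposal; the function name, the horizon, and the discount rate are all hypothetical):

```python
def discounted_future_income(past_daily_incomes, horizon_days, daily_discount_rate):
    """Estimate a discounted sum of future income.

    past_daily_incomes: historical daily income figures, used to estimate
        a constant expected daily income (a simple average here).
    horizon_days: how many future days to include.
    daily_discount_rate: per-day discount rate, e.g. 0.0001 (~3.7%/year).
    """
    avg_daily = sum(past_daily_incomes) / len(past_daily_incomes)
    # Standard present-value sum: income on day t is worth
    # avg_daily / (1 + r)^t today.
    return sum(avg_daily / (1 + daily_discount_rate) ** t
               for t in range(1, horizon_days + 1))
```

With a zero discount rate this reduces to average daily income times the horizon; any positive rate shrinks the total, weighting near-term income more heavily.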
There’s a maybe naive way of seeing their plan that leads to this objection:
“Once we have AIs that are human-level AI alignment researchers, it’s already too late. That’s already very powerful and goal-directed general AI, and we’ll be screwed soon after we develop it, either because it’s dangerous in itself or because it zips past that capability level fast since it’s an AI researcher, after all.”
What do you make of it?
No, but we’ll need more than one voice actor for some videos. We’ll consider you for those occasions if you send us your portfolio.
Can I promote your courses without restraint on Rational Animations? I think it would be a good idea since people can go through the readings by themselves. My calls to action would be similar to this post I made on the Rational Animations subreddit: https://www.reddit.com/r/RationalAnimations/comments/146p13h/the_ai_safety_fundamentals_courses_are_great_you/
Rational Animations has a subreddit: https://www.reddit.com/r/RationalAnimations/
I hadn’t advertised it until now because I had to find someone to help moderate it.
I want people here to be among the first to join since I expect having EA Forum users early on would help foster a good epistemic culture.
I think the photo of the Yoruba folks might be a bit misleading in the context of this post, and I wouldn’t include it.
I’m not entirely sure if I agree, but I removed them out of an abundance of caution.
Edit: yeah, you are correct actually.
k
I wonder why performance on AP English Literature and AP English Language stalled.
I was considering downvoting, but after looking at that page, maybe it’s good not to have it copy-pasted.
This article is evidence that Elon Musk will focus on the “wokeness” of ChatGPT rather than do something useful about AI alignment. Still, we should keep in mind that news coverage is often incomplete or simply false.
Also, I can’t access the article.
Related: I’ve recently created a prediction market about whether Elon Musk is going to do something positive for AI risk (or at least not do something counterproductive) according to Eliezer Yudkowsky’s judgment: https://manifold.markets/Writer/if-elon-musk-does-something-as-a-re?r=V3JpdGVy
Hard agree, the shoggoth memes are great.
It would probably be really valuable if people could forecast the ability to build/deploy AGI to within roughly 1 year, as it could inform many people’s career planning and policy analysis (e.g., when to clamp down on export controls). In this regard, an error/uncertainty of 3 years could potentially have a huge impact.
Yeah, being able to have such forecasting precision would be amazing. It’s too bad it’s unrealistic (what forecasting process would enable such magic?). It would mean we could see exactly when it’s coming and make extremely tailored plans that could be super high-leverage.
This post was an excellent read, and I think you should publish it on LessWrong too.
I have the intuition that, at the moment, answering “how fast is AI takeoff going to be?” has the most strategic leverage, and that this topic, together with timelines, most influences the probability that we go extinct due to AI (although it seems to me that we’re less uncertain about timelines than about takeoff speeds). I also think that a big part of why the other AI forecasting questions are important is that they inform takeoff speeds (and timelines). Do you agree with these intuitions?
Relatedly: If you had to rank AI-forecasting questions according to their strategic importance and influence on P(doom), what would those rankings look like?
One class of examples could be when there’s an adversarial or “dangerous” environment. For example:
Bots generating low-quality content.
Voting rings.
Many newcomers entering at once, outnumbering the locals by a lot. Example: I wouldn’t be comfortable directing many people from Rational Animations to the EA Forum and LW, but a karma system based on Eigen Karma might make this much less dangerous.
Another class of examples could be when a given topic requires some complex technical understanding. In that case, a community might want only to see posts that are put forward by people who have demonstrated a certain level of technical knowledge. Then they could use EigenKarma to select them. Of course, there must be some way to enable the discovery of new users, but how much of a problem this is depends on implementation details. For example, you could have an unfiltered tab and a filtered one, or you could give higher visibility to new users. There could be many potential solutions.
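To make the filtering idea above concrete, here is a rough sketch of how an EigenKarma-style trust score could be computed and then used as a post filter. This is my own illustrative reconstruction (a personalized, PageRank-like power iteration over an upvote graph), not the actual EigenKarma implementation; the graph format, damping factor, and threshold are all assumptions.

```python
def trust_scores(upvotes, seed, damping=0.85, iterations=50):
    """Compute trust scores from the perspective of `seed`.

    upvotes: dict mapping each voter to {upvoted_user: weight}.
    Trust flows from the seed outward along upvote edges, so users
    endorsed (directly or transitively) by the seed score higher.
    """
    users = set(upvotes) | {u for targets in upvotes.values() for u in targets}
    scores = {u: (1.0 if u == seed else 0.0) for u in users}
    for _ in range(iterations):
        # Restart mass goes back to the seed; the rest flows along upvotes.
        new = {u: (1 - damping) * (1.0 if u == seed else 0.0) for u in users}
        for voter, targets in upvotes.items():
            total = sum(targets.values())
            if total == 0:
                continue
            for target, weight in targets.items():
                new[target] += damping * scores[voter] * weight / total
        scores = new
    return scores

def filtered_posts(posts, scores, threshold):
    """Keep posts whose author's trust score clears a threshold."""
    return [p for p in posts if scores.get(p["author"], 0.0) >= threshold]
```

A “filtered tab” would apply `filtered_posts`, while an unfiltered tab would show everything, which is one way to keep new users discoverable.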
In my understanding, EigenKarma only creates bubbles if it also acts as a default content filter. If, for example, it is just displayed near usernames, it shouldn’t have this effect but would still retain its use as a signal of trustworthiness.
Also, sometimes creating a bubble—a protected space—is exactly what you want to achieve, so it might be the correct tool to use in specific contexts.
It’s the first time I read about this, so please correct me if I’m misunderstanding.
Personally, I find the idea very interesting.
I have to squint a lot to see the sense in this mapping.
This average percentage relates to organic traffic only, right? The paid traffic APV must look much lower, something like 5%?