Impact Markets link: https://app.impactmarkets.io/profile/clfljvejd0012oppubuwne2k2
Writer
This post was an excellent read, and I think you should publish it on LessWrong too.
I have the intuition that, at the moment, answering “how fast is AI takeoff going to be?” has the most strategic leverage, and that this question, together with timelines, most influences the probability that we go extinct due to AI (although it seems to me that we’re less uncertain about timelines than about takeoff speeds). I also think that a big part of why the other AI-forecasting questions are important is that they inform takeoff speeds (and timelines). Do you agree with these intuitions?
Relatedly: if you had to rank AI-forecasting questions according to their strategic importance and influence on P(doom), what would that ranking look like?
Here’s some more evidence in favor of this being a particularly good book to give to new people. So far, the Rational Animations video about the “Rethinking Identity” section is the channel’s most appreciated video in terms of comments, both on Reddit and YouTube. I’m also seeing comments suggesting that at least some people deeply understand and internalize the message. On r/videos, which is a pretty generalist sub, I’m finding some uplifting (for me) interactions:
I’ve seen some criticism of this book in EA/Rationality spaces and in some Amazon reviews: that it leans too heavily on internet culture for its examples and ties too closely to current internet discourse. But I think this is potentially a good thing. It could achieve at least three things:
1. Provide real examples (in a non-aggressive way) that are likely to be somewhat associated with people’s identities, perhaps helping readers break out of that pattern.
2. Act as a guide and an example of how to have non-inflammatory, non-mind-killing discourse on potentially sensitive topics.
3. Get read more widely, because it engages deeply with how discourse has played out on the internet in recent years.
Before getting real-world evidence I wouldn’t necessarily have bet that it achieves these positive effects, but after seeing reactions in the wild I’m more optimistic. The negative examples I’ve seen are fewer and generally downvoted.
Can I promote your courses freely on Rational Animations? I think it would be a good idea, since people can go through the readings by themselves. My calls to action would be similar to this post I made on the Rational Animations subreddit: https://www.reddit.com/r/RationalAnimations/comments/146p13h/the_ai_safety_fundamentals_courses_are_great_you/
In my understanding, EigenKarma only creates bubbles if it also acts as a default content filter. If, for example, it is just displayed near usernames, it shouldn’t have this effect but would still retain its use as a signal of trustworthiness.
Also, sometimes creating a bubble—a protected space—is exactly what you want to achieve, so it might be the correct tool to use in specific contexts.
It’s the first time I read about this, so please correct me if I’m misunderstanding.
Personally, I find the idea very interesting.
Hi!
I haven’t made any announcement yet, but I’d be potentially interested in hiring people as contractors for Rational Animations in the following roles:
- Scriptwriter (also see this contest)
- Fact-checker
- Community manager
- Social Media manager
- Illustrator
If you think Rational Animations could be a good fit for some people and it clears your bar for “high-impact”, I’d be happy if you sent some candidates my way. They/you can reach out at rationalanimations@gmail.com.
Also, important disclaimer: I’m not sure how fast I’ll be able to make hiring decisions, and some people may need to get a job soon. Therefore, anyone who applies shouldn’t wait for an answer from me if they need a job fast.
We have funding until at least ~March.
“We try to avoid processes that take months and leave grantees unclear on when they’re going to reach a decision.”
It’s true that we made decisions on the vast majority of proposals on roughly this timeline, and then some of the more complicated / expensive proposals took more time (and got indications from us about when they were supposed to hear back next).
The indication I got said that FTX would reach out “within two weeks”, which meant by April 20. I haven’t heard back since, though. I reached out eight days ago to make sure my application or the relevant e-mails hadn’t been lost, but I haven’t received an answer. :(
(I get that this is probably not on purpose, and that grant decisions take as long as they need to, but when I see an explicit policy of “we are going to reach out even if we haven’t made a decision yet”, I’m left wondering whether something has broken down somewhere and what to do about it. It seemed like a good choice to reach out myself… and to comment under this thread to provide a data point.)
I think the photo of the Yoruba folks might be a bit misleading in the context of this post, and I wouldn’t include it.
I’m not entirely sure if I agree, but I removed them out of an abundance of caution.
Edit: yeah, you are correct actually.
Hard agree, the shoggoth memes are great.
One class of examples could be when there’s an adversarial or “dangerous” environment. For example:
- Bots generating low-quality content.
- Voting rings.
- Many newcomers entering at once, greatly outnumbering the locals. Example: I wouldn’t be comfortable directing many people from Rational Animations to the EA Forum and LW, but a karma system based on EigenKarma might make this much less dangerous.
Another class of examples could be when a given topic requires some complex technical understanding. In that case, a community might want to see only posts put forward by people who have demonstrated a certain level of technical knowledge, and could use EigenKarma to select them. Of course, there must be some way to enable the discovery of new users, but how much of a problem this is depends on implementation details. For example, you could have an unfiltered tab and a filtered one, or you could give higher visibility to new users. There are many potential solutions.
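The general idea behind this kind of filtering (trust flowing through the graph of who upvotes whom, seeded from users the community already trusts) can be illustrated with a personalized-PageRank-style iteration. This is only a minimal sketch of the concept, not EigenKarma’s actual implementation; all function and parameter names below are illustrative.

```python
# Minimal sketch of trust propagation through an upvote graph, in the
# style of personalized PageRank. Not EigenKarma's real implementation;
# names and parameters are illustrative assumptions.

def trust_scores(upvotes, seed, damping=0.85, iters=50):
    """upvotes: {voter: {recipient: weight}}; seed: a trusted root user.

    Trust originates at the seed and flows along upvote edges, so users
    endorsed (directly or transitively) by the seed score higher.
    """
    users = set(upvotes) | {u for votes in upvotes.values() for u in votes}
    scores = {u: (1.0 if u == seed else 0.0) for u in users}
    for _ in range(iters):
        # Each round, a share of trust restarts at the seed; the rest
        # flows from each voter to the people they upvoted.
        nxt = {u: (1 - damping) * (1.0 if u == seed else 0.0) for u in users}
        for voter, votes in upvotes.items():
            total = sum(votes.values())
            if total == 0:
                continue
            for recipient, weight in votes.items():
                nxt[recipient] += damping * scores[voter] * weight / total
        scores = nxt
    return scores

def filtered_feed(posts, scores, threshold=0.01):
    """A 'filtered tab': show only posts whose authors clear the bar."""
    return [p for p in posts if scores.get(p["author"], 0.0) >= threshold]
```

A filtered tab would then show only posts whose authors clear the threshold, while an unfiltered tab (or extra visibility for new users) could handle discovery, as discussed above.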
I am CHAOTIC Good MUAAHAHA
I confirm that this resolved. Thanks for the e-mail response!
This looks super interesting to me. We can, in a sense, simulate a longer history of Effective Altruism and see what patterns there are.
Thanks a lot! This is definitely going to be helpful :)
Hey, thanks a lot for this comment. It did brighten my mood.
I think I’ll definitely want to send scripts to CEA’s press team, especially if they are heavily EA-related, like this one. Do you know how I can contact them? (I’m not sure I know what CEA’s press team is. Do you mean I should just send an e-mail to CEA via their website?)
This makes me think that I could definitely use more feedback. It makes me kind of sad that I could have added this point myself (along with making Toby Ord’s estimate more precise) and didn’t because of...? I don’t know exactly.
Edit: one easy thing I could do is to post scripts in advance here or on LW and request feedback (other than messaging people directly and waiting for their answers, although this is often slow and fatiguing).
Edit 2: Oh, and the Slack group. Surely more ideas will come to me, I’ll stop adding edits now.
I think we still see really good engagement with the videos themselves. The average view duration for the AI video is currently 58.7% of the video, and 25% of viewers watched the whole video.
This average percentage refers to organic traffic only, right? The paid-traffic APV must look much lower, something like 5%?
Rational Animations has a subreddit: https://www.reddit.com/r/RationalAnimations/
I hadn’t advertised it until now because I had to find someone to help moderate it.
I want people here to be among the first to join, since I expect that having EA Forum users early on will help foster a good epistemic culture.
Several EA organizations are working with a communications advising firm to answer questions like:
- Who are the key audiences we especially want to reach?
- How do these audiences currently see EA?
- What are the best ways to reach these audiences?
- What EA ideas are especially important to convey?
I hope EA orgs end up sharing their new best guesses on these questions with the broader community, or at least reach out to smaller and newer organizations dedicated to outreach, so that those organizations can scale their outreach in a good direction and self-correct more easily.
I’ve returned home, and my simulated self is not disintegrated, because he can’t compare these metrics with other posts, so he should be fine.
I wonder why performance on AP English Literature and AP English Language stalled.