This point is not to identify with it. It’s a fib.
Phib
U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team [and Paul Christiano update]
NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute
Desensitizing Deepfakes
[Question] Curious if GWWC takes into account existential risk probabilities in calculating impact of recurring donors.
Responding to this because I think it discourages a new user from trying to engage and test their ideas against a larger audience, some of whom may have relevant expertise and some of whom may engage; that seems like a decent way to try to learn. Of course, good intentions to solve a ‘disinformation crisis’ like this aren’t sufficient. Ideally we would be able to perform serious analysis on the problem (scale, neglectedness, tractability, and all that fun stuff, I guess), and in this case tractability seems most relevant. I think your second paragraph is useful in mentioning that this would be extremely difficult to implement, but it also just gestures at the problem’s existence as evidence.
I do share this impression, though, that disinformation is a difficult problem, and I also had a kind of knee-jerk reaction to “high quality content”. But idk, I feel like engaging with the piece with more of a yes-and attitude, to encourage entrepreneurial young minds, and/or with more relevant facts about the domain could be a better contribution.
But I’m doing the same thing and just being meta here, which is easy, so I’ll try to do so myself in another comment.
Appreciate the post quite a bit, thank you for taking the time to share.
Silly idea to enhance List representation accuracy
Nice post, and I appreciate you noticing something that bugged you and posting about it in a pretty constructive manner.
EA and AI Safety Schism: AGI, the last tech humans will (soon*) build
Agreed, the evidence is solely “according to at least two sources with direct knowledge of the situation, who asked to remain anonymous.”
Some of My Current Impressions Entering AI Safety
I use it to see if I’ve missed anything significant, especially since I’ve started looking at LessWrong more (uh, apologies about that? More of a cause-specific thing with AI and getting more into rationalism).
I don’t think I typically click on that many links, but I might leave the digest unread in my inbox until I give it a complete read-through. I could imagine reading through it, seeing some post that sends me down a rabbit hole, and by the time I get back to the email tab, needing to mark it unread so I can review it again later. Wouldn’t be surprised if that has actually happened.
Idk much more, I like the setup and do actually use it as described above as a sort of, well, I guess newsletter, huh.
I updated a bit from this post to be more concerned about the AIs themselves; I think your depiction really evoked my empathy. I’d previously been so concerned with human doom that I’d almost refused to consider it, but going forward I’ll definitely make an effort to be conscious of this sort of possibility.
For a fictional representation of my thinking (what your post reminded me of…), Ted Chiang has a short story about virtual beings that can be cloned, some of which were even potentially abused: “The Lifecycle of Software Objects”.
Anecdata: thanks for curating; I didn’t read this when it first came through, and now that I have, it really impacted me.
Edit: Coming back after approaching it on LessWrong, and now I’m very confused again; it seems to have been much less well received there. What someone here calls a “great balance of technical and generally legible content” might over there be considered “strawmanning and frustrating”, and I really don’t know what to think.
Yeah, discernment of truth makes sense to me. And fair, spam is probably not productive, but it got across my intention of ‘desensitizing’ people to this strategy of playing on our ‘discernment of truth’. I think Geoffrey’s comment on the next political cycle is really interesting for thinking about how that ‘spam’ may end up looking.
(I feel a little awkward just pushing news, but I feel some obligation to be complete on this subject.)
My initial thoughts around this are that, yeah, good information is hard to find and prioritize, but I would really like better and more accurate information to be more readily available. I actually think AI models like ChatGPT achieve this to some extent, as a sort of not-quite-expert on a number of topics, and I would be quite excited to have these models become even better accumulators of knowledge and communicators. Already there seems to have been a sort of benefit to productivity (one thing I saw recently: https://arxiv.org/abs/2403.16977). So I guess I somewhat disagree with AI being net negative as an informational source, but I do agree that it’s probably enabling the production of a bunch of spurious content, and I’ve heard arguments that this is going to be disastrous.
But I guess the post is focused more on news itself? I appreciate the idea of a sort of weekly digest in that it would somewhat detract from the constant news hype cycle; I guess I’m more in favor of longer time horizons for examining what is going on in the world. The debate on COVID origins comes to mind, especially considering Rootclaim, as an attempt to create more accurate information accumulation. I guess forecasting is another form of this: taking bets on things before they occur and being measured on your accuracy is an interesting way to consume news that also has a sort of ‘truth’ mechanism to it, and notably has a legible operationalization of truth! (Edit: I guess I should also couch this more in terms of what already exists on the EAF, and LessWrong and rationality pursuits in general seem pretty adjacent here.)
To some extent my lame answer is just that AI enabling better analysis in the future is probably the most tractable way to address information quality. (Idk, I’m no expert on information, and this seems like a huge problem in a complex world. Maybe there are more legible interventions for improving informational accuracy; I don’t know them and don’t really have much time, but I would encourage further exploration, and you seem to be checking out a number of examples in another comment!)
FWIW I think this post: https://forum.effectivealtruism.org/posts/J4cLuxvAwnKNQxwxj/how-does-ai-progress-affect-other-ea-cause-areas is a way better version of what I was trying to get at here, and MacAskill’s answer is pretty good.
This puts some of my concerns into a way better form than I could’ve produced, so thank you kindly. If I got your piece correctly: this sort of innovation with GPT-4 is concerning not because it would necessarily produce x-risk-adequate AGI, but because the next models treated the same way (open access to the API) would do so. This I agree with: people are actively, and rather haphazardly, pursuing whatever definition of AGI they conceive of for exciting capabilities, and we may end up stuck with some external individual, not even ~OpenAI, developing something dangerous. I believe this is reflected somewhere in the LW/AI Safety litany.
I’m also curious what happened to the LW post. I know they’ve increased their moderation standards, but it also says it was deleted by the author? I always feel like the technical AI Safety discussion there is higher quality...
Ditto pseudonym. I recognize from another comment that there is an upcoming Constellation post from the original poster, and a more effortful response forthcoming there, but given that this piece was received in advance, I’m still kind of surprised the following were not responded to:
Lack of Senior ML Research Staff
Lack of Comm… w/ ML Community
Conflicts of interest with funders
I guess people are busy and this is not a priority; people seem to be mostly thinking about Underwhelming Research Output (and Nate himself seems to say as much here).