Sometimes it is not enough to make a point theoretically; it has to be made in practice. Otherwise, the full depth of the point may not be appreciated. In this case, I believe the point is that, as a community, we should have consistent (high-quality) standards for investigations or character assessments.
This is why I think it is reasonable to have the section “Sharing Information on Ben Pace”. It is also why I don’t see it as retaliatory.
Some have responded negatively to that section even though Kat specifically pointed out all of its flaws, said that people shouldn’t update on it, and said that Ben shouldn’t have to respond to such things. Why? I believe she is illustrating the exact problem with saying such things, even if one tries to weaken them. The emotional and intellectual displeasure you feel is correct. And it should apply to anyone being assessed in such a way.
I fear there are those who don’t see the parallel between Ben’s original one-sided post (by his own statements) and Kat’s one-sided example (also by her own statements), which is clearly for educational purposes only.
Although apparently problematic to some, I hope the section has been useful in highlighting the larger point: assessments of character should be more comprehensive, more evidence-based, and (broadly) more just (e.g., allowing those discussed time to respond).
Darren McKee
If what’s at issue was the ‘overall character of Nonlinear staff’, then is it fair to assume you fully disagreed with Ben’s one-sided approach?
Audiobook is out :)
Thanks!
There might be. If you’re interested in pursuing that in Australia, send me a DM and we’ll explore what’s possible.
It’s a tricky balance and I don’t think there is a perfect solution. The issue is that both the title and the cover have to be intriguing and compelling (and also, ideally, short / immediately understandable). What will intrigue some will be less appealing to others.
So, I could have had a question mark, or some other less dramatic image… but when not only safety researchers but the CEOs of the leading AI companies believe the product that they are developing could lead to extinction, I believe that this is alarming. This is an alarming fact about the world. That drove the cover.
The inside is more nuanced and cautious.
Sure do! As I said in the second-to-last bullet, it is in progress :)
(hopefully within the next two weeks)
Great post. I can’t help but agree with the broad idea, given that I’m just finishing up a book whose main goal is raising awareness of AI safety among a broader audience: non-technical readers, average citizens, policy makers, etc. Hopefully out in November.
I’m happy your post exists even if I have (minor?) differences on strategy. Currently, I believe the US Gov sees AI as a consumer item, so they link it to innovation, economic growth, and other important things. (Of course, given recent activity, there is some concern about the risks.) As such, I’m advocating for safe innovation with firm rules/regs that enable that. If those bars can’t be met, then we obviously shouldn’t have unsafe innovation. I sincerely want good things from advanced AI, but not if it will likely harm everyone.
Thank you.
I quite like the “we don’t have a lot of time” part, both in the fact that we’d need to prepare in advance, and because making decisions under time pressure is almost always worse.
Noted. I find many are stuck on the ‘how’. That said, some polls have two-thirds or three-quarters of people considering that AI might harm humanity, so it isn’t entirely clear who needs to hear which arguments/analysis.
Great post!
A and B about 30 years are useful ideas/talking points. Thanks for the reminder/articulation!
I’m definitely aware of that complication, but I don’t think that is the best route to broader impact. Uncertainty abounds. If I can get it out in 3 months, I will.
Thanks for sharing, this and the others. I read that one and it was a bit more about the rationality community than the risks. (It’s in the list with a different title)
FYI, I’m working on a book about the risks of AGI/ASI for a general audience, and I hope to get it out within 6 months. It likely won’t be as alarmist as your post but will try to communicate the key messages, the importance, the risks, and the urgency. Happy to have more help.
Thank you for a great post and the outreach you are doing. We need more posts and discussions about optimal framing.
I was referring to external credibility, if you are looking for a scientific paper with the key ideas. Secondarily, an online, modular guide is not quite the frame of the book either (although it could possibly be adapted toward such a thing in the future).
Interesting points. I’m working on a book which is not quite a solution to your issue but hopefully goes in the same direction.
And I’m now curious to see that memo :)
Thanks for the compilation! This might be helpful for the book I’m writing.
One of my aspirations was to throw a brick through the Overton window regarding AI safety, but things have already changed, with more and more material coming out like what you’ve listed.
I am fully supportive of more books coming out on EA related topics. I’ve also always enjoyed your writings.
As someone trying to write a book about the threat of AI for a broader audience, I’ve learned that you should have a good idea of your goal for the book’s distribution. Meaning: is your goal to get this published by a publisher?
Or self-publish? An eCopy or audiobook?
To get something published, you typically need an agent. To get an agent you usually need a one-page pitch, a writing sample, and perhaps an outline.
If no agent is interested, it is a risk to write the book at all if you want a third party to publish it.
Thought that this was filled with interesting ideas. Thank you.
If you’re open to constructive feedback, I think there is an opportunity (mainly for the host, but perhaps also for you as the guest) to reduce the number of ‘likes’ that are the equivalent of ‘ums’ in your speech.
This may be cultural/generational, and perhaps few others care, but personally I found parts hard to listen to because there were so many ‘likes’.
I couldn’t help but be curious, so I searched the transcript and it pops up 577 times (of course, a decent chunk of those are not filler but part of normal speech).
Something(!) needs to be done. Otherwise, it’s just a mess for clarity and the communication of ideas.