Great post. I can’t help but agree with the broad idea, given that I’m just finishing up a book whose main goal is raising awareness of AI safety among a broader audience: non-technical average citizens, policy makers, etc. Hopefully out in November.
I’m happy your post exists even if I have (minor?) differences on strategy. Currently, I believe the US government sees AI as a consumer item, so it links AI to innovation, economic growth, and other important priorities. (Of course, given recent activity, there is some concern about the risks.) As such, I’m advocating for safe innovation with firm rules and regulations that enable it. If those bars can’t be met, then we obviously shouldn’t have unsafe innovation. I sincerely want good things from advanced AI, but not if it will likely harm everyone.
Darren McKee
Announcing New Beginner-friendly Book on AI Safety and Risk
Seeking input on a list of AI books for broader audience
FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community
Interview with Roman Yampolskiy about AGI on The Reality Check
Going Infinite—Quick Review
Vonnegut passage on purpose, humanity, and machines
I am fully supportive of more books coming out on EA-related topics. I’ve also always enjoyed your writing.
As someone trying to write a book about the threat of AI for a broader audience, I’ve learned that you should have a clear idea of your goal for the book’s distribution. That is, is your goal to get it published by a publisher?
Or self-publish? An eCopy or audiobook?
To get something published, you typically need an agent. To get an agent you usually need a one-page pitch, a writing sample, and perhaps an outline.
If no agent is interested, it is risky to write the book if you want a third party to publish it.
“Meanwhile, at a meeting with Alameda employees on Wednesday, Ms. Ellison explained what had caused the collapse, according to a person familiar with the matter. Her voice shaking, she apologized, saying she had let the group down. Over recent months, she said, Alameda had taken out loans and used the money to make venture capital investments, among other expenditures.
Around the time the crypto market crashed this spring, Ms. Ellison explained, lenders moved to recall those loans, the person familiar with the meeting said. But the funds that Alameda had spent were no longer easily available, so the company used FTX customer funds to make the payments. Besides her and Mr. Bankman-Fried, she said, two other people knew about the arrangement: Mr. Singh and Mr. Wang.”
Book Review (mini) - Not The End of the World by Hannah Ritchie
Something(!) needs to be done. Otherwise, it’s just a mess for clarity and the communication of ideas.
Seeking Input to AI Safety Book for non-technical audience
If what’s at issue was the ‘overall character of Nonlinear staff’, then is it fair to assume you fully disagreed with Ben’s one-sided approach?
FYI, I’m working on a book about the risks of AGI/ASI for a general audience, and I hope to get it out within 6 months. It likely won’t be as alarmist as your post but will try to communicate the key messages, the importance, the risks, and the urgency. Happy to have more help.
Interesting initiative. We should connect because...
FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community—EA Forum (effectivealtruism.org)
Book Review (mini): Co-Intelligence by Ethan Mollick
Thank you for a great post and the outreach you are doing. We need more posts and discussions about optimal framing.
Thought that this was filled with interesting ideas. Thank you.
If you’re open to constructive feedback, I think there is an opportunity (mainly for the host, but perhaps also for you as guest) to reduce the number of ‘likes’ that serve as filler, equivalent to ‘ums’, in your speech.
This may be cultural/generational, and perhaps few others care, but I personally found parts hard to listen to because there were so many ‘likes’.
I couldn’t help but be curious, so I searched the transcript: ‘like’ appears 577 times (of course, a decent chunk of those are not filler but part of normal speech).
Thanks for the compilation! This might be helpful for the book I’m writing.
One of my aspirations was to throw a brick through the Overton window regarding AI safety, but things have already changed, with more and more material coming out like what you’ve listed.
Sometimes, it is not enough to make a point theoretically, it has to be made in practice. Otherwise, the full depth of the point may not be appreciated. In this case, I believe the point is that, as a community, we should have consistent (high-quality) standards for investigations or character assessments.
This is why I think it is reasonable to have the section “Sharing Information on Ben Pace”. It is also why I don’t see it as retaliatory.
Some have responded negatively to that section, even though Kat specifically pointed out all of its flaws, said that people shouldn’t update on it, and said that Ben shouldn’t have to respond to such things. Why include it, then? I believe she is illustrating the exact problem with saying such things, even when one tries to weaken them. The emotional and intellectual displeasure you feel is correct, and it should apply to anyone being assessed in such a way.
I fear there are those who don’t see the parallel between Ben’s original one-sided post (one-sided by his own statements) and Kat’s one-sided example (also by her own statements), which is clearly for educational purposes only.
Although apparently problematic to some, I hope the section has been useful in highlighting the larger point: assessments of character should be more comprehensive, more evidence-based, and (broadly) more just (e.g., allowing those discussed time to respond).