Nicholas Kross
Blogging and editing at [/nickai/](https://www.thinkingmuchbetter.com/nickai/). PM me your fluid-g-increasing ideas. (Formerly President/VP of the EA Club at RIT (NY, USA).)
EDIT: Due to the incoming administration’s ties to tech investors, I no longer think an AI crash is so likely. Several signs IMHO point to “they’re gonna go all-in on racing for AI, regardless of how ‘needed’ it actually is”.
For more details on (the business side of) a potential AI crash, see recent articles by the blog Where’s Your Ed At, which wrote the sorta-well-known post “The Man Who Killed Google Search”.
For his AI-crash posts, start here and here, then click through to his other posts. Sadly, the author falls into the trap of "LLMs will never get to reasoning because they don't, like, know stuff, man", but luckily his core competencies (the business side, analyzing news reporting) show why an AI crash could still very much happen.
I’m a Definooooor! I’m gonna Defiiiiiiine! AAAAAAAAAAAAAAAA
I like circles, though my favorites are (of course) boxes and arrows.
It’s an anti-procrastination tool!
TIL that a field called “argumentation theory” exists, thanks!
Reading this quickly on my lunch break: it seems to accurately capture most of my core points. Not how I'd phrase them, but maybe that's to be expected(?)
Is principled mass-outreach possible, for AGI X-risk?
Agreed. IMHO the only legitimate reason to make a list like this is to prep for researching and writing one or more response pieces.
(There’s a question of who would actually read those responses, and correspondingly where they’d be published; but then, that’s a question all persuasive-media creators should be answering anyway.)
Yeah, I get that; I mean specifically the weird, risky, hardcore projects. (Hence specifying "adult", since that's both harder and potentially more necessary under e.g. short/medium AI timelines.)
Is any EA group funding adult human intelligence augmentation? It seems broadly useful for lots of cause areas, especially research-bottlenecked ones like AI alignment.
Why hasn’t e.g. OpenPhil funded this project? https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing
Nicholas / Heather Kross’s Quick takes
There’s a new chart template that is better than “P(doom)” for most people.
Have long hoped someone would do this thoroughly, thank you.
Much cheaper (though still hokey) ideas that you should have already thought of at some point:
A “formalization office” that checks and formalizes results by alignment researchers. It should not take months for a John Wentworth result to get formalized by someone else.
Alignment-specific outreach at campuses/conventions with top cybersecurity people.
How to Search Multiple Websites Quickly
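(The post itself isn't excerpted here, but one standard approach is to fire off one site-restricted query per website, rather than using each site's own search box. Below is a minimal Python sketch of that idea, assuming Google's `site:` operator; the site list and query are hypothetical placeholders, and this is an illustration rather than the post's confirmed method.)

```python
# Minimal sketch: open one site-restricted web search per website,
# each in its own browser tab. SITES and the example query are
# hypothetical placeholders, not taken from the post.
import urllib.parse
import webbrowser

SITES = ["lesswrong.com", "forum.effectivealtruism.org", "arxiv.org"]

def search_sites(query: str) -> None:
    """Open a Google `site:` search for `query` on each site in SITES."""
    for site in SITES:
        encoded = urllib.parse.quote_plus(f"site:{site} {query}")
        webbrowser.open_new_tab(f"https://www.google.com/search?q={encoded}")

if __name__ == "__main__":
    search_sites("AI alignment")
```

(Most browsers also let you do a codeless version of this via custom search-engine keywords.)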
Maybe! I’m most interested in math because of its utility for AI alignment, and because math (especially advanced math) is notoriously considered "hard" or "impenetrable" by many people (even people who otherwise consider themselves smart/competent). Part of that is probably a lack of good math-intuitions (grokking-by-playing-with-concept, maths-is-about-abstract-objects, law-thinking, etc.).
Yeah, we’d hope there’s a good bit of existing pedagogy that applies to this. Not much stood out to me, but maybe I haven’t looked hard enough at the field.
This also could’ve helped with other orgs over the years where the "culture" stuff turned out to carry important signal, e.g. FTX and Leverage Research.