NicholasKross
Software engineer, blogging and editing at /nickai/. PM me your fluid-g-increasing ideas. (Formerly President/VP of the EA Club at RIT (NY, USA)).
Reading this quickly on my lunch break, it seems accurate to most of my core points. Not how I’d phrase them, but maybe that’s to be expected(?)
Agreed. IMHO the only legitimate reason to make a list like this is to prep for researching and writing one or more response pieces.
(There’s a question of who would actually read those responses, and correspondingly where they’d be published, but that’s a key question that all persuasive-media-creators should be answering anyway.)
Yeah I get that, I mean specifically the weird risky hardcore projects. (Hence specifying “adult”, since that’s both harder and potentially more necessary under e.g. short/medium AI timelines.)
Is any EA group funding adult human intelligence augmentation? It seems broadly useful for lots of cause areas, especially research-bottlenecked ones like AI alignment.
Why hasn’t e.g. OpenPhil funded this project?: https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing
- 5 Jan 2024 15:49 UTC; 11 points; comment on MIRI 2024 Mission and Strategy Update
There’s a new chart template that is better than “P(doom)” for most people.
Have long hoped someone would do this thoroughly, thank you.
Much cheaper, though still hokey, ideas that you should have already thought of at some point:
A “formalization office” that checks and formalizes results by alignment researchers. It should not take months for a John Wentworth result to get formalized by someone else.
Alignment-specific outreach at campuses/conventions with top cybersecurity people.
Maybe! I’m most interested in math because of its utility for AI alignment and because math (especially advanced math) is notoriously considered “hard” or “impenetrable” by many people (even people who otherwise consider themselves smart/competent). Part of that is probably lack of good math-intuitions (grokking-by-playing-with-concept, maths-is-about-abstract-objects, law-thinking, etc.).
Yeah, we’d hope there’s a good bit of existing pedagogy that applies to this. Not much stood out to me, but maybe I haven’t looked hard enough at the field.
We ought to have a new word, besides “steelmanning”, for “I think this idea is bad, but it made me think of another, much stronger idea that sounds similar, and I want to look at that idea now and ignore the first idea and probably whoever was advocating it”.
Good points, thanks! (Mainly the list part)
Thank you! Another person pointed this out on LW.
This post/cause seems sorely underrated; e.g., what existing org can someone donate to for mass case detection? It has such a high potential lives-saved-per-$1,000!
OK, thanks! Also, after more consideration and object-level thinking about the questions, I will probably write a good bit of prose anyway.
These details help, thank you!
I have a question.
IF:
- we can submit multiple entries (but only one will win), AND
- judging is based on 67% uncovering considerations and 33% clarifying concepts,
THEN, would you prefer if I:
- make one large entry that puts all my research/ideas/information in one place, OR
- make several smaller entries, each one focusing on a single idea?
(Assuming this is for answering one question. Presumably, since multiple entries are allowed, I could duplicate this strategy for the other question, or even use a different one for each. But if I’m wrong about this, I’d also like to know that!)
\d{1,5}\s\w.\s(\b\w*\b\s){1,2}\w*. (Source)
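For anyone curious, here is a minimal Python sketch exercising the pattern above, assuming it is meant as a loose matcher for US-style street addresses (the helper name is mine, not from the source):

```python
import re

# The pattern from the comment above, verbatim. Read loosely, it expects:
# 1-5 digits, a word character plus any character (e.g. a directional
# like "N."), then one or two words and a final word ("Main Street").
# Note the unescaped "." matches ANY character, so the match is rough.
ADDRESS_RE = re.compile(r"\d{1,5}\s\w.\s(\b\w*\b\s){1,2}\w*.")

def looks_like_address(text: str) -> bool:
    """Return True if the pattern matches anywhere in `text`."""
    return ADDRESS_RE.search(text) is not None
```

Because of the unescaped dots, this will also accept strings that are not real addresses; it is a heuristic filter, not a validator.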
Enough cheating at business, we must cheat at League next
TIL that a field called “argumentation theory” exists, thanks!