[Question] Who would you have on your dream team for solving AGI Alignment?
Any living person (or list of people). Assume they can be persuaded that the problem of existential risk from AI is real and important.
My primary reaction to this was "ah man, I hope this person doesn't inadvertently annoy important people by pestering them about AI safety, hurting the reputation of AI safety/longtermism/EA, etc."
:( As far as I know, no one from EA has annoyed (randomly emailed) Terry Tao about it, despite many people saying he would be a great person to have on board.
Obviously I'm not in favour of random EAs annoying important people (and hurting the reputation of EA/AI Alignment), but I do think that, given the urgency of the situation we are in, at some point some senior people in EA/AI Alignment have to make a serious attempt at putting together such a dream team (more).
You are probably aware, but someone recently drafted such an email and intended to send it, but was convinced not to.
Yes, I think the fact that they didn't go through with it is some evidence that such a list need not be counterproductive to our goal (and the EV is probably positive). Ultimately the Dream Team needs to be approached, but I'm optimistic that this can be done in a careful and coordinated manner by the relevant senior people in EA/Alignment.
One design ideation method is, instead of trying to think of good ideas, to try to think of the worst possible idea.
With that in mind: encourage the writers of "It's Always Sunny in Philadelphia" to do an episode, "The Gang Solves AGI Alignment".
A bit sad that no one has actually answered the object-level question and nearly all the discussion is meta. I can understand why. But I also think we are at crunch time with this, and the stakes are as high as they can be. So this is actually a very serious question that serious people should be considering. Maybe (some) people high up in EA are considering it. I hope so!
I think the question is basically “who are the most talented researchers in fields at least vaguely related to AI?” The EA community is probably not the best group for answering this question. But it’s an important question for sure!
Some ideas for identifying dream team members:
Anyone a panel of top people in AGI Safety would have on their dream team (who otherwise would be unlikely to work on the problem).
Fields Medalists, Nobel laureates in Physics, and recipients of equivalent prizes in Computer Science (e.g. the Turing Award), Philosophy(?), or Economics(?).
Those topping relevant world ranking lists.
People who’ve scored at the very top end in standardised tests.
Those with a track record of multiple field-changing research accomplishments.
The most highly paid engineers or researchers.
Winners of top computer programming, maths or physics competitions/olympiads.
The world's best board and video game players.
Those able to grok what is being alluded to here?
I feel like this question is so much more fun if we can include dead people, so I’m gonna do just that.
Off the top of my head:
Isaac Newton
John Forbes Nash
John von Neumann
Alan Turing
Amos Tversky
Ada Lovelace
Leonhard Euler
Terence Tao
John Stuart Mill
Eliezer Yudkowsky
Herbert Simon
Here's what GPT-3 thinks:
No surprises there (although a bit surprised that GPT-3 doesn’t know that Alan Turing is dead, and can’t spell Eliezer).
Actually, that was me who misspelled Eliezer, ugh.
Any standard list of “top AI researchers” will do. Also look at top researchers in CS, math, stats, physics, philosophy (note the new CAIS philosophy fellowship as an example of how you might attract people from other fields). Edward Witten comes to mind. But you’ll get better answers if you ask professors within these subjects or even turn to Reddit, Quora, etc.
Hmm… Who are the leading thinkers/speakers who argue we should not further develop AI? Such folks would not need to be persuaded, and would perhaps be willing to consider the full range of options.
People who have invested heavily in AI careers are not likely to be receptive to proposals that don't include the continuation of AI development; that is, they are not open to the full range of options.
One way to solve AI alignment would be to stop developing AI. I know, very challenging, but then so are all other options, none of which would seem to offer such a definitive solution.