I suspect that this doesn’t work as an idea, largely because of what motivates mathematicians at that level. But I’d ask Scott Garrabrant whether he thinks this would be worthwhile—given that he knows Tao, at least a bit, and has worked with other mathematicians at UCLA.
Interesting. I wonder: many people say they aren’t motivated by money, but how many of them have seriously considered what they could do with it other than personal consumption? And how many have actually been offered a lot of money to do something different from what they would otherwise do (nothing immoral or illegal), and turned it down? What if it were a hundred million, or a billion dollars? Or what if the time commitment were lower, say 6 months, or 3 months?
Good point. And yes, it seems likely that they’d change their research, but I don’t think their motivation and curiosity would transfer as readily. On net it’s still not a bad idea, but I’m skeptical it would be this easy.
If top mathematicians had an EA mindset towards money, they would most likely not be publishing pure math papers.
True. But maybe the limiting factor is just the consideration of such ideas as a possibility? When I was growing up, I wanted to be a scientist, liked space-themed Sci-Fi, and cared about many issues in the world (e.g. climate change, human rights); but I didn’t care about having or wanting money (in fact I mostly thought it was crass), or really think much about it as a means to achieving ends relating to my interests. It wasn’t until reading about (proto-)EA ideas that it clicked.
I suspect that the offer would at least capture his attention/curiosity. Even if he rejected the offer, he’d probably find himself curious enough to read some of the current research. And he’d probably be able to make some progress without really trying.
Idea: What if this was a fellowship? It could quickly become one of the most prestigious fellowships in the world!
Good idea about the fellowship. I’ve been thinking that it would need to come from somewhere prestigious. Perhaps CHAI, FLI or CSER, or a combination of such academic institutions? If it came from, say, a lone crypto millionaire, they might risk being dismissed as a crackpot, and by extension damage the reputation of AGI Safety. Then again, perhaps the amounts of money are just too outrageous to fly in academic circles? Maybe we should be looking to something like sports or entertainment instead? Compare the salary to that of e.g. top footballers or musicians. (Are there people high up in these fields who are concerned about AI x-risk?)
>I suspect that this doesn’t work as an idea, largely because of what motivates mathematicians at that level.
How confident of this are you? How many mathematicians have been offered, say, $10M for a year of work and turned it down?
I spoke with someone at MIRI who I expect knows more about this, and they pointed out that there aren’t good operationalizable math questions in AI safety to attack. The money might point mathematicians to the possibility of interesting questions, but on its own it probably wouldn’t convince them to change focus.
As an intuition pump, imagine von Neumann were alive today; would it be worthwhile to pay him to look into alignment? (He explicitly did contract work at extraordinary rates, IIRC.) I suspect that it would be worth it, despite the uncertainties. If you agree, then it does seem worthwhile to try to figure out who comes closest to being a modern von Neumann and pay them to look into alignment.
That seems mostly right, and I don’t disagree that making the offer is reasonable if there are people who will take it—but it’s still different than paying someone who only does math to do AI alignment, since von Neumann was explicitly someone who bridged fields including several sciences and many areas of math.
I don’t think mathematics should be a crux. As I say below, the offer could be generalised to anyone a panel of top people in AGI Safety would have on their dream team (who would otherwise be unlikely to work on the problem). Or perhaps “Fields Medalists, Nobel Prize winners in Physics, or recipients of equivalent prizes in Computer Science, Philosophy[?], or Economics[?]”. And we could include additional criteria, such as being able to intuit what is being alluded to here. Basically, the idea is to headhunt the very best people for the job, using extreme financial incentives. We don’t need to artificially narrow our search to one domain, but maths ability is a good heuristic as a starting point.