I absolutely agree with all the other points. This isn’t an exact quote, but from his talk with Tyler Cowen, Nick Beckstead notes:
“People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later… the philosophical side of this seems like ineffective posturing.
“Tyler wouldn’t necessarily recommend that these people switch to other areas of focus, because people’s motivations and personal interests are major constraints on getting anywhere. For Tyler, his own interest in these issues is a form of consumption, though one he values highly.”
https://drive.google.com/file/d/1O—V1REGe1-PNTpJXl3GHsUu_eGvdAKn/view
That’s a bit harsh, but this was in 2014. Hopefully Tyler would agree that efforts have gotten somewhat more serious since then. I think the median EA/XR person would agree that there is probably a need for the movement to get more hands-on and practical.
Re: safety for something that hasn’t been invented: I’m not an expert here, but my understanding is that some of it might be path-dependent. That is, research agendas hope to result in particular kinds of AI, and safety isn’t necessarily a feature you can just add on later. But it doesn’t sound like there’s a deep disagreement here, and in any case I’m not the best person to try to argue this case.
Intuitively, one analogy might be: we’re building a rocket, humanity is already on it, and the AI Safety people are saying “let’s add life support before the rocket takes off”. The exacerbating factor is that once the rocket is built, it might take off immediately, and no one is quite sure when this will happen.
To your Beckstead paraphrase, I’ll add Tyler’s recent exchange with Joseph Walker:
Cowen: Uncertainty should not paralyse you: try to do your best, pursue maximum expected value, just avoid the moral nervousness, be a little Straussian about it. Like here’s a rule on average it’s a good rule we’re all gonna follow it. Bravo move on to the next thing. Be a builder.
Walker: So… Get on with it?
Cowen: Yes, ultimately the nervous Nellies, they’re not philosophically sophisticated, they’re over-indulging their own neuroticism, when you get right down to it. So it’s not like there’s some brute let’s-be-a-builder view and then there’s some deeper wisdom that the real philosophers pursue. It’s: you be a builder or a nervous Nelly, you take your pick. I say be a builder.
Thanks for clarifying; the delta thing is a good point. I’m not aware of anyone really trying to estimate “what are the odds that MIRI prevents XR”, though there is one SSC post sort of on the topic: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/