I gave some feedback on this earlier, but more recently spent more time with it. I sent these comments to Nuno, and thought they could also be interesting to people here.
I think it’s pretty strong and important (as in, an important topic).
The first half in particular seems pretty dense. I could imagine some rewriting making it more understandable.
Many of the key points seem more encompassing than just AI: “selection effects”, “being in the Bay Area” / “community epistemic problems”. I’d prefer these be presented as separate posts and then linked to from here (and elsewhere), but I get that isn’t really feasible.
I think some of the main ideas in the point above aren’t named too well. If it were me, I’d probably use the word “convenience” a lot, but I realize that’s niche now.
I would really like more work figuring out what we should expect of AI in the next 20 years or so. Your post felt more like “a lot of this extremist thinking seems fishy” than “here’s a model of what will happen and why”. That’s fine for this post, but I’m interested in the latter.
I think I mentioned this earlier, but CFAR was pretty useful to me and a bunch of others. There was definitely a faction that wanted them to be much more aggressive on AI and didn’t really see the point of donating to them otherwise. My take is that the team was fairly amateur at a lot of key organizational/management things, so they did some sloppy work and strategy. That said, there was much less money around then, and not a lot of great talent for such things. I think rationalists overvalued them at the time, but relative to how EAs tend to regard them now, I’d consider them undervalued.
The diagrams could be improved. At a minimum, bold/highlight the words “for” and “against”. I’m also not sure the different block sizes are really important.