I changed the sentence you mentioned to: “If you want to understand present-day algorithms, the ‘pre-driven car’ model of thinking works a lot better than the ‘self-driving car’ model of thinking. The present and past are the only tools we have to think about the future, so I expect the ‘pre-driven car’ model to make more accurate predictions.” I hope this is clearer.
That is clearer, thanks!
I think aiming for such precise language in these discussions is a hopeless endeavour at this point in time, because I estimate it would take a ludicrous amount of additional intellectual labour to reach that level of rigour. It’s too high a target.
Well, it’s already possible to write code that exhibits some of the failure modes AI pessimists are worried about. If discussions about AI safety switched from trading sentences to trading toy AI programs that operate on gridworlds and the like, I suspect the clarity of the discourse would improve.
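To make that concrete, here is a minimal sketch of the kind of toy program I have in mind. Everything in it is hypothetical and chosen purely for illustration: a one-dimensional gridworld, a misspecified reward function, and a myopic greedy agent that "games" the specification by parking on a proxy-reward cell instead of reaching the intended goal.

```python
# Toy illustration of a specification-gaming failure mode.
# All names and numbers here are made up for the example.

GOAL = 4   # intended objective: reach cell 4 (terminal reward)
PROXY = 2  # designer added a shaping reward at cell 2 as a "hint"

def reward(pos):
    if pos == GOAL:
        return 10  # intended terminal reward
    if pos == PROXY:
        return 1   # proxy reward, meant to guide the agent toward the goal
    return 0

def greedy_agent(pos):
    """Pick the move (-1, 0, or +1) with the best immediate reward."""
    moves = [-1, 0, 1]
    return max(moves, key=lambda m: reward(max(0, min(4, pos + m))))

def run(steps=20):
    pos, total = 1, 0  # start one step away from the proxy cell
    for _ in range(steps):
        m = greedy_agent(pos)
        pos = max(0, min(4, pos + m))
        total += reward(pos)
    return pos, total

final_pos, total = run()
# The myopic agent steps onto the proxy cell and stays there forever,
# collecting shaping reward while never reaching the actual goal.
```

Trading programs like this instead of sentences pins down exactly which reward function, which agent, and which environment a claimed failure mode depends on.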
I might post some scraps of arguments on my blog soonish, but those posts won’t be well written and I don’t expect many people to read them.
Cool, let me know!