Hi!
Thanks for all the comments! Sorry for taking so long.
1) Yes, scaling inputs isn’t the same thing as scaling capabilities. That’s why the empirical success of the scaling hypothesis is/was so surprising.
2) Thanks for the point about self-driving cars; we removed that claim from the on-site article. I’m going to investigate this some more before reinstating the self-driving claim on site. We’ll update the article on the EA forum to match the current on-site version. (Though we won’t keep doing that.)
A friend independently made a related point to me: he noted that an ML engineer working on self-driving claimed the reason self-driving taxis are limited to particular cities is that the AI relies on 3D mesh maps for navigation. I’m somewhat confused about why this would be a big limitation, as I thought things like NeRFs made it feasible to quickly generate a high-res voxel map of static geometry, and that most of the difficulty in self-driving lay in dealing with edge cases, e.g. a baby deer jumping into the road, rather than in being aware of the static shapes around you. But that’s my confusion, and I’d rather not stick my neck out on a claim I’m not an expert on, and which experts do contest.
3) Mostly, I was basing these claims on Waymo: their stats, plus anecdotal reports from folks in SF about how well the cars perform. When I looked into Waymo’s statistics for average miles without human interruption, at first blush they looked quite impressive (17k miles!). But the one niggling worry I had was that they said human feedback wasn’t always recorded as an interruption.
Thanks for the feedback! So, these articles are intended to serve as handy links to share with people confused about some point of AI safety. (Which ties into our mission: spreading correct models of AI safety, which seems robustly good.) Plausibly, people on the EA forum encounter others like this, or fall into that category themselves. It’s a tricky topic, after all, and lots of people on the forum are new. Your comment suggests we failed to position ourselves correctly, and also that these articles might not be a great fit for the EA forum. That’s useful, because we’re still figuring out what content would be a good fit here, and how to frame it.
Does that answer your question?
Yeah, that’s a good point. TY! It would be wrong to claim that Metaculus predictions are somehow “expert predictions”. We’ll change the article to make it clearer that we’re not claiming that.
That said, we don’t use the term “prediction market”. And AFAICT, Metaculus has a favourable track record compared to actual prediction markets, and probably to other mechanisms for aggregating info, too. So I think there’s value in referencing them.
So perhaps the following, maybe as a footnote?
What do you think?