Wait a mo, H-1Bs are uncapped for non-profits? Has anyone ever gotten one for an EA org/AI org? This is super intriguing to me!
- Hawk.Yang 🔸 · 4 Jan 2021 0:06 UTC · 3 points (0 ∶ 0) · in reply to: JackM’s comment on: How much more important is work in USA over UK?
- Hawk.Yang 🔸 · 2 Dec 2020 23:43 UTC · 1 point (1 ∶ 0) · in reply to: richard_ngo’s comment on: richard_ngo’s Shortform
Honestly, I’m not sure I would agree with this. As Chollet has argued, this is fundamentally different from simply scaling the number of parameters (via pre-training) that much of the previous scaling discourse centered around. To take this inference-time scaling approach, which requires a qualitatively different CoT/search-tree strategy appended to an LLM alongside an evaluator model, and call it "scaling" is a bit of a rhetorical sleight of hand.
While this is no doubt a big deal and a concrete step toward AGI, there are enough remaining architectural issues around planning, multi-step tasks/projects, and actual permanent memory (not just RAG) that I’m not updating as much as most people are on this. I would also like to see whether this approach works on tasks without clear, verifiable feedback mechanisms (unlike software engineering/programming or math). My timelines remain in the 2030s.