Hiya! I’m Bryce, and I’ve been an EA since 2016. My background is mostly in software engineering, and I have a master’s degree in computer science with an emphasis in AI.
My biggest passion is helping cool, ambitious people level up and solve important problems. If you are reading this, you probably fall into that category!
I have strong intuitions that there is valuable low-hanging fruit in the “talent development for EAs” space. Toward that end, I recently left my job to spend some time investigating 1) whether my intuitions are correct, and 2) whether I am personally a good fit to pursue this. I am currently focusing mostly on self-development and coaching skills.
If you are interested in coaching or pair debugging, you can sign up for a conversation at calendly.com/bryce-woodworth. If you have any feedback for me, I’d love to hear it!
(emphasis mine)
This feels wildly counterintuitive to me, unless “few differences” is much weaker than I’m expecting or “autonomous mind” is a far narrower concept than it looks. On LessWrong, the author elaborates further in the comments, which I understand as “some autonomous processes, like face recognition, seem to be mostly the same between people”.
Maybe it’s true that most people have nearly identical performance in those domains. But to me it looks like almost all of the differences between people lie in the autonomous mind. The vast majority of actions I take throughout the day are autonomous. When I observe skill differences between myself and someone else, most of the variance seems to come from differences in our intuitions and pattern-matching, rather than from our mindware or algorithmic thinking.
I can’t even imagine a worldview that says otherwise, so I’d be curious to hear from anyone who genuinely agrees with the “few individual differences in autonomous reasoning” model. If it turned out to be correct, I would restructure a lot of how I’m trying to become more generally competent.