So I actually draw an important distinction within “mid-level EAs”, where there are three stages:
“The beginning of the Middle” – once you’ve read all the basics of EA, the thing you should do is… read more things about EA. There’s a lot to read. Stand on the shoulders of giants.
“The Middle of the Middle” – ????
“The End of the Middle” – Figure out what to do, and start doing it (where “it” is probably some kind of ambitious project).
An important facet of the Middle of the Middle is that people don’t yet have the agency or context needed to figure out what’s actually worth doing, and a lot of the obvious choices are wrong.
(In particular, mid-level EAs have enough context to notice coordination failures, but not enough context to see why those failures are happening, nor the skills to do a good job of fixing them. A common failure mode is trying to solve coordination problems with a skillset that would probably produce a net-negative outcome.)
So yes, eventually, mid-level EAs should just figure out what to do and do it, but at EA’s current scale, there are hundreds (maybe thousands) of people who don’t yet have the right meta-skills to do that.
An important facet of the Middle of the Middle is that people don’t yet have the agency or context needed to figure out what’s actually worth doing, and a lot of the obvious choices are wrong.
Ah. This seems to me like two different problems:
Some people lack, as you say, agency. This is what I was talking about—they’re looking for someone to manage them.
Other people are happy to do things on their own, but they don’t have the necessary skills and experience, so they will end up doing something that’s useless in the best case and actively harmful in the worst. This is a problem I missed before but now acknowledge.
Normally I would encourage practicing doing (or, ideally, you know, doing) rather than practicing thinking, but when doing carries the risk of harm, thinking starts to seem like a sensible option. Fair enough.
I think the actions that EA actually needs to be involved with doing also require figuring things out and building a deep model of the world.
Meanwhile… “sufficiently advanced thinking looks like doing”, or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon.
I will note that the default mode where rationalists or EAs sit around talking and not doing is a problem, but often that mode, in my opinion, doesn’t actually rise to the level of “thinking for real.” Thinking for real is real work.
Hmm, it’s not so much the classic rationalist trait of overthinking that I’m concerned about. It’s more like…
First, when you do X, the brain has a pesky tendency to learn exactly X. If you set out to practice thinking, the brain improves at the activity of “practicing thinking”. If you set out to achieve something that will require serious thinking, you improve at serious thinking in the process. Trying to try and all that. So yes, practicing thinking, but you can’t let your brain know that that’s what you’re trying to achieve.
Second, “thinking for real” sure is work, but the next question is: is this work worth doing? When you start with some tangible end goal and make plans by working backwards to where you are now, that tells you what thinking work needs to be done, decreasing the chance that you’ll waste time producing research that looks nice and impressive but in the end doesn’t help anyone improve the world.
I guess if you come up with technology that allows people to plug into the world-saving-machine at the level of “doing research-assistant-kind-of-work for other people who know what they’re doing” and gradually work their way up to “being one of the people who know what they’re doing”, that would make this work.
You wouldn’t be “practicing thinking”; you could easily convince your brain that you’re actually trying to achieve something in the real world, because you could clearly follow some chain of sub-sub-agendas to sub-agendas to agendas and see that what you’re working on is for real.
And, by the same token, you’d be working on something that (someone believes) needs to be done. And maybe sometimes you’d realize that, no, actually, this whole line of reasoning can be cut out or de-prioritized, here’s why, etc.—and that’s how you’d gradually grow to be one of the people who know what they’re doing.
So, yeah, proceed on that, I guess.