Could you say more about what counts as high quality for writing or a work test?
For strong writing I’m thinking of things like: a near-complete lack of typos, incorrect word choices, or writing-related formatting issues. I’m also thinking of whether the writing flows well, i.e. if I read it aloud (or in my head), does it make sense and sound good? In certain cases tone or register might matter too, for example whether the writing is too formal or informal for the required context. In many cases I expect applicants can actually write quite well but underperform, perhaps because they’re stressed, tired, or don’t realize how high the bar will be.

For strong work tests in general: this will depend on the work test, but I’m thinking of things like whether they wrote a sufficient amount of copy for the allotted time, whether they answered all parts of the question, and whether they provided sufficient reasoning where required. There’s also naturally a quality aspect: if a work test asks applicants to investigate conference venues, for example, I’d want to see that they considered the right sorts of trade-offs and explained those trade-offs clearly.

In both these cases I expect it’d be easier if I could point to examples of strong vs. weak work test responses, which I can’t easily do without making them up myself.

For another perspective: personally I feel like the most important aspect of “good ops writing” is something like “making it really easy for the other person to do exactly the thing they need to do and get the info they need, even if they’re just quickly skimming[1]”. I’m thinking of things like:

- Good use of formatting, e.g. bold, bullet points, etc., so that someone skimming at a glance will easily identify the parts relevant to them or where they need to engage further.
- The opposite of this: important facts hidden in the middle of long, plain blocks of text, which people will only notice if they’re reading carefully.
- General clarity, e.g. wording and sentence structure not being confusing.
- For messages: clearly identifying which actions are required vs. optional, or whether the message is just an FYI with no action needed.
- Anticipating the questions the reader will have and providing what they’d want, while balancing this against not making the action-relevant parts too long.

I don’t think this is only important because of readers who are busy or not very engaged. Even for a really engaged reader, it’s useful to be able to identify the most relevant parts at a glance before going deep.

Especially for ops roles, I’ve been surprised by people not actually following the instructions correctly (e.g. renaming the work test document in a specific way, or making sure they’re using the correct view settings).

Another extreme example is candidates simply copying and pasting LLM answers with only light reformatting; oftentimes those answers are far too long and not relevant to the specific task.

As Eli said: make sure you have few typos and consistent formatting, answer the questions correctly, be clear about your uncertainties, and leave comments on how you made a decision or what you’d do if you had more time.