The specifics of coaching leaders and the trial:

I’m struck by the effects reported after just ~4 sessions (~7 hours). I can’t help but question whether these effects will last for more than a month after the coaching ends. When did participants fill in the survey relative to the coaching? How long do you predict the effects will last?

What do you think the ideal coaching frequency is for people in this reference class, e.g., every week, every other week, or once per month? (Assume an unlimited supply of high-quality coaches.)

One of the main areas for improvement (from my perspective) would be having the evaluation conducted by a third party, and I’ll probably see if I can find someone for this if/when I run a trial myself. Do you have any thoughts or reactions to that?
I’m struck by the effects reported after just ~4 sessions (~7 hours). I can’t help but question whether these effects will last for more than a month after the coaching ends. When did participants fill in the survey relative to the coaching? How long do you predict the effects will last?
Good questions – the set of claims I’m more comfortable standing behind is that the coaching seems to be quite valuable and important during the period the coachee is engaged, rather than predictions about what the downstream effects will be after a pre-packaged trial period. A follow-up on the stickiness and potency of those downstream effects would be interesting, though. I’m taking this suggestion pretty seriously.
That’s particularly true if the trial’s pre-packaged time period is constructed for reasons that aren’t all aimed at maximizing effects (e.g., if I had unlimited resources to run this trial purely to produce effects, the duration and frequency might have been different).
Nearly all participants filled in the survey after the 4th session. The turnaround time on completed surveys ranged from 2 days to 2–3 weeks, depending on the person’s responsiveness. A more rigorous trial would probably be stricter about when final feedback surveys are issued and completed; I didn’t feel I was in a position to draw hard lines on when these leaders submitted theirs.
What do you think the ideal coaching frequency is for people in this reference class, e.g., every week, every other week, or once per month? (Assume an unlimited supply of high-quality coaches.)
Short answer: fortnightly (once every two weeks) seems to be the sweet spot for fairly busy leaders in complex roles. But the frequency we end up going with is unique to the individual and varies according to a constellation of factors – a non-exhaustive list includes their goals and the subject matter; how inclined they are to test out new actions and outlooks, and the time horizons on those feedback loops; how inclined they are to pause and reflect (i.e., have they taken the time to think through what they felt was important to think through); their mental space and general availability; and their personal financial situation. I’m sure their pre-existing models of what they need to work on, and how long it will take to bring those things to a good place, also play an important role.
One of the main areas for improvement (from my perspective) would be having the evaluation conducted by a third party, and I’ll probably see if I can find someone for this if/when I run a trial myself. Do you have any thoughts or reactions to that?
Good point – I flirted with this idea and I’m still quite interested in doing it. My primary hesitation, right off the bat, is whether there would be enough epistemic alignment on the ‘metrics’ chosen, and furthermore on what the implications of certain metrics are. (For example, if someone over-engineered the quantitative metrics and anchored too hard on their importance, the results could be pretty damaging to how people view your practice, in a way that doesn’t seem justified to me.)
Anticipating the epistemic idiosyncrasies of a wide variety of readers, I chose a range of metrics likely to resonate in different ways with different people. I was shooting for a collage of valuations that cuts across different paradigms.
Following from that, I think it would actually be cool to have sections of a single unified evaluation designed by different people, each measuring along a different paradigm.