Kudos for pursuing this and for not getting so attached to it that you were unwilling to give it up when the evidence showed that made sense.
Here’s a question: do you have thoughts on how this could have failed faster? If you were to go about this again, what would you have done differently in order to spend even fewer resources on it?
Good question, Ozzie. At the start of 2018 we mostly focussed on getting into schools and on the surveys (metrics 1 and 2 above), because these were our first hurdles and we were very uncertain about how they would go until we started the project.
However, that meant we didn’t optimise our workshops for engaging students long-term (metric 3) until several months after starting the project. As a result, we weren’t confident making decisions based on the first indications that we were not meeting metric 3, and ran the project for several more months. If we had planned our long-term engagement strategy at the start of 2018 and set success criteria earlier, we could have learnt what we needed to in less time.
Reiterating my other comments: I don’t think it’s appropriate to say that the evidence showed it made sense to give up. As others have mentioned, there are measurement issues here, so this is a case where absence of evidence is not strong evidence of absence.