Plot the dots, create a run chart, learn what works – and what doesn’t!
If you’re used to doing audit with two data points, measurement for improvement can seem like a bit of a challenge. But if I told you that you just need a piece of paper, a pen, the ability to count, and a few minutes each day, then you’d probably think it was well within your reach.
Audit data offers a static snapshot. You will know whether or not you have met your established standard, but you won’t actually know whether that was down to chance, the normal daily or weekly differences that occur in any process, or because something you did made a difference.
When our trainees start working on an improvement project, they rarely know anything about the normal daily or weekly differences in their service, and so they don’t actually know the best and the worst experience that their patients receive. And it does vary! They might know the mean or average performance, but this is made up of the sum of all the great care PLUS all the suboptimal care. If the team has excelled one week, or one member has been a super performer, this will be enough to make the average seem respectable. But at the same time, this will hide from view dreadful experiences that could have happened too! And this raises the question ‘How would you know if this was happening in your unit?’
We use our learning game Variation in practice: An in-flight experience™ to help learners appreciate the variation that exists in all processes. Once they know the ‘normal variation’ in their service, they are then able to say with some confidence whether the changes they make result in an improvement!
When a trainee consultant midwife moved to a new maternity unit, it seemed to her that a high number of women who came into the birthing suite expecting a ‘home birth’ experience were, in fact, transferred to the main maternity unit to give birth. And this intrigued and bothered her.
By looking at past records, the trainee consultant midwife was quickly able to measure the numbers, plot the data on a graph, and create what we call a run chart. This showed that the number of transfers varied over the weeks, with some weeks being much lower than others. But she was surprised to find that the differences did not appear to coincide with workload. The average proportion of women managing to give birth in the birthing suite was in fact just 57%, but more importantly, she could see that it ranged week by week from 51% to 81%. And it was this simple graph that provided a baseline measure which she could use to answer the question ‘Will the changes I make result in an improvement?’.
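For readers who capture their weekly measure in a spreadsheet or script, the baseline step above can be sketched in a few lines of Python. The weekly figures here are purely illustrative (they are not the midwife’s real data), but the idea is the same: collect the weekly percentages, then summarise the average, the range, and the median, which serves as the centre line of a run chart.

```python
import statistics

# Hypothetical weekly percentages of women giving birth in the
# birthing suite (illustrative numbers only, not real service data).
weekly_pct = [55, 60, 52, 58, 51, 62, 57, 81, 54, 56]

# Baseline summary: the average hides the week-to-week variation,
# so report the range alongside it.
baseline_mean = statistics.mean(weekly_pct)
low, high = min(weekly_pct), max(weekly_pct)

# The median is the usual centre line on a run chart.
baseline_median = statistics.median(weekly_pct)

print(f"Baseline mean: {baseline_mean:.1f}% (range {low}-{high}%)")
print(f"Run chart centre line (median): {baseline_median}%")
```

Plotting these points week by week, with the median drawn as a horizontal centre line, gives the baseline run chart described above.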
As a result of her observations and diagnostic work, the trainee consultant midwife generated a list of potential interventions (change ideas). She ran small-scale tests of change (PDSAs) for each of them over a number of weeks, whilst continuing to capture her weekly improvement measure.
This meant the trainee consultant midwife was able to annotate her graph (or run chart) to show when she had made changes.
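The annotation step can be sketched in the same way. In this minimal example the week numbers, the change label, and the before/after figures are all hypothetical; the point is simply that marking when each PDSA started lets you compare the measure before and after the change.

```python
import statistics

# Hypothetical weekly measure: ten baseline weeks, then weeks after a
# change idea was tested (all numbers illustrative, not real data).
baseline = [55, 60, 52, 58, 51, 62, 57, 81, 54, 56]
after_change = [60, 63, 66, 62, 65, 68, 64, 67]

# Annotations mark the week each test of change began; the label
# here is an invented example, not an actual intervention.
annotations = {11: "PDSA 1: change idea tested (hypothetical label)"}

print(f"Median before change: {statistics.median(baseline)}%")
print(f"Median after change:  {statistics.median(after_change)}%")
for week, label in annotations.items():
    print(f"Week {week}: {label}")
```

On the run chart itself, each annotation becomes a labelled vertical line, so anyone reading the graph can see at a glance which shifts in the data line up with which change.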
“I was able to show that the average proportion of women staying in the birthing suite to give birth increased to 64%. At the same time, using balancing measures, I was able to reassure people that there was no extra risk to mothers and their babies, e.g. admission to the neonatal suite after birth”.
Our trainee appreciated the coaching support provided by the Quality Improvement Clinic and the guidance she received to help her get a good baseline measure.