(A guest post from Andrew Brice. Data analytics and visualisation are Andrew's passion, turning data into insights. Enjoy some fascinating insights here: http://www.zyan.co.nz/)
The calls were stacking up. They weren’t meant to be; this was just a normal call centre day. Call agents were racing, management hearts were racing, but no one was quite sure where the finish line was. Or why there was even a race.
Afterward, the vast swathes of data collected by the call system could be analysed, and it became clear what had occurred and why. The needle in the haystack was found.
The thing is, it was all preventable. The needle could have
been tracked if only they had been analysing their own data, ideally with an
ecosystem perspective. If only.
OK, so how
would that have worked?
Example dashboard for call flow monitoring
Two parts of the organisation already had the information to know this was not going to be a normal call centre day. It was an end-of-year filing date for a subset of clients (yes, it’s a Government agency). Not a familiar filing date, but a statutory one for these clients. The business knew the date, and so did the call centre. Every year, on that same date, a spike occurred. But the call centre didn’t analyse call patterns at a deep enough granularity, so they hadn’t picked this up. They were busy with “real” call volumes… The data was collected but never sufficiently deployed through to analytics. Of course, you’d kind of hope the business might have reminded the call centre. Sigh.
As the calls arrived, the call notes were not being augmented with robust, pre-determined keywords, which would have enabled rapid and clear analysis of the entered text on a near-real-time basis. The same effect could have been achieved by simply asking call agents what all the calls were about, but (to be fair) the call agents were rather busy.
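As a rough illustration (not what the agency actually ran), a pre-determined keyword list and a few lines of Python are enough to turn free-text call notes into analysable tags. The keywords and the sample note below are invented for the example:

```python
# Minimal sketch: tag free-text call notes against a pre-determined keyword list.
# The trigger phrases, tag names, and sample note are illustrative, not the agency's real vocabulary.
APPROVED_KEYWORDS = {
    "password": "login-password",
    "log in": "login-access",
    "login": "login-access",
    "director": "caller-director",
    "filing": "annual-filing",
    "won't work": "system-outage",
}

def tag_call_note(note: str) -> list[str]:
    """Return the approved tags whose trigger phrases appear in the note."""
    note_lower = note.lower()
    return sorted({tag for phrase, tag in APPROVED_KEYWORDS.items() if phrase in note_lower})

if __name__ == "__main__":
    note = "Caller is a director, can't remember the password to log in for annual filing."
    print(tag_call_note(note))
    # ['annual-filing', 'caller-director', 'login-access', 'login-password']
```

Tagged this way, the notes can be counted and charted within minutes of the calls landing, which is exactly the near-real-time view that was missing.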
That keyword data might have shown that the callers were all identifying as “Directors” and that most were asking (in quite elongated conversations) how to log in to one of the organisation’s systems. Which, just about then, went splat. That’s a technical IT term meaning it stopped working for no particular reason. Now the keyword flow changed from password issues to “the system won’t work” issues.
Luckily, the super-heroes in IT noticed a red flashing light (I jest) and rebooted the server. The system returned. However, for the rest of the morning it kept splatting, being rebooted, … You get the idea.
The data that IT had could have painted a pretty clear picture that this was going to happen. It’s just that they didn’t care too much about that one little server; they had lots of other, more important ones. But their data (for the naughty server) showed usage growing at around 4% a year over the last few years, and that last year’s peak (which, oddly, only seemed to occur once a year) hit 88% of capacity. Do the maths: the poor server was about to be overrun. But each month, IT drew beautiful charts showing all the organisation’s servers were paragons of good health. Sometimes, they even shared these charts with the business (not the call centre though).
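For what it’s worth, a back-of-the-envelope projection is all it would have taken. A minimal sketch using the 88% peak and 4% growth figures from the story; the 90% warning threshold is my own assumption:

```python
# Sketch: project annual peak utilisation forward and flag when it breaches a warning threshold.
# The 88% peak and 4% growth come from the story; the 90% threshold is an assumed policy.
last_peak = 0.88       # last year's observed peak utilisation
annual_growth = 0.04   # observed year-on-year growth in usage
warning_threshold = 0.90

peak = last_peak
for years_ahead in range(1, 6):
    peak *= 1 + annual_growth
    status = "OVER THRESHOLD" if peak >= warning_threshold else "ok"
    print(f"Year +{years_ahead}: projected peak {peak:.0%} ({status})")
# Year +1 already projects to roughly 92%: time to add capacity before the next filing date.
```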
The final piece of the jigsaw was understanding why logging in was proving such a problem for this subset of clients and this particular service. Connecting call data to client CRM data quickly showed that this was almost the only interaction these clients had with the organisation each year. The call records then showed that “remembering their password” and “knowing what data to enter” were the key tags describing the clients’ issues. These are eminently common issues, and ones that can readily be addressed with better user interface design, education of agents and users, reminder communications, and perhaps adopting external user-validation processes. Of course, the next crisis arrived, so these sorts of improvements never actually happened.
So, what lessons do we learn from this? It might be useful to use a simple graphic to show one way of thinking about how to get good data that enables quality analytics.
Data has different owners who, traditionally,
impose their own standards, definitions, segmentation, and so on. Adopting
co-ordinated governance practices (just like for finance or risk) is a
massive enabler of good data.
Data is moving
from controlled source systems to all-over-the-place. And there’s so much more
of it than there ever used to be. Active and consistent curation is
needed to enforce (in a friendly way) an organisation’s data governance model.
Data now lives in many places. Collecting that data together in coherent and consistent ways matters. It’s also quite difficult. This isn’t about data marts; it’s about descriptors and licences and metadata and data packaging. It’s about building confidence in the quality of the data being collected.
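In practice, a “descriptor” can be as small as a machine-readable summary that travels with each dataset. A minimal, purely illustrative sketch (the field names and values are invented, not a reference to any particular packaging standard):

```python
# Sketch of a minimal dataset descriptor that travels with the data it describes.
# Field names and values here are illustrative only.
call_log_descriptor = {
    "name": "call-centre-daily-log",
    "owner": "call-centre-operations",
    "licence": "internal-use-only",
    "refresh": "daily",
    "fields": [
        {"name": "call_id",    "type": "string",       "description": "Unique call identifier"},
        {"name": "client_id",  "type": "string",       "description": "CRM client identifier"},
        {"name": "started_at", "type": "datetime",     "description": "Call start time"},
        {"name": "tags",       "type": "list[string]", "description": "Approved keyword tags"},
    ],
}
```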
Data never seems to arrive in perfect shape. It always needs consolidating or averaging or building into new fields, or any one of a myriad of ways that data gets augmented. It might need to be assembled with other data (IT data and call centre data, for example) or shaped a certain way for a particular visual. But there are pretty consistent augmentation requirements, and there are pretty consistent ways of standardising and automating such augmentation. Done right, we can join a call with a CRM entry and associate them with a system and then with a server. Now we can see much of the ecosystem. And we can keep rerunning the analysis as we make improvements to see if those improvements really are working.
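Here is a minimal sketch of that joining-up using pandas. The table layouts, column names, and values (call_id, client_id, system_id, server_id, and so on) are invented for illustration:

```python
import pandas as pd

# Sketch: join a call to a CRM entry, then to the system it concerns, then to the
# server behind that system. All column names and values are invented for illustration.
calls   = pd.DataFrame({"call_id": [1, 2], "client_id": ["C10", "C11"], "system_id": ["filing-portal"] * 2})
crm     = pd.DataFrame({"client_id": ["C10", "C11"], "interactions_per_year": [1, 1]})
systems = pd.DataFrame({"system_id": ["filing-portal"], "server_id": ["srv-042"]})
servers = pd.DataFrame({"server_id": ["srv-042"], "peak_utilisation": [0.88]})

ecosystem = (
    calls
    .merge(crm, on="client_id", how="left")       # call -> client (CRM)
    .merge(systems, on="system_id", how="left")   # call -> system
    .merge(servers, on="server_id", how="left")   # system -> server
)
print(ecosystem)
# Rerun the same joins after each improvement and compare: did the picture actually change?
```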
We also have an endless need to categorise data, to tag data. This is a phone call, it’s a happy person on the line, it’s about this system, it’s from that person. These are all ways of adding tags to data. Tags are incredibly useful because that’s how we can start joining things up, clumping them together, showing how processes flow, visualising pathways to outcomes. Tags are exciting. But if we all invent our own tags, then value is lost. Governance and curation keep tags moderated.
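Keeping tags moderated can be as lightweight as checking every proposed tag against one approved vocabulary. A minimal sketch, with an invented vocabulary:

```python
# Sketch: one shared, governed tag vocabulary instead of everyone inventing their own.
# The vocabulary below is illustrative only.
APPROVED_TAGS = {"login-password", "login-access", "annual-filing", "system-outage", "caller-director"}

def validate_tags(proposed: list[str]) -> tuple[list[str], list[str]]:
    """Split proposed tags into accepted (in the vocabulary) and rejected (for review)."""
    accepted = [t for t in proposed if t in APPROVED_TAGS]
    rejected = [t for t in proposed if t not in APPROVED_TAGS]
    return accepted, rejected

accepted, rejected = validate_tags(["login-password", "pwd-problem"])
print(accepted)  # ['login-password']
print(rejected)  # ['pwd-problem'] -> route to the curation team, don't silently drop it
```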
And, finally, we get to the fun bit: actually deploying the data in ways that enable rapid, effective, and (hopefully) elegant visuals that communicate the story inherent in the combined data, and which enable audiences to quickly understand and react to complex ecosystems. It might also mean deploying into AI systems that read call agents’ notes in real time and then tag calls automagically.
Deployment is
the real value proposition. But only if you do something about what you’ve
learned.
(Andrew Brice works with New Zealand government agencies on the visualisation of business ecosystems using complex, multi-faceted, data.)