For organizations looking to transform their current IT operations, reviewing performance metrics is a good place to start. The problem is that organizations considering a shift from an internal solution to an outsourcing model don’t always track all service desk activities or establish clear benchmarks. It’s usually the fast-growing companies, the ones forced to keep up with increased support demands without a formal solution in place, that don’t have ready-made answers to standard service desk assessment questions. The good news is that it’s never too late to review how IT support is measured or to consider the implications. In fact, great organizations never stop.
First, what are your current service levels?
With an internal solution, measurable service levels, if they exist at all, are usually delivered on a best-effort basis and carry less accountability than they would if outsourced to a Managed Service Provider (MSP). In effect, there are no contractual consequences for missing performance metrics, benchmarked or not. IT management will usually establish individual Key Performance Indicators (KPIs) and review each agent’s performance relative to the team, which, in theory, collectively contributes to overall service levels. However, if individual agent metrics are stellar but the overall service levels still leave something to be desired, the current solution warrants further investigation.
Though the metrics may vary, the three main components of service levels are availability, resolution, and satisfaction. The most commonly used availability metrics are Average Speed to Answer (ASA) and abandonment rate. In other words, how quickly, on average, do service desk agents answer inbound calls, and what percentage of callers hang up after a designated number of seconds has elapsed without a live answer? ASA can also apply to non-automated written responses to emails, texts, and chat sessions. If the team is not as responsive as end users expect, the most likely cause is that agents are either shorthanded or serving a dual-purpose role that takes them off the phones and away from their desks. Either way, the lack of availability stems from some form of dysfunction in the current solution’s workforce management.
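As an illustration, both availability metrics can be derived from raw call records roughly as follows. This is a minimal sketch: the record fields, sample data, and the 20-second abandonment threshold are all assumptions, and most phone systems report these figures directly.

```python
from statistics import mean

# Hypothetical inbound-call records; field names, sample values, and the
# abandonment threshold are illustrative assumptions.
# wait_sec = seconds the caller waited until a live answer (or a hang-up).
calls = [
    {"wait_sec": 12, "answered": True},
    {"wait_sec": 45, "answered": True},
    {"wait_sec": 50, "answered": False},  # abandoned after a long wait
    {"wait_sec": 8,  "answered": True},
    {"wait_sec": 5,  "answered": False},  # quick hang-up, commonly excluded
]

ABANDON_THRESHOLD_SEC = 20  # ignore hang-ups shorter than this

# Average Speed to Answer: mean wait across answered calls only
asa = mean(c["wait_sec"] for c in calls if c["answered"])

# Abandonment rate: share of all calls abandoned past the threshold
abandoned = [c for c in calls
             if not c["answered"] and c["wait_sec"] >= ABANDON_THRESHOLD_SEC]
abandonment_rate = len(abandoned) / len(calls) * 100

print(f"ASA: {asa:.1f}s, abandonment rate: {abandonment_rate:.0f}%")
```

Excluding quick hang-ups from the abandonment count is a common convention, since callers who disconnect within a few seconds likely dialed in error rather than gave up waiting.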
In an outsourcing model, there are legitimate contractual consequences to missed service levels either in the form of financial credits or, if exercising the nuclear option, contract termination. So the stakes are higher with an MSP specializing in service desk outsourcing, leaving little room for error in how workforce management practices impact SLAs.
What is your first contact resolution rate?
A high first contact resolution (FCR) rate means high productivity: end users get back to work quickly thanks to the swift restoration of functionality, known formally as incident resolution. By the same token, a one-and-done conversation frees agents to handle the next inbound contact and shorten that end user’s downtime as well. An FCR rate below 80% of remotely resolvable incidents should prompt some follow-up questions:
- Are tickets being escalated to different groups for resolution? Sometimes the issue requires an on-site presence or additional access not granted to the service desk, in which case multiple contacts through escalations are unavoidable. In other cases, the Level 1 team could have resolved the issue on first contact but needs additional training or documentation to facilitate the resolution. Depending on the ticketing system being used, the Level 2 or Level 3 teams should have the ability to reroute the ticket back to Level 1 in such instances or, upon resolution, flag the ticket as having been resolvable on first contact. Usually, it’s up to the service desk manager or team lead to review the ticket notes and routing summary and determine whether additional agent training or documentation would have made FCR possible.
- Is additional research for troubleshooting procedures deferring resolution to a later contact? If so, is it documented and added to a searchable knowledgebase to benefit service desk agents faced with recurring issues?
- For that matter, is the knowledgebase available to end users via a self-help portal? Though not a measurable service level per se, the only resolution rate that tops first contact is zero contact, assuming the end user resolves their own issue just as promptly as if they had engaged a service desk agent.
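Putting the 80% benchmark and the escalation caveat together, an FCR calculation over ticket records might look like this sketch. The field names and sample data are hypothetical; tickets flagged as requiring on-site work are excluded from the remotely resolvable base, per the first question above.

```python
# Hypothetical ticket records; field names and sample data are illustrative.
tickets = [
    {"resolved_first_contact": True,  "onsite_required": False},
    {"resolved_first_contact": True,  "onsite_required": False},
    {"resolved_first_contact": False, "onsite_required": True},   # unavoidable escalation
    {"resolved_first_contact": False, "onsite_required": False},  # Level 1 miss
    {"resolved_first_contact": True,  "onsite_required": False},
]

# Base the rate on remotely resolvable incidents only
resolvable = [t for t in tickets if not t["onsite_required"]]
fcr_rate = 100 * sum(t["resolved_first_contact"] for t in resolvable) / len(resolvable)

below_benchmark = fcr_rate < 80  # triggers the follow-up questions above
print(f"FCR: {fcr_rate:.0f}% of remotely resolvable incidents")
```

With this sample data the rate lands at 75%, below the benchmark, which is exactly the situation in which the questions above are worth asking.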
What is your customer satisfaction rating?
All prior performance metrics lead to this one. If agent availability and resolution rates are high, satisfaction scores should follow suit. The most common cause of a discrepancy between the former and the latter is technically proficient agents who lack the soft skills necessary to ensure positive end-user feedback. The calls themselves, which should always be recorded, are good fodder for service quality review by a supervisor or QA specialist.
Understandably, with an internal solution in which agents sit side-by-side with the colleagues they support, there is more often a team culture that, though commendable from a morale standpoint, may discourage candid feedback where performance is lacking. This is not to say a white-labeled service desk solution provided by a third party lacks an element of “esprit de corps” that gains momentum with frequent interaction, but end users who view the agents as a separate, remotely staffed entity are less prone to pull punches when evaluating the service. A good satisfaction benchmark is anything over 95% among survey participants, with participation encouraged in one of two ways: gift cards for randomly selected respondents, or automatically generating a survey for 100% of end users at the closure of their support tickets. Should negative feedback be received, a service desk industry best practice is for the supervisor to review the call along with the completed survey and provide the agent with recommendations for improvement.
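For concreteness, a satisfaction percentage could be tallied against the 95% benchmark like this. It is a sketch only: the 1-to-5 scale and the convention of counting 4s and 5s as “satisfied” are assumptions, since survey formats vary widely.

```python
# Hypothetical 1-5 survey scores; counting 4s and 5s as "satisfied" is an
# assumed convention, since survey formats vary.
responses = [5, 4, 5, 3, 5, 4, 5, 5, 2, 5]

satisfied = sum(1 for score in responses if score >= 4)
csat = 100 * satisfied / len(responses)  # percent satisfied among respondents

meets_benchmark = csat > 95  # the 95% benchmark noted above
print(f"CSAT: {csat:.0f}%, meets benchmark: {meets_benchmark}")
```

The low-scoring responses here are precisely the ones a supervisor would pull, alongside the call recordings, for the review-and-coach loop described above.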
Transforming IT support is easier said than done. There is no silver bullet. It takes a concerted, process-oriented discipline to implement and maintain the more granular aspects of a continual service improvement strategy, focusing on root causes of escalated incidents and long wait times in the current model. Defining valuable performance metrics is a good place to start and not a bad place to revisit as the service desk continues to look for meaning behind the numbers. As the instructions say, you’ve got to “repeat for desired results.”