SMART Objectives: Service Level Agreements and Memoranda of Understanding

For many service desk outsourcing organizations, maintaining client satisfaction is all done by the numbers. Establishing SMART (Specific, Measurable, Achievable, Relevant, and Time-Bound) objectives and leveraging the people, processes, and technology to deliver on those objectives is the perpetual operational mantra of the service desk team. Before those objectives are put in place, it’s important to thoroughly document the scope of services and determine an achievable level of commitment. An important distinction is whether Service Level Agreements (SLAs) or Memoranda of Understanding (MOU) are appropriate for the client.

While the target metrics can be identical, the difference between SLAs and the less commonly used MOU pertains to contractual accountability. With SLAs, performance penalties can be assessed, or the “nuclear option” of canceling the service entirely can be exercised, if one or more service levels are missed over consecutive months without successful remediation. MOU, by contrast, are mutually understood to be delivered on a best-effort basis. For both, the very real consequences of coming up short are a dissatisfied client and potentially lost business.

In a shared staffing model, all contacts are handled in the order received, so smaller MOU clients are given no lesser priority than their enterprise-scale SLA counterparts. As a result, target metrics tend to be consistent across the board for all clients regardless of contractual obligation. For clients with a low forecasted incident volume, however, the statistical normalization that occurs over a larger population may not be realized, resulting in a percentage measurement that varies noticeably from the stated goal. For example, if a client has a 95% customer satisfaction goal and submits only 10 surveys in a given month, recovering from a single negative review for the remainder of that month becomes statistically impossible and, more importantly, is not a realistic standard. On the other hand, a client that submits 100 satisfaction surveys a month leaves room for the service desk to recover from up to 5 negative reviews. While the overall performance may be comparable for either client, the math doesn’t always add up the same.
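To make that arithmetic concrete, here is a minimal sketch (in Python) of the best satisfaction score still achievable once negative reviews land. The survey counts are the illustrative figures from the example above, not actual client data.

```python
def best_achievable_csat(surveys_received, negative_reviews):
    """Highest satisfaction percentage still possible after a number of negative reviews."""
    positive = surveys_received - negative_reviews
    return 100.0 * positive / surveys_received

# One bad review out of 10 surveys caps the month at 90% -- the 95% goal is already out of reach.
print(best_achievable_csat(10, 1))    # 90.0
# The same single miss across 100 surveys leaves the goal intact, with room for four more misses.
print(best_achievable_csat(100, 1))   # 99.0
print(best_achievable_csat(100, 5))   # 95.0
```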

“We staff according to actual volume for all shared staff so service quality and prioritization is not diminished for smaller accounts,” says ABS Service Desk Manager Tyler Dameron. “But if the sample size of statistical data isn’t sufficient, performance measurements cannot be accurately gauged, which is the logic behind establishing those percentages in the first place.”

On the other hand, high monthly ticket volume does not make guaranteed SLAs a lock. Even for clients who generate thousands of support requests per month, if the staffing model is purely dedicated, MOU typically still apply. The reason: assuming the account is not grossly overstaffed and consequently underutilized, a finite number of agents may be hard-pressed to meet monthly target metrics during inordinately busy months. Availability metrics such as Average Speed to Answer (ASA) and abandon rate can become a more pronounced challenge, so unless there is a hybrid element to the solution that engages a shared team of agents during peak periods, guaranteeing those service levels without some reasonable caveats may be a fool’s errand.
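One way to see why a fixed headcount struggles in peak months is a standard Erlang C calculation, a queueing model the article does not reference explicitly. The sketch below (Python) uses assumed, illustrative call volumes, handle time, and team size rather than any actual client figures.

```python
from math import factorial

def erlang_c_asa(calls_per_hour, aht_seconds, agents):
    """Average Speed to Answer (seconds) for a fixed agent pool, via the Erlang C model."""
    offered = calls_per_hour * aht_seconds / 3600.0           # traffic load in Erlangs
    if offered >= agents:
        return float("inf")                                    # demand exceeds capacity; the queue never clears
    top = offered ** agents / factorial(agents)
    below = sum(offered ** k / factorial(k) for k in range(agents))
    probability_of_waiting = top / (top + (1 - offered / agents) * below)
    return probability_of_waiting * aht_seconds / (agents - offered)

# Assumed figures: a dedicated team of 9 agents, 6-minute average handle time.
print(round(erlang_c_asa(calls_per_hour=55, aht_seconds=360, agents=9), 1))   # ~13 s in a typical month
print(round(erlang_c_asa(calls_per_hour=75, aht_seconds=360, agents=9), 1))   # ~122 s when volume spikes, same 9 agents
```

With no shared overflow team, the same nine agents go from answering in seconds to leaving callers waiting two minutes, which is exactly the kind of month that makes hard ASA guarantees risky for a purely dedicated model.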

In terms of setting achievable support expectations, service desks prefer to under-promise and over-deliver rather than set aggressive numbers and either fall short or drive up costs exponentially in order to maintain them. Abandon rates are a perfect example. Though in practice the rate of calls abandoned after 90 seconds tends to fall below 3% of inbound voice contacts, in a worst-case scenario where every abandoning caller hangs up on the 91st second, that number would be closer to 7%. While the logic may seem driven by risk aversion, in a cost-effective, high-utilization pricing model there have to be reasonable parameters. They’re the rules of a game that both sides can win. For the same reason, auto-responses on emails and chat sessions are excluded from the ASA calculation. While automation tools are useful for acknowledging support requests and generating ticket numbers, anything short of a live answer from an agent is a cheat on ASA. It’s important to measure metrics that matter, metrics that are true indicators of service quality. The ultimate goal is to play fair, keep score, and continue to improve performance for all service desk clients by consistently meeting and frequently surpassing all objectives. That’s the SMART way to do it.
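As a worked illustration of the abandon-rate math above (Python; the call counts and abandon times are assumptions chosen only to reproduce the 3% versus 7% contrast described in the paragraph):

```python
ABANDON_THRESHOLD_SECONDS = 90

def measured_abandon_rate(total_calls, abandon_times_seconds):
    """Abandon rate counting only calls abandoned after the 90-second threshold."""
    counted = sum(1 for t in abandon_times_seconds if t > ABANDON_THRESHOLD_SECONDS)
    return 100.0 * counted / total_calls

total_calls = 1000
# Assumed typical month: 70 callers abandon, but most give up well before 90 seconds.
typical = [30] * 45 + [120] * 25
# Worst case: the same 70 callers all hang up on the 91st second, so every one counts.
worst_case = [91] * 70

print(measured_abandon_rate(total_calls, typical))     # 2.5  -> under the 3% expectation
print(measured_abandon_rate(total_calls, worst_case))  # 7.0  -> the worst-case figure
```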
