One of the things Zendesk Explore does not do is calculate Next Reply Time (“NRT”) as a standalone metric, separate from SLA-related calculations, the way it already does for First Reply Time. In this article, we’ll share a method to generate and calculate Next Reply Time using both Zendesk Support and Explore.
⚠️ The following calculations do not apply to Messaging/Chat tickets, as there isn’t a way to detect public user comments with triggers at the moment.
We’ll tackle this challenge in two phases: initial business rule configuration in Support and standard calculated metric creation in Explore.
We’ll start by creating a custom drop-down field to store our reply metric’s current status throughout a ticket’s history and until its resolution. We’ll then use a few triggers to activate the relevant value in that field, depending on where we’re at in terms of ticket interactions.
Custom ticket field (drop-down)
In order to track Next Reply Time setup in Support, we need to create a custom drop-down. We’ll name it “⏱️ Reply Metric” (I like to add emojis to all admin-level custom fields so that they’re easy to identify in conditions and actions drop-downs, especially when creating automations).
This is what it should look like:
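If the screenshot doesn’t render for you, here’s a rough sketch of the field based on the values used throughout this article (the NRT and ‘Done’ tags appear verbatim in the Explore formula further down; the FRT tags are our assumption and may differ in your setup):

```text
⏱️ Reply Metric (custom drop-down field)
├─ FRT active     → tag: kpi_reply_frt-active     (assumed tag)
├─ FRT fulfilled  → tag: kpi_reply_frt-fulfilled  (assumed tag)
├─ NRT active     → tag: kpi_reply_nrt-active
├─ NRT inactive   → tag: kpi_reply_nrt-inactive
└─ Done           → tag: kpi_reply_done
```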
As you can see, we have two drop-down options for First Reply Time and two for Next Reply Time, since each metric can be either active or fulfilled/inactive. We don’t strictly need to track FRT in triggers; it’s just a nice-to-have (to measure NRT alone, without FRT, we’d use the Agent replies trigger condition).
Reply metric triggers
Now that we have the ticket field to store the status of the metric, we can create the triggers that will measure the different metric scenarios:
What are these scenarios?
- When an inbound ticket is created and agents still haven’t replied, FRT is active
- When an agent updates the ticket with the first public comment, FRT is fulfilled (and there is no NRT to be considered)
- When the end-user replies, NRT is activated
- When active and an agent responds, NRT changes to inactive
- If the end-user replies, NRT becomes active once again
- If a ticket is updated as Solved and any metric is active, fulfilled or inactive, we will set the drop-down value to ‘Done’
Example conditions for the third trigger (that activates Next Reply Time):
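The screenshot isn’t reproduced here, but based on the scenarios listed above, one plausible configuration looks like this (condition names follow Zendesk’s trigger builder; the field values are the ones from our custom drop-down, so adjust to your own setup):

```text
Meet ALL of the following conditions
  Ticket          | Is  | Updated
  Current user    | Is  | (end user)
  Comment         | Is  | Public
Meet ANY of the following conditions
  ⏱️ Reply Metric | Is  | FRT fulfilled
  ⏱️ Reply Metric | Is  | NRT inactive
Actions
  ⏱️ Reply Metric → NRT active
```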
Note that the final [‘done’] trigger covers both ticket creation and update events, as some workflows may notify the requester and solve the ticket in one go during creation, for example.
Pro-tip: activate the triggers from last to first. The last trigger depends on the second-to-last, which in turn depends on the third-to-last, and so on. Therefore, we’d start by activating the “deactivate on ticket resolution” trigger first and the “FRT” one last, avoiding tickets with incomplete data/metric statuses.
To adequately measure our Next Reply Time, the second phase is focused on creating standard calculated metrics and attributes in Explore.
We’ll leverage the Updates history dataset and use:
- The “[Changes …]” attributes to identify our custom field’s options and relevant scenarios
- The “Field changes time” metric to calculate the time between the custom field value changes
That said, our Next Reply Time uncut* (sec) metric is defined as follows:
IF [Changes - Field name]="⏱️ Reply Metric" AND ([Changes - Previous value] = "kpi_reply_nrt-active" OR [Changes - Previous value] = "kpi_reply_done") AND ([Changes - New value] = "kpi_reply_nrt-inactive" OR [Changes - New value] = "kpi_reply_done") THEN VALUE(Field changes time (min))*60 ENDIF
⚠️ This calculation will never be entirely accurate. Feel free to use this method to calculate NRT but analyze the results first.
On result accuracy
Zendesk Explore doesn’t have a native Field changes time (sec) metric, which means we must create a calculated metric that multiplies Field changes time (min) by 60… and that calculation does not return the exact number of seconds.
We’ll analyze four tickets in more detail to demonstrate this:
We then exported the same data to Google Sheets so that we could compare Explore’s output against some manual calculations:
Numbers are quite different when we manually calculate the actual difference in seconds. Check the last row on ticket 123, for example: the real duration between the end-user’s comment and the agent’s reply is 118 seconds, not 60.
Date differences in Google Sheets are returned in days. What Explore seems to do is multiply that difference by 24 (to obtain hours) and then by 60 (to obtain minutes), finally keeping only the integer part of the value.
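In symbols: if Δ is the raw date difference in days, what Explore appears to return (this is our reading of the observed behavior, not documented Zendesk internals) is:

```latex
\text{minutes}_{\text{Explore}} = \left\lfloor \Delta \cdot 24 \cdot 60 \right\rfloor
\quad\Longrightarrow\quad
\text{seconds}_{\text{metric}} = \left\lfloor \frac{\text{seconds}_{\text{real}}}{60} \right\rfloor \cdot 60
```

So every duration is effectively rounded down to the nearest full minute before we convert it back to seconds.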
This means that in some cases the difference is negligible, not so much in others:
- 1,067 seconds becomes 17.783 minutes, which Explore returns as 17 = 1,020 seconds (a 4.4% difference)
- 118 seconds becomes 1.967 minutes, which Explore returns as 1 = 60 seconds (a 49.2% difference)
- 63,195 seconds becomes 1053.25 minutes, which Explore returns as 1,053 = 63,180 seconds (a 0.02% difference)
Accurate, no. Reliable? Perhaps.
In one case, the difference between real vs Explore seconds is 49%. So there is an error margin in these NRT calculations. It will likely be diluted, however: if we sum the total seconds for the Explore column we get 66,480 versus 66,659 real seconds, which means the latter is only 0.27% higher.
To conclude, we should bear in mind that although this NRT calculation is not accurate, it’s better to measure something with a small error margin than nothing at all, at least in this case.
Side-note: existential NRT problems
Cut versus uncut
You may have noticed the metric shared in the formula above is named ‘uncut’. This is because we can have a similar NRT metric, the ‘cut’ version, whose only difference is that it measures the field change duration only when there’s a public agent comment after the last end-user reply:
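That formula isn’t rendered here, but following the logic described (only count changes that end in an agent reply, never in resolution), a hedged sketch of what the cut variant could look like:

```text
IF [Changes - Field name]="⏱️ Reply Metric"
AND ([Changes - Previous value] = "kpi_reply_nrt-active"
  OR [Changes - Previous value] = "kpi_reply_done")
AND [Changes - New value] = "kpi_reply_nrt-inactive"
THEN VALUE(Field changes time (min))*60
ENDIF
```

The only change from the uncut metric is dropping "kpi_reply_done" as an accepted new value, so transitions that end with the ticket being solved are no longer counted.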
What this means is that we’re not calculating NRT in scenarios where the end-user comments and we solve the ticket. In other words, we’re excluding the event where the requester commented last and there isn’t a public agent reply after our metric is ‘done’.
Here’s an example ticket:
As you can see in the example, the last public comment is the requester’s. It can be a simple “thanks!” that reopened the ticket, for example. The agent merely solved the ticket because there was no need to respond, so we will not consider those three extra minutes.
A main consequence of this is neglecting valid NRT events: imagine the last end-user’s reply was “I’m still having problems, did you even read my message?”, for example. We would definitely want to include that last agent event in the NRT calculation.
Another consequence: let’s say that last event didn’t take three minutes, but twenty hours. Choosing the cut or uncut version of NRT can greatly affect our analysis when that final update takes that long:
- In the first case where the user just said thanks, we could probably do without that NRT in the calculation
- In the second case, we’d probably want to include it in the NRT calculation
But without any advanced tools (for example, an AI tool that detects reopens that don’t require a response and lets us quickly solve the ticket again), it is risky to assume which is which and when to include final ticket events in NRT calculations.
Which means that…
…Uncut NRT is probably better
We could have both ‘cut’ and ‘uncut’ versions for comparison, of course:
The fact that the uncut average is lower than the cut one may seem odd at first, but it comes down to how an average is calculated. In the example we used above (the image of ticket ID 4618), we’d be dividing a total duration by five events versus dividing it, plus one extra event, by six. Because that sixth event lasts a mere three minutes, it drags the uncut average down considerably.
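A quick sanity check on why that happens, assuming a total cut duration T across five events plus one extra three-minute event in the uncut version:

```latex
\text{avg}_{\text{cut}} = \frac{T}{5}
\qquad
\text{avg}_{\text{uncut}} = \frac{T + 3\,\text{min}}{6}
\qquad
\frac{T + 3}{6} < \frac{T}{5} \iff T > 15\,\text{min}
```

So whenever the first five events total more than fifteen minutes, the short sixth event pulls the uncut average below the cut one.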
Natively, Zendesk SLAs calculate Next Reply Time between the last public comment by the requester and ticket resolution when there is no last public agent comment. This is the equivalent of our uncut metric, and that’s the one we use and would generally recommend.
Drawbacks and limitations
This NRT calculation is an approximation, not the real thing, so we’ll list its limitations and weaknesses to make sure we all know what using it means:
- As stated in the beginning, we can’t calculate this for Messaging/Chat tickets because triggers do not include conditions related to user messages, only comments. NRT will only be calculated for tickets whose channel has ‘classic’ public ticket comments (email, web form, etc.).
- The calculation isn’t retroactive. It relies on firing the right trigger at the right time, which means we can’t apply these rules to past tickets; we’ll only have data for new tickets modified by this workflow’s triggers.
- Also, this NRT calculation isn’t entirely accurate because some seconds are excluded from Explore calculations, as we’ve seen and analyzed above. Not a showstopper in our opinion, but it is important to know we are not looking at completely accurate durations. They’re very close, but not exact.
- Depending on how our agents use Zendesk, there can be outliers, situations where our triggers will not measure very specific scenarios and therefore contribute to an incorrect time measurement. For example, solving the ticket will trigger the ‘Done’ rule, so if an agent comments afterwards we will not be measuring the time between resolution and comment.
- We can’t calculate our Next Reply Time in business hours. But please do feel free to upvote this Community request!
If you ever implement this NRT setup, we’d love to know how it went. Feel free to share your feedback with us.
Bonus: First Reply Time
Zendesk Explore doesn’t calculate First Reply Time in its Updates history dataset, but it does include it in the Support: Tickets dataset. This gives us an opportunity to compare our own method against the native calculation, mostly out of curiosity… but also in case we’d like to have an FRT and NRT comparison calculated with that dataset.
⚠️ This calculation will never be entirely accurate. It can be useful to calculate this FRT approximation in order to compare with NRT in Updates history reporting, but also make sure to analyze the results first.
For any ‘official’ reporting needs, we strongly recommend you use Zendesk’s native metrics and implement a solid SLA structure.
Analyzing our FRT
That said, let’s look at our experiment. While analyzing native versus custom FRT in one of the accounts we manage, we checked approximately 2,600 tickets.
The average FRT was as follows:
- Custom metric (ours): 12h 43m 53s
- Native metric (Zendesk’s): 12h 35m 45s
This means that our custom FRT (in this case) was 1.08% above the native metric.
In this particular account we confirmed there were a few outlier cases impacting this difference: specific scenarios that circumvent the two FRT triggers we shared, such as solving the ticket before (or in the same update as) the first reply. That fires the ‘done’ trigger, so our custom FRT isn’t calculated for those tickets. Not that this is wrong per se, but standard practice would be to reply first and solve later.
We then dug a bit deeper and checked the daily average FRT for the past ten days (same group of tickets):
Visually, these two are quite similar. The average difference is actually under 2%, as we can confirm by taking a closer look at each chart’s data:
As we can see, our daily custom FRT is 1.6% higher than Explore’s native FRT metric, on average. These results might differ when implementing this on other accounts, so it’s always good to take a deep dive on all cases.
That’s a wrap 🌯
Bringing this to a close, our objective with this post was to see whether it was possible to easily calculate a Next Reply Time metric in Zendesk Explore. It is possible, but only as an approximation.
So as long as we keep in mind that it may not be entirely accurate, as well as the other drawbacks we mentioned, it does provide some insight into this important KPI that so many users need.
If you implement this and try it yourself, feel free to share your results 🙂
Thanks for reading!
Written by
Senior Zendesk system administrator and customer-centric project manager dedicated to CX who also contributes as a Zendesk Community Moderator. Experienced in content management & SEO, reporting and analysis. Special skill: an eagle-eye for detail.
Featured image: Zetong Li via Unsplash