KPIs for APIs
There are multiple options for measuring API performance – so which should you choose?
Critical Key Performance Indicators
There are many things that can be measured with an API, including:
- Pass Rate – based on HTTP codes
- Pass Rate – based on functional reporting – did the API do what you wanted it to?
- Latency – and where the latency occurs: network, server, or infrastructure
- Overall Service Quality – how consistent are your results?
- Geographic Performance
The challenge is that there are multiple ways to measure each of these, and to assess the impact they have on your services, customers, and critical interdependencies – some of which may fall outside your conventional monitoring, or beyond what you can realistically measure with current products.
Book a demonstration of how we measure APIs
Measured Pass Rate
The pass rate is the actual measured success rate for an API call from a specific location.
However, it’s possible that APIs may pass at different rates from different locations. It is also important to validate the content being returned – don’t just assume that HTTP 200 means ‘all OK’. An API gateway or APM stack whose logs show nothing but 200 codes is no guarantee that everything is working well, or even working at all.
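As a minimal sketch of that point, the check below treats a call as passing only when both the HTTP code and the payload report success. The `status`/`ok` convention in the body is a hypothetical API contract, not a standard – substitute whatever success signal your API actually returns.

```python
import json

def is_functional_pass(status_code: int, body: str) -> bool:
    """A call passes only if the HTTP code is 200 AND the payload
    reports success -- a 200 wrapping an error message still fails."""
    if status_code != 200:
        return False
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return False
    # Hypothetical convention: the API reports its own status in the body.
    return payload.get("status") == "ok"

# A 200 response can still be a functional failure:
print(is_functional_pass(200, '{"status": "error", "detail": "backend down"}'))  # False
print(is_functional_pass(200, '{"status": "ok", "data": []}'))                   # True
```

A gateway logging only status codes would count both of these calls as successes; the functional check catches the first one.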
Effective Pass Rate
You may have a 100% pass rate, but there may still be events and performance issues that cause timeouts and other problems. You need to take performance into account, including latencies and anything else that may affect end users.
It is entirely possible to have two APIs, doing the same thing but with wildly different effective pass rates.
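One way to make that concrete is to apply a latency budget: a call that technically succeeds but blows the budget (or times out) does not count toward the effective pass rate. The 1000 ms SLA and the sample latencies below are hypothetical.

```python
# Two APIs with similar measured pass rates can have very different
# effective pass rates once a latency budget (SLA) is applied.
# Latencies are hypothetical, in milliseconds; None marks a timeout.
SLA_MS = 1000

def effective_pass_rate(latencies_ms, sla_ms=SLA_MS):
    """Fraction of calls that both completed and met the latency budget."""
    ok = sum(1 for t in latencies_ms if t is not None and t <= sla_ms)
    return ok / len(latencies_ms)

api_a = [120, 130, 110, 140, 125]    # consistent
api_b = [90, 95, 2400, None, 100]    # occasional stalls and a timeout

print(effective_pass_rate(api_a))  # 1.0
print(effective_pass_rate(api_b))  # 0.6
```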
Latency, Cloud and Geographic Factors
Latency is complex for APIs, which is why a common ping-style tool won’t tell you what you need to know. An API call includes multiple steps:
- Connect Time
- DNS Look Up
- Server Side Processing Time
- Internet travel time
- Total call time
Each of these varies by geography, so careful analysis is required to take them into account.
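The steps above can be sketched as a per-call timing breakdown. The field names and the sample numbers for two measurement locations are purely illustrative, not an APImetrics schema:

```python
from dataclasses import dataclass

@dataclass
class CallTiming:
    # All times in milliseconds; field names are illustrative.
    dns_ms: float       # DNS look-up
    connect_ms: float   # TCP/TLS connect time
    server_ms: float    # server-side processing time
    transfer_ms: float  # internet travel / payload transfer

    @property
    def total_ms(self) -> float:
        """Total call time is the sum of the component steps."""
        return self.dns_ms + self.connect_ms + self.server_ms + self.transfer_ms

# The same API measured from two geographies: server time is similar,
# but DNS, connect, and network travel dominate from the remote location.
us_east = CallTiming(dns_ms=8, connect_ms=15, server_ms=120, transfer_ms=25)
ap_south = CallTiming(dns_ms=45, connect_ms=110, server_ms=125, transfer_ms=210)

print(us_east.total_ms)   # 168
print(ap_south.total_ms)  # 490
```

A ping-style tool would report only one network round-trip number; the breakdown shows which step is actually responsible for a slow total.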
An API that is slow might not be a problem. An API that is slow only sometimes may well be, as users and systems grow used to its expected behavior. It also means that systems measuring average (mean) performance can miss significant performance issues lasting many hours, because alert thresholds based on those averages have been set incorrectly.
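A small worked example of how an average hides a slow tail, using hypothetical latencies where 95% of calls are fast and 5% stall:

```python
# 95 fast calls and 5 badly stalled ones -- a pattern a mean can hide.
samples = [100] * 95 + [3000] * 5   # hypothetical latencies in ms

mean_ms = sum(samples) / len(samples)
p99_ms = sorted(samples)[98]        # 99th percentile (index 98 of 100)

print(mean_ms)  # 245.0 -- looks only mildly elevated
print(p99_ms)   # 3000  -- the tail the average conceals
```

An alert threshold tuned to the 245 ms mean would never fire, even though one call in twenty is taking thirty times longer than normal.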
To counter this, and to ensure that everybody is playing on a level playing field, APImetrics has created CASC (Cloud API Service Consistency) scoring, which allows like-for-like comparisons between different APIs – reducing all the performance considerations to a single, three-digit score.
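The actual CASC formula is proprietary; purely to illustrate the idea of collapsing consistency into one comparable number, the toy score below maps latency variability (coefficient of variation) onto a 0–1000 scale. Every detail here is an assumption for illustration, not the CASC method.

```python
import statistics

def toy_consistency_score(latencies_ms):
    """Toy illustration only -- NOT the proprietary CASC formula.
    Lower variability relative to the mean scores higher on 0-1000."""
    mean = statistics.mean(latencies_ms)
    stdev = statistics.pstdev(latencies_ms)
    cv = stdev / mean if mean else 1.0   # coefficient of variation
    return round(max(0.0, 1.0 - cv) * 1000)

steady = [100, 105, 98, 102, 101]   # consistent API
spiky = [60, 300, 80, 500, 90]      # erratic API, similar "pass rate"

print(toy_consistency_score(steady) > toy_consistency_score(spiky))  # True
```

The point is the shape of the idea: two APIs with identical pass rates can be separated by a single consistency number, making like-for-like comparison possible at a glance.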