Cloud API Service Consistency (CASC) Scores
There are many, many ways to measure an API, and APIs also do lots of different things. At APImetrics, we realized early on that we needed something that allowed us to do like-for-like comparisons between different APIs that would still give us meaningful answers.
This meant we needed to come up with a system that handled all the key variables involved in an API call – its location, its speed, its reliability – and reduced them to a simple number we could trace to actual quality.
What’s Included in a CASC Score?
Many people look at either uptime or latency to determine how good an API is, and this can cause problems.
An API could have excellent uptime as measured by an HTTP 200 code, but still be failing. Or it could have terrible latency that, due to geography or the complexity of the API, is actually acceptable.
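To make the first point concrete, here is a minimal sketch (the function and field names are hypothetical, not part of any APImetrics product) showing why a status-code check alone can mark a failing call as healthy:

```python
# Hypothetical check: an endpoint can return HTTP 200 while the payload
# signals an application-level failure, so the status code alone misleads.
import json

def call_succeeded(status_code: int, body: str) -> bool:
    """Treat a call as passing only if both the status and payload look healthy."""
    if status_code != 200:
        return False
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return False  # a 200 with an unparseable body is still a failure
    # Many APIs embed an error field in the body despite returning 200.
    return payload.get("error") is None

# A "successful" 200 that is actually failing:
print(call_succeeded(200, '{"error": "rate limit exceeded"}'))  # False
print(call_succeeded(200, '{"data": [1, 2, 3]}'))               # True
```

Uptime dashboards that only count non-200 responses would score the first call as a success.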
We wanted to develop a simple way for managers and stakeholders to agree that something is working well.
To derive a CASC score, we look at the following:
- Pass rate
- Number of outliers detected by our machine learning systems
- Latency – and not just the latency of the whole call. We look at a half-dozen different factors
- Consistency – where our secret sauce comes in. We determine whether the API is working “well”
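The factors above can be blended into a single number. The sketch below is purely illustrative – the actual CASC weighting is proprietary, and every weight, scale, and threshold here is invented:

```python
# Illustrative only: combine pass rate, outlier fraction, and latency
# into a single 0-1000 score. The weights and the latency target are
# invented for this example; they are NOT the real CASC formula.

def toy_api_score(pass_rate: float, outlier_fraction: float,
                  latency_ms: float, latency_target_ms: float = 500.0) -> int:
    """Blend three toy metrics into one benchmark-style number."""
    # Latency at or under target scores 1.0; slower calls score proportionally less.
    latency_factor = min(1.0, latency_target_ms / max(latency_ms, 1.0))
    # Fewer ML-flagged outliers means more consistent behavior.
    consistency = 1.0 - outlier_fraction
    blended = 0.4 * pass_rate + 0.3 * latency_factor + 0.3 * consistency
    return round(blended * 1000)

print(toy_api_score(pass_rate=0.999, outlier_fraction=0.01, latency_ms=250))  # 997
```

The point is not the arithmetic but the shape: several independent signals reduce to one comparable number.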
A Credit Score for an API
Think of the CASC Score like a combination of an API speed test and a credit rating. We take all of the metrics we have for your API, blend them together, then compare the result against our unrivaled dataset of historical API speed test records. This gives us a single number that’s benchmarked against all the other APIs monitored by APImetrics.
As for how we do the blending and the benchmarking – well, that’s our little secret! You don’t expect Coke or KFC to give away their recipe, do you? But we can tell you this – it involves state-of-the-art machine learning technology.
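To illustrate the benchmarking idea in general terms (this is a textbook percentile rank, not APImetrics’ method, and the history values are made up), a blended score becomes meaningful once it is placed against a historical distribution:

```python
# Generic illustration of benchmarking: express a score as the percentage
# of historical scores it beats. The history values are invented.
from bisect import bisect_left

def percentile_rank(score: float, historical: list) -> float:
    """Return the percentage of historical scores strictly below `score`."""
    ordered = sorted(historical)
    return 100.0 * bisect_left(ordered, score) / len(ordered)

history = [610, 640, 700, 720, 755, 790, 820, 860, 905, 940]
print(percentile_rank(800, history))  # 60.0 -> beats 60% of past scores
```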