THE CLOUD FOR APIS: 2022 IN REVIEW
The Definitive Annual Industry Report from the team, ecosystem, and data behind APImetrics & API.expert.
Dr. Paul M Cray
Introduction
In 2022, APImetrics made over a billion API calls to more than 8,400 different API endpoints from 70 geographically diverse cloud data centers across AWS, Azure, Google, and IBM.
In this report, we build on the unique and ever-expanding API dataset generated by the APImetrics platform and API.expert portal to establish an unbiased, industry-wide baseline for API quality scoring.
The report focuses on data from leading API services, including those from prominent corporate infrastructure providers, financial services institutions, social networks, and search engines.
Five Key Findings for 2022
- Overall, API availability improved, with six APIs achieving 99.99% service availability (compared with just two in 2021).
- Service quality also improved, with 70% of providers having a Cloud API Service Consistency (CASC) score of 9.00+, indicating very good performance.
  - 28% of providers had a CASC score between 8.00 and 8.99, indicating good performance.
  - Only 2% had a CASC score between 7.00 and 7.99, indicating an API in need of investigation.
- AWS had the best network performance of the major clouds.
- Only AWS showed a decrease in DNS Lookup Time; Azure and IBM Cloud showed significant increases.
- For the fourth year in a row, AWS US-East (N. Virginia) was the fastest cloud location for Time to Connect, with an average time of 1.23 ms, demonstrating that many solutions continue to be hosted there – a potential concentration risk for cloud services.
Summary
Cloud API quality was stable over 2020–2022, albeit with evidence of deterioration in certain aspects of performance, particularly DNS Lookup Time. The period has likely been affected by COVID-19 and geopolitical uncertainty, and performance still varies between clouds, regions, and locations.
QUALITY IS STABLE: Most services are rated as excellent, with a CASC (quality) score of 9.00 or more. Overall quality is similar to 2020 and 2021, suggesting that improvements in performance may be plateauing. There is no excuse for not having a highly stable and consistently performant API.
AVAILABILITY: Five 9s is a tough target, but 99.99% is a goal that should be achievable for most APIs and six of the services studied managed to reach this level, up from two in 2021. This indicates an area of future focus for quality improvements.
CLOUD PERFORMANCE: We see significant differences in performance between clouds. In 2022, Azure was consistently >70 ms slower than AWS. In an API-first economy where every millisecond increasingly counts, can you and your customers afford to be using a slow cloud?
As an integrator of an API-based service, you should consider where best to integrate from to maximize performance. As a provider, you should offer your customers and partners insight and resources on architectural choices as part of your standard developer information.
DNS DETERIORATION: DNS resolution times have slowed across all clouds except AWS and all regions except South America. If AWS can achieve a median DNS Time of 3 ms, so can the other clouds.
In 2023, we want to see DNS Times getting better again across all clouds and regions.
ABSOLUTE REGIONAL DIFFERENCES HAVE DECREASED: In our first report, for 2017, there was a 10x difference between South America and Europe in median TCP Connect Time; South America is now 40% quicker than Europe and over 3x faster than North America.
This is excellent, and shows that API and cloud providers continue to strive for improvement.
That said, you will still pay geographic penalties depending on how the APIs you call are architected and where they are hosted.

Interestingly, the relative regional differences have remained more or less the same over the six-year period. Oceania and South Asia are still ~3x slower than North America for Total Time, even though Total Time is down across the board; only Europe has significantly improved relative to North America.
Many services are hosted in North America and further performance improvements would be helped by multi-hosting services in additional regions to reduce distance from the end user and thus download time.
The API Supply Chain
API performance is good across a wide range of popular services, with higher availability and better quality than in 2021.
But the problem of the API Supply Chain, as John Musser calls it, remains significant. There are meaningful geographic differences, such as physical distances across oceans and continents, and cloud performance variations, such as the amount of bandwidth available through fiber optic cables and the capacity of network equipment. DNS Lookup Times, which have always been a problem, seem to be getting worse.
Using an API isn’t just a matter of relying on a black box. The API you provide or use exists in a universe of components, including its own cloud service, a CDN provider, probably a gateway, a backend server architecture, and potentially a security and identity service. And each of those components has its own configuration and cloud dependencies. A failure could end up costing $200,000 per incident.
What Is a CASC Score?
There are many metrics that can be used to understand the performance profile, but the complexities can lead to confusion. Cloud API Service Consistency (CASC) scoring provides a single score, out of 10, that shows how well any API is functioning beyond the pass rate. Think of the CASC score like a combination of an API speed test and a credit rating. We take all the metrics we have for your API, blend them together, then compare the result against our unrivaled dataset of historical API speed test records. This gives us a single number that’s benchmarked against all the APIs observed by APImetrics.
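The precise CASC formula is patented and proprietary, but the blending idea can be sketched in a few lines of Python. Everything below – the metric choices, thresholds, and weights – is an illustrative assumption, not the real algorithm:

```python
# Toy illustration of blending API metrics into a single 0-10 score.
# The real CASC algorithm is proprietary and benchmarked against
# APImetrics' historical call records; all weights here are invented.

def toy_quality_score(pass_rate: float, p50_ms: float, p95_ms: float,
                      outlier_rate: float) -> float:
    """Blend availability, speed, consistency, and stability into 0-10."""
    availability = pass_rate                                  # already 0..1
    speed = max(0.0, 1.0 - p50_ms / 1000.0)                   # slow medians score lower
    consistency = max(0.0, 1.0 - (p95_ms - p50_ms) / 1000.0)  # jitter scores lower
    stability = 1.0 - outlier_rate
    blended = (0.4 * availability + 0.2 * speed
               + 0.2 * consistency + 0.2 * stability)
    return round(10.0 * blended, 2)

# A fast, reliable API lands in the 9+ band; a flaky one drops out of it.
print(toy_quality_score(pass_rate=0.9995, p50_ms=180, p95_ms=420, outlier_rate=0.002))
```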
DNS Is STILL a Problem
DNS Lookup Time has increased across most cloud services and regions, suggesting there are issues with network infrastructure and configuration for Azure, Google, and IBM Cloud, particularly in Europe and North America.
For example, in 2022, DocuSign had a DNS time of 199.35 ms compared to the average of 10.05 ms, and Capital One had a DNS time of 339.2 ms. In 2021, DocuSign was the slowest at 252.8 ms compared to an average of 16.975 ms; Capital One was at 143.75 ms.
From our analysis, most services use a CDN provider. However, a CDN doesn’t just work out of the box; it needs to be configured. If you use or provide such services, this is an area to focus on.
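One rough way to check whether a CDN sits in front of an endpoint is to look for telltale response headers. A minimal sketch using only the Python standard library; the header list is a heuristic, not exhaustive, and the URL is a placeholder:

```python
# Heuristic: common CDN fingerprints in HTTP response headers.
import urllib.request

CDN_HINTS = ("via", "x-cache", "x-served-by", "cf-ray", "x-amz-cf-id")

def cdn_header_hints(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        headers = {k.lower(): v for k, v in resp.getheaders()}
    return {k: v for k, v in headers.items() if k in CDN_HINTS}

# An empty result suggests no obvious CDN; hints alone say nothing about
# whether the CDN is configured well, which is the point made above.
print(cdn_header_hints("https://example.com/"))
```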

Top Achievers
From January 2022 – December 2022, PagerDuty had 21.9 minutes of measurable downtime on their APIs. (The best performer in 2021 was DocuSign with 26.5 minutes of downtime). Congratulations to the DevOps team and everyone else at PagerDuty involved!
The next-best performer had a failure equivalent of just 31.6 minutes of downtime over 12 months, which is still a pretty good effort. To put that in perspective, the worst-performing API had over 3.2 days of downtime. In other words, if an API is being exercised at an average rate of 50 calls/second, that would mean more than 13 million attempted calls were lost.
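As a check on that arithmetic:

$$3.2\ \text{days} \times 86{,}400\ \text{s/day} \times 50\ \text{calls/s} \approx 13.8\ \text{million calls}$$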
Look For Yourself
High-level data for 2022 is provided for free in the API Directory and Ranking Lists at https://apimetrics.io/api-directory/. If you would like to dive deeper into the details, please drop us a line about licensing access.
Methodology
API calls were made on a regularly scheduled basis from 70 data centers around the world using APImetrics observer agents running on application servers provided by the cloud computing services of Amazon (AWS), Google, IBM Cloud, and Microsoft (Azure).
The sample sizes for each API called are roughly the same and are equivalent to a call from each cloud location made to each endpoint every five minutes throughout the year.
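At that cadence, each endpoint accumulates, per cloud location:

$$12\ \text{calls/hour} \times 24\ \text{hours} \times 365\ \text{days} = 105{,}120\ \text{calls per year}$$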
We logged the data using our API performance and quality monitoring and management platform. Latency, pass rates, and quality scores were recorded in the same way for all APIs. For most APIs, data is available for the whole of the period.
For analysis, we have grouped the APIs and endpoints we monitored into the services they represented.

Pass Rates
In calculating the pass rate, we define failures to include the following:
- 5xx server-side errors
- Network errors in which no response is returned
- Content errors, where the API did not return the correct content, e.g. an empty JSON body or incorrect data returned
- Slow errors, in which a response is received only after an exceptionally long period
- Redirect errors, in which a 3xx redirect HTTP status code is returned
We ignored call-specific application errors (such as issues with the returned content) and client-side HTTP status code 4xx warnings caused by authentication problems, such as expired tokens.
An API call that fails may succeed if retried immediately, where the outage is transitory. Nevertheless, our methodology gives a general indication of availability issues.
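A minimal sketch of how these rules might be applied in code; the 30-second "slow" threshold and the content check are assumptions, as the report does not publish its exact cut-offs:

```python
# Illustrative classifier for the failure rules above.
SLOW_THRESHOLD_S = 30.0  # assumed value for "exceptionally long"

def classify_call(status_code, elapsed_s, body, content_ok=True):
    """Return 'fail', 'ignore' (4xx warnings), or 'pass'."""
    if status_code is None:
        return "fail"    # network error: no response returned
    if status_code >= 500:
        return "fail"    # 5xx server-side error
    if 300 <= status_code < 400:
        return "fail"    # 3xx redirect counted as a failure
    if 400 <= status_code < 500:
        return "ignore"  # client-side warnings excluded from the pass rate
    if elapsed_s > SLOW_THRESHOLD_S:
        return "fail"    # slow error
    if not body or not content_ok:
        return "fail"    # content error: empty or incorrect payload
    return "pass"

print(classify_call(200, 0.4, b'{"ok": true}'))  # -> pass
```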
n-9s Reliability
The traditional telecommunications standard for service availability is five 9s – at least 99.999% uptime, or no more than about five minutes of downtime in a year. Of the 34 services analyzed in this study, no API managed to achieve five 9s of service availability. Six services achieved four 9s, up from two in 2021.
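In terms of allowable downtime over a 525,600-minute year:

$$(1 - 0.99999) \times 525{,}600 \approx 5.3\ \text{minutes (five 9s)};\qquad (1 - 0.9999) \times 525{,}600 \approx 52.6\ \text{minutes (four 9s)}$$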
Table 1: Share of services by availability level

Availability | Services in 2022 | Services in 2021 | Services in 2020 | Range in minutes per 12-month period
---|---|---|---|---
100% | 0 | 0 | 0 | 0 minutes of outage
99.999% (five 9s) or better | 0 | 0 | 0 | less than ~5 minutes of outage
99.99% (four 9s) or better | 18% | 6% | 10% | ~5 to ~53 minutes of outage
99.9% (three 9s) or better | 64% | 75% | 77% | ~53 to ~526 minutes of outage
Worse than 99.9% (three 9s) | 18% | 19% | 13% | ~526 to ~5,256 minutes of outage (8+ hours to over 3 days of total downtime)
In 2022, 18% of major corporate services measured scored less than three 9s. There was a nearly 18-hour difference in unscheduled downtime observed between two leading file management services, compared to just 28 minutes between them in 2021.
Quality
- Scores over 9.00 are evidence of exceptional quality of operation.
- Scores over 8.00 indicate a healthy, well-functioning API that will give few problems to users.
- Scores between 6.00 and 8.00 indicate significant issues that will lead to a degraded user experience and increased engineering support costs.
- A CASC score below 6.00 is considered poor; urgent attention is required.
It is important to note that CASC scores do not fall on a normal curve. The scores are absolute, and we see no engineering reasons why prominent APIs should not consistently reach an 8.00+ CASC score.
Score | Percentage of Services 2020 | Percentage of Services 2021 | Percentage of Services 2022
---|---|---|---
9.00+ | 71% | 53% | 69%
8.00-8.99 | 25% | 57% | 28%
7.00-7.99 | 4% | 0 | 3%
6.00-6.99 | 0 | 0 | 0
5.00-5.99 | 0 | 0 | 0
4.00-4.99 | 0 | 0 | 0
3.00-3.99 | 0 | 0 | 0
Most services studied are of very good quality, with CASC scores of 9.00 or over indicating excellent performance across 2020-2022.
Performance in 2022 was very similar to 2020. In both years, only a small share of services (4% in 2020, 3% in 2022) had CASC scores between 7.00 and 7.99, indicating significant issues that need attention. The consistency of quality over three years may indicate that a plateau in API service performance has been reached, or that investment in infrastructure has been curtailed because of uncertainties around the global COVID-19 pandemic and the geopolitical situation.
Latency
Some calls will be faster than others because of the nature of the backend processing involved, so total call duration, even over a sample size of tens of millions of calls, can only give a partial view of the behavior of the APIs.
In 2022, we added a large number of fast calls for the Serinus Cloud Monitor service. This had the effect of reducing the Total Time average, but it doesn’t affect the DNS Time, TCP Connect Time, or SSL Handshake Time, which are determined by the cloud/region/location rather than by the call being made. Those three components make up the network setup of the call and are the same for all calls (they vary with cloud/region/location and over time, but not by type of call).
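For readers who want to see these network-setup components for themselves, they can be timed with nothing more than the Python standard library. A rough sketch – single-shot timings from a laptop are indicative only, and example.com is a placeholder:

```python
# Rough sketch: timing the network-setup components of an HTTPS call.
# Production monitoring (as in this report) uses dedicated observer agents.
import socket, ssl, time

def setup_timings(host: str, port: int = 443) -> dict:
    t0 = time.perf_counter()
    ip = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    t1 = time.perf_counter()                 # DNS Lookup Time
    sock = socket.create_connection((ip, port), timeout=10)
    t2 = time.perf_counter()                 # TCP Connect Time
    ctx = ssl.create_default_context()
    tls = ctx.wrap_socket(sock, server_hostname=host)
    t3 = time.perf_counter()                 # SSL Handshake Time
    tls.close()
    return {"dns_ms": (t1 - t0) * 1e3,
            "connect_ms": (t2 - t1) * 1e3,
            "ssl_ms": (t3 - t2) * 1e3}

print(setup_timings("example.com"))
```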
The average total time has gone down, but the trends between clouds and regions remain. Remember, we are dealing with a snapshot of the data we have, from which we can extract broad trends; we are not claiming that everything is like-for-like over a period of six years. We can only really do that for UK Open Banking.
- AWS was the fastest cloud by median total time from mid-2021 through the whole of 2022.
- Azure has been consistently the slowest cloud by median time since 2018.
- AWS’s median DNS Lookup Time has continued to fall and is now about 3 ms.
- The DNS Lookup Times for Azure and IBM Cloud have increased from mid-2021, bucking the long-term trend and also no longer demonstrating the marked quantized behavior seen since mid-2018. This suggests that there was some fundamental change made by Azure and IBM Cloud in 2021 to the configuration of their DNS services.
- Google’s DNS Lookup Time has only increased slightly in this period but, like Azure and IBM Cloud, shows more variance and is no longer as obviously quantized, suggesting that they too made configuration changes.
- DNS Lookup Time by region also showed a slight increase in 2022 and more variance, which reflects the changes made by Azure, Google, and IBM Cloud.
DNS Matters
From mid-2021, DNS lookup times increased whether measured by cloud (except for AWS) or by region. They have also shown more short-term variation and, in most cases, are less obviously quantized than in the three-year period from mid-2018 through mid-2021.
South America was the fastest region for median DNS Lookup Time in 2022 with a time of ~8 ms (down from ~9 ms in 2021) and South Asia the slowest at ~16 ms (up from 10 ms in 2021). Europe was the fastest region in 2021 at ~6 ms but was up to ~9 ms in 2022.
South America was the only region to have faster DNS in 2022 than in 2021. Given that faster performance has been achieved in the past, this is an area of focus for improving service quality: all cloud network engineering teams should monitor DNS times on an ongoing basis and ensure that lookups from all regions and locations are not becoming slower or less stable.

[Charts: Median DNS Lookup Time by year, South America and South Asia]
Recommendations
- Actively monitor the availability and latency of all APIs you expose and consume. If you’re not, you don’t know how your APIs are performing right now for users in the real world. And if you’re not actively monitoring your APIs, you’re not managing them.
- Benchmark the performance and quality of your APIs against those of your peers/competitors. Because you really don’t want to discover that they are so much better than you.
- Know the differences between cloud locations vs. user locations. Your service might be hosted in Virginia, but your users might be in Vienna and Vietnam. Make sure your choice of cloud isn’t wasting valuable milliseconds for your users.
- Not all clouds are the same, and they change over time. 70 ms or more of latency can be down to your choice of cloud. Your API users shouldn’t suffer tens or hundreds of milliseconds of extra latency just because of a decision made years ago.
- Ensure no specific issues affect the DNS Lookup Time for your domain (it should be 12 ms or less). DNS should always be fast. If it isn’t, do something about it, because slow DNS is just money down the drain! (A minimal monitoring sketch follows this list.)
- Understand what factors impact call latency and where to focus improvements. What’s the latency component most impacting user experience, and what can you do to improve it?
- Track performance outliers and determine their causes. Slow outliers can greatly impact user experience. Are some calls taking 30 seconds or more to complete? How can you stop that?
- Be aware of the impact of API failures and errors on user experience and business costs. All API performance and quality issues cost money. Bad APIs mean lost customers. Can your organization afford not to have the best possible APIs providing the best possible user experience?
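The DNS check recommended above can be sketched with the standard library. The sample count and interval are arbitrary choices, and the operating system's resolver cache can flatten repeat lookups:

```python
# Sketch of the DNS recommendation: sample lookup time and compare the
# median against the suggested 12 ms budget. Note: the OS resolver cache
# may make repeat lookups look faster than a cold lookup would be.
import socket, statistics, time

def median_dns_ms(host: str, samples: int = 20, pause_s: float = 1.0) -> float:
    times_ms = []
    for _ in range(samples):
        t0 = time.perf_counter()
        socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
        times_ms.append((time.perf_counter() - t0) * 1e3)
        time.sleep(pause_s)
    return statistics.median(times_ms)

m = median_dns_ms("example.com")
print(f"median DNS lookup: {m:.1f} ms" + (" - over the 12 ms budget!" if m > 12 else ""))
```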
Appendix - Detailed Findings
Failure Rates By Cloud/Region
The graph below shows the overall failure rate, excluding 4xx client-side warnings.
Failures here include 5xx server-side errors, network errors in which no response is returned, slow errors in which a response is received after an exceptionally long period, and redirect errors in which a 3xx redirect HTTP status code is returned.

Figure A1: Overall failure rate by cloud, 2019-2022
Over the four-year period 2019-2022, the overall failure rate is nearly identical for all clouds.

Figure A2: Overall failure rate by region, 2019-2022
This is even more evident by region, with very little difference in failure rates between regions in 2022.
Latency Data Per Cloud/Region
The variation in median total time shows a roughly similar pattern between clouds. This is to be expected, as the variation is primarily driven by changes on the server-side for API providers. (The median is a better metric here as it excludes the effect of outliers.)
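A two-line illustration of why the median is preferred here (values invented):

```python
# One 30-second outlier drags the mean far more than the median.
from statistics import mean, median

latencies_ms = [210, 215, 220, 225, 230, 30000]
print(mean(latencies_ms))    # 5183.3 - skewed by the single outlier
print(median(latencies_ms))  # 222.5 - representative of typical calls
```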
The fall in median latency at the beginning of 2022 is due to the introduction of the Serinus Cloud Monitor tests. These tests are both fast and high frequency, so they have the effect of reducing average total time (Time to First Byte is a generally representative measure of server-side processing time, and the Serinus Cloud Monitor tests have reduced the average value of this component).

Figure A3: Median Total Time by cloud, 2019-2022
The tables below show the median Total Times, DNS Times, and Connect Times for the four clouds for the period 2019-2022. Median Connect Time has decreased for all four clouds over the last three years (2020-2022), but DNS Time stayed the same or increased for three clouds; only AWS managed a close-to-optimal median DNS Time for 2022 at 3 ms, down from 4 ms in 2021.
Median DNS Time and Total Time by cloud, 2019-2022:

Provider | DNS 2019/ms | Total 2019/ms | DNS 2020/ms | Total 2020/ms | DNS 2021/ms | Total 2021/ms | DNS 2022/ms | Total 2022/ms
---|---|---|---|---|---|---|---|---
AWS | 12 | 337 | 4 | 327 | 4 | 357 | 3 | 157
Azure | 12 | 398 | 12 | 405 | 12 | 436 | 20 | 222
Google | 12 | 341 | 12 | 336 | 12 | 395 | 12 | 179
IBM | 12 | 345 | 12 | 360 | 5 | 391 | 13 | 203
Median Connect Time by cloud, 2019-2022:

Provider | 2019 Connect Time/ms | 2020 Connect Time/ms | 2021 Connect Time/ms | 2022 Connect Time/ms
---|---|---|---|---
AWS | 32 | 31 | 9 | 3
Azure | 12 | 13 | 13 | 9
Google | 16 | 18 | 12 | 7
IBM | 24 | 57 | 15 | 10

Figure A4: Median DNS Time by cloud, 2019-2022
From mid-2018 to mid-2021, DNS Time is quantized for all four clouds, but improvements to the APImetrics observer network implemented in 2021 have allowed for a more granular analysis.
Azure and IBM Cloud in particular show marked increases in DNS Time. Google’s increases only slightly, while best-in-class AWS manages a decrease.

Figure A5: Comparison of Median Total Time by year by cloud, 2019-2022

Figure A6: Comparison of Median DNS Time by year by cloud, 2019-2022

Figure A7: Comparison of Median Connect Time by cloud, 2019-2022

Figure A8: Median DNS Time by region, 2019-2022
Of the six regions, Europe and North America have consistently been the fastest over the four-year period. This is likely because of a combination of relative geographical compactness and infrastructure investment. In 2022, East Asia and South America comprise an intermediate group, with Oceania and South Asia a slow group. All regions show a decrease in Connect Time in 2022 compared to 2021, although there is likely still room for improvement for Oceania and South America locations. Only South America decreased DNS Time in 2022 (Oceania was constant) while the other regions increased, so this is clearly an area where improvements are possible to return to the performance seen in 2021.
Median DNS Time and Total Time by region, 2019-2022:

Region | DNS 2019/ms | Total 2019/ms | DNS 2020/ms | Total 2020/ms | DNS 2021/ms | Total 2021/ms | DNS 2022/ms | Total 2022/ms
---|---|---|---|---|---|---|---|---
Europe | 12 | 244 | 4 | 273 | 6 | 265 | 9 | 144
North America | 12 | 303 | 12 | 299 | 12 | 302 | 13 | 121
South America | 12 | 739 | 12 | 674 | 10 | 660 | 8 | 292
East Asia | 12 | 663 | 12 | 604 | 10 | 724 | 11 | 298
South Asia | 12 | 758 | 12 | 766 | 10 | 773 | 16 | 401
Oceania | 12 | 846 | 12 | 849 | 9 | 909 | 9 | 368
Median Connect Time by region, 2019-2022:

Region | 2019 Connect Time/ms | 2020 Connect Time/ms | 2021 Connect Time/ms | 2022 Connect Time/ms
---|---|---|---|---
East Asia | 32 | 31 | 12 | 4
Europe | 12 | 13 | 10 | 5
North America | 16 | 18 | 13 | 10
Oceania | 24 | 57 | 14 | 3
South America | 107 | 113 | 5 | 3
South Asia | 63 | 62 | 40 | 23

Figure A9: Median DNS Time by region, 2019-2022
We see the same behavior as noted for clouds: a slight upward trend in DNS Time from mid-2021, with the clear quantized pattern seen from mid-2018 no longer evident.

Figure A10: Comparison of Median Total Time by year by region, 2019-2022

Figure A11: Comparison of Median DNS Time by year by region, 2019-2022

Figure A12: Comparison of Median Connect Time by year by region, 2019-2022

Figure A13: Median DNS Time by Region by Year Relative to Oceania January 2019 – October 2022
Fastest and Slowest Cloud Locations for Time to Connect

Year | Fastest Location | Connect Time/ms | Slowest Location | Connect Time/ms
---|---|---|---|---
2019 | AWS US-East (N. Virginia) | 1.12 | AWS South-America East (Sao Paulo) | 111.88
2020 | AWS US-East (N. Virginia) | 1.42 | AWS Brazil-South (Sao Paulo State) | 115.94
2021 | AWS US-East (N. Virginia) | 1.07 | IBM Cloud Asia-Pacific Southeast (Sydney) | 68.61
2022 | AWS US-East (N. Virginia) | 1.23 | AWS India West | 50.68
Glossary of Terms
4XX CLIENT-SIDE WARNING – 4xx HTTP status codes are generated when a request is made to an endpoint that does not exist or for which the user lacks the appropriate authorization. Because these types of issues indicate that the web server receiving the request is behaving as expected, 4xx client-side warnings generally should not be included when determining the performance and quality of an API endpoint.
5XX SERVER ERROR – A 5xx server error is an actual reported error from the application server hosting the APIs.
95TH PERCENTILE – The latency of the slowest 5% of calls expressed in milliseconds.
99TH PERCENTILE – The latency of the slowest 1% of calls expressed in milliseconds.
AGENT – The APImetrics software agent runs at various cloud locations around the world enabling synthetic calls to be generated as if they were being made by an end user or partner.
API – Although an Application Programming Interface (API) is a general concept in computer systems, in the current context we are concerned only with web APIs. A user makes an HTTP request to a published API endpoint. The request causes the web server that receives the request to return a payload containing information in a specified format or cause the state of some remote resource to be changed. APIs can thus be used to exchange useful data and information between systems.
API CALL – An API call is a single HTTP request made to a particular endpoint. Details of the request and the response are stored by APImetrics for further analysis to determine the performance and quality of the endpoint.
AUTHENTICATION AND AUTHORIZATION – Access to a particular API endpoint may depend on validating the identity of the requesting party and whether it has been granted the appropriate authorization. This might involve encrypted passwords or tokens generated and managed through a protocol such as OAuth 2.0 supplemented by a specification such as FAPI (Financial-grade API).
AVAILABILITY – Closely linked to pass rate. Strictly, availability should always be higher than the pass rate: calls to an endpoint may fail because of authentication and authorization issues or because the request is malformed, while the endpoint itself is still available. APImetrics analyzes the results, calculates the pass rate, and estimates availability.
AWS – Amazon Web Services – observer agents running on AWS servers in different regions.
AZURE – Azure from Microsoft – observer agents running on Microsoft servers in different regions.
CASC SCORE – Cloud API Service Consistency (CASC) is an APImetrics-proprietary patented technology that combines various measures of API performance, such as availability, latency, reliability, and the number of outliers, benchmarked against our unrivalled collection of historical API call records, to give a single blended metric much like a credit rating. The CASC score lets you see at a glance the quality of an API endpoint, whether it is getting better or worse, and how it compares to other endpoints.
CDN – Content Delivery Network – used by IT systems to optimize the performance of their APIs and websites.
CLOUD PROVIDER – An organization that provides a commercial service hosting applications on servers. Well-known cloud providers include Google Cloud Platform (Google), Amazon Web Services (AWS), Microsoft Azure (Azure), and IBM Cloud (IBM), all of which have many locations around the world.
CLOUD – Distributed computing service using server resources from providers such as Amazon, Microsoft, Google and IBM:
- AWS – Amazon Web Service
- Azure – Microsoft Cloud Services
- Google – Google Cloud Compute
- IBM – IBM Cloud Services
CMA9 BANKS – The CMA9 are nine large UK banks that are mandated by the Competition and Markets Authority (CMA) to expose certain Open Banking APIs and regularly return certain reports on the performance of the APIs. The banks are Allied Irish Bank, Bank of Ireland, Barclays, Danske, HSBC, Lloyds Group, Nationwide, NatWest Group, and Santander.
CONFIGURATION – Internal and external network configuration, such as load balancers at the API gateway that direct requests to specific IP addresses, can have a significant impact on API performance and quality. For instance, problems with external configuration such as routing tables can cause requests to be misdirected, and load balancers can direct requests to IP addresses that do not support a particular service.
DNS LATENCY – DNS (Domain Name System) is a global service that identifies where a particular service is located on the internet. The lookup time is the time taken for the cloud service making the API call to identify where the target server is and route the request. The different techniques used for the lookup task will affect service quality and appear as latency.
DNS LOOKUP TIME – The time the cloud provider Domain Name System (DNS) host takes to resolve the URI for the API.
DNS – Domain Name System – the global system that allows the internet to identify the location of a specific internet service via a human-readable name.
DOWNLOAD TIME – The time taken for the response to be downloaded from the web server to the agent.
ENDPOINT – The Uniform Resource Identifier (web address) that is called when you make an API call. For the API call to work, you need the URI plus the parameters of the call plus security. This task is different from simply looking up the URL of a website, where often just the URI is needed.
FAILURE RATE – The proportion of calls made to an API endpoint that return an unexpected response.
FAPI – Financial-grade API (FAPI) is a technical specification that the Financial-grade API Working Group of OpenID Foundation has developed. It uses OAuth 2.0 and OpenID Connect (OIDC) as its base and defines additional technical requirements for the financial industry and other industries that require higher API security.
GET – The simplest HTTP verb (others are HEAD, POST, PUT, PATCH, and DELETE); it sends a request to an API endpoint that gets a resource, such as a list of account transactions. Parameters and headers allow complex requests to be made with a GET.
GOOGLE CLOUD – Observer agents running on Google servers in different regions.
HANDSHAKE TIME – The time to complete the process that sets up an HTTP connection, which is called a handshake.
HTTP 200 CODES – Generally indicate a passing result (but not always!).
HTTP 400 CODES – Indicate that something has gone wrong somewhere, but the API call has not necessarily failed – these often relate to security codes expiring, but can also indicate that another system has failed. They can be misleading and, with API transactions, often point to other problems.
HTTP 500 CODES – A failure has occurred – either an error code from the service itself, or an indication that something is not working in the back-end infrastructure or that a service is down for some reason.
HTTP STATUS CODE – A standard set of codes used by computer services to identify what is happening.
HTTP – Hyper Text Transfer Protocol – the basic communication system for the internet and APIs.
IBM CLOUD – Observer agents running on IBM servers in different regions.
LATENCY – The time a specific action takes to complete – usually measured in milliseconds – 1 millisecond is 1/1000 of a second – to understand this in human terms, a normal camera flash lasts 1 millisecond and light itself can travel about 300 km (186 mi) in that time.
MEAN – The average value of data in a collection.
MEDIAN TOTAL TIME – The median latency of calls from our observer agents from the point at which the call is initiated to the point at which the last byte is received expressed in milliseconds (1/1000s of a second).
MEDIAN – The mid-point value in a data collection.
METRIC – A measure of some aspect of API endpoint performance, such as availability or the median length of a latency component.
NETWORK INFRASTRUCTURE – The totality of the physical network elements that make up the systems that together comprise the internet. Includes switches, routers, and connectors such as fiber and microwave links.
NON-CONFORMANCE – An API endpoint that does not respond according to its published specification is non-conformant. Typically, this might mean that the return payload has missing fields, contains incorrect information, or the endpoint is generating errors and warnings despite the call being made according to specification.
OBIE – Open Banking Implementation Entity (OBIE) is the UK entity managing standards for Open Banking within the United Kingdom.
OPEN BANKING – A new global paradigm for banking, financial, and payment services that enables innovative new products and user experiences powered by data and information exchange through APIs.
OUTLIERS – The percentage of calls over time that our AI classified as performance outliers.
PASS RATE – The percentage of calls in the given time that either returned an HTTP 200 code or met the criteria set in APImetrics for a successful pass.
PERFORMANCE – The set of metrics such as availability, latency, reliability, and number of outliers that define how an API endpoint has behaved over time.
PROCESSING TIME – How long the target server takes to handle the API call.
PSD2 – Payment Services Directive 2 (PSD2) is a pan-European agreement to open payments and banking services that is applicable to all financial service providers doing business in the EU and United Kingdom. Responsibility for the implementation of regulations lies with each country.
QUALITY – How good an API endpoint is from the end-user perspective. Although this can be challenging to measure, blended metrics such as the APImetrics CASC score provide a quantitative benchmark, allowing organizations to compare the quality of an API endpoint over time, or compare two API endpoints at a glance.
REGION – The global region where the call originates.
RELIABILITY – A reliable API endpoint tends to respond within a narrow range of time. A reliable endpoint may not necessarily be fast, but the variance in its latency will be relatively small.
SERVICE – A software service delivered via cloud APIs.
SPEED – The rate at which data is passed along a connection such as an intercontinental undersea fiber link. The more traffic, the slower the speed of the connection.
SSL HANDSHAKE TIME – The time taken for the Secure Sockets Layer (SSL) handshake to take place.
TCP CONNECT TIME – The time taken for the Transmission Control Protocol (TCP) connection, i.e. for the observer agent to connect to the server hosting the API.
TOTAL API CALL TIME – Used in this document to relate to the entire sequence of making an API call from DNS name lookup, through connection, security handshakes, to the point at which the final result is returned.
TOTAL TIME – The time between a request being made to an API endpoint and the whole of the response being received, including the name lookup (DNS) time.
UPLOAD TIME – How long the observer agent takes to upload the parameters of the call to the target server.
VERSION – APIs are often updated to make changes to the way the endpoints are invoked, or the content of the payload returned. It is important to ensure that the endpoint for the correct version is invoked. Often the URI for the endpoint will contain the version.
APImetrics Has You Covered
APImetrics provides run-time API governance solutions for organizations offering API services across the Financial Services, Open Banking, Telecoms, Software, and IoT sectors. By enabling a holistic, end-to-end view of performance, quality, and functional issues across the API surface, we allow organizations to better serve their customers and end users.
Our patented technology automates the process of producing regulator-ready reports for financial services providers around the world.

Our active monitoring platform integrates with many of the leading developer operations suites and provides an API-centric view of:
- Real-time API performance from more than 80 locations worldwide on four clouds and six continents
- Fully integrated security monitoring designed and built for the needs of the financial services industry
- Machine learning based analysis driven by a database of more than a billion real API calls
- Integrated reporting, analysis, and alerting
- 360-degree visibility with Cloud API Service Consistency (CASC) scoring, allowing for at-a-glance service and competitor comparisons
Contact Us for a Demo
To learn more, check out APImetrics and our free performance dashboards at api.expert and @serinusmonitor on Twitter.