Testing! Testing! Testing! APImetrics for Developers and Testers
Our friends at ProgrammableWeb have posted a fascinating series of articles on API testing by the estimable Justin Rohrman. There are 11 articles in total in the series, some of which are reviews of API test products including one on APImetrics. But it’s the four articles posted this week on API testing strategy that I’d like to talk about today.
We’ve talked here before about one of the main use-cases for APImetrics. You’re a software team. You’ve designed some software that exposes an API, coded it, tested it and shipped it. The API worked when you tested it in the development environment. But does it still work now that it is in the production environment (which probably has a very different size and velocity to the development environment) and is being used by real users in potentially very different circumstances to those under which it was tested?
That’s where APImetrics comes in. You can see how the API performs from different geographies and locations in terms of availability and latency, and whether it really returns what it is supposed to when interacting with the systems in the live stack.
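To make that concrete, here is a minimal sketch of what a single synthetic check involves. This is not APImetrics’ actual implementation, and the function names and latency budget are invented for illustration; the point is that a call only “passes” if the status code, the response shape, and the latency are all acceptable — not merely because the call didn’t error.

```python
import json
import time
import urllib.request


def evaluate(status, body, latency_ms, expected_keys, latency_budget_ms=500):
    """Decide whether one synthetic call passed: a healthy API returns
    the right status, the fields callers depend on, AND acceptable latency."""
    return (
        status == 200
        and all(k in body for k in expected_keys)
        and latency_ms <= latency_budget_ms
    )


def check_api(url, expected_keys, timeout=5.0, latency_budget_ms=500):
    """One synthetic probe: time the call, parse the body, evaluate it."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = json.loads(resp.read())
        status = resp.status
    latency_ms = (time.monotonic() - start) * 1000
    return evaluate(status, body, latency_ms, expected_keys, latency_budget_ms)
```

Separating the pure `evaluate` step from the network call makes the pass/fail rule itself easy to test, and makes it obvious that correctness and latency are checked together rather than in isolation.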
What matters is the end-user, so you should always be testing from the end-user’s perspective. As Justin indicates in the first article in his series, that is what testers do, or at least part of what they do (and it is what they should be thinking about at all times). As I discussed in an earlier article, the term API used to mean something different from what it (generally) means today.
But we have moved on from the days of monolithic system architectures. In a modern API-oriented architecture, functionality is abstracted into the APIs. If you have a user interface that invokes the API, it makes sense to test the API thoroughly before you start testing the UI. Of course, in practice, you could try to test both the API and the UI simultaneously by using mocked APIs.
The danger there, though, is that the mocked API and the real API may well behave differently. It’s not just that the real API might be slower; it’s how it is slower (the number of outlier calls, for instance) that can affect the user experience. And the mocked API could easily have bugs of its own.
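One way to see that difference is to look past the average. The latency samples below are invented for illustration: the mock responds with near-constant latency, while the “real” API has a long tail of outlier calls. The median alone would not reveal much, but the 95th percentile tells a very different story.

```python
import statistics


def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]


# A mocked API typically responds with near-constant latency...
mock_ms = [5, 5, 6, 5, 5, 6, 5, 5, 6, 5] * 10
# ...while a real API in a live stack has outlier calls in its tail.
real_ms = [40, 45, 42, 48, 44, 41, 43, 46, 380, 1200] * 10

for name, samples in [("mock", mock_ms), ("real", real_ms)]:
    print(name, "median:", statistics.median(samples),
          "p95:", percentile(samples, 95))
```

Users experience the tail, not the median: one slow call out of twenty is the call a user remembers, and a mock that never produces that call cannot exercise how the UI copes with it.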
There are risks associated with Shift Left (testers end up spending a lot of time doing low-level tests and forget about the user) and Test-Driven Design (the code never fails, but only because you are trapping all the exceptions; over-vigorous exception trapping can lead to an API returning a 200 HTTP status code when a 5xx status is warranted). But the biggest risk is assuming that because a test passed once, it will stay passed.
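The exception-trapping trap can be illustrated with a pair of hypothetical request handlers, sketched here as plain Python functions rather than in any particular web framework. Both survive a backend failure, but one hides it behind a 200 while the other reports an honest status code that clients and monitors can act on.

```python
import json


def lookup_user(user_id, db):
    # Simulated backend call that can fail (KeyError, TypeError, ...).
    return db[user_id]


def handler_swallows(user_id, db):
    """Anti-pattern: over-vigorous exception trapping. The client sees a
    200 even though the call failed, so status-code monitoring reports
    the API as healthy when it isn't."""
    try:
        user = lookup_user(user_id, db)
        return 200, json.dumps({"user": user})
    except Exception as exc:
        return 200, json.dumps({"error": str(exc)})  # failure hidden behind 200


def handler_honest(user_id, db):
    """Better: still trap the exception, but surface the failure as the
    appropriate 4xx/5xx status so it is visible outside the process."""
    try:
        user = lookup_user(user_id, db)
        return 200, json.dumps({"user": user})
    except KeyError:
        return 404, json.dumps({"error": "user not found"})
    except Exception:
        return 500, json.dumps({"error": "internal error"})
```

A synthetic check that only asserts `status == 200` would pass against the first handler forever; a check that also validates the response body is what catches this class of bug.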
Now, testers know about regression testing. If the code changes, you need to make sure that the changes haven’t broken anything. But with an API, you are often dealing with a stack that is outside your domain of control. Something that works now might not work in ten minutes or ten hours or ten days or ten weeks or ten months. Which is one area where APImetrics can come in.
Typically, the synthetic API calls that APImetrics makes originate from locations in commercial cloud services. But you can deploy the APImetrics agent within your organization’s intranet, either installed on a convenient server or on a dedicated machine such as a Raspberry Pi. The code might not change, but the backend that the code is interacting with may well change, so understanding whether there are variations in behavior over time is vital in building up a complete picture of the API.
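Detecting that kind of variation over time can be as simple as comparing a recent window of measurements against an earlier baseline. This is a toy sketch with invented numbers and an arbitrary threshold, not a description of how APImetrics does it:

```python
import statistics


def drifted(baseline_ms, recent_ms, threshold=1.5):
    """The code hasn't changed -- but has the backend? Flag drift when the
    median of a recent window of latencies exceeds the baseline median
    by more than `threshold` times."""
    return statistics.median(recent_ms) > threshold * statistics.median(baseline_ms)


baseline = [50, 52, 49, 51, 50]        # measurements from last week
recent_steady = [53, 51, 50, 52, 54]   # today: roughly unchanged
recent_slower = [140, 150, 135, 160, 145]  # today: the backend got slower
```

Comparing windows of measurements, rather than individual calls, keeps a single outlier from raising a false alarm while still catching a genuine shift in the backend’s behavior.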
But even if an API is designed purely for internal use, the users are going to be on the other side of the gateway. Like the stack, the users exist in a domain that is outside the control of the developers. But it is their experience that will determine whether the product is a success. And just because you don’t control the domain doesn’t mean you can’t test it.
If your users are going to be in the cloud somewhere, then you will need to test how the quality of the API varies between geographies and cloud services; performance often varies widely between cloud locations. If possible, you want to open up the development and test environments to properly authenticated and authorized calls from the exterior, so that APImetrics can actively monitor the APIs while they are still in development and testing. This gives a much more rounded and reliable picture of overall API behavior and a higher degree of confidence that everything really is working as it should from the user’s perspective before the product is deployed to the production environment. You could also do this kind of testing using APImetrics from within the corporate network.
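As a small illustration of what that multi-location picture looks like, here is a sketch that aggregates per-region latency samples and flags regions that lag the fastest one. The region names, numbers, and threshold are all invented for the example:

```python
import statistics

# Hypothetical latency samples (ms) for the same API call, probed from
# several cloud regions -- the shape of data a multi-location monitor collects.
samples_by_region = {
    "us-west": [48, 52, 50, 47, 55],
    "eu-central": [92, 88, 95, 90, 410],
    "ap-south": [210, 220, 215, 230, 225],
}


def regional_report(samples_by_region, slow_factor=2.0):
    """Flag any region whose median latency exceeds slow_factor times
    the fastest region's median."""
    medians = {r: statistics.median(s) for r, s in samples_by_region.items()}
    best = min(medians.values())
    return {
        r: {"median_ms": m, "slow": m > slow_factor * best}
        for r, m in medians.items()
    }


report = regional_report(samples_by_region)
```

Using the median here deliberately ignores the single 410 ms outlier in eu-central; a fuller report would track tail percentiles per region as well, since (as noted earlier) the outliers are what users actually feel.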
Just as a testbed environment might well differ in size and speed from a development environment, a testbed environment might well differ as much again from the live production environment. It’s this realization that means API testing must be a continuous process. Latencies can be very different between the three kinds of environment, and the very nature of the backend or stack can be radically different.
In a development environment, the backend might be completely mocked, but even if the testbed or sandbox is connected to a real stack, the stack might be faster or the data it contains cleaner and less varied than in the live production environment. And both the stack and the network in which the users reside can change at any time without warning.
So to make sure you are maximizing the user experience and the APIs really are always working, at each stage of the software cycle you should be actively monitoring your APIs. We can help you with that at APImetrics.