We’ve often made the point that 200 OK is not necessarily OK. But I don’t think the point can be made enough: 200 OK is not necessarily OK – at all!
Just because you are getting a 200 back doesn’t mean that there isn’t a problem with the API. You need to check what you get back.
Sometimes that’s easy. You know exactly what you are going to get back. In that case, it might be enough just to check the size of the returned payload. But in other cases, the size might change.
If I am interested in the weather forecast for Preston or Seattle, it is likely to be rain, but just occasionally it might not be. And the temperature might be in double figures or not. So it is best to check whether the fields you expect are present in the returned JSON and whether the values in those fields make sense. In some cases, only certain values should ever be returned; if a field contains anything else, there’s a problem.
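To make that concrete, here is a minimal sketch of those field-level checks in Python. The field names (`condition`, `temperature_c`) and the set of allowed conditions are assumptions for illustration, not the schema of any particular weather API.

```python
import json

# Illustrative whitelist of values the "condition" field should take.
ALLOWED_CONDITIONS = {"rain", "cloud", "sun", "snow", "fog"}

def check_forecast(payload: str) -> list[str]:
    """Return a list of problems found in the response body; empty means OK."""
    problems = []
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return ["body is not valid JSON"]
    # Are the fields we expect actually there?
    for field in ("condition", "temperature_c"):
        if field not in data:
            problems.append(f"missing field: {field}")
    # Do the values make sense?
    if data.get("condition") not in ALLOWED_CONDITIONS:
        problems.append(f"unexpected condition: {data.get('condition')!r}")
    temp = data.get("temperature_c")
    if not isinstance(temp, (int, float)) or not -60 <= temp <= 60:
        problems.append(f"implausible temperature: {temp!r}")
    return problems
```

A 200 with the body `{"condition": "rain", "temperature_c": 11}` passes cleanly, while a 200 carrying `{"condition": "error"}` fails two checks even though the status code looked fine.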
But there are more complicated situations.
Let’s say I am exercising an API that returns a list of available hotel rooms for the next day in Portland. The number of rooms that are available and the hotels they are in will vary constantly, and so will the size of the payload. The information in the fields might vary a lot too. For instance, there might be many ways of describing an effectively identical room, and the freeform information about the facilities in the room or at the hotel can vary greatly.
So how do you know that you are getting the correct payload without manually examining each returned response?
You can use a heuristic. Is the payload over a certain size? Does the first field of the first entry of the payload have the right kind of form?
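Those two heuristics might look something like this in Python. The field name `hotel` and the size threshold are illustrative assumptions, not a real hotel API’s schema.

```python
import json

def passes_heuristics(body: str, min_size: int = 200) -> bool:
    """Cheap sanity checks on a hotel-listing response."""
    # Heuristic 1: a real listing should be over a certain size.
    if len(body) < min_size:
        return False
    try:
        rooms = json.loads(body)
    except json.JSONDecodeError:
        return False
    if not isinstance(rooms, list) or not rooms:
        return False
    # Heuristic 2: does the first field of the first entry have
    # the right kind of form (here, a non-empty hotel name)?
    first = rooms[0]
    hotel = first.get("hotel") if isinstance(first, dict) else None
    return isinstance(hotel, str) and len(hotel) > 0
```

Heuristics like these won’t catch every bad response, but they are cheap enough to run on every call and will flag the obvious failures: empty lists, truncated bodies, and payloads with the wrong shape.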
You could also apply deep learning here. A neural net could learn what the expected response looks like by being fed a large collection of correct payloads, and then score each new response on how closely it matches that expectation.
We are planning on building this functionality into APImetrics, but you could do it right now by extracting the responses from APImetrics via our API and then running your own model on them to see which ones are problematic.
But sometimes the expected response is very terse, perhaps just an “OK” or, sometimes, just silence (and a 200 OK). In that case, how do you know that the call worked? Well, you could use a workflow.
Let’s say you create a resource and then delete it. The API might just return a bare 200 when the resource is created. That’s not good practice, API designers, but it is often what happens. You could then list the resources, get the ID of the resource you just created, and get the details of that resource; these should match what you thought you were creating. You can then delete the resource and do another GET to confirm that the resource is no longer there. You can thus use the workflow to see whether each stage of the process has actually succeeded or not.
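The create → list → get → delete → get workflow can be sketched as below. The in-memory `FakeResourceAPI` stands in for a real HTTP client so the sketch is self-contained; its method names and behavior (a bare 200 on create, `None` playing the role of a 404) are assumptions for illustration.

```python
import itertools

class FakeResourceAPI:
    """In-memory stand-in for an API that returns a bare 200 on create."""
    def __init__(self):
        self._store = {}
        self._ids = itertools.count(1)

    def create(self, data):
        rid = str(next(self._ids))
        self._store[rid] = dict(data)
        return 200          # a 200 and no body, as described in the post

    def list(self):
        return list(self._store)

    def get(self, rid):
        return self._store.get(rid)   # None plays the role of a 404

    def delete(self, rid):
        self._store.pop(rid, None)
        return 200

def verify_lifecycle(api, data) -> bool:
    """Check every stage of the workflow, not just the status codes."""
    before = set(api.list())
    if api.create(data) != 200:
        return False
    new_ids = set(api.list()) - before
    if len(new_ids) != 1:             # exactly one new resource should appear
        return False
    rid = new_ids.pop()
    if api.get(rid) != data:          # details should match what we sent
        return False
    api.delete(rid)
    return api.get(rid) is None       # and it should really be gone
```

If any stage silently failed — the create didn’t persist, the details don’t match, or the delete left the resource behind — the workflow check catches it, even though every individual call returned 200.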
The moral of this story is plain to see. Your APIs are only as “200 OK” as what the user gets back. If it isn’t what the user is expecting, the call has failed. So, don’t trust that 200 OK. You should do at least some checks to see if what is coming back is what is supposed to be coming back.