I just want to reiterate something that I wrote a blog post about last April. When it comes to monitoring your APIs, it really isn’t a good idea to rely on cURL scripts.
Let’s say you just want a heartbeat for an API. OK, you write a cURL command that exercises the API. Simples. You set it up as some kind of cron job. You are probably going to need some kind of log.
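That first step really is simple. A minimal sketch, assuming a hypothetical health endpoint and log path (both placeholders, not real APImetrics URLs):

```shell
#!/bin/sh
# heartbeat.sh — minimal cURL heartbeat, run from cron, e.g.:
#   */5 * * * * /usr/local/bin/heartbeat.sh
# API_URL and LOG_FILE are illustrative placeholders.
API_URL="${API_URL:-https://api.example.com/health}"
LOG_FILE="${LOG_FILE:-/tmp/api-heartbeat.log}"

# -s silences progress output, -o discards the body,
# -w prints just the HTTP status code (000 if the request fails outright)
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$API_URL")
printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$status" >> "$LOG_FILE"
```

Each run appends one timestamped status line to the log, which is about all a heartbeat needs on day one.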
What happens if the API doesn’t return a 200?
So you will need to hook the script up to some kind of alerting system. And then there’s the question of where the script runs. Your users might not be inside your corporate network, so you’ve got to figure out some way of running the script from a cloud service.
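Handling the non-200 case and wiring up the alert might look something like this sketch, where `ALERT_WEBHOOK` is a placeholder for whatever your alerting system exposes (a Slack incoming webhook, say):

```shell
#!/bin/sh
# Sketch: treat anything other than HTTP 200 as a failure and fire a webhook.
# API_URL and ALERT_WEBHOOK are illustrative placeholders.

is_healthy() {
  [ "$1" = "200" ]
}

alert() {
  # Fire-and-forget JSON POST to the alerting webhook.
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "{\"text\": \"API heartbeat failed: HTTP $1\"}" \
    "$ALERT_WEBHOOK" > /dev/null
}

status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
  "${API_URL:-https://api.example.com/health}")
is_healthy "$status" || alert "$status"
```

Note that this still only tests reachability from wherever cron happens to run it, which is exactly the problem with the corporate-network vantage point.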
So, by the time you’ve done all that, the cURL script isn’t quite as simple as it started out. Still, there it is. You have an API heartbeat.
Except now your boss wants to see what the rate-determining component of the latency is. Or she wants to test another API, but that one handles token refresh in some weird way. Or she wants to see whether the returned payload contains a particular field. Or do some back-to-back tests. Or import some API definitions. Or raise an alert via a webhook if something goes wrong. Or do some statistical analysis on the logs. Or see what a reasonable Service Level Objective might be for the API. Or whether the API has met the SLO.
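To be fair, cURL will take you part of the way on the first of those asks: its `--write-out` variables can break a single request’s latency into phases. A sketch (the URL is illustrative):

```shell
#!/bin/sh
# Sketch: split one request's latency into DNS, TCP, TLS, and
# time-to-first-byte using curl's built-in -w timing variables.
timings=$(curl -s -o /dev/null --max-time 10 \
  -w 'dns=%{time_namelookup}s tcp=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s' \
  "https://example.com/")
echo "$timings"
```

But that is one sample from one location; turning it into a view of the rate-determining component over time means collecting, storing, and analysing those numbers yourself, and cURL won’t help with any of the other items on the list.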
Which, of course, is why you shouldn’t write a cURL script and should instead turn to APImetrics for all your active API monitoring needs. We can offer you a testing microprocurement package that you can have up and running today for less than $10k, including all the setup and a year of ongoing monitoring for multiple APIs. A cURL script is only a step up from “nonitoring”. Choose the state of the art, choose the best of breed, choose APImetrics.