APIstrat was a blast, of course. Portland is like the smaller, quieter sibling of Seattle. It even has a Pioneer Square and a three-dimensional model of the Washington state metropolis. And the weirdness is hidden away. It would be a good place to live.
One thing I hadn’t come across before in the API world was mocking. But it was all over the place, albeit at a somewhat subliminal level.
“Of course, mocking.”
It makes sense. An API isn’t always going to be there yet when you need to test it, so it makes sense to simulate it. You’d go back and test it again when the real API becomes available. You would do that, wouldn’t you?
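As a sketch of the idea, you can simulate a not-yet-available API with Python's `unittest.mock` and test your client code against the fake (the `get_user_name` function and the `/users/{id}` endpoint here are hypothetical examples, not anything from a real API):

```python
from unittest import mock

def get_user_name(client, user_id):
    # Calls a (hypothetical) API endpoint and extracts a field.
    response = client.get(f"/users/{user_id}")
    return response.json()["name"]

# The real API isn't there yet, so mock it.
fake_client = mock.Mock()
fake_client.get.return_value.json.return_value = {"name": "Ada"}

assert get_user_name(fake_client, 42) == "Ada"
fake_client.get.assert_called_once_with("/users/42")
```

The test passes against the mock; the point of the paragraph above is that it should be re-run against the real client once the real API ships.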
There was a lot of discussion of testing. But what worries me is that there wasn’t much (or any, really, except when I raised the issue) discussion of monitoring.
An API is a complex piece of software. It’s going to have bugs. Sure, you’ve tested it thoroughly before publishing, and it passed all the test cases. But that doesn’t guarantee it’ll keep working perfectly forever.
An API is dynamic. You don’t know what users are going to throw down the wire at the API, and you don’t know what the back end might dredge up.
For instance, if you’re pulling stuff out of a database, you might not always have complete control over what people have put in it. Sure, they might be supposed to follow the spec. But how do you know what they’re actually doing right now?
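To make that concrete, here is a minimal sketch of validating records as they come out of the database instead of trusting that writers followed the spec (the field names and rules are hypothetical):

```python
def validate_record(record):
    """Return a list of spec violations for one (hypothetical) user record."""
    problems = []
    if not isinstance(record.get("id"), int):
        problems.append("id must be an integer")
    email = record.get("email")
    if not isinstance(email, str) or "@" not in email:
        problems.append("email must contain '@'")
    return problems

# A record that was *supposed* to follow the spec, but didn't:
print(validate_record({"id": "17", "email": "not-an-address"}))
# → ['id must be an integer', "email must contain '@'"]
```

Running a check like this against live responses, not just test fixtures, is one way to catch what the back end dredges up.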
So, API testing needs to be an ongoing process. It’s not just a question of test it, ship it. You need to be monitoring your APIs proactively. If something goes wrong – and there are very few software systems ever written that don’t contain bugs – you want to know as soon as possible. Ideally before your customers know!
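The shape of proactive monitoring can be sketched in a few lines: run a health check on a schedule and raise an alert the moment one fails. The `check` and `alert` callables below are hypothetical placeholders for a real probe (an HTTP request, say) and a real notification channel (a pager, a Slack hook):

```python
import time

def monitor(check, alert, interval=60, max_cycles=None):
    # Repeatedly run a health check; alert on every failure.
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        ok, detail = check()
        if not ok:
            alert(detail)  # e.g. page the on-call engineer
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval)

# Simulated probes: the API "breaks" on the second check.
results = iter([(True, "200 OK"), (False, "500 Internal Server Error")])
alerts = []
monitor(lambda: next(results), alerts.append, interval=0, max_cycles=2)
print(alerts)  # → ['500 Internal Server Error']
```

The alert fires on the monitor's schedule, which is the whole point: you hear about the failure before the customer does.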