My API testing journey!

In search of a testing approach

APIs. We all encounter them almost daily, whether by designing, developing or consuming them. Most of us have, at one point or another, run into an API that does not behave as advertised, often leading to frustration. But how do we ensure delivery of a quality API? That’s where API testing comes into the picture.

Before we go further, let’s set some expectations here. I’m not a dedicated tester; I’m an integration analyst/architect who was asked to perform testing on APIs. Having been spoiled in the past by having dedicated testers available, I had never put much thought into it, to be honest. So I went searching for an approach that could work, based on my current and past work experiences.

Let’s just say I learned the hard way that a clear testing approach can be really helpful.

Needless to say, I didn’t have much of an approach in the beginning. After development was done, I manually launched some random requests – both happy and unhappy scenarios based on the API specification – and documented the results. When bugs were encountered and fixed, I repeated the same process, which, it turns out, was also time-consuming. It didn’t stop there: when the first consumer started using our API, it turned out my testing hadn’t been extensive enough, as more bugs were detected, leading to a higher project cost.

There had to be a better way to do this, and after some research, two approaches kept coming back:

  • Shift Left, where testing is performed as early as possible and continuously, to avoid bugs and critical issues after development
  • Shift Right, where testing is performed as late as possible (in production) to prepare for the undefined and unexpected, ensuring functioning and performance in a real-world environment

SHIFT LEFT

The definition states that testers should be involved before development starts, allowing them to design tests from a consumer’s point of view. Developers can then develop against these tests, allowing them to meet the consumers’ needs and to find bugs and issues early in the process.

Continuous testing through automation makes this approach even more effective, as it allows us to test as early and as often as possible. Mocking can be used to deal with unavailable systems, if required. The continuous testing and feedback loops make this an ideal approach when working with agile teams.
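To make this concrete, here is a minimal sketch of such an early, consumer-driven test in Python, using pytest with the requests and responses libraries to mock a backend that isn’t available yet. The endpoint URL and payload are hypothetical, invented for illustration.

```python
# pip install requests responses pytest
import requests
import responses


@responses.activate
def test_get_customer_matches_specification():
    # Mock the backend that isn't available yet, returning the
    # response promised by the API specification (hypothetical).
    responses.add(
        responses.GET,
        "https://api.example.com/customers/1",
        json={"id": 1, "name": "Jane Doe"},
        status=200,
    )

    response = requests.get("https://api.example.com/customers/1", timeout=5)

    # Consumer-driven expectations, written before development starts.
    assert response.status_code == 200
    assert response.json()["name"] == "Jane Doe"
```

Tests like these can be written against the specification before any code exists, and developers can then implement until the suite passes.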

SHIFT RIGHT

The definition states that tests should verify stability, performance and usability criteria. Feedback and reviews are collected from targeted users to ensure customer satisfaction. This provides the ability to test usage scenarios and real load levels that aren’t possible to recreate in test environments.

I personally haven’t seen any organization use this approach fully yet. Often, it’s more a flavour of shift right, where production logs are used to help structurally resolve discovered or reported issues.

One example I could find that uses it is Netflix, where real-time device logs are assessed to see whether new deployments have broken anything or affected the user experience. This, in combination with canary deployments, allows them to fix errors before releasing to the general public.

Define your testing scope 

Now that I had learned all about testing approaches, I faced the next question:

What do I want/need to test exactly?

For me it was clear that both integration testing and functional testing had to be covered, as we were responsible for both. Unfortunately, there is no answer that fits every situation and organization.

Imagine you’re virtualizing an application API, making it available to other applications in your organization or to partners outside it. It’s a given that you’ll be responsible for integration tests, but you probably won’t care much about functional testing, as that can be considered the responsibility of the team delivering the application API. Of course, it’s a different story if you expose, for example, a composite API or if you perform message transformation.

Once I had a stable version of my API, I considered doing some performance tests (a minimal sketch follows the list below):

  • Are the response times respected?
  • Can it handle the expected loads?
  • What is the limit before it starts failing?
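As a sketch of how these three questions could be checked, here is a minimal load check in Python using requests and a thread pool. The endpoint, user count and request count are hypothetical, and dedicated load-testing tools obviously go much further than this.

```python
# pip install requests
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://test.example.com/api/orders"  # hypothetical endpoint
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 10


def timed_request(_):
    # Fire one request and measure how long it takes.
    start = time.perf_counter()
    try:
        status = requests.get(URL, timeout=10).status_code
    except requests.RequestException:
        status = 599  # treat network failures as errors
    return status, time.perf_counter() - start


# Simulate many users launching requests at the same time.
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(timed_request, range(CONCURRENT_USERS * REQUESTS_PER_USER)))

durations = sorted(duration for _, duration in results)
errors = sum(1 for status, _ in results if status >= 500)

# Simple answers to the questions above: response times and error rate.
print(f"p95 response time: {durations[int(len(durations) * 0.95)]:.3f}s")
print(f"errors: {errors}/{len(results)}")
```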

Before starting, I aligned with other teams in my organization, as the operations team and underlying applications will often be impacted by these big loads. From past experience I knew it’s preferable to perform these tests on an environment configured similarly to your production environment, to get more realistic results.

Last but not least, security tests were performed on our API. These are a must nowadays: in every situation we want to ensure that only the allowed users or applications have access to the agreed data, and nothing more. I didn’t perform these tests myself; they were executed by our security team, who had the knowledge and expertise on the latest trends and methods used by, for example, hackers to exploit data. Let’s just say it wasn’t a waste of time.

Putting the two together

Looking at my test scope and both approaches, it was clear to me that we fell into the shift left approach. It’s also the one that best matches my client’s current vision on application testing. That doesn’t mean shift right is excluded completely, but more effort will be needed before we can consider it in the future.

Most of our tests seem to fit the shift left approach better, but let’s compare the two:

  • Integration testing: I can’t imagine not having performed these tests at least once before going to production. Even if you’re releasing to a limited audience, you don’t want to be greeted by connection errors and the like.
  • Functional testing: this might work in a shift right approach when releasing to a limited audience, but I can imagine that when, for example, sensitive data is involved, shift left is preferred.
  • Performance testing: personally, I would like to know how the system behaves under stress before going live; however, I could see these tests moving to a shift right approach over time.
  • Security testing: the one test where I don’t see how a shift right approach can work, unless you consider releasing your API to a group of ethical hackers first. I’m not sure many security officers would like that idea.

That said, I still see room for improvement in my team’s shift left approach. It would be nice to get to the point where I can provide a test set, along with the API specification, for our developers to develop against. But we’re not there yet. Improving our logging and monitoring is also a next step, leaving options open for moving some tests to a shift right approach.

What about Testing Tools?

Many different tools exist to perform API testing. Some have free subscriptions, others don’t, but most likely you’ll get more features in return for the extra price paid. I’m not going to list the different options, as there are already plenty of such lists, and in the end you have to choose the one that fits you best.

In my situation there were certain factors to take into consideration:

  • Cost is always a factor, it seems, but apart from licenses, don’t forget the time needed for the rest of the team to get to know the tool. It might be more interesting to use a tool most team members are already familiar with.
  • Separate environments for development, testing, and so on. I learned that some tools offer environment variables, which make it very easy to launch the same request against different environments (see the sketch after this list).
  • Validation options that automatically check whether the response has the correct status code or is in the correct data format can make your life easier. Because let’s face it, we are all human and make mistakes when we have to check this manually over and over again.
  • Automation wasn’t a big factor yet in my case, but I took it into account anyway, as I could see its future value. Several tools allow tests to run on a schedule or as part of your build and deploy chain, which helps identify regressions that might have slipped into your API. In a shift right approach this becomes an important factor, as you could consider periodically running automated tests in production to ensure everything keeps working.
  • Load or stress testing, as I found that not every tool offers a solution for simulating different users launching requests at the same time.
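To illustrate the environment and validation points above, here is a minimal sketch in Python using pytest and requests; the base URLs, endpoint and response fields are hypothetical. Most commercial tools offer the same idea through environment profiles and built-in assertions.

```python
# pip install requests pytest
import os

import requests

# Hypothetical base URLs, one per environment.
BASE_URLS = {
    "dev": "https://dev.example.com/api",
    "test": "https://test.example.com/api",
}

# Select the target environment at run time, e.g. TARGET_ENV=test pytest
BASE_URL = BASE_URLS[os.environ.get("TARGET_ENV", "dev")]


def test_get_order_is_valid():
    # The same request can be launched against any environment.
    response = requests.get(f"{BASE_URL}/orders/42", timeout=5)

    # Automated validations: status code, data format and content,
    # replacing the repetitive manual checks.
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")
    assert "orderId" in response.json()
```

A suite like this can also be scheduled or hooked into a build and deploy chain, which covers the automation factor as well.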

Conclusion

Based on my own experience, I would say that most organizations follow the shift left approach, but in truth they might even be somewhere in the middle, as one way or the other production environments are always being monitored. So why not a combination of both?

From my point of view, you need a clear logging & monitoring setup, in combination with canary deployments, in order to consider moving more towards the shift right approach. Experience tells me that a proper logging & monitoring setup is something that evolves over time and, unfortunately, is often neglected at the start, so it’s something you grow into.

There is no fixed answer as to which approach or tool is the better one, as every situation and need is different. One thing I do know for sure is that testing done well takes time and effort, but the cost of not doing it outweighs the cost of doing it.
