Integration Testing of Microservices, Part 1

By Gerald Mücke | August 10, 2017

Integration testing is the second most important phase in Continuous Integration and Delivery. It is the first time multiple components interact with each other. The current trend towards microservice software architectures requires new thinking regarding integration testing of distributed systems. In this article I want to reflect on the challenges of testing those architectures.

In monolithic applications, components are tightly bundled. Incompatibilities can often be detected as early as compilation. Components interact with each other through messages that are transmitted locally, i.e. in the same JVM. Messaging is mostly synchronous. The type system of those messages is usually strong and static between releases. Networking is used mostly for LDAP or database connections.
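To illustrate, here is a minimal, hypothetical sketch (the interface and class names are invented for this article) of how a monolith catches incompatibilities early: both sides of a call share a statically typed interface inside one JVM.

```java
// Sketch of a monolith: both components live in the same JVM and share a
// statically typed interface, so an incompatible change (e.g. renaming
// totalFor or changing its parameter type) fails the callers' compilation.
interface OrderService {
    double totalFor(String orderId);
}

class BillingComponent {
    private final OrderService orders;

    BillingComponent(OrderService orders) {
        this.orders = orders;
    }

    double invoice(String orderId) {
        // synchronous, local call: no network, no serialization
        return orders.totalFor(orderId) + 5.0; // plus a flat shipping fee
    }
}

public class Monolith {
    public static void main(String[] args) {
        OrderService stub = id -> 100.0; // stand-in for the real component
        System.out.println(new BillingComponent(stub).invoice("A-1")); // 105.0
    }
}
```

If the `totalFor` signature changes, every caller breaks the build, which is exactly the kind of early feedback a distributed deployment no longer gives you.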

Basic Infrastructure Characteristics

Modern microservice architectures are different.

Components are distributed over different nodes. Each node might be a distinct physical machine, but it may also be co-located with other services in a virtual environment or run as a separate process in the same operating system. Each component may have different deployment characteristics such as scalability, capacity, availability, fault-tolerance and performance.

The network is the default medium for communication between components. Messages that are transmitted over the network are serialized and often use a format like JSON that doesn’t enforce strong typing. The format of those messages as well as the protocol behind them may change at runtime due to the deployment of new component versions. The exchange of these messages is often done asynchronously using queues with varying characteristics, like availability, size, persistence or delivery guarantees.
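A small, hypothetical sketch of how the lack of strong typing can bite (a plain `Map` stands in for a parsed JSON object; the field names are invented): when a producer renames a field, the consumer still compiles and runs, it just reads a key that no longer exists.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of weak typing in JSON messaging: a Map stands in for a parsed JSON
// object. The producer renamed "total" to "orderTotal" in v2; the v1 consumer
// still compiles and runs, it just silently reads a key that no longer exists.
public class WeakTyping {

    static Map<String, Object> producedV2() {
        Map<String, Object> msg = new HashMap<>();
        msg.put("orderTotal", 100.0); // v1 called this field "total"
        return msg;
    }

    public static void main(String[] args) {
        Map<String, Object> msg = producedV2();
        Object total = msg.get("total"); // consumer still expects the v1 name
        System.out.println(total); // null: no compile error, no exception, just a wrong value downstream
    }
}
```

Unlike the monolith, nothing fails loudly here; the error only surfaces later, somewhere downstream.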

Finding Error Hotspots

A common approach for testing is to start with common failure hotspots. And microservice architectures have plenty of them.

The biggest challenge of microservice architectures, or distributed systems in general, is their complexity. There are good technical and business reasons to choose such an architecture: simpler deployments, independent lifecycles, clearly defined domain borders and interfaces, shorter time to market, increased resilience and improved scalability. But one huge misconception is that microservices are less complex.

Actually, the opposite is the case. The complexity has moved one level down, where it is much harder to deal with: more technical components are involved that are outside the development process, component lifecycles are decoupled, and the asynchronous communication pattern introduces an entirely new class of errors.

Further, the CAP theorem articulates a fundamental limitation of distributed systems: a data store can provide at most two of the three guarantees Consistency, Availability and Partition tolerance.

On top of all this are the Fallacies of Distributed Computing, which are a kind of distributed-systems version of Murphy’s Law:

  • The network is reliable.
  • Latency is zero.
  • Bandwidth is infinite.
  • The network is secure.
  • Topology doesn’t change.
  • There is one administrator.
  • Transport cost is zero. (not only monetary, but also regarding bandwidth, capacity, energy)
  • The network is homogeneous.
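As an example of why the first fallacy matters for testing: every remote call needs an explicit failure strategy. The following is a minimal, hypothetical retry sketch; a real system would add backoff, jitter and circuit breakers.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Sketch of a failure strategy for the first fallacy ("the network is
// reliable"): retry a remote call a bounded number of times before giving up.
public class Retry {

    static <T> T withRetries(Callable<T> call, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e; // assume a transient network failure and try again
            }
        }
        throw last; // all attempts failed: surface the error to the caller
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // simulated flaky endpoint: fails twice, succeeds on the third attempt
        String result = withRetries(() -> {
            if (++calls[0] < 3) {
                throw new IOException("connection reset");
            }
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Such a strategy is itself a test target: what does the caller observe when all retries are exhausted, and what happens to in-flight messages meanwhile?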

Testing Distributed Systems

While for most developers testing equals writing automated tests, testers know it is more than that. Developers usually focus on functionality and on proving that it works, which is fine. Testers, in contrast, try to gain new insights and information about the system, especially by breaking it: not to show that it is not working, but to show when and how the system fails. With this information, stakeholders can make an informed decision.

For distributed systems such as the microservice architectures described above, the main approach would be to:

  1. Prioritize business capabilities – Which use cases are most critical?
  2. Identify components – What components – hardware and software – are required for these business capabilities?
  3. Assess which failures can occur in these components – there is an infinite number of things that can go wrong (in an infinite number of ways)
  4. Assess the risk of failure – What is the likelihood of a failure and what is its impact?
  5. Start with high-risk failures, try to induce them (this is the tricky part), and observe how the system behaves
  6. If the behavior is somehow unexpected, describe the steps to reproduce it and bring it up for discussion (it could be a bug, an emergent feature, or simply unspecified behavior)
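Step 4 can be sketched as a simple scoring exercise. The failure modes, scales and scores below are invented for illustration; the point is only that ranking by likelihood times impact tells you where to start in step 5.

```java
import java.util.Comparator;
import java.util.List;

// Sketch of step 4: score each failure mode by likelihood x impact (both on a
// 1..5 scale here) and rank them, so testing can start with the riskiest ones.
public class RiskRanking {

    record FailureMode(String description, int likelihood, int impact) {
        int risk() {
            return likelihood * impact;
        }
    }

    public static void main(String[] args) {
        List<FailureMode> modes = List.of(
                new FailureMode("message queue loses messages", 2, 5),
                new FailureMode("service discovery returns a stale node", 4, 3),
                new FailureMode("clock skew between nodes", 3, 2));

        // print the failure modes ordered by descending risk score
        modes.stream()
                .sorted(Comparator.comparingInt(FailureMode::risk).reversed())
                .forEach(m -> System.out.println(m.risk() + "  " + m.description()));
    }
}
```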

Steps 1, 2 and 4 are individual to each project and depend on many factors such as business goals, requirements and technologies - in short, the context.

In steps 3 and 5, the knowledge and experience of the tester is a crucial factor.


In this article I described the characteristics of a microservice architecture from a testing perspective and discussed its typical problem hotspots. Finally, I described a general approach for testing such an architecture. In upcoming articles I will take a deeper look at the actual things that can go wrong and how to test for them.