
Consumer Driven Contracts – Test What You Can’t See

The last article introduced the general concept of microservices as just another type of modularization and discussed their benefits as well as their disadvantages.

One of these disadvantages is the higher testing effort. For a monolithic application, integration tests are much easier to implement than for a number of distributed microservices. Another issue is that the dependents of a service are not always known, or at least their specific requirements on the used interface are not. Just think of an environment with more than 100 services, each of them developed by a different team. In such an environment, a service cannot evolve freely because changes might break things on the other end.

Consumer-Driven Contracts help to mitigate this issue by letting the clients of your service formally define their usage, i.e. the contract of your API. This contract can then be verified during continuous integration, giving you the safety to stay compatible with all dependents while still being able to change things. There are basically two main approaches. In the first, a second artifact is created during the test phase of the consumer, containing explicit tests (e.g. JUnit tests) that verify the provider API; these tests are later executed on the provider side for verification. In the second, the contract is formally specified in a format that is later interpreted on the provider side for verification.

In this article, I want to show how to implement Consumer-Driven Contracts end-to-end, i.e. contract specification, contract verification and contract distribution, using the Pact framework. Pact implements consumer-driven contract specification and verification based on contracts defined in so-called pact-files, and there are versions for Ruby as well as for the JVM. It makes it easy to define contracts (so-called ‘Pacts’) during the testing phase of the consumer. While these tests are running, a separate mock server is started that serves the specified responses upon receiving matching requests and records everything into the pact-file. After the tests have run, the whole contract is contained in this file. The pact-file can then be used on the provider side to replay these requests and verify the responses, ensuring that the contract still holds. The following picture shows an overview of the process:

Pact Verification Process (source)

A good in-depth explanation of the Ruby-specific implementation can be viewed here.

As we at Affinitas mainly use Java, we are going to use the JVM-based implementation of Pact. An example of how to specify a contract in a JUnit test on the consumer side looks as follows.

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Rule;
import org.junit.Test;

// plus the pact-jvm consumer imports, e.g. au.com.dius.pact.consumer.*
// (the exact package layout depends on the pact-jvm version in use)

public class ExampleJavaConsumerPactTest {
    // Starts a mock provider on localhost:8080 for the duration of each test.
    @Rule
    public PactRule rule = new PactRule("localhost", 8080, this);

    // Defines the contract fragment: the expected request and the promised response.
    @Pact(state = "test state", provider = "test_provider", consumer = "test_consumer")
    public PactFragment createFragment(PactDslWithState builder) {
        return builder
            .uponReceiving("ExampleJavaConsumerPactRuleTest test interaction")
                .path("/")
                .method("GET")
            .willRespondWith()
                .status(200)
                .body("{\"responsetest\": true}")
            .toFragment();
    }

    // Runs against the mock provider; ConsumerClient is a simple HTTP test helper.
    @Test
    @PactVerification("test state")
    public void runTest() {
        Map<String, Object> expectedResponse = new HashMap<>();
        expectedResponse.put("responsetest", true);
        assertEquals(expectedResponse, new ConsumerClient("http://localhost:8080").get("/"));
    }
}

First, a PactFragment is created which defines the contract from the consumer side. Pact uses this fragment to configure a mock server that returns the specified responses during the execution of the `runTest`-method.
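For illustration, the pact-file recorded by the test above would look roughly like this (a simplified sketch; field names vary slightly between pact specification versions, e.g. providerState vs. provider_state):

{
    "provider": { "name": "test_provider" },
    "consumer": { "name": "test_consumer" },
    "interactions": [
        {
            "description": "ExampleJavaConsumerPactRuleTest test interaction",
            "providerState": "test state",
            "request": {
                "method": "GET",
                "path": "/"
            },
            "response": {
                "status": 200,
                "body": {
                    "responsetest": true
                }
            }
        }
    ]
}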

All that is left is a means to transfer these files from a consumer to a provider in an asynchronous fashion, so that the provider can then be tested against this contract. It would also be nice to distribute these contracts dynamically, so that the provider does not need to know beforehand which services are its consumers. This can be done using the pact-broker or using git repositories (the up- and download via git repositories can be integrated into the build process with the pactbroker-maven-plugin); a publishing sketch for the broker variant follows below.
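For the broker-based variant, publishing a pact is a plain HTTP call. A sketch, assuming a broker instance at pact-broker.example.com (a hypothetical host) and the pact-file written by the consumer build under target/pacts:

curl -X PUT \
     -H "Content-Type: application/json" \
     -d @target/pacts/test_consumer-test_provider.json \
     http://pact-broker.example.com/pacts/provider/test_provider/consumer/test_consumer/version/1.0.0

The version in the URL is the consumer application version, so the broker can track which consumer version produced which contract.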

The final pact-verification itself on the provider side can be done using the corresponding pact-jvm-provider-maven plugin.
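A minimal configuration of that plugin might look like the following sketch (the coordinates, the Scala-version suffix and the version number depend on your setup; pactFileDirectory points at the pact-files to replay):

<plugin>
    <groupId>au.com.dius</groupId>
    <artifactId>pact-jvm-provider-maven_2.11</artifactId>
    <version>3.2.2</version>
    <configuration>
        <serviceProviders>
            <serviceProvider>
                <!-- must match the provider name from the pact-file -->
                <name>test_provider</name>
                <protocol>http</protocol>
                <host>localhost</host>
                <port>8080</port>
                <pactFileDirectory>target/pacts</pactFileDirectory>
            </serviceProvider>
        </serviceProviders>
    </configuration>
</plugin>

Running mvn pact:verify against a running instance of the provider then replays all recorded requests and verifies the actual responses against the contract.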

Consumer-Driven Contracts allow you to decouple testing dependencies and still achieve deep integration testing. They let you develop and evolve microservices with a safety net and without the hassle of spinning up all services at once just for integration testing.

From Components to Microservices


Have you ever thought about constructing a car? Probably not. But how would you start anyway? You cannot design the whole thing in one complete step. But you can split the problem into minor, manageable ones. So you probably need an engine, some wheels, a chassis. These can then be produced by independent teams that can concentrate on designing a specific solution to a specific problem.

In software engineering, building big software artifacts without dividing the problem domain is nearly impossible. Following the principle of separation of concerns, smaller components that each address a specific problem (and address it well) are combined to form a solution to the overall problem. Developers of a single component can then concentrate on developing and testing that component without keeping the whole problem space in their heads.

Have you finished the engine by now? Does it fit into your chassis? If not, you probably forgot to define the interface between both components.

Component-based software engineering defines a component as a module with a set of related functions that communicates with other components via a well-defined interface. Among other things, this enables reusability and replaceability of the component and decouples dependents from the component’s internals. (Think of replacing the engine of your car with a different type.)
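To make this concrete, here is a toy sketch in Java (all names hypothetical): as long as the interface stays stable, one engine can be swapped for another without touching the car.

// The well-defined interface between the components.
public interface Engine {
    void start();
}

// Two interchangeable implementations, each hiding its internals.
public class CombustionEngine implements Engine {
    public void start() { /* ignite fuel */ }
}

public class ElectricEngine implements Engine {
    public void start() { /* power up the battery */ }
}

// The car depends only on the interface, not on a concrete engine,
// so the engine component is replaceable.
public class Car {
    private final Engine engine;

    public Car(Engine engine) {
        this.engine = engine;
    }

    public void drive() {
        engine.start();
    }
}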

I kept the terms deliberately very general and fuzzy because that is what they are. A component can be a function (in procedural programming), an object (in object-oriented programming), or even a whole web service. The communication can happen via a function call, a remote procedure call (RPC) or a plain HTTP message. So from the software-architectural point of view, it does not matter how components are composed or how they communicate. Microservices are no exception: they are just components that communicate in some way and are composed in some way.

So what’s so special about microservices? There are advantages and disadvantages, as with everything in life. They also depend on your environment, so here are our advantages:

  • They are dynamic. Because they are small and independent, they can be changed and redeployed very easily.
  • They are rigid. These services communicate only via their explicit interfaces. Callers are strongly decoupled and not affected by internal changes. The same can be achieved in normal “library”-based development but happens to be harder there.
  • They are testable. Their limited and well-defined functionality together with their defined interface allows for easy testing.

But microservices do have disadvantages and they are probably not that easy to spot at first.

Maybe the most obvious disadvantage (in our case) is that you need the right development environment to actually use microservices and their advantages effectively. You need to be able to (re-)deploy a service fully automatically (that is, you need a full continuous deployment pipeline); otherwise the overhead of managing lots of these services becomes too high. Your environment needs to allow you to set up a new service in no time, otherwise the threshold for creating these services is too high (e.g. if administrators have to set up servers manually). And you should be able to scale your service instances easily (e.g. to react to higher load caused by the reuse of microservices).

Another disadvantage is that you push a lot of problems “out of sight” because they lie in between your components. As long as your components are functions or objects, as stated above, intended behavior can be verified by integration tests which are, for example with Spring, easy to create. They are executed some time before deployment because afterwards the application does not change much. But how do you create integration tests for distributed microservices? Your system now consists of several services that all interact with each other (directly or indirectly). Running integration tests on such a system means creating a whole test environment with all services just for the purpose of integration testing. This means more effort to implement and maintain integration tests.

And finally, due to the underlying technology, a whole new category of problems arises (well, not new, but not that important to deal with in non-distributed systems). When did your last function call fail? (Except if you are programming in non-deterministic languages.) With a rising number of remote calls, you have to put more effort into the reliability and resilience of the overall system. And then again, you need to monitor the overall distributed system for slowdowns or outages.

People who move towards microservices are conscious of these problems, and there exist approaches to each of the mentioned disadvantages. Following blog posts will describe such approaches and best practices (as far as one can talk of best practices at this early stage of microservices).