Building microservice architectures is a staple of cloud-native development. And with good reason. There are significant benefits to splitting up your software into smaller, more manageable parts, allowing each team to work on its services independently.

That all sounds great on paper. But as anyone moving to microservices knows, the journey comes with its own pitfalls across team workflows, service dependability and tooling: your people, your processes and your technology. In this blog series, we unpack some common misconceptions around these three key areas, starting with team workflows in this post.

We’ll look at how organizing the development process around independently functioning teams is great for development speed and quality, but also at how, without a data-driven release strategy, teams can run into significant issues once services enter production.

Misconception #1: A microservice team can work in full isolation

Isolating work in the development cycle is smart. It positively impacts development speed (deployment frequency, deployment lead time) and quality (MTTR and deployment failure rate). A team can work in isolation during ideation, backlog refinement, feature development and testing.

So far, so good. But when a release goes into production, it will inevitably, and by design, interact with other services in the landscape, which means it can fail for any number of reasons.

On the one hand, testing for all failure scenarios in a pre-production environment is nearly impossible if each team works in isolation. On the other hand, moving to a highly coordinated testing environment to remove variance and create production-like circumstances takes away the advantages of working in independently functioning teams.

If pre-production integration and regression testing is unviable in a microservices paradigm, how do we test safely? And where do we test, without taking unacceptable risks?


Testing in Production

You guessed it! Testing in production is the right answer. Testing in production increases confidence in a release’s quality without conceding each team’s autonomy in the development cycle. For each new release, a team can validate not only that the release works well, but also that it does not negatively impact other services in the landscape (by testing via a canary release, for instance).
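In code, a canary release boils down to routing a small, adjustable share of live traffic to the new version while the rest continues to hit the stable one. A minimal sketch in Python (the `route_request` helper and its handlers are illustrative assumptions, not any particular router's API):

```python
import random


def route_request(request, stable_handler, canary_handler, canary_weight=0.05):
    """Route a configurable fraction of traffic to the canary release.

    canary_weight is the probability (0.0-1.0) that a given request
    is served by the new version instead of the stable one.
    """
    if random.random() < canary_weight:
        return canary_handler(request)
    return stable_handler(request)
```

In practice this weighted split usually lives in a load balancer, service mesh or release-automation tool rather than in application code, but the principle is the same: start with a small weight, observe, then dial it up.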

But freely testing in production without the right mindset and tooling does increase risk for the end user. A good strategy for testing in production is adding automated, data-driven controls to each release. Teams can still put their new code into production independently, but with control over the roll-out.

Having data-driven control over the roll-out means that any team can gather feedback from the release immediately, without impacting the entire user base. By evaluating what impact the code change has on the performance and stability of their own service, as well as on other services, any team can independently decide to increase the roll-out, or to refine their code to address the issues it observed.
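Such a data-driven roll-out decision can be as simple as comparing a handful of canary metrics against the stable baseline. A hedged sketch, where the metric names and thresholds are illustrative assumptions rather than a prescribed policy:

```python
def rollout_decision(baseline, canary,
                     max_error_increase=0.01, max_latency_ratio=1.2):
    """Compare canary metrics against the stable baseline.

    Returns "rollback" if the canary degrades error rate or latency
    beyond the configured tolerances, otherwise "increase_rollout".
    """
    if canary["error_rate"] > baseline["error_rate"] + max_error_increase:
        return "rollback"
    if canary["p95_latency_ms"] > baseline["p95_latency_ms"] * max_latency_ratio:
        return "rollback"
    return "increase_rollout"
```

Running this check automatically at each roll-out step is what lets a team ship independently while still catching cross-service regressions before the whole user base is exposed.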

With your application environment designed to support multiple concurrent releases, multiple teams can go to production simultaneously, and the team autonomy that a microservices architecture requires is preserved at every stage of your development lifecycle.

If you’re interested in how your teams can add automated, data-driven controls to their releases, start a Vamp trial and use the documentation here to learn how you can safely test live in production.

Or to learn more about how a data-driven release strategy can support your migration to microservices, download our free whitepaper.