In this series of blog posts, we've talked about various misconceptions around microservice architectures and the organizational structures that surround them. We touched on testing-in-production as a strategy that lets teams keep working autonomously throughout the development process, and design their systems for resiliency and quality.
In this post, we’ll dive into how sharing data about their services allows teams to build better software. In a microservices environment, teams generally build software by working independently of one another. While software is tested in pre-production, the true test is always actual production. Top-notch CI/CD and observability systems fight that fear of releasing and make teams more comfortable releasing something on that figurative 'Friday afternoon'.
Misconception #3: teams don't need to share operational telemetry
The core misconception is that because teams work largely in isolation up to production, they don't depend on each other for a healthy production environment. They do. They need to share performance and health data, telemetry and other metrics.
Having a shared, common understanding of your own and adjacent services, and the ability to act on that telemetry, is vital to releasing more often, more quickly and with fewer defects. It allows you to make data-driven release decisions.
Prerequisites for sharing operational telemetry: a common, shared foundation
Now, there are a couple of technical practices that make sharing operational telemetry easier. First, it helps to have a single observability system (monitoring/metrics, tracing, logging) across all teams. This makes it easy to see and share operational telemetry for adjacent services. Just use a widely supported commercial SaaS service for observability; there are many good ones out there.
In addition, it's easier to read the telemetry if you re-use and share (standardize) patterns across teams: cloud infrastructure, provisioning and configuration tooling, load balancing, code and artifact repositories, and unit, integration, security and performance testing, among many other commodity and/or infrastructure layers. There's no value in re-inventing the wheel there. Use commodity, off-the-shelf SaaS services for as many moving parts of the pipeline as possible, including infrastructure and anything that isn't uniquely creating value for your customers.
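One concrete way to standardize telemetry across teams is to agree on a shared event schema, so that logs and metrics from adjacent services can be read and compared directly. As a minimal sketch (the field names and the `checkout` service here are illustrative assumptions, not a prescribed standard), each team could emit structured JSON records like this:

```python
import json
import logging
import time

def telemetry_record(service, version, endpoint, status, latency_ms):
    """Build one telemetry event conforming to a hypothetical shared schema.

    Because every team emits the same fields, a central observability
    system can aggregate and compare services without per-team parsing.
    """
    record = {
        "ts": time.time(),          # event timestamp (epoch seconds)
        "service": service,         # emitting service's name
        "version": version,         # deployed version, useful for releases
        "endpoint": endpoint,       # request path or operation name
        "status": status,           # HTTP status or result code
        "latency_ms": latency_ms,   # request duration in milliseconds
    }
    return json.dumps(record)

# Emit one event as a structured log line.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logging.info(telemetry_record("checkout", "1.4.2", "/cart", 200, 38.5))
```

The point isn't this particular schema; it's that any fixed, shared set of fields lets one dashboard answer the same questions for every service.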
Third, it might even make sense to give responsibility for infrastructure and tooling to a dedicated team. This reduces complexity for the teams creating software and lets them spend more time on 'new' work: delivering new features, automation and tests, instead of being bogged down by the toil and complexity that come with infrastructure and tooling.
Making the common language actionable where it matters
So gathering useful operational telemetry across teams is easier when they share and re-use underlying infrastructure patterns, and sharing that telemetry is easier when teams use the same observability systems and patterns.
Creating a shared window into your production environment matters because many services only really interact once they're in production. It makes it much easier to pinpoint issues and attribute a root cause either to your own service or to a fault in an adjacent service that's affecting your release. This helps you fix issues more quickly and, with the right release strategies (like canary releases), prevent outages entirely.
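To make the canary idea concrete: a data-driven release decision can be as simple as comparing the canary's error rate against the stable baseline's and only promoting when it stays within a tolerance. This is a minimal sketch, assuming the error and request counts come from your shared observability system; the function names and the 1% tolerance are illustrative assumptions, not a real product API.

```python
def error_rate(errors, requests):
    """Fraction of failed requests; 0.0 when there is no traffic yet."""
    return errors / requests if requests else 0.0

def canary_decision(baseline_errors, baseline_requests,
                    canary_errors, canary_requests,
                    tolerance=0.01):
    """Promote the canary only if its error rate stays within
    `tolerance` of the baseline's; otherwise roll back."""
    baseline = error_rate(baseline_errors, baseline_requests)
    canary = error_rate(canary_errors, canary_requests)
    return "promote" if canary <= baseline + tolerance else "rollback"

# Baseline at 0.5% errors, canary at 2.0%: outside tolerance, roll back.
print(canary_decision(5, 1000, 20, 1000))
```

In practice you'd compare more signals than error rate (latency percentiles, saturation), but the shape is the same: both variants report into the same telemetry, and the release decision falls out of the comparison.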
With less time spent on fire-fighting, more time can be spent on creating and releasing new work, which is what DevOps is all about!
If you’re interested in standardizing the way teams observe how releases perform in a live production environment, to improve team workflows, resiliency and quality, start a Vamp trial and use the documentation here.
Or to learn more about how a data-driven release strategy can support your migration to microservices, download our free whitepaper.