Why Containers Can’t Solve All Your DevOps Problems

Containers and the Cloud

Docker and other container services are appealing for a good reason – they are lightweight and flexible. For many organizations, they enable the next step of platform maturity by reducing what a runtime needs down to the bare essentials (at least, that’s the intent).

When you dig into the benefits afforded by containers, it’s easy to see why so many companies have started projects to:

  • Containerize their apps and supporting services
  • Achieve isolation
  • Reduce friction between environments
  • Potentially improve deployment cycle times

The software development pattern of small things, loosely coupled, can go even further with an architecture built around containerization.

However, I’ve discovered that there is no shortage of misunderstandings about Docker and other container services (no surprise, given their rapid growth and pace of change) in terms of:

  • How their benefits are realized
  • The impact on infrastructure/operations
  • The implications for overall SDLC and Ops processes

Containers certainly offer plenty of benefits, and it makes good sense to explore whether and how they could work for your organization. But it is also a good idea to take off the rose-colored glasses first and approach this technology realistically.

Why Docker? Why Now?

Many organizations today are running tons of AWS instances to spin up new apps, services, databases, etc., and otherwise grow their businesses. While scaling this way is super simple, they’ve realized it comes with various kinds of overhead:

  • Replicated compute resources to run a host OS
  • Tons of processes that aren’t relevant to your app
  • More instances to manage

This can lead to sprawl, inconsistencies in core images, and process and budgetary challenges. Finance wants to know how Ops teams are modeling their growth and spend, security teams are trying to keep their arms around that growth to ensure the organization is meeting the goals of its security strategy, and engineering wants the flexibility to deploy new components quickly. All the while, the business needs to grow.

So a key question becomes: How can we optimize our processes, optimize our AWS environment (to save money), and still do what we need to do?

Docker — at least on the surface — appears to offer an answer to this conundrum.

One common theme I’ve heard from organizations with a lot of AWS instances is the thinking that they can reduce the number of raw instances by increasing the size of individual instances and running containers.

For example, if you have 600 AWS instances with 1 CPU and 4–5 GB of RAM each, maybe you’re thinking you could use Docker containers to consolidate down to 100 instances with 32 CPUs and 64 GB of RAM each. Then you can significantly reduce your AWS costs, since you’ll have fewer instances. Great, right? Well… it’s not that simple.
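
To make that math concrete, here’s a back-of-envelope sketch in Python. The instance counts mirror the example above, but the hourly prices are placeholder assumptions, not real AWS rates, so substitute your own:

```python
# Back-of-envelope consolidation math for the example above.
# The hourly prices are placeholders -- plug in your actual AWS rates.
HOURS_PER_MONTH = 730

small = {"count": 600, "vcpus": 1,  "ram_gb": 4,  "price_hr": 0.05}  # assumed rate
large = {"count": 100, "vcpus": 32, "ram_gb": 64, "price_hr": 1.20}  # assumed rate

def monthly_cost(fleet):
    return fleet["count"] * fleet["price_hr"] * HOURS_PER_MONTH

before, after = monthly_cost(small), monthly_cost(large)
print(f"before: ${before:,.0f}/mo, after: ${after:,.0f}/mo, delta: ${before - after:,.0f}/mo")

# Aggregate capacity changes too -- fewer instances is not the same as less compute:
print("vCPUs: ", small["count"] * small["vcpus"], "->", large["count"] * large["vcpus"])
print("RAM GB:", small["count"] * small["ram_gb"], "->", large["count"] * large["ram_gb"])
```

Note what the capacity lines show: in this example the consolidated fleet carries more than five times the vCPUs of the original one. Whether the bill actually goes down depends on how densely you can pack real workloads onto those larger hosts, which is exactly where things get complicated.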

What to Consider Before Moving to Containers

In the short term, the shift described above may work for some use cases. But in the long term, as with many technology choices, you’re trading one set of complexities for another. Why?

A New Tech Stack

Well, as soon as you start to run containers at scale, you need to invest in a management and orchestration platform for your containers and their resources. That platform requires a whole tech stack of its own.

And because container usage patterns are still relatively new, there aren’t many established best practices to rely on. Figuring out your strategy will mean continuous iteration on the road to production, plus organizational buy-in that this may impact delivery schedules, depending on the implementation.

Management Obstacles

Obstacles with containers include the following:

  • How to manage them
  • How to maintain visibility into them
  • How to know when containers are an appropriate solution (and when they aren’t)

As for that third bullet: today, I am seeing a lot of “Docker rationalization.” By this I mean that a lot of organizations are moving to containers because they can, and figuring out the use cases as they go along. This isn’t an inherently bad thing, but when it comes to determining the impact on your platform’s availability, security, and cost-efficiency, it’s better to lay out a clear set of use cases, with goals, ahead of time.

Security Risks

While it may make sense on the surface to move your workloads to containers, the devil is always in the details. You need visibility into what exactly is going on in your container, when, and from where. Security can be really challenging when it comes to containers, because there just aren’t tried-and-true best practices that you can rely on yet.

As you scale, you’ll want to understand how to exert appropriate controls over the images you’re using, how they’re built, and the scope of access provided to processes. You should know the answers to questions like these in advance:

  • Should a developer ever be allowed to log into a running container in prod?
  • Are we going fully immutable across all containers?
  • How will we manage image sizes and container counts to avoid container sprawl?

Laying out clear answers to these questions at the beginning will help tremendously in keeping your implementation clean and clear.
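
As a concrete starting point for that kind of visibility, here is a minimal audit sketch using the docker-py SDK, assuming a reachable local Docker daemon. The specific checks (privileged mode, root user, untagged images) are illustrative assumptions, not a complete security policy:

```python
# Minimal container audit sketch using the docker-py SDK (pip install docker).
# Assumes a reachable local Docker daemon; the checks below are illustrative,
# not a complete security policy.
import docker

client = docker.from_env()

for container in client.containers.list():
    config = container.attrs.get("Config", {})
    host_config = container.attrs.get("HostConfig", {})

    findings = []
    if host_config.get("Privileged"):
        findings.append("runs privileged")
    if not config.get("User"):  # an empty User means the container runs as root
        findings.append("runs as root")
    if not container.image.tags:
        findings.append("untagged image (provenance is hard to trace)")

    print(f"{container.name}: {'; '.join(findings) or 'ok'}")
```

Even a small script like this forces the prod-access and immutability questions above into the open, because it makes the current state of your fleet visible instead of assumed.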

Be Honest About the Challenges

Companies are opening up about the challenges they are facing with the complexity of containers, and we hear all the time from people who are running into difficult questions such as:

  • How many instances, and which instance types, do I need to run these containers? And what are my true performance bottlenecks? (See the rough sizing sketch after this list.)
  • Since the characteristics of my workload are changing over time, how do I know when my container infrastructure needs to adapt and be re-modeled?
  • How do I deal with scaling? Keep scaling up? Scale out? How do I reduce my surface area without introducing single points of failure (SPOFs)?
  • How do I handle security for containers, my processes running in them, and what they can access outside of the container?
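
On that first question, instance counts ultimately fall out of a bin-packing problem over your containers’ resource requests. The sketch below is a deliberately naive first-fit-decreasing pass; the container sizes and instance shape are made-up assumptions, and real schedulers also weigh network, disk I/O, and failure domains:

```python
# Naive first-fit-decreasing estimate of how many instances a container
# fleet needs. The container sizes and instance shape are made up;
# real schedulers also account for network, disk I/O, and failure domains.
INSTANCE = {"vcpus": 32.0, "ram_gb": 64.0}  # hypothetical instance shape

# (vcpus, ram_gb) requests per container -- placeholder workload mix
containers = [(0.5, 1.0)] * 200 + [(2.0, 4.0)] * 50 + [(4.0, 16.0)] * 10

def estimate_instances(requests, shape):
    bins = []  # remaining [vcpus, ram_gb] on each instance already opened
    for cpu, ram in sorted(requests, reverse=True):  # pack biggest first
        for b in bins:
            if b[0] >= cpu and b[1] >= ram:
                b[0] -= cpu
                b[1] -= ram
                break
        else:  # no existing instance fits; open a new one
            bins.append([shape["vcpus"] - cpu, shape["ram_gb"] - ram])
    return len(bins)

print("instances needed:", estimate_instances(containers, INSTANCE))
```

Notice the tension even this toy version exposes: packing tighter drives the instance count down, but it also concentrates more workloads onto each host, which is precisely how single points of failure creep in.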

It is important to keep in mind that while Docker can indeed help you run faster, having a powerful engine doesn’t get you far if you don’t have the rest of the car built to support it.

The Future Is Bright

It’s clear that containers are a huge part of the future of cloud infrastructure. However, like any other technology, they should be approached with a healthy mix of optimism and skepticism. If you properly optimize your processes, you’ll be able to take full advantage of everything Docker (and other container technologies) have to offer.

By Chris Gervais
