Why Containers Can’t Solve All Your DevOps Problems

Containers and the Cloud

Docker and other container services are appealing for good reason: they are lightweight and flexible. For many organizations, they enable the next step of platform maturity by reducing a runtime’s needs to the bare essentials (at least, that’s the intent).

When you dig into the benefits afforded by containers, it’s easy to see why so many companies have started projects to:

  • Containerize their apps and supporting services
  • Achieve isolation
  • Reduce friction between environments
  • Potentially improve deployment cycle times

The software development pattern of small things, loosely coupled, can go even further with an architecture built around containerization.

However, I’ve discovered that there is no shortage of misunderstandings about Docker and other container services (no surprise, given their rapid growth and pace of change) in terms of:

  • How their benefits are realized
  • The impact on infrastructure/operations
  • The implications on overall SDLC and Ops processes

Containers certainly offer plenty of benefits, and it makes good sense to explore whether and how they could work for your organization. But it is also a good idea to take off the rose-colored glasses first and approach this technology realistically.

Why Docker? Why Now?

Many organizations today are spinning up tons of AWS instances to run new apps, services, databases, and more as they grow their businesses. While it’s super simple to scale this way, they’ve realized it comes with various types of overhead:

  • Replicated compute resources to run a host OS
  • Tons of processes that aren’t relevant to your app
  • More instances to manage

This can lead to sprawl, inconsistencies in core images, and process and budgetary challenges. Finance wants to know how Ops teams are modeling their growth and spend, security teams are trying to keep their arms around the growth to ensure that the organization is meeting the goals of its security strategy, and engineering wants to know they can get the flexibility they need to deploy new components quickly. At the same time, the business needs to grow.

So a key question becomes: How can we optimize our processes, optimize our AWS environment (to save money), and still do what we need to do?

Docker — at least on the surface — appears to offer an answer to this conundrum.

One common theme I’ve heard from organizations with a lot of AWS instances is the thinking that they can reduce the number of raw instances by increasing the size of individual instances and running containers.

For example, if you have 600 AWS instances that are 1 CPU and 4–5 GB of RAM each, maybe you’re thinking you could use Docker containers to consolidate that down to 100 instances with 32 CPUs and 64 GB of RAM each. Then you can significantly reduce your AWS costs, since you’ll have fewer instances. Great, right? Well… It’s not that simple.
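
It’s worth doing the arithmetic before banking the savings. Here is a quick back-of-the-envelope sketch in Python, using only the instance counts and sizes from the example above (no pricing is assumed), comparing the provisioned capacity of the two fleets:

```python
# Back-of-the-envelope check of the consolidation example above.
# The instance counts and sizes come from the example; no pricing is assumed.

def fleet_capacity(instances, vcpus_each, ram_gb_each):
    """Total vCPUs and RAM (GB) for a fleet of identical instances."""
    return instances * vcpus_each, instances * ram_gb_each

before_cpu, before_ram = fleet_capacity(600, 1, 4)    # 600 small instances
after_cpu, after_ram = fleet_capacity(100, 32, 64)    # 100 large instances

print(f"Before: 600 instances, {before_cpu} vCPUs, {before_ram} GB RAM")
print(f"After:  100 instances, {after_cpu} vCPUs, {after_ram} GB RAM")
# Before: 600 instances, 600 vCPUs, 2400 GB RAM
# After:  100 instances, 3200 vCPUs, 6400 GB RAM
```

Fewer instances does not automatically mean less provisioned capacity: in this example the consolidated fleet is actually larger, so whether the bill goes down depends on how densely you can pack containers onto it, not on the instance count alone.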

What to Consider Before Moving to Containers

In the short term, the shift described above may work for some use cases. But in the long term, as with many technology choices, you’re trading one set of complexities for another. Why?

A New Tech Stack

Well, as soon as you start to run containers at scale, you need to invest in a platform to manage and orchestrate your containers and their resources. This requires a whole tech stack of its own.

And because container usage patterns are still relatively new, there aren’t many established best practices to rely on. Figuring out the strategy will mean continuous iteration on the road to production, plus organizational buy-in that this may impact delivery schedules, depending on the implementation.

Management Obstacles

Obstacles with containers include the following:

  • How to manage them
  • How to maintain visibility into them
  • How to know when containers are an appropriate solution (and when they aren’t)

As for that third bullet: today, I am seeing a lot of “Docker rationalization.” By this I mean that a lot of organizations are moving to containers because they can, and figuring out the use cases as they go along. This isn’t an inherently bad thing, but when it comes to determining the impact on your platform’s availability, security, and cost-efficiency, it’s better to lay out a clear set of use cases with goals ahead of time.

Security Risks

While it may make sense on the surface to move your workloads to containers, the devil is always in the details. You need visibility into what exactly is going on in your container, when, and from where. Security can be really challenging when it comes to containers, because there just aren’t tried-and-true best practices that you can rely on yet.

As you scale, you’ll want to understand how to exert appropriate controls over the images you’re using, how they’re built, and the scope of access provided to processes. You should know the answers to questions like these in advance:

  • Should a developer ever be allowed to log into a running container in prod?
  • Are we going fully immutable across all containers?
  • How will we manage the size of our container footprint to avoid container sprawl?
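
To make “scope of access” concrete, here is a minimal sketch using the Docker SDK for Python. The image name, network, and specific restrictions are illustrative assumptions, one possible hardening baseline rather than a prescribed standard:

```python
# Minimal sketch of launching a container with a restricted scope of access,
# using the Docker SDK for Python (pip install docker). The image, network,
# and limits below are illustrative assumptions, not a standard.
import docker

client = docker.from_env()

container = client.containers.run(
    "registry.example.com/billing-api:1.4.2",  # hypothetical image
    detach=True,
    user="nobody",           # don't run the process as root
    read_only=True,          # immutable root filesystem
    cap_drop=["ALL"],        # drop all Linux capabilities by default
    mem_limit="512m",        # bound memory so one container can't starve the host
    nano_cpus=500_000_000,   # roughly half a CPU
    network="billing-net",   # hypothetical pre-created network, not the default bridge
)
print(container.short_id)
```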

Laying out clear answers to these questions at the beginning will help tremendously in keeping your implementation clean and clear.

Be Honest About the Challenges

Companies are opening up about the challenges they are facing with the complexity of containers, and we hear all the time from people who are running into difficult questions such as:

  • How many instances, and of which types, do I need to run these containers? And what are my true performance bottlenecks? (See the sizing sketch after this list.)
  • Since the characteristics of my workload are changing over time, how do I know when my container infrastructure needs to adapt and be re-modeled?
  • How do I deal with scaling? Keep scaling up? Scale out? How do I reduce my surface area without introducing single points of failure (SPoFs)?
  • How do I handle security for containers, my processes running in them, and what they can access outside of the container?
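
To make the first of those questions concrete, here is a rough first-fit sizing sketch in Python. The container resource profiles and host size are assumptions chosen for illustration; real capacity planning also has to leave headroom for spikes and host-level daemons, and spread load across enough failure domains that one big host doesn’t become a single point of failure:

```python
# Rough first-fit estimate of how many hosts a mix of container resource
# requests would occupy. The workload mix and host size are illustrative
# assumptions; real planning must also reserve headroom and spread load
# across failure domains.

HOST_VCPUS = 32   # assumed instance size
HOST_RAM_GB = 64

# (vCPUs, RAM in GB) requested per container -- hypothetical workload mix
containers = [(0.5, 1.0)] * 200 + [(2.0, 4.0)] * 40 + [(4.0, 16.0)] * 10

def first_fit(requests, host_cpu, host_ram):
    """Pack requests onto hosts first-fit; return spare capacity per host."""
    hosts = []  # each entry is [cpu_left, ram_left]
    for cpu, ram in sorted(requests, reverse=True):  # place largest requests first
        for host in hosts:
            if host[0] >= cpu and host[1] >= ram:
                host[0] -= cpu
                host[1] -= ram
                break
        else:
            hosts.append([host_cpu - cpu, host_ram - ram])
    return hosts

hosts = first_fit(containers, HOST_VCPUS, HOST_RAM_GB)
print(f"{len(containers)} containers fit on {len(hosts)} hosts "
      f"of {HOST_VCPUS} vCPU / {HOST_RAM_GB} GB each")
```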

It is important to keep in mind that while Docker can indeed help you run faster, having a powerful engine doesn’t get you far if you don’t have the rest of the car built to support it.

The Future Is Bright

It’s clear that containers are a huge part of the future of cloud infrastructure. However, like any other technology, they should be approached with a healthy mix of optimism and skepticism. If you properly optimize your processes, you’ll be able to take full advantage of everything Docker (and other container technologies) have to offer.

By Chris Gervais
