Microservices Sprawl at Sea
Why would anyone want to talk about microservices when they can talk about cruise ships? Taking a cruise is a milestone vacation for hundreds of thousands of travelers every year. The image of a huge ship, filled with activities, endless buffets, and an ever-changing view is enticing and far removed, it seems, from the world of IT.
But the trip cannot happen if the data isn’t there.
Cruise ship guests expect to be well looked after. They pay to get away from it all, and they rightfully expect every aspect of the trip, from check-in to check-out, including luggage, restaurant reservations, excursions, and activities, to be seamless and memorable. A typical large cruise ship today carries between 2,000 and 4,000 guests, with the newest models offering capacity for over 5,000. The logistics of getting everything right for each guest, cross-matched against crew, supplies, itineraries, and weather, make the hidden IT effort behind each voyage enormous, growing with every variable added.
In addition to the IT horsepower required, cruise ships face unique challenges of space and distance. There is very little room available on board for an IT department, and communicating with a home base must rely on remote technologies like satellite, radio, and phone, rather than cable.
As the ships have become larger and more luxurious, their IT systems have had to evolve as well, and inevitably they have started to move towards microservices to help manage the myriad small actions required to keep customers satisfied and safe.
In an article titled “Royal Caribbean Delivers Real-time Microservices to the Open Seas with DC/OS,” writer Laura Kelso delivers a fascinating summary of the processes Royal Caribbean International Cruise Lines has put in place to break apart a monolithic system and replace it with resilient, microservices-based middleware, intended to ensure a better, more efficient, and more cost-effective approach to delivering a successful cruise.
The article is a promotional piece from Mesosphere, the vendor chosen to provide the new system, but it describes a real-world situation that is easy to visualize anywhere.
“At the core of Royal Caribbean’s technology stack is its legacy reservation system. Any future solution needs to be able to extract data from the legacy system to enable modern, mobile experiences for passengers….by creating a reliable, mobile experience — whether at the port or on the ship — Royal Caribbean stands to unlock new revenue streams by delivering timely, in-context offers to a new generation of passengers who expect to be able to check on-board activities, make restaurant and event reservations, and complete purchases from their mobile device.”
But What About the Sprawl?
The Royal Caribbean post is a great case study, and its lessons translate to any company or organization investigating how to break down a legacy system into something more modular and containerized. The microservices proposition of being able to fix, upgrade, or replace any component without bringing the whole show to a stop is highly attractive.
But the question remains, how does a company manage the inevitable sprawl that comes from decomposition on this level?
Experts warn that if the transition is not done in totality, meaning end-to-end management integrated alongside the deployment of code, then much of the subsequent activity will revolve around troubleshooting issues rather than speeding up processes.
This, then, could be considered one of the main pitfalls of microservices: inadequate oversight, or maybe inadequate oversight planning.
As new and existing applications are containerized, most companies’ IT monitoring tools do not yet provide sufficient visibility into each piece. Anything less than complete monitoring of every microservice is a recipe for tremendous backlog and delay, as teams spend their time isolating errors rather than pushing the enterprise forward.
End-to-end means sharing metrics as they relate to a specific container as well as to the whole environment. As Saba Anees writes on DZone, “While performance attributes of a specific container might be interesting, that information only becomes truly useful when it can be compared to everything else that is happening across the IT environment.”
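The point about context can be made concrete with a small sketch. This is not Royal Caribbean’s actual stack or any particular vendor’s API; the function and field names below are illustrative. The idea is simply that a per-container number only becomes comparable when it carries environment-wide tags alongside the container-specific ones:

```python
# Illustrative sketch: tagging per-container metrics with environment-wide
# context so a dashboard can correlate one container's numbers against
# everything else running in the environment. All names are hypothetical.
import json
import time

def emit_metric(name, value, container_id, service, environment):
    """Build a metric record carrying both container-level and
    environment-level context."""
    record = {
        "metric": name,
        "value": value,
        "timestamp": time.time(),
        # Container-specific context
        "container_id": container_id,
        # Environment-wide context that makes the number comparable
        "service": service,
        "environment": environment,
    }
    return json.dumps(record)

# The same metric emitted from two containers of one service can now be
# compared side by side, or against other services entirely.
print(emit_metric("request_latency_ms", 42, "c-01", "reservations", "prod"))
print(emit_metric("request_latency_ms", 187, "c-02", "reservations", "prod"))
```

In a real deployment these records would go to a centralized monitoring backend rather than stdout; the shape of the record, not the transport, is what carries the end-to-end context.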
It comes down to reinventing a DevOps culture in which developers and management stay the course across the entire lifecycle, rather than focusing on their own individual pieces out of context. This in turn requires a revised approach to process management, so that software development allows for reviews of failures, and provides alternate, parallel development paths. There should also be sufficient metrics to provide a clear and detailed picture of every component, incident, and outcome.
Can You Herd 3,000 Cats?
The inevitable expansion or sprawl that will occur as each stand-alone, self-contained microservice enters the spaces once held by a legacy SOA platform demands a new understanding and practical management approach. As JP Morgenthal writes in Microservices Journal, “A microservice follows specific tenets of design. One of these tenets is smart endpoints and dumb pipes… for me, it’s clearly a rebellion against SOA strategies.” He continues, “Microservices architectures by nature focus less on tooling (they’re polyglots by design) and more on the contextual bindings to business domains… [they] deliver smaller, more well-focused entities representing subsets of the business domain.”
Contextual binding. As the collection of dedicated microservices expands into the thousands, IT management must remain aware that, as independent and expendable as each microservice may be to the overall operations of a system, it is their collective presence and ability to communicate that keeps the entire ship afloat and on course. And that requires a new level of ongoing vigilance.
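The “smart endpoints and dumb pipes” tenet Morgenthal cites can be sketched in a few lines. This is a toy illustration, not production code: the pipe is modeled as a plain in-memory queue with no routing or transformation logic of its own, and the service names are hypothetical. All business rules live in the endpoints:

```python
# Toy sketch of "smart endpoints and dumb pipes": the pipe only carries
# messages; each endpoint owns its own validation and business logic.
# Service names (reservations, dining) are illustrative assumptions.
from collections import deque

pipe = deque()  # the "dumb pipe": a plain FIFO, no logic inside

def reservations_service(request):
    """Smart endpoint: validation and business rules live here."""
    if request.get("party_size", 0) < 1:
        return {"status": "rejected", "reason": "empty party"}
    pipe.append({"event": "reservation_made",
                 "party_size": request["party_size"]})
    return {"status": "confirmed"}

def dining_service():
    """Smart endpoint on the consuming side: it interprets events itself
    rather than relying on the pipe to transform them."""
    handled = []
    while pipe:
        event = pipe.popleft()
        if event["event"] == "reservation_made":
            handled.append(f"table for {event['party_size']}")
    return handled

print(reservations_service({"party_size": 4}))  # → {'status': 'confirmed'}
print(dining_service())                         # → ['table for 4']
```

The contrast with a heavyweight SOA message bus is the point: because the pipe is dumb, either endpoint can be fixed, upgraded, or replaced without renegotiating shared middleware logic.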
By Steve Prentice
Series sponsored by Sumo Logic.