
3 Challenges of Network Deployment in Hyperconverged Infrastructure


In this article, we’ll explore three challenges that are associated with network deployment in a hyperconverged private cloud environment, and then we’ll consider several methods to overcome those challenges.

The Main Challenge: Bring Your Own (Physical) Network

One of the main challenges of deploying a hyperconverged infrastructure software solution in a data center is the diversity of physical configurations it must accommodate. The network layer is typically the component tasked with automatically learning the physical network's topology and capabilities. Modern data center operations are expected to be automated and fast; there is no place for traditional, customized and cumbersome installation and integration processes. When deploying hyperconverged software on top of a data center infrastructure, a fast and automated deployment is essential.


In every organization, IT operations leaders have their own philosophy about how to deploy, integrate and manage network traffic. In discussions with enterprise network experts, I’ve found that every leader has a specific “network philosophy” that is generally expressed in statements like the following:

“We believe in running internal and guest networks over the same physical network.”

“We believe in running the external communications over the 1G on-board configuration interface, while the rest of the traffic runs on 10G.”

“We like to keep things super simple and run everything on a single interface.”

  1. Deploying Logical Over Physical

Physical networks consist of groups of appliances (servers, switches and routers) connected by cabling and protocols. Logical networks are constructed out of different types of traffic; they are completely agnostic to the physical network, yet they still need to run on top of it.

For example, let’s assume that data center traffic can be segmented into three types: red, green and blue. Let’s also assume that, according to the network admin’s philosophy, red runs at 1G and is routed externally, while green and blue both run at 10G, isolated and non-routable. It is important to ensure that each node attaches each of the three logical networks to the correct physical interfaces; the logical layer can only be connected once the physical one is in place. This is done by separating the traffic types at their source (the node) and allocating each logical type of traffic to a physical network, so that in the end each network (red, green and blue) is connected to its related physical interface, as in the sketch below.
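To make this concrete, here is a minimal sketch of how a deployment tool might resolve such a philosophy against a node's physical interfaces. The PHILOSOPHY table, the pick_interface helper and the NIC names are all hypothetical, for illustration only:

```python
# Minimal sketch: resolve logical networks onto physical interfaces.
# PHILOSOPHY, pick_interface and the NIC names are illustrative only.

PHILOSOPHY = {
    # logical network: required link speed (Mb/s) and routability
    "red":   {"speed": 1_000,  "routable": True},
    "green": {"speed": 10_000, "routable": False},
    "blue":  {"speed": 10_000, "routable": False},
}

def pick_interface(logical_net, interfaces):
    """Return the first free physical interface that satisfies the
    logical network's speed requirement, or None if the node is mis-wired."""
    required = PHILOSOPHY[logical_net]["speed"]
    for name, props in interfaces.items():
        if props["speed"] == required and props["free"]:
            props["free"] = False        # reserve the interface
            return name
    return None

# Example node: one 1G on-board NIC and two 10G NICs.
node_interfaces = {
    "eno1":   {"speed": 1_000,  "free": True},
    "ens1f0": {"speed": 10_000, "free": True},
    "ens1f1": {"speed": 10_000, "free": True},
}

mapping = {net: pick_interface(net, node_interfaces) for net in PHILOSOPHY}
print(mapping)  # {'red': 'eno1', 'green': 'ens1f0', 'blue': 'ens1f1'}
```

A real implementation would also have to handle bonded interfaces and nodes whose wiring cannot satisfy the philosophy at all, which is exactly the resiliency problem discussed later.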

  2. Automatic and Scalable Deployment

In contrast to custom deployments, which tend to involve cumbersome processes carried out mainly by integrators, a smart hyperconverged solution needs to deploy an environment with hundreds of nodes in a matter of minutes. To achieve this, the deployment must be automatic, easy and bulletproof. Deployment techniques should not require per-node user intervention (users should not have to manually configure the network, or analyze how each server is physically connected to it). Instead, smart hyperconverged solutions need to automatically discover and analyze the underlying network infrastructure, as sketched below.
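On Linux, for instance, an agent can learn a node's physical NICs and their link speeds with no user intervention by reading the kernel's sysfs tree. This is a minimal sketch of that discovery step, not any product's actual mechanism:

```python
import os

SYS_NET = "/sys/class/net"  # standard Linux sysfs path for network devices

def discover_interfaces():
    """Return {interface: link speed in Mb/s} for every physical NIC.
    Virtual devices (lo, bridges, bonds) have no 'device' symlink in sysfs."""
    nics = {}
    for name in os.listdir(SYS_NET):
        if not os.path.exists(os.path.join(SYS_NET, name, "device")):
            continue  # skip virtual interfaces
        try:
            with open(os.path.join(SYS_NET, name, "speed")) as f:
                nics[name] = int(f.read().strip())
        except (OSError, ValueError):
            nics[name] = -1  # link down or speed not reported
    return nics

if __name__ == "__main__":
    for nic, speed in discover_interfaces().items():
        print(f"{nic}: {speed} Mb/s")
```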

Automatic network deployment also requires an ‘infection’ mode, in which a few highly available network seeders ‘infect’ every server that connects to them; those servers, in turn, immediately infect their own neighbors. Once all of the nodes are infected, the hyperconverged solution has access to them and can retrieve and analyze information accordingly. After the seeder absorbs the network philosophy from the infected servers, the current state of the physical network is analyzed. Once the scale grows beyond the capacity of a normal broadcast domain, the cluster should cross broadcast domains and start deploying over L3 and IP networks. The sketch below illustrates the propagation logic.
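The propagation itself is essentially a breadth-first walk over the cluster: every newly installed node becomes a seeder for its own neighbors. Here is a toy sketch of that logic, using a static topology in place of real broadcast or L3 discovery (all names are illustrative):

```python
from collections import deque

# Toy topology for illustration: node -> servers reachable from it.
# A real seeder would discover neighbors via broadcast (or L3 probes
# once the cluster outgrows a single broadcast domain).
TOPOLOGY = {
    "seeder-1": ["node-a", "node-b"],
    "node-a":   ["node-c"],
    "node-b":   ["node-c", "node-d"],
    "node-c":   [],
    "node-d":   ["node-e"],
    "node-e":   [],
}

def install_agent(node):
    """Hypothetical provisioning step: push the hyperconverged agent."""
    print(f"installing agent on {node}")

def infect(seeders):
    """Breadth-first 'infection': every installed node seeds its neighbors."""
    installed = set(seeders)
    queue = deque(seeders)
    while queue:
        node = queue.popleft()
        for neighbor in TOPOLOGY.get(node, []):
            if neighbor not in installed:
                install_agent(neighbor)
                installed.add(neighbor)
                queue.append(neighbor)  # the new node seeds others in turn
    return installed

print(sorted(infect(["seeder-1"])))
```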

  3. Resilient Deployment

When deploying hundreds of nodes in a short period of time, the deployment process needs to adjust to faults and changes. Automatic deployment must assume that nodes may fail during installation, yet cluster deployment should still continue. In addition to making the system resilient to errors, it is important to keep the relevant services highly available, and to auto-detect deployment issues and notify admins.

Returning to our example, let’s say that one of the servers is not connected to the red network, or that one of the servers has the red and green networks crossed. If not corrected during deployment, these errors must be passed to the admin for intervention without affecting the deployment of the rest of the cluster. It is important to note that this is an ongoing process: the system must be able to auto-tune itself in response to physical changes and faults to maintain its reliability. A sketch of such a validation pass follows.
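Continuing the red/green/blue example, a validation pass might compare each node's observed wiring against the expected philosophy, flag faults to the admin, and let the healthy nodes continue deploying. The data shapes below are assumptions for illustration:

```python
# Expected link speed (Mb/s) per logical network, per the admin's philosophy.
EXPECTED = {"red": 1_000, "green": 10_000, "blue": 10_000}

def validate_node(node, observed):
    """Return a list of wiring faults for one node. `observed` maps
    logical network -> measured speed (Mb/s); a missing key means the
    node never saw that network at all."""
    faults = []
    for net, speed in EXPECTED.items():
        seen = observed.get(net)
        if seen is None:
            faults.append(f"{node}: not connected to the {net} network")
        elif seen != speed:
            faults.append(f"{node}: {net} expected {speed} Mb/s, got {seen}")
    return faults

cluster = {
    "node-1": {"red": 1_000, "green": 10_000, "blue": 10_000},   # healthy
    "node-2": {"red": 10_000, "green": 1_000, "blue": 10_000},   # red/green crossed
    "node-3": {"green": 10_000, "blue": 10_000},                 # red missing
}

healthy = []
for node, observed in cluster.items():
    faults = validate_node(node, observed)
    if faults:
        print("notify admin:", "; ".join(faults))  # flag, do not halt
    else:
        healthy.append(node)  # deployment continues for healthy nodes

print("deploying:", healthy)
```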

Final Note

To align with data center leaders’ philosophies, a smart hyperconverged solution should allow specific configuration preferences to be supplied at the start of the process. Once the system goes into its “infection” mode, this philosophy can be embedded into the network.

By Ariel Maislos


Ariel brings over twenty years of technology innovation and entrepreneurship to Stratoscale. 

In 2006 Ariel founded Pudding Media, an early pioneer in speech recognition technology, and Anobit, the leading provider of SSD technology acquired by Apple (AAPL) in 2012. At Apple, he served as a Senior Director in charge of Flash Storage, until he left the company to found Stratoscale. Ariel is a graduate of the prestigious IDF training program Talpiot, and holds a BSc from the Hebrew University of Jerusalem in Physics, Mathematics and Computer Science (Cum Laude) and an MBA from Tel Aviv University. 
