3 Challenges of Network Deployment in Hyperconverged Infrastructure


In this article, we’ll explore three challenges associated with network deployment in a hyperconverged private cloud environment, and then consider several methods to overcome them.

The Main Challenge: Bring Your Own (Physical) Network

One of the main challenges of deploying a hyperconverged infrastructure software solution in a data center is the diversity of physical configurations it must accommodate. The smart network layer is the component tasked with automatically learning the physical network’s topology and capabilities. Modern data center operations are expected to be automated and fast; there is no place for traditional, customized and cumbersome installation and integration processes. When deploying hyperconverged smart software on top of a data center infrastructure, a fast, automated deployment is a necessity.


In every organization, IT operations leaders have their own philosophy about how to deploy, integrate and manage network traffic. In our discussions with enterprise network experts, we’ve found that every leader has a specific “network philosophy” that is generally expressed in phrases like the following:

“We believe in running internal and guest networks over the same physical network.”

“We believe in running the external communications over the 1G on-board configuration interface, while the rest of the traffic runs on 10G.”

“We like to keep things super simple and run everything on a single interface.”

1. Deploying Logical Over Physical

Physical networks consist of groups of appliances (servers, switches and routers) connected by cabling and network protocols. Logical networks are constructed out of different types of traffic and are completely agnostic to the physical network, but they still need to run on top of it.

For example, let’s assume that data center traffic can be segmented into three types: red, green and blue. Let’s also assume that, according to the network admin’s philosophy, red is 1G and routed externally, while green and blue are both 10G, isolated and non-routable. It is important to ensure that each node is linked to each of the three logical networks over the appropriate physical interfaces, and the logical layer can only be connected once the physical one is. This is done by separating the types of traffic at the physical source (the node), then allocating each logical type of traffic to a physical network. In the end, each of the networks (red, green and blue) is connected to the related physical interface.
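
To make the allocation step concrete, here is a minimal Python sketch of mapping logical networks to physical interfaces by required link speed. The LogicalNetwork class, the PHILOSOPHY list and the interface names are illustrative assumptions rather than any product’s actual API, and the sketch assumes each logical network gets its own dedicated interface.

```python
# Minimal sketch: allocate logical networks to physical NICs by link speed.
# All names here (LogicalNetwork, PHILOSOPHY, eno1/ens1f0/ens1f1) are
# hypothetical and exist only for illustration.
from dataclasses import dataclass


@dataclass
class LogicalNetwork:
    name: str          # e.g. "red", "green", "blue"
    speed_gbps: int    # minimum required link speed
    routable: bool     # whether traffic may be routed externally


# The admin's "network philosophy", expressed as data.
PHILOSOPHY = [
    LogicalNetwork("red", 1, routable=True),      # external, 1G
    LogicalNetwork("green", 10, routable=False),  # isolated, 10G
    LogicalNetwork("blue", 10, routable=False),   # isolated, 10G
]


def map_logical_to_physical(nics: dict[str, int]) -> dict[str, str]:
    """Assign each logical network to an unused NIC that meets its speed.

    `nics` maps interface name to detected speed in Gbps,
    e.g. {"eno1": 1, "ens1f0": 10, "ens1f1": 10}.
    """
    mapping: dict[str, str] = {}
    free = dict(nics)
    for net in PHILOSOPHY:
        candidates = [name for name, speed in free.items() if speed >= net.speed_gbps]
        if not candidates:
            raise RuntimeError(f"no physical interface available for {net.name}")
        chosen = min(candidates, key=free.__getitem__)  # slowest adequate NIC
        mapping[net.name] = chosen
        del free[chosen]
    return mapping


if __name__ == "__main__":
    print(map_logical_to_physical({"eno1": 1, "ens1f0": 10, "ens1f1": 10}))
```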

2. Automatic and Scalable Deployment

In contrast to custom deployments, which tend to involve cumbersome processes mostly carried out by integrators, a smart hyperconverged solution needs to deploy an environment with hundreds of nodes in a matter of minutes. To achieve this, the deployment must be automatic, easy and bulletproof. Deployment techniques should not require user intervention per node: users should not have to manually configure the network or analyze how each server is physically connected to it. Smart hyperconverged solutions need to automatically discover and analyze the underlying network infrastructure.
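
As one illustration of what “automatically discover” can look like per node, the following sketch reads interface names and link speeds from Linux sysfs. It is a simplified assumption of how a node might report its NICs (its output matches the `nics` argument of the mapping sketch above); a real solution would also gather topology information, for example via LLDP.

```python
# Sketch: per-node NIC discovery on Linux via sysfs. Virtual interfaces and
# links without carrier are skipped; real topology discovery (LLDP, etc.)
# is out of scope here.
import os


def discover_nics() -> dict[str, int]:
    """Return {interface_name: link_speed_in_gbps} for interfaces reporting a speed."""
    nics: dict[str, int] = {}
    for iface in os.listdir("/sys/class/net"):
        try:
            with open(f"/sys/class/net/{iface}/speed") as f:
                mbps = int(f.read().strip())
        except (OSError, ValueError):
            continue  # virtual interface, link down, or unreadable speed
        if mbps > 0:  # drivers report -1 when the speed is unknown
            nics[iface] = mbps // 1000
    return nics


if __name__ == "__main__":
    print(discover_nics())  # e.g. {"eno1": 1, "ens1f0": 10, "ens1f1": 10}
```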

Automatic network deployment also requires an ‘infection’ mode, in which several highly available network seeders infect all of the servers that connect to them, and those servers, in turn, immediately infect their own networks. Once all of the nodes are infected, the hyperconverged solution has access to them and can retrieve and analyze information accordingly. After the seeders absorb the network philosophy from the infected servers, the current state of the physical network is analyzed. Once the scale grows beyond the capacity of normal broadcast domains, the cluster should cross broadcast-domain boundaries and start deploying over routed L3 IP networks.
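
Conceptually, the ‘infection’ flow is a breadth-first propagation outward from the seeders. The sketch below only captures that shape: `probe_neighbors` and `install_agent` are hypothetical placeholders for whatever discovery (broadcast within an L2 domain, or routed L3 probing at larger scale) and agent rollout a real solution would use.

```python
# Conceptual sketch of "infection"-style rollout: seeders contact reachable
# nodes, which are installed and then act as seeders themselves.
# probe_neighbors(node) and install_agent(node) are hypothetical callbacks.
from collections import deque
from typing import Callable, Iterable


def infect_cluster(initial_seeders: list[str],
                   probe_neighbors: Callable[[str], Iterable[str]],
                   install_agent: Callable[[str], None]) -> set[str]:
    """Breadth-first propagation from the initial seeders to every reachable node."""
    infected: set[str] = set(initial_seeders)
    frontier = deque(initial_seeders)
    while frontier:
        seeder = frontier.popleft()
        for node in probe_neighbors(seeder):   # nodes this seeder can reach
            if node not in infected:
                install_agent(node)            # node now reports its NICs and links
                infected.add(node)
                frontier.append(node)          # ...and becomes a seeder itself
    return infected
```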

3. Resilient Deployment

When deploying hundreds of nodes in a short period of time, the deployment process needs to adjust to faults and changes. Automatic deployment must assume that nodes may fail during installation, yet cluster deployment should still continue. In addition to making the system resilient to errors, it is important to keep the relevant services highly available, and to auto-detect deployment issues and notify admins about them.

Returning to our example, let’s say that one of the servers is not connected to the red network, or that one of the servers has the red and green networks crossed. If these errors cannot be corrected during deployment, they must be passed to the admin for intervention without affecting the deployment of the rest of the cluster. It is important to note that this is an ongoing process: the system must be able to auto-tune itself according to physical changes and faults to maintain its reliability.
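
A rough sketch of that fault-tolerant behavior, continuing the red/green/blue example: nodes whose wiring does not match the expected logical networks are reported for admin intervention, while the rest of the cluster keeps deploying. The data shapes here are assumptions for illustration only.

```python
# Sketch: validate each node's detected logical-to-physical mapping and keep
# deploying healthy nodes; problems are collected for the admin instead of
# aborting the whole rollout. Data shapes are illustrative assumptions.
EXPECTED_NETWORKS = {"red", "green", "blue"}


def validate_node(node: str, detected: dict[str, str]) -> list[str]:
    """Return human-readable problems for one node (empty list if healthy)."""
    missing = EXPECTED_NETWORKS - set(detected)
    if missing:
        return [f"{node}: not connected to {', '.join(sorted(missing))}"]
    return []


def deploy_cluster(nodes: dict[str, dict[str, str]]) -> tuple[list[str], list[str]]:
    """Deploy every healthy node; collect issues for admin intervention."""
    deployed, issues = [], []
    for node, detected in nodes.items():
        problems = validate_node(node, detected)
        if problems:
            issues.extend(problems)   # flag it, but keep deploying the rest
        else:
            deployed.append(node)     # in a real system: proceed with install
    return deployed, issues


if __name__ == "__main__":
    ok, bad = deploy_cluster({
        "node-1": {"red": "eno1", "green": "ens1f0", "blue": "ens1f1"},
        "node-2": {"green": "ens1f0", "blue": "ens1f1"},   # missing red
    })
    print(ok)    # ['node-1']
    print(bad)   # ['node-2: not connected to red']
```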

Final Note

To align with the data center leaders’ philosophy, a smart hyperconverged solution should enable the input of specific configuration preferences at the start of the process. Once the system goes into its “infection” mode, this specific philosophy can be embedded into the network.
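
One way to picture that input step is a small preferences file that the installer reads before the “infection” phase begins; the file name and schema below are purely illustrative.

```python
# Sketch: load the admin's "network philosophy" before deployment starts.
# The file name and schema are hypothetical.
import json


def load_philosophy(path: str = "network-philosophy.json") -> list[dict]:
    """Read preferences such as:
    [{"name": "red",   "speed_gbps": 1,  "routable": true},
     {"name": "green", "speed_gbps": 10, "routable": false},
     {"name": "blue",  "speed_gbps": 10, "routable": false}]
    """
    with open(path) as f:
        return json.load(f)
```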

By Ariel Maislos
