February 7, 2014

An Antidote To Overprovisioning

By Steve Prentice

There are many situations, in both the virtual world of computing and the real world, where preparedness means having more than you need. Stocking up on food, medicines, or printer paper and toner makes it easy to get things done in a timely fashion rather than having to run back out to the store.

But this preparedness comes with a price. Materials that sit on shelves waiting to be used occupy space and devour available funds. They must have great and tangible value to justify their existence, which is why so many large manufacturers rely instead on just-in-time delivery and lean techniques to ensure a smooth flow of supplies without the costly overhead.

These same efficiencies are also essential in maintaining virtual systems, but lean thinking still feels uncomfortable to administrators charged with keeping systems both functional and up-to-date. In the days when much of a network depended on hard physical assets such as servers and memory, it was common for admins to over-purchase rather than face the cost in labor, time, and funds of buying and installing upgrades piecemeal, or the risk of keeping spare parts on the shelf. It was much easier to simply buy more than needed and install it all at once.

Well, five points for Gryffindor for proactivity, perhaps, but minus several million for devouring the IT budget in one pragmatic shopping spree.

Today, upgrading and maintaining efficient systems is much easier. Data is king, and useful information on the optimum use of servers and systems is now available by the second, allowing administrators to run a far more dynamic system in which resources are allocated in sync with actual requirements. That means time and money saved.

Mike Raab, VP of customer service at CopperEgg, puts some numbers to this: “Let’s look at the Amazon Web Services (AWS) pricing model, for example. If you choose the US East region, you will see a 25-cent difference in the per-hour charge for a small versus a large instance. That is over $183 a month if it were to run full time. For Extra Large versus Large it’s even worse: over a $300 per month difference. Even the difference between a micro and a small adds up to over $60 per month. No small amount any way you slice it for a single instance. Now multiply that by 5, 10 or 20 instances?”
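
To see how those differences compound, here is a minimal back-of-the-envelope sketch in Python. The hourly rates are illustrative placeholders chosen only so the size-to-size gaps line up with the differences Raab quotes; they are not actual AWS price-list figures, which vary by region, OS, and instance generation:

HOURS_PER_MONTH = 24 * 30.5  # roughly 732 hours in an average month

# Assumed hourly rates, picked so the gaps match the quoted differences.
HOURLY_RATES = {
    "micro": 0.02,
    "small": 0.105,
    "large": 0.355,
    "xlarge": 0.77,
}

def monthly_cost(size: str, count: int = 1) -> float:
    """Cost of running `count` instances of `size` full time for a month."""
    return HOURLY_RATES[size] * HOURS_PER_MONTH * count

def overprovision_penalty(needed: str, provisioned: str, count: int = 1) -> float:
    """Extra monthly spend from running instances sized bigger than needed."""
    return monthly_cost(provisioned, count) - monthly_cost(needed, count)

# Twenty instances provisioned 'large' when 'small' would have done:
print(f"${overprovision_penalty('small', 'large', count=20):,.2f} wasted per month")

At the quoted 25-cent hourly gap, a fleet of twenty oversized instances bleeds roughly $3,660 a month, which is the arithmetic behind Raab’s closing question.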

“For clients using the Amazon Elastic Compute Cloud (AWS EC2) platform, there is no longer any need to guess or to overprovision,” Raab says, “when the data is clearly available.” Server benchmarking, for example, lets an admin arrive at the best-fitting EC2 instance by running over a 24-hour period and measuring performance against the actual application and user load. CopperEgg’s Cloud Sizing tool is unique in the industry in allowing admins to benchmark their servers for an AWS EC2 recommendation, producing reports that recommend instances by name and custom-tailor EC2 instances prior to migration.
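
CopperEgg’s actual sizing algorithm is proprietary, but the underlying idea lends itself to a sketch. In this hypothetical Python version, the 95th-percentile rule, the headroom factor, and the instance shapes are all assumptions for illustration, not CopperEgg’s method or AWS guidance:

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile; adequate for sizing decisions."""
    ordered = sorted(samples)
    return ordered[min(len(ordered) - 1, int(p * len(ordered)))]

def recommend_instance(cpu_util: list[float], mem_mb: list[float],
                       current_cores: int) -> str:
    """Map 24 hours of measured load onto the smallest instance that fits.

    cpu_util: CPU utilization samples (0-100%) from the current server.
    mem_mb:   resident memory samples in MB over the same window.
    """
    # Size on a high percentile rather than the mean, so one-off spikes
    # don't force a bigger instance but sustained peaks still count.
    cores_needed = percentile(cpu_util, 0.95) / 100 * current_cores
    mem_needed = percentile(mem_mb, 0.95)

    # (name, vCPUs, memory in MB): rough 2014-era shapes, assumed here.
    tiers = [("micro", 0.5, 615), ("small", 1, 1700),
             ("large", 2, 7500), ("xlarge", 4, 15000)]
    for name, vcpus, mem_capacity in tiers:
        # Keep ~20% headroom above the observed peak demand.
        if cores_needed <= vcpus * 0.8 and mem_needed <= mem_capacity * 0.8:
            return name
    return tiers[-1][0]  # nothing smaller fits; largest tier in this sketch

Fed one-second samples over 24 hours (86,400 points per metric), the percentile stays robust against the handful of spikes that would otherwise push the recommendation up a size.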

Eric Anderson, CTO and co-founder of CopperEgg, summarizes this neatly: “The tools that many IT managers currently have are not built for the dynamic nature of cloud, or for the new way developers are building applications. Admins need to be ready for a new wave of application architectures that do not fit well with traditional monitoring and performance tools.” He adds, “Admins are under more pressure to get fine-grained visibility into the applications they are keeping alive, and their systems are becoming more complex, driving them to need higher-frequency, more detailed tools to bring clarity when there are application-level issues.”

Capitalizing on highly specific data lets an IT department run lean, saving money and optimizing performance at once, both internally and, of course, for the enhanced convenience of the end user. CopperEgg offers a free trial of their monitoring and optimization solutions.

Post Sponsored By CopperEgg

Steve Prentice

Steve Prentice is a project manager, writer, speaker and expert on productivity in the workplace, specifically the juncture where people and technology intersect. He is a senior writer for CloudTweaks.
