Auto Scaling In Cloud Computing


Cloud computing, with its dynamic scaling feature that lets users increase or decrease resources on demand, has become a great boon for IT professionals everywhere. This is especially true for environments with unpredictable traffic, which describes much of the internet. With traditional servers preconfigured to handle a fixed amount of load, a website can go down when traffic suddenly surges beyond the server's capacity, as happens when a news story or event drives people to a specific web location. The solution in a cloud context is to allocate more resources, in this case more server instances. But two questions follow: do the costs of the additional resources justify the possible profits from the increased traffic? And will those resources actually be enough to accommodate all of the traffic, with a margin to spare?

Traffic to a website may be intentionally routed there, as in a marketing campaign, so a certain amount of traffic from that campaign is expected. But sometimes the campaign does better than planned, and traffic climbs well above projections, beyond even the capacity of the newly allocated servers; the website goes down, losing revenue and potential customers. With auto scaling, a web admin can instead set policies that constantly monitor traffic, look for patterns indicating that a surge is coming, and allocate the proper resources before it arrives.
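The policy described above can be sketched in a few lines. The function and parameter names below are hypothetical, but the logic mirrors the target-tracking style of policy that platforms such as AWS Auto Scaling expose: keep average load near a target by growing or shrinking the fleet proportionally.

```python
import math

def desired_instances(current_instances, avg_cpu_percent,
                      target_cpu=60.0, min_instances=1, max_instances=20):
    """Return the instance count needed to bring average CPU near the target.

    Hypothetical sketch: a real policy would also smooth the metric over a
    window and apply cooldown periods to avoid thrashing.
    """
    if avg_cpu_percent <= 0:
        return min_instances
    # Scale the fleet proportionally to observed load vs. the target.
    needed = math.ceil(current_instances * (avg_cpu_percent / target_cpu))
    # Clamp to the configured floor and ceiling.
    return max(min_instances, min(max_instances, needed))
```

For example, four instances averaging 90% CPU against a 60% target would be grown to six, while the same fleet idling at 30% would be shrunk to two.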

One of the biggest pitfalls of auto scaling, however, is recognition: can it distinguish legitimate traffic from false requests, such as those generated by a denial of service (DoS) attack? Auto scaling works by sensing traffic levels and automatically provisioning more instances before the existing ones are overwhelmed. During a DoS attack, the servers are bombarded with a massive volume of requests crafted to look legitimate. If the system cannot detect the attack, it will keep provisioning instances and other resources to keep up with the demand. Because cloud resources are nearly unlimited, the website will never go down, but the costs incurred under the pay-per-use model are sure to kill it. The denial of service attack has, in effect, become a "bankrupting" attack. It is at times like this that the pay-per-use model of cloud computing can be detrimental.
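One common mitigation for the bankrupting scenario is a hard budget cap on scale-out decisions. The sketch below is a hypothetical guard, not a feature of any specific platform; it works in integer cents to avoid floating-point drift, and a real deployment would pair the cap with alerting and upstream DoS filtering.

```python
def approve_scale_out(requested_instances,
                      cost_cents_per_instance_hour=10,  # assumed rate
                      budget_cents_per_hour=200):       # assumed ceiling
    """Cap a scale-out request so projected hourly cost stays within budget.

    Hypothetical guard: instead of provisioning every instance the scaler
    asks for, grant only as many as the budget can pay for in one hour.
    """
    affordable = budget_cents_per_hour // cost_cents_per_instance_hour
    return min(requested_instances, affordable)
```

With the assumed numbers, a legitimate request for 5 instances is granted in full, while an attack-driven request for 500 is capped at the 20 instances the budget covers, so the site may degrade under attack but the bill cannot run away.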

By Abdul Salam


Abdul is a senior consultant with Energy Services, an accomplished technical writer with CloudTweaks, and the author of numerous blogs, books, white papers, and tutorials on cloud computing. He earned his bachelor's degree in Information Technology, followed by an MBA-IT, and holds certifications from Cisco and Juniper Networks.

He has recently co-authored: Deploying and Managing a Cloud Infrastructure: Real-World Skills for the CompTIA Cloud+ Certification (Wiley).

