Mainframes in the Age of Cloud Computing

Mainframes may be 1960s technology, but they still manage a huge portion of the world’s data. This is mainly for two reasons: i) the continuous improvement of the technology, and ii) the huge cost of replacing them en masse. In today’s world of cloud computing, however, when companies are looking to shrink their IT footprints, bulky mainframes may seem archaic. But not if they are used in the proper manner.

Some time back I wrote about how cloud computing represented the logical flow of advancement from bulky mainframes to smaller PCs to a computing paradigm where no software needs to be present on a machine (Mainframes -> PCs -> Cloud Computing?). Today, however, in a step backward that may actually take us forward, strange as that may sound, I am going to write about how mainframes can be used for cloud computing.

Modern mainframes (no, that’s not a printing error) can potentially be used to build private clouds – those nebulous networks with closely delimited borders that nevertheless offer several of the advantages of cloud computing (Which is the Safer Cloud – Public or Private?).

Such usage of mainframes is possible due to the increasing use of Linux, the favored operating system of cloud computing. This allows virtual x86 servers to run on a mainframe, just as they do in traditional (again, not a printing error) cloud computing. In a CA Technologies-sponsored survey of 200 U.S. mainframe executives last year, 73% of the respondents said that their mainframes were part of their future cloud plans.

However, two problems remain. One is the requirement for explicit authorization when provisioning mainframe resources, something that undermines one of the very fundamentals of cloud computing – elasticity. According to Reed Mullen, IBM’s System z cloud computing expert, this lack of self-provisioning in mainframes is cultural, not technological. In his view, self-provisioning is easy to implement using IBM’s Tivoli Service Automation Manager or through custom development.

Mullen agrees that even then, self-provisioning would depend on the latitude given by a company’s IT department, which he describes as the “old habits” of the mainframe world. At the same time, he insists, even cloud computing in the usual sense of the term uses approval processes, however transparent they may be.
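The tension between elastic self-provisioning and explicit IT sign-off can be made concrete with a small sketch. Everything below is hypothetical – the names, the policy limits, and the request flow are illustrative inventions, not any IBM or Tivoli API: the only point is that an approval gate can coexist with an elastic fast path for small requests.

```python
# Hypothetical sketch of approval-gated provisioning. The policy limits,
# class names, and return values are all illustrative assumptions, not a
# real mainframe or Tivoli interface.
from dataclasses import dataclass


@dataclass
class ProvisionRequest:
    user: str
    cores: int
    memory_gb: int


# Illustrative policy: small requests are auto-approved (the elastic,
# cloud-style path); anything bigger waits for explicit sign-off (the
# "old habits" of the mainframe world).
AUTO_APPROVE_LIMITS = {"cores": 2, "memory_gb": 8}


def handle_request(req: ProvisionRequest) -> str:
    """Auto-approve small requests; queue large ones for human review."""
    if (req.cores <= AUTO_APPROVE_LIMITS["cores"]
            and req.memory_gb <= AUTO_APPROVE_LIMITS["memory_gb"]):
        return "provisioned"        # no human in the loop
    return "pending-approval"       # routed to the IT department


print(handle_request(ProvisionRequest("alice", cores=1, memory_gb=4)))
print(handle_request(ProvisionRequest("bob", cores=16, memory_gb=64)))
```

The design choice mirrors Mullen’s point: the approval process still exists, but for routine requests it can be made transparent enough that users experience it as elasticity.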

The second problem relates to the high cost of mainframe equipment and licensing. However, Mullen mentions that the latest generation of IBM mainframes has a relatively unknown “on-off” feature, which allows administrators to turn a processor core on for a limited time, paying short-term day rates for IBM software rather than buying an expensive annual license based on the number of processor cores. “We are looking at taking advantage of this infrastructure to make it even more suitable for a cloud environment where there is a lot of unpredictable usage,” he said.
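The economics of that “on-off” feature come down to simple break-even arithmetic: pay a day rate while a core is active, or pay a flat annual license regardless of usage. The sketch below uses made-up dollar figures purely for illustration – IBM’s actual rates are not public in the article – so only the break-even logic itself is the takeaway.

```python
# Hypothetical cost comparison: short-term "on-off" day rates vs. an
# annual per-core license. The dollar figures are placeholder
# assumptions; the break-even formula is the point.

def break_even_days(annual_license_per_core: float,
                    day_rate_per_core: float) -> float:
    """Days of active use per year above which the annual license wins."""
    return annual_license_per_core / day_rate_per_core


# With illustrative numbers – a $36,500/year license vs. a $500/day
# rate – paying per day is cheaper as long as the core is active
# fewer than 73 days a year.
days = break_even_days(36_500, 500)
print(days)  # 73.0
```

For the “lot of unpredictable usage” Mullen describes, workloads that spike only a few weeks a year sit well below such a break-even point, which is exactly where per-day pricing beats an annual license.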

For companies that want mainframes for their dependability but also want to be on the cloud, this is certainly an interesting development – as it is for companies that already have mainframes they are reluctant to mothball. Whether it makes sense to buy mainframes primarily for cloud computing is, however, harder to justify. Studies will have to be conducted – and not necessarily by IBM, which has a vested interest in pushing mainframes – before such questions can be answered definitively.

By Sourya Biswas
