News

Goodbye Legacy Systems, Hello “Composable IT”

We all remember how much fun it was to build stuff with Lego blocks, right? Imagine how much more fun it would be if the blocks could duplicate themselves whenever you wanted, and if they were programmed so that you could quickly construct, say, an ocean liner, then deconstruct it at the touch of a button and turn it into a skyscraper.

Now imagine if you could do something similar with your IT infrastructure. Think disaggregated, fluid pools of compute, storage and network fabric that you could quickly assemble and re-assemble to meet the exact needs of whatever application you want to deploy. You could spin up resources for a mission-critical business transaction application or a new cloud-native app with equal ease, all from the same pool of fluid resources.

In most cases, today’s traditional infrastructure isn’t up to the task of providing speedy, flexible service. It lacks the capacity to ramp up or down as business units start and stop projects, sometimes at a moment’s notice. Traditionally, it has taken IT departments weeks, or even months, to plan and install new server hardware to accommodate the needs of internal business units.

If data is the new currency, businesses will need to undergo a digital transformation to monetize the data they collect. They will need to make their IT systems more flexible so they can adapt automatically to changing market demands without disrupting or delaying the organization’s workflow. This is where Composable Infrastructure (CI) comes in.

Essentially, CI is a new category of infrastructure in which compute, storage, and networking become a shared resource that can be accessed anytime, anywhere. The IT department can quickly “compose” or recompose those resources based on the needs of each application. This gives the business the agility it needs to lower the overall cost of infrastructure and speed up bringing new services to market.

Here’s an example: traditional IT infrastructure built from compute, storage, and networking usually runs on separate platforms. This division creates islands of hard-to-manage, underutilized resources. In contrast, a CI pool of resources behaves much like a cloud computing model. When a business unit requires IT resources, a software developer simply requests the infrastructure capacity needed for a project by submitting a template request. That capacity becomes available in minutes. When the business unit no longer requires the infrastructure, the extra capacity is “recomposed” and returned to the pool. As a result, the IT department makes better use of its existing infrastructure and eliminates islands of underutilized equipment.
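
To make the compose/recompose cycle concrete, here is a minimal Python sketch of the idea. It is an illustration under stated assumptions only: the ResourcePool and Capacity names, fields, and methods are hypothetical and do not represent any vendor’s actual API.

    # Illustrative sketch only: a shared pool that capacity is "composed" from
    # and "recomposed" back into. Names and fields are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Capacity:
        cpu_cores: int
        storage_tb: int
        network_gbps: int

    class ResourcePool:
        """A single fluid pool of compute, storage, and network fabric."""

        def __init__(self, capacity: Capacity):
            self.free = capacity

        def compose(self, request: Capacity) -> Capacity:
            """Carve out capacity for an application from the shared pool."""
            if (request.cpu_cores > self.free.cpu_cores
                    or request.storage_tb > self.free.storage_tb
                    or request.network_gbps > self.free.network_gbps):
                raise RuntimeError("insufficient capacity in the pool")
            self.free = Capacity(
                self.free.cpu_cores - request.cpu_cores,
                self.free.storage_tb - request.storage_tb,
                self.free.network_gbps - request.network_gbps,
            )
            return request

        def recompose(self, released: Capacity) -> None:
            """Return capacity to the pool when the project winds down."""
            self.free = Capacity(
                self.free.cpu_cores + released.cpu_cores,
                self.free.storage_tb + released.storage_tb,
                self.free.network_gbps + released.network_gbps,
            )

    # A developer "submits a template" describing what the project needs...
    pool = ResourcePool(Capacity(cpu_cores=512, storage_tb=200, network_gbps=400))
    project = pool.compose(Capacity(cpu_cores=64, storage_tb=20, network_gbps=40))
    # ...and the capacity is handed back once the business unit is done.
    pool.recompose(project)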

How composability boosts your ROI

The beauty of CI is that it reduces the operational complexity for the IT department, which in turn lowers the total cost of ownership by reducing capital expenditure and operating expenses. Here’s how:

  • It covers multiple priorities simultaneously – With CI, management no longer has to choose between funding legacy applications that are business-critical and investing in new apps that can lead to innovation and growth. The CI environment is robust enough to support both at a lower cost than legacy systems.
  • It moves IT from being a cost center to becoming a strategic business partner – With CI, IT has the tools to work with the business units to find creative ways to lower costs while improving service.
  • It allows more efficient management of resources – Apps have different requirements to run optimally. For example, some apps require high-performance storage, while others have low-performance requirements. CI’s fluid pool provides the right resources for an app at any given time, eliminating the need to overprovision as you would in a traditional model (a rough sketch of this matching idea follows the list).
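
As an illustration of that last point, the short Python sketch below picks the cheapest pooled storage tier that still meets an app’s stated performance need, instead of giving every app the fastest (and most expensive) tier. The tier names, IOPS figures, and costs are invented for the example.

    # Hedged sketch: match an app's requirement to a storage tier in the pool.
    # All tier names and numbers below are made up for illustration.
    STORAGE_TIERS = {
        "nvme-flash":   {"iops": 500_000, "cost_per_tb": 90},
        "sas-ssd":      {"iops": 100_000, "cost_per_tb": 40},
        "capacity-hdd": {"iops": 2_000,   "cost_per_tb": 10},
    }

    def pick_tier(required_iops: int) -> str:
        """Choose the cheapest tier that still meets the performance need."""
        candidates = [
            (profile["cost_per_tb"], name)
            for name, profile in STORAGE_TIERS.items()
            if profile["iops"] >= required_iops
        ]
        if not candidates:
            raise ValueError("no tier in the pool meets the requested performance")
        return min(candidates)[1]

    print(pick_tier(250_000))  # a demanding transaction app gets "nvme-flash"
    print(pick_tier(1_000))    # an archival workload gets "capacity-hdd"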

Case study: CI’s real-world benefits

HudsonAlpha Institute for Biotechnology created one of the world’s first genomic medicine programs designed to diagnose rare diseases. It wants to help eradicate childhood genetic disorders, cancer, and a host of other maladies, but it was constrained by its IT. The organization needed a more robust and flexible infrastructure, one that could handle the massive amounts of data that genomics research produces. HudsonAlpha generates more than one petabyte of data per month, roughly four times the size of the Library of Congress’ database. Furthermore, as a nonprofit, it had to be able to crunch all this data while watching costs.

HudsonAlpha’s CIO Peyton McNully and his team found it increasingly difficult to provide researchers with the data they needed, when they needed it. Part of the problem was that genomics algorithms and apps require extremely powerful computers. With roughly 800 researchers and scientists using the IT system to generate ever-increasing amounts of genomics data, McNully knew the organization’s traditional infrastructure no longer had the firepower to meet its needs, so he turned to a CI solution offered by Hewlett Packard Enterprise to address those challenges.

Since deploying CI, the organization can quickly recompose its infrastructure to meet the needs of the business. In addition, HudsonAlpha’s storage capacity has increased and its cost per terabyte has been reduced. HudsonAlpha is now positioned for a strong IT future, with an infrastructure that can grow and flex as the organization does. More importantly, the organization remains at the cutting edge of finding cures for rare genetic diseases.

By: Vikram K, Senior Director, Datacenter and Hybrid Cloud, Hewlett Packard Enterprise India
