Most enterprises are well on their way toward building a hybrid cloud these days, which is naturally leading to a wide range of designs and operational parameters.
Some organizations, for instance, run a private cloud on premises and reserve the public cloud for temporary spikes in workload. Others build cloud-native applications that may or may not communicate with legacy data center applications. And some are aiming for a fully integrated public-private ecosystem, with varying degrees of success so far.
The issue all of these approaches have in common is that different clouds require different application-layer support, and that is before non-cloud environments are added to the mix.
This is one of the reasons why today’s joint announcement between Cisco and Google is so interesting. Although details are sketchy, the companies appear to be aiming for a fully integrated cloud that gives users a single application development and operational environment functioning at scale across multiple architectures. The plan is to pair Google assets like the Kubernetes container orchestration stack and the Apigee API management platform with Cisco’s DevNet developer program to enable a unified hybrid infrastructure under the Google Cloud Platform. In this way, the companies say, they can deliver an open environment that fits comfortably within virtually any enterprise’s cloud strategy while providing end-to-end security, visibility and control both at home and across public resources. The system is expected to launch in early 2018.
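Neither company has published technical specifics yet, but the portability argument rests largely on Kubernetes: a workload described once can be scheduled on an on-premises cluster or on Google’s cloud without modification. The sketch below is purely illustrative and does not reflect any announced Cisco or Google tooling; it uses the open-source Kubernetes Python client to push the same Deployment to two clusters, with the kubeconfig context names ("onprem", "gke") and the container image as assumed placeholders.

```python
# Illustrative only: deploy one workload definition to two Kubernetes clusters.
# The kubeconfig context names ("onprem", "gke") and the image are placeholders.
from kubernetes import client, config


def build_deployment() -> client.V1Deployment:
    """A single Deployment spec that is valid on any conformant cluster."""
    container = client.V1Container(
        name="web",
        image="gcr.io/example-project/web:1.0",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )


# The same object goes to the private data center and to Google's cloud;
# only the kubeconfig context differs.
for context in ("onprem", "gke"):
    api = client.AppsV1Api(config.new_client_from_config(context=context))
    api.create_namespaced_deployment(namespace="default", body=build_deployment())
```

The point of the sketch is simply that the workload definition never changes; whatever Cisco and Google ship on top of this will presumably add the security, visibility and control pieces they are promising.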
The speed at which the hybrid cloud has captured the enterprise imagination makes the launch of virtually any solution a no-brainer at this point. A recent study by SUSE reports that two-thirds of enterprises expect the growth of hybrid infrastructure to continue into the next decade, compared to 55 percent who believe the same of private-only infrastructure and just 36 percent for public-only. At the same time, 86 percent say that DevOps is a key component of their future IT strategies and that containers will be integral to those efforts. So a container-based hybrid cloud solution that supports integrated development and operations sure sounds like a winner.
And yet, is the Google-Cisco approach all that different from other provider-vendor solutions? HPE recently started shipping an all-in-one system that puts Microsoft’s Azure cloud on its ProLiant server line, giving organizations a simple way to add a fully compatible cloud environment inside their legacy data centers. One can argue that this locks users into the Azure cloud, but since both Microsoft and HPE offer ties to open platforms like Linux and OpenStack, users are hardly blocked from integrating third-party solutions into their hybrid clouds if they choose.
What’s missing in all of this is the recognition that with the shift from infrastructure-centric IT to application-centric IT, arguing over whether the cloud is hybrid enough is beside the point, says Cisco’s Pete Johnson. Rather, we should be focusing on hybrid applications that can draw on whatever resources they need to fulfill their mandates. In that model, the question is not whether an application can run on this cloud or that cloud, but whether it can select best-of-breed services on any cloud. Imagine a single app that writes a text file to S3, feeds it to a text-to-speech service on Azure, and then stores the resulting audio on the IBM Bluemix object storage that hosts your website. In this way, the data gravity issues that bedevil today’s monolithic applications are eliminated.
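As a thought experiment only, and not a reflection of any announced product, the pattern Johnson describes might look something like the hedged Python sketch below: one application writes text to AWS S3, sends it to Azure’s text-to-speech REST endpoint, and publishes the resulting audio to IBM’s object storage, which exposes an S3-compatible API. The bucket names, Azure region and key, voice name, and IBM endpoint and credentials are all placeholders.

```python
# Illustrative sketch of a "hybrid application" stitching together services
# from three clouds. All bucket names, keys, regions and endpoints below are
# placeholders, not real resources.
import boto3
import requests

TEXT = "Welcome to our hybrid application."

# 1. Persist the source text on AWS S3 (credentials come from the usual AWS config).
s3 = boto3.client("s3")
s3.put_object(Bucket="example-source-bucket", Key="welcome.txt", Body=TEXT.encode())

# 2. Convert the text to speech with Azure Cognitive Services' REST API.
#    Region, key and voice are assumed placeholders.
azure_region = "eastus"
azure_key = "YOUR-AZURE-SPEECH-KEY"
ssml = (
    '<speak version="1.0" xml:lang="en-US">'
    '<voice xml:lang="en-US" name="en-US-JennyNeural">'
    f"{TEXT}</voice></speak>"
)
resp = requests.post(
    f"https://{azure_region}.tts.speech.microsoft.com/cognitiveservices/v1",
    headers={
        "Ocp-Apim-Subscription-Key": azure_key,
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "audio-16khz-128kbitrate-mono-mp3",
    },
    data=ssml.encode("utf-8"),
)
resp.raise_for_status()

# 3. Publish the audio to IBM's object storage through its S3-compatible API,
#    alongside the website it already serves.
ibm_cos = boto3.client(
    "s3",
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
    aws_access_key_id="YOUR-IBM-HMAC-KEY",
    aws_secret_access_key="YOUR-IBM-HMAC-SECRET",
)
ibm_cos.put_object(Bucket="example-site-bucket", Key="welcome.mp3", Body=resp.content)
```

Each step could just as easily point at a different provider’s equivalent service, which is the point: the application, not the cloud, becomes the unit of integration.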
Nobody expected the transition from static infrastructure to virtual, cloud-based environments to be easy or quick, so the hybrid cloud is likely to remain a work in progress for a while longer.
But before enterprise executives decide on exactly what type of cloud they are going for, it might help to pause a moment to determine what types of applications need to be supported and how they can best serve operational and business objectives – preferably without disrupting the processes and services that fulfill those objectives today.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.