AWS Just Made Their Management Tools Ready for Multicloud

This post originally appeared on the Gartner Blog Network.

I am just back home after spending last week at AWS re:Invent in tiresome, noisy, vibrant and excessive Las Vegas. At Gartner, I cover cloud management and governance, and I was disappointed not to hear much about it in any of the keynotes. I get it: management can sometimes be perceived as a boring necessity. However, it is also an opportunity to make a cloud platform simpler. And that’s something that AWS needs. Badly.

Despite the absence of highlights in the keynotes, I spotted something interesting while digging through the myriad of November announcements. What apparently got lost in the re:Invent noise is that AWS is opening up some of their key management tools to support resources outside of the AWS cloud. Specifically, AWS CloudFormation and AWS Config now support third-party resources. And that’s a big deal.

The Lost Announcements

The CloudFormation announcement reports that AWS has changed the tool’s architecture to implement resource providers, much in line with what HashiCorp Terraform is also doing. Each resource provider is an independent piece of code that enables support in CloudFormation for a specific resource type and API. A resource provider can be developed independently of CloudFormation itself, including by non-AWS developers.

AWS plans to promote resource providers through the open source model and certainly has the ability to grow a healthy community around them. The announcement also says that a number of resource providers will shortly be available for third-party solutions. Upcoming providers include Atlassian, Datadog, Densify, Dynatrace, Fortinet, New Relic and Spotinst. AWS is also implementing this capability for native AWS resources such as EC2 instances or S3 buckets, hinting that it may not be just an exception, but a major architectural change.
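
To give a sense of the mechanics, here is a minimal sketch of how a resource provider might be registered and then listed through the CloudFormation registry APIs using boto3. The type name and the S3 handler package are hypothetical placeholders, not taken from the announcement.

```python
# Minimal sketch (not an official AWS example): registering a hypothetical
# third-party resource provider with the CloudFormation registry via boto3.

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Submit the packaged resource provider (schema plus handlers) for registration.
registration = cfn.register_type(
    Type="RESOURCE",
    TypeName="Example::Monitoring::Dashboard",  # hypothetical third-party type
    SchemaHandlerPackage="s3://example-bucket/dashboard-handler.zip",
)
print("Registration token:", registration["RegistrationToken"])

# List the privately registered resource types in this account and region.
for summary in cfn.list_types(Visibility="PRIVATE")["TypeSummaries"]:
    print(summary["TypeName"])
```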

In the same way, AWS Config now also supports third-party resources. The same resource providers used by CloudFormation enable AWS Config to manage inventory, but also to define rules that check for compliance and to create conformance packs (i.e., collections of rules). All of this also applies to non-AWS resources.
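
On the AWS Config side, a similar sketch shows how the inventory of such a third-party type could be queried with boto3 once its configuration items are recorded. The type name is the same hypothetical example as above, and whether Config accepts a given custom type string depends on the registration described in the announcement.

```python
# Minimal sketch: querying AWS Config's inventory for a (hypothetical)
# third-party resource type once its configuration items are being recorded.

import boto3

config = boto3.client("config", region_name="us-east-1")

# List the discovered resources of the hypothetical third-party type.
discovered = config.list_discovered_resources(
    resourceType="Example::Monitoring::Dashboard"
)
for resource in discovered["resourceIdentifiers"]:
    print(resource["resourceId"])

    # Fetch the recorded configuration history for each discovered resource.
    history = config.get_resource_config_history(
        resourceType="Example::Monitoring::Dashboard",
        resourceId=resource["resourceId"],
    )
    print(len(history["configurationItems"]), "configuration items recorded")
```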

Why is This a Big Deal?

With this launch, AWS addresses one of the major shortcomings of its management tools: being limited to a single platform – the AWS cloud. From today, anyone can develop resource providers for Microsoft Azure or Google Cloud Platform resources. This possibility makes AWS CloudFormation and AWS Config de facto ready to become multicloud management tools. And we all know what AWS thinks about multicloud, don’t we?

Furthermore, AWS is now challenging the third-party management market, at least within the provisioning and orchestration, inventory and classification, and governance domains (see this Gartner framework for reference). AWS CloudFormation now incorporates more capabilities of HashiCorp Terraform. It can also be used to model and execute complex orchestration workflows that organizations normally handle with platforms like ServiceNow. AWS Config can now aim to become a universal CMDB that keeps track of resource inventory and configuration history from anywhere.

Both AWS CloudFormation and AWS Config are widely adopted tools. Customers could be incentivized to extend their use beyond AWS instead of selecting a new third-party tool that would require a new contract to sign and a new vendor to manage. Does this mean that AWS has issued a death sentence to the third-party management market that makes up much of its ecosystem? Certainly not. But these announcements speak to the greater ambition of AWS and will force third-party vendors to find new ways to continue to add value in the long term. Maybe the resource provider ecosystem will not develop, and customers will continue to prefer independent management vendors. Or maybe not.

In conclusion, it was disappointing not to hear this message loud and clear at re:Invent this year, especially compared to the amount of noise we heard around the launches of Google Anthos and Azure Arc. But there is certainly a trend here: all the major providers are preparing their management tools to stretch beyond their respective domains. How far they want to go is yet to be determined.

What Blockchain and Cloud Computing Have in Common

This post originally appeared on the Gartner Blog Network.

Blockchain technologies provide ledger databases whose records are immutable and cryptographically signed using a distributed consensus or validation protocol. These characteristics have contributed to the popularity of blockchain for powering transaction execution in multiparty business environments. With blockchain, multiple parties can agree on transaction details while still guaranteeing correctness and preventing tampering, without having to rely on a trusted centralized authority.

To provide such functionality, and just like any other database, blockchain technologies are built around platforms, infrastructure, APIs and management tools. Cloud computing is a well-oiled model that provides easy access to all these technology components, in addition to services and capabilities for application development and integration. While cloud computing can certainly help accelerate the execution of blockchain projects, it is also a heavily centralized model, concentrated around a few hyperscale megavendors. Conversely, the effectiveness of blockchain relies on decentralization as one of its core principles.

Full decentralization is especially important for public blockchains (such as Bitcoin), where anybody is free to participate and transact. Enterprise blockchains, on the other hand, may accept trading away aspects of decentralization (such as relying on a single technology provider) in exchange for easier access to technologies and lower management overhead.

All hyperscale cloud providers have launched blockchain cloud services in the last 18 months to help organizations with their blockchain projects. These services build on the strengths of each provider (in terms of infrastructure, platform and application development capabilities) but also aim to facilitate the use of open-source DLT frameworks such as Ethereum, Hyperledger Fabric and Quorum.

In my recently published research, “Solution Comparison for Blockchain Cloud Services From Leading Public Cloud Providers” (paywall), I have assessed and compared the blockchain-related cloud services offered by:

  • Alibaba Cloud
  • Amazon Web Services
  • Google
  • IBM
  • Microsoft
  • Oracle

The research provides a heatmap of the capabilities provided by each vendor, allowing Gartner clients to quickly assess their strengths and weaknesses in this space. The research also provides all the details behind the attributed scores for those technical professionals who want to dig deeper into each vendor’s offering. Some examples of the comparison criteria include:

  • Number of Supported DLTs
  • Blockchain Community Involvement
  • Infrastructure Supported
  • Fully Managed Ledger Service
  • Smart Contract Management

Like most blockchain technologies, blockchain cloud services are still immature, especially in light of the rapidly evolving landscape of DLT frameworks. As a demonstration of that, many of the assessed cloud services were launched while this research was being conducted, which required multiple reassessments of the vendor offerings. Some vendors also launched additional services and features after the publication of this research, for example:

To learn more about this topic, or if you would like to discuss it further, you can read the research note “Solution Comparison for Blockchain Cloud Services From Leading Public Cloud Providers” (paywall). You can also reach out to your Gartner representative to schedule an inquiry call with me. Looking forward to hearing your comments!

Just Published: New Assessments of AWS, Azure and GCP Cloud IaaS

This post originally appeared on the Gartner Blog Network.

Gartner has just published the updated cloud IaaS scores for Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). Gartner clients are normally used to seeing these updates come once a year, but this time we decided to publish a quick incremental update, which is still based on last year’s 236-point Evaluation Criteria for Cloud Infrastructure as a Service (research by Elias Khnaser, @ekhnaser). Considering the pace at which the three hyperscale cloud providers are moving, we felt the need to reassess their coverage more frequently.

Compared to the previous assessments, which occurred in mid-summer 2017, these new assessments show steady growth in feature coverage by all three providers, with GCP leading the growth with an overall increase of 12 percentage points. Azure follows with five additional percentage points, and AWS, which also had the highest coverage last year, marked an increase of four percentage points. The figure below shows the details of the movements within this update, broken down by required, preferred and optional criteria. It is interesting to note that some scores also went down (see Azure, required). When scores go down, it is not always because providers removed features; sometimes – as in this case – it is due to changes in the applicability and scope of the criteria.

What exactly is behind these changes? Gartner for Technical Professionals (GTP) clients can access the three research notes to find out. With this update to the in-depth assessments, we have also introduced a “What’s New” summary section and a detailed “Change Log”, so that clients can quickly determine which provider updates drove the changes in the scores.

What are the areas where providers are investing more? What gaps still exist in some of their offerings? Are those gaps important or negligible for your organization? Find the answers to these and other questions by accessing the detailed research notes at:

In the meantime, Gartner is also redefining the full list of evaluation criteria for cloud IaaS in light of provider innovation and the shift in customer requirements as they adopt more public cloud services. The next update of the providers’ scores will most likely be based on the revised evaluation criteria. Stay tuned for new and potentially surprising results!

Serverless, Servers and Cloud Management at AWS re:Invent 2017

This post originally appeared on the Gartner Blog Network.

In the last few days, the press has been dominated by countless interpretations of the myriad of AWS re:Invent announcements. Every article I read was trying (hard) to extract some kind of trend or direction from the overall conference. However, each one merely succeeded in providing a single, narrow perspective. AWS has simply tried to position itself as the “everything IT” (as my colleague Lydia Leong said in a tweet). With so many announcements (61, according to AWS), across so many areas and in such a short time, it is extremely difficult for anyone to understand their impact without a more thorough analysis.

However, I won’t refrain from giving you my own perspective as well, noting down a couple of things that stood out for me.

Serverless took the driver’s seat across the conference, no doubt. But servers did not move back into the trunk as you might have expected. Lambda got a number of incremental updates. New services went serverless, such as Fargate (containers without the need to manage the orchestrator cluster) and the Aurora database. Finally, Amazon is headed toward delivering platform as a service as it should have been from day one: a fully multi-tenant abstraction layer that handles your code and that you pay for only when your code is running.
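
As an illustration of that “just your code” model (a minimal sketch, not something from the announcements), this is roughly what a Python Lambda-style function looks like: you upload the handler, and the platform runs and bills it per invocation.

```python
# A minimal, illustrative Lambda-style handler (Python runtime).
# You upload only this code; the provider runs it on demand and bills
# per invocation and execution time -- no servers or clusters to manage.

import json

def handler(event, context):
    # 'event' carries the trigger payload (e.g. an API Gateway request);
    # 'context' exposes runtime metadata such as the remaining execution time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```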

However, we also heard about Nitro, a new lightweight hypervisor that can deliver near-metal performance. Amazon also announced bare-metal instances. These two innovations have been developed to attract more of the humongous number of workloads out there that still require traditional servers to run. Even when the future seems to be going serverless, server innovation is still relevant. Why? Because by lowering the hypervisor’s overhead, Nitro can lead to better node density, better utilization and ultimately cost benefits for end users.

With regard to my main area of research, I was disappointed that only a couple of announcements related to cloud management. Amazon announced an incremental update to CloudTrail (related to Lambda again, by the way) and the expansion of Systems Manager to support more AWS services. Systems Manager is absolutely a step towards what should be a more integrated cloud management experience. However (disclaimer: I’ve not seen it in action yet), my first impression is that it still focuses only on gaining (some) visibility and on automating (some) operational tasks. It’s yet another tool that needs integration with many others.

My cloud management conversations with clients tell me that organizations are struggling to manage and operate their workloads in the public cloud, especially when these live in conjunction with their existing processes and environments. Amazon needs to do more in this space to feel less like just-another-technology-silo and deliver a more unified management experience.

When Andy Jassy and Werner Vogels were asked about multicloud, they both dismissed it. They said that most organizations stick with one primary provider for the great majority of their workloads. The reason? Because organizations don’t accept working at the least common denominator (LCD) between providers. Nor do they want to become fluent in multiple APIs.

The reality is that multicloud doesn’t necessarily mean having to accept the LCD. Multicloud doesn’t imply having a cloud management platform (CMP) for each and every management task. It doesn’t imply having to make each and every workload portable. The LCD between providers would indeed be too much of a constraint for anyone adopting public cloud services.

On the contrary, we see that many organizations are willing to learn how to operate multiple providers. They want to do that to be able to place their workloads where it makes the most sense, but also as a risk mitigation technique. In case they are ever forced to exit one provider, they want to be ready to transfer their workloads to another (obviously, with a certain degree of effort). Nobody wants to be constrained to work at the LCD level, but this is not a good excuse to stay single-cloud.

Amazon continues to innovate at an incredible pace, which seems to accelerate every year. AWS re:Invent 2017 was no exception. Now, organizations have more cloud services to support their business. But they also have many more choices to make. Picking the right combination of cloud services and tools is becoming a real challenge for organizations. Will Amazon do something about it? Or shall we expect hundreds more service announcements at re:Invent 2018?

New Research: How To Manage Public Cloud Costs on Amazon Web Services and Microsoft Azure

This post originally appeared on the Gartner Blog Network.

Today, I am proud to announce that I just published new research (available here) on how to manage public IaaS and PaaS cloud costs on AWS and Microsoft Azure. The research illustrates a multicloud governance framework that organizations can use to successfully plan, track and optimize cloud spending on an ongoing basis. The note also provides a comprehensive list of cloud providers’ native tools that can be leveraged to implement each step of the framework.

In the last 12 months of client inquiries, I felt a remarkable enthusiasm for public cloud services. Every organization I talked to was at some stage of public cloud adoption. Almost nobody was asking me “if” they should adopt cloud services but only “how” and “how fast”. However, these conversations also showed that only a few organizations had realized the cost implications of public cloud.

In the data center, organizations were often over-architecting their deployments in order to maximize the return on investment of their hardware platforms. These platforms were refreshed every three to five years and sized to serve the maximum expected workload demand over that time frame. The cloud reverses this paradigm and demands that organizations size their deployments much more precisely, or they’ll quickly run into overspending.

Furthermore, cloud providers’ price lists, pricing models, discounts and billing mechanisms can be complex to manage even for mature cloud users. Understanding the most cost-effective option to run certain workloads is a management challenge that organizations are often unprepared to address.

Using this framework will help you take control of your public cloud costs. It will help your organization achieve operational excellence in cost management and realize many of the promised cost benefits of public cloud.

The Gartner framework for cost management comprises five main steps:

  • Plan: Create a forecast to set spending expectations.
  • Track: Observe your actual cloud spending and compare it with your budget to detect anomalies before they become a surprise (a small sketch of this step follows the list).
  • Reduce: Quickly eliminate resources that waste cloud spending.
  • Optimize: Leverage the provider’s discount models and optimize your workload for cost.
  • Mature: Improve and expand your cost management processes on a continual basis.
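
As a concrete illustration of the Track step, here is a minimal sketch that pulls a month of AWS spend per service from the Cost Explorer API via boto3 and compares it with a budget figure. The budget value, dates and grouping are illustrative placeholders, not part of the research note.

```python
# Minimal sketch of the "Track" step: compare actual AWS spend against a budget.
# The budget figure and billing period below are illustrative placeholders.

import boto3

MONTHLY_BUDGET_USD = 10000.0  # hypothetical budget for this example

ce = boto3.client("ce")  # AWS Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-01-01", "End": "2018-02-01"},  # example period
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

total = 0.0
for group in response["ResultsByTime"][0]["Groups"]:
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    total += amount
    print(f'{group["Keys"][0]}: ${amount:,.2f}')

print(f"Total: ${total:,.2f}")
if total > MONTHLY_BUDGET_USD:
    print("Anomaly: actual spend exceeds the monthly budget.")
```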

If you recognize yourself in the above challenges, this new research note is an absolutely recommended read. For a comprehensive description of the framework and the corresponding mapping of AWS and Microsoft Azure cost management tools, see “How To Manage Public Cloud Costs on AWS and Microsoft Azure”.

2015: the surrendering to the cloud

I thought I’d label 2015 as the year of the surrendering to the cloud. And by this I do not mean the mass adoption that every software vendor was waiting for, but surrendering to (1) the fact that cloud is now pervasive and no longer up for debate, and (2) the dominance of Amazon Web Services.

A debate had previously been going on for way too long about the real benefits of the cloud. And I’m not talking about end customers here; I’m talking about IT professionals, for whom new technologies should be bread and butter. Yet around cloud computing they somehow showed the strongest skepticism, a high dose of arrogance (how many times have I heard “we were doing cloud 20 years ago, we just weren’t calling it that”) and reluctance to embrace change. The great majority of them underestimated the phenomenon to the point of challenging its usefulness or reducing it to virtualisation in some other data center which is not here.

I asked myself why this happened and came to the conclusion that cloud has simply been too disruptive, even for IT pros. To understand the benefits of the cloud in full, one had to make a mental leap. People naturally learn by small logical next steps, so cloud was interpreted as just the natural next step after having virtualised their data centres. But as I wrote more than three years ago in the blog post Cloud computing is not the evolution of virtualisation, the cloud came to solve a different problem and used virtualisation just as a delivery method to accomplish its goal. Finally, in 2015, I personally witnessed that long-overdue increase in maturity with respect to cloud technologies. Conversations I had with service providers and end customers’ IT pros were no longer about “whether” to cloud or not to cloud, but about “what” and “when” instead.

What has helped achieve this maturity? I think it is the fact that nobody could ignore the elephant in the room any longer. The elephant called Amazon Web Services. That cloud pioneer and now well-consolidated player that is probably five years ahead of its nearest competitor in terms of innovation and feature richness. And not only is nobody ignoring it anymore, everyone wants to have a ride on it.

Many of those IT pros I mentioned are actually employed by major software vendors, maybe even leading their cloud strategy. Their initial misunderstanding of the real opportunity behind cloud adoption led to multi-million investments in the wrong products. And in 2015 (here we come to surrendering number 2) we saw many of these failures surface and demand real change. Sometimes these changes were addressed with new acquisitions (like EMC’s acquisition of Virtustream), other times with the decision to co-opt instead of compete.

To pick some examples:

On Tuesday [Oct 6th] at AWS re:Invent, Rackspace launched Fanatical Support for AWS, beginning with U.S.-based customers. Non-U.S. customers will have to wait a while, although Rackspace will offer support for them in beta mode. In addition, Rackspace will also resell and offer support services for AWS’s elastic cloud as it’s now officially become an authorized AWS reseller.
Hewlett-Packard is dropping the public cloud that it offered as part of its Helion “hybrid” cloud platform, ceding the territory to Amazon Web Services and Microsoft’s Azure. The company will focus on private cloud and traditional IT that its large corporate customers want, while supporting AWS and Azure for public cloud needs.
HP Enterprise’s latest strategy, which dovetails with earlier plans to focus on private and managed clouds, is to partner with Microsoft and become an Azure reseller.

What does this tell us? Most software vendors are now late to the game and are trying to enter the market by holding the hand of those who understood (and somewhat contributed to creating) the public cloud market. But don’t we always say the cloud market is heading to commoditisation? Why does there seem to be no space for a considerable number of players? Certainly HP, VMware or IBM have investment capacity comparable to Amazon’s to grow big and compete head to head.

The reality is that we’re far from this commoditisation. While virtual machines may well be a commodity, they’re no more than a tiny part of the whole set of cloud services offered, for example, by AWS (EC2 was mentioned only once during the two main keynotes at AWS re:Invent this year!). The software that enables the full portfolio of cloud services still makes a whole lot of difference, and delivering it requires vision, leadership, understanding and a ton of talent. Millions in investments without the rest was definitely not the way.

Happy 2016!

Why developers won’t go straight to the source

I’m so excited. Last Wednesday, Flexiant announced the acquisition of the Tapp technology platform and business. I met the guys behind it quite a while ago and I have never refrained from remarking how great their technology is (see here). I recognized a trend in their way of addressing the cloud management problem, and I’m so glad to be part of it right now.

Disclaimer: I am currently working for Flexiant as Vice President of Products. I have endorsed this acquisition, I am fully behind the reasons for it and I am convinced of its potential. This is my personal blog; whatever you read here has not been agreed with my employer in advance and therefore represents my very personal opinion.

Right after the acquisition (read more about it here), there was tremendous noise on social networks and in the press. David Meyer (@superglaze) of GigaOm in particular wrote up a few interesting comments and captured the reasoning behind the deal well, but he also ended his article with an open question:

“This [the Tapp technology platform] would help such players [Service Providers] appeal to certain developers that are currently just heading straight for EC2 or Google.
 
Of course, this is ultimately the challenge for the likes of Flexiant – can anything stop those developers going straight to the source? That question remains unanswered.”

Well, I’d like to answer that question and explain why I’m actually convinced there is a lot of value for multi-cloud managers to add.

Much has been written these days about the business side of the acquisition and I don’t have anything meaningful to add. Instead, I would like to raise a few interesting points from a technology point of view (that’s my job, after all) and unveil the value that is maybe not so obvious at first sight.

Multi-cloud management

Multi-cloud management per se covers a very broad spectrum of meanings. There are multi-cloud managers focused on brokerage, and therefore primarily on getting you the best deal out there. While this is a good example of how to provide “multi-cloud” value, I’m still wondering how they can actually find a way to compare apples with oranges. In fact, cloud infrastructure service offerings are so different and heterogeneous that being simply a cloud broker makes it extremely difficult to succeed, deliver real value and differentiate. So, point number one: Tapp isn’t a cloud brokerage technology platform.

Other multi-cloud managers deliver value by adding a management layer on top of existing cloud infrastructures. This management layer may be focused on specific verticals, like scaling Internet applications (e.g. RightScale) or providing enterprise governance (e.g. Enstratius, now Dell Multicloud Manager). By choosing a vertical, they can address specific requirements, cut the unnecessary stuff out of the general-purpose cloud provider and enhance the user experience for very specific use cases. That’s indeed a fair point, but not yet what Tapp is all about.

So why, when using Tapp, won’t developers “go straight to the source”? Well, first of all, let’s make clear that developers are already at the source. In fact, to use any multi-cloud manager you need an AWS account or a Rackspace account (or an account with any other supported provider). You need to configure your API keys in order to enable communication with the cloud provider of choice. So if someone is using your multi-cloud manager, it means they prefer it over the management layer provided by “the source”.

The cloud provider lock-in

One of the reasons behind Amazon’s success is the large portfolio of services they have rolled out. They’re all services that can be put together by end users to build applications, letting developers focus just on their core business logic, without worrying too much about queuing, notifications, load balancing, scaling or monitoring. However, whenever you use tools like ELB, Route 53, CloudWatch or DynamoDB, you’re locking yourself into Amazon. The more you use multi-tenant proprietary services that exist only on a specific provider, the harder it becomes to migrate your application away.

You may claim to be “happy” to be locked in to a vendor who’s actually solving your problems so well, but there are a lot of good reasons (“Why Cloud Lock-in is a Bad Idea“) to avoid vendor lock-in as a matter of principle. Many times, this is one of the first requirements of those enterprises that everyone is trying to attract to the cloud.

Deploying the complete application toolkit

Imagine there were a way to replicate those services on another cloud provider by building them from the ground up on top of some virtual servers. Imagine this could be done by a management layer, on demand, on your cloud infrastructure of choice. Imagine you could consume and control those services always using the same API. That would enable your application to be deployed in a consistent manner across multiple clouds, relying exclusively on the ability to spin up some virtual servers, which every cloud infrastructure provider offers.
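
To illustrate that last point (and only the principle of a provider-neutral API, not Tapp’s actual implementation), here is a minimal sketch using Apache Libcloud, where the same calling code boots a vanilla Ubuntu server on two different providers. Credentials and regions are placeholders.

```python
# Illustrative sketch only: the "same API, different providers" principle,
# shown with Apache Libcloud (pip install apache-libcloud). This is not how
# Tapp is implemented; credentials and regions are placeholders.

from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

def boot_vanilla_server(provider, *creds, **kwargs):
    """Spin up a plain Ubuntu server through one provider-neutral interface."""
    driver = get_driver(provider)(*creds, **kwargs)
    # Pick the smallest size and an Ubuntu base image offered by the provider.
    size = sorted(driver.list_sizes(), key=lambda s: s.ram)[0]
    image = next(
        img for img in driver.list_images()
        if img.name and "ubuntu" in img.name.lower()
    )
    return driver.create_node(name="app-node", image=image, size=size)

# The calling code stays the same whichever cloud is underneath.
node_on_aws = boot_vanilla_server(Provider.EC2, "ACCESS_KEY", "SECRET_KEY",
                                  region="eu-west-1")
node_on_rackspace = boot_vanilla_server(Provider.RACKSPACE, "USERNAME", "API_KEY",
                                        region="lon")
```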

That kind of consistent, provider-independent layer is what Tapp is about. And the advantages of doing so are not trivial; they include:

1. Independence, consistency and compatibility

This is the obvious one. For instance, a user can click a button to deploy an application on Rackspace and another button to deploy a DNS manager and a load balancer. These two would provide an API that is directly integrated into the control panel and therefore consumable as-a-service. Now, the exact same thing can also be done on Amazon, Azure, Joyent or any other supported provider, obtaining the exact same result. Cloud providers suddenly become compatible.

2. Extra geographical reach

Let’s say you like Joyent but you want to deploy your application closer to a part of your user base that lives where Joyent doesn’t have a data center. But look, Amazon has one there and, even though you don’t like its pricing, you’re ready to make an exception to gain some latency advantages for serving your user base. If your application uses some of Joyent’s proprietary tools, it would be extremely difficult to replicate it on Amazon. But if you could deploy the whole toolkit using just some EC2 instances, it all becomes possible.

3. Software-as-a-(single)-tenant

While multi-tenancy has been considered a key tenet of cloud computing, I have started to believe that, as long as an end user can consume an application as-a-service, maybe it doesn’t matter whether it’s multi-tenant or single-tenant.

If you can deploy a database in a few clicks and get your connector as a result, does it really matter whether this database is also hosting other customers or not? Actually, single-tenancy could become the preferred option[1], as the user would not have to worry about isolation from other customers, noisy neighbors, et al. Tony Lucas (@tonylucas) wrote about this before on the Flexiant blog and I think he’s spot on: there is a “third” way, and that’s what I think is going mainstream.

The Tapp way

The Tapp technology platform was built to provide all of that: a large set of application-centric tools, features and functions[2] that can be deployed across multiple clouds and consumed as-a-service.

Of course, it’s not just about tools. It’s also about the application core, whatever it is. The Tapp technology also solves that consistency problem by pushing application deployment and configuration into Chef recipes, as opposed to cloud provider-specific OS images or templates[3]. Every time you run those recipes you get the same result, on any cloud provider. In fact, to deploy your application you just need the availability of vanilla OS images, like Ubuntu 14.04 or Windows 2012 R2, which, honestly, are offered by every cloud provider.

All those end users who want to deploy applications without feeling locked in to a specific provider have had, until now, only one way of doing it: DIY (“do-it-yourself”). They would have to maintain and operate OS images, load balancers, DNS servers, monitors, auto-scalers, etc. That’s a burden that, most of the time, they’re not ready to take on. They don’t want to spend time deploying all those services that end up being the same, every time. Tapp takes that burden away from them. It deploys applications and service toolkits in an automated fashion and provides users with just the API to control them. And this API is consistent, independent of the chosen cloud provider. This is the key value that, I believe, will prevent developers from going straight to the source.

1. Multi-tenancy would be the preferred option for the service provider because it translates into economies of scale. However, economies of scale often lead to cost optimisation and end-user price reductions and can therefore be considered an indirect advantage for end customers as well.

2. Tapp features include: application blueprinting with Chef, geo-DNS management and load balancing, network load balancing, auto-scaling based on application performance, application monitoring, object storage and FDN (file delivery network).

3. It is worth mentioning that pushing application deployment into configuration management tools like Chef or Puppet significantly affects deployment time. That’s why it’s strongly advised to find the optimal balance between what is built into the OS image and what is left to the configuration management tool.

Virtualization no longer matters

There is no doubt. The product is there. The vision, too. At times, they leave some space for arrogance as well but, come on, they are the market leader, aware of being far ahead of anybody else in this field. A field they actually invented themselves. We almost feel like forgiving that arrogance. Don’t we?

The AWS Summit 2013 in London was, once more, confirmation that the cloud infrastructure market is there, that the potential is higher than ever and that Amazon “gets” it, drives it and dominates it quite undisturbed. All the others struggle to distinguish themselves among a huge number of technology companies, old and new, who are strongly convinced of having jumped into the cloud business but whose executives, I’m pretty sure, mostly think that cloud is just the new name for hosting services.

Before going forward, I want to thank Garret Murphy (@garrettmurphy) for transferring his AWS Summit ticket to me without even knowing who I was, simply and kindly responding to my tweeted inquiry. I wish him and his Dublin-based startup 247tech.ie the required amount of luck that, coupled with great talent, leads to success.

Now, I won’t go through the whole event because, this being a roadshow of which London wasn’t the first stop, much has already been said here and here. The general perception I had is that AWS is still focused on presenting the advantages of cloud-based as opposed to on-premises IT infrastructure, showing off the rich toolset they have put in place and bringing MANY (I counted nearly 20) customers to testify to how they are using the AWS cloud and what advantages they got from doing so. OK, most of them were the usual hyper-scale Internet companies, but I’ve seen the effort to bring in enterprise testimonials like ATOC (the Association of Train Operating Companies of the UK). However, they all said they were using AWS only for web-facing applications, staging environments or big data analytics. The usual stuff, which we know to be cloud friendly.

What really impressed me was the OpsWorks demo. OpsWorks was released not long ago as the nth complementary Amazon Web Service, this one meant to help operate resilient, self-healing applications in the cloud. Aside from the confusion around what to use when, given the large number of tools available (not to mention those from third parties, which are growing uncontrolled day by day), there is one evident trend arising from it.

For those who don’t know OpsWorks, it is an API-driven layer built on top of Chef in order to automate the setup, deployment and un-deployment of application stacks. An attempt at DevOps automation. How this is going to meet customers’ actual requirements while still keeping things simple (i.e., without having to provide too large a number of options) is not clear yet.

During the session demonstrating OpsWorks, the AWS solutions architect remarked that no custom AMIs (Amazon Machine Images) are available for selection when creating an application stack. Someone in the audience immediately complained on Twitter about this, probably because he wasn’t happy about having to rebuild all his customizations through Chef recipes on top of lightweight basic OS images, discarding them from his custom VM image.

In fact, there are several advantages to moving the actual machine setup to the post-bootstrap automation layer. For example, the ease of upgrading software versions (e.g. Apache, MySQL) simply by changing a line in a configuration file instead of having to rebuild the whole operating system image. But mostly because, by keeping OS images adherent to the clean vendor releases, you will probably find them available at other cloud providers, making your application setup completely cross-cloud. Of course there are disadvantages too, including the delay added by operations like software downloads or configuration that may be necessary each time you decide to scale up your application.
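
To make that “change a line instead of rebuilding an image” point concrete, here is a minimal, hypothetical sketch of the idea in Python (not OpsWorks or Chef code): the desired software versions live in a small declarative mapping, and a post-boot script installs them on a vanilla Ubuntu image, so a version upgrade is a one-line edit. Package names and version pins are placeholders.

```python
# Hypothetical post-bootstrap setup script: run on a vanilla Ubuntu image at
# first boot (e.g. via cloud-init user data) to install declared package
# versions. Bumping a version below replaces rebuilding a custom VM image.

import subprocess

DESIRED_PACKAGES = {
    "apache2": "2.4.7-1ubuntu4",   # change this line to upgrade Apache
    "mysql-server": "5.5.35",      # ...or this one to upgrade MySQL
}

def install(package, version):
    # Pinning the version keeps every node, on every cloud, identical.
    subprocess.check_call(["apt-get", "install", "-y", f"{package}={version}"])

if __name__ == "__main__":
    subprocess.check_call(["apt-get", "update"])
    for pkg, ver in DESIRED_PACKAGES.items():
        install(pkg, ver)
```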

Cross-cloud application deployment. No vendor lock-in. Cool. There is actually a Spanish startup called Besol that is building its entire (amazing) product “Tapp into the Cloud” on the management of cross-cloud application stacks, leveraging a rich library of Chef cookbook templates. And while I was writing this post on a flight from London, Jason Hoffman (@jasonh) was being interviewed by GigaOM and, while announcing a better integration between Joyent and Chef, he mentioned the compatibility between cloud environments as a major advantage of using Chef.

What we’re observing is a major shift from leveraging operating system images towards the adoption of automation layers that can quickly prepare whatever application you want your virtual server to host. That means that one of the major advantages introduced by virtualization technology (the software manipulation of OS images, one of the triggers of the rise of cloud computing) no longer matters.

Potentially, with the adoption of automation platforms like Chef, Puppet or CFEngine, service providers could build a complete cloud infrastructure service, without employing any kind of hypervisor. And this trend is further confirmed by facts like:

Of course, there are still advantages to using a hypervisor, because certain applications require architectures made of many micro-instances for parallel computing, so it’s still necessary to slice a server into many small portions. However, with silicon processors increasing their number of cores and their ability to use threads, virtualization may not be so important for the cloud anymore.

In the end, I think we can no longer say that virtualization is the foundation of cloud computing. A more correct statement would perhaps be that virtualization inspired cloud computing. But the future may leave even less space for that.