AWS Just Made Their Management Tools Ready for Multicloud

This post originally appeared on the Gartner Blog Network.

I am just back home after spending last week at AWS re:Invent in tiresome, noisy, vibrant and excessive Las Vegas. At Gartner, I cover cloud management and governance, and I was disappointed not to hear much about it in any of the keynotes. I get it, management can sometimes be perceived as a boring necessity. However, it is also an opportunity to make a cloud platform simpler. And that’s something that AWS needs. Badly.

Despite the absence of highlights in the keynotes, I spotted something interesting while digging through the myriad of November announcements. What apparently got lost in the re:Invent noise is that AWS is opening up some of their key management tools to support resources outside of the AWS cloud. Specifically, AWS CloudFormation and AWS Config now support third-party resources. And that’s a big deal.

The Lost Announcements

The CloudFormation announcement reports that AWS has changed the tool’s architecture to implement resource providers, much in line with what HashiCorp Terraform is also doing. Each resource provider is an independent piece of code that enables support in CloudFormation for a specific resource type and API. A resource provider can be developed independently from CloudFormation itself, and by non-AWS developers.

AWS plans to promote resource providers through the open source model and certainly has the ability to grow a healthy community around them. The announcement also says that a number of resource providers will shortly be available for third-party solutions, including Atlassian, Datadog, Densify, Dynatrace, Fortinet, New Relic and Spotinst. AWS is also implementing this capability for native AWS resources such as EC2 instances or S3 buckets, hinting that it may not be just an exception, but a major architectural change.
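To make the mechanism concrete, here is a minimal, hypothetical sketch of how registering and then consuming a third-party resource provider could look with the AWS SDK for Python (boto3). The type name, handler package location and monitor properties are invented for illustration and are not taken from the announcement.

```python
# Hypothetical sketch: register a third-party resource provider, then declare
# the third-party resource in a stack template like any native AWS resource.
# The type name, S3 handler package and properties below are placeholders.
import json
import boto3

cfn = boto3.client("cloudformation")

# Register the independently developed resource provider (handler package).
cfn.register_type(
    Type="RESOURCE",
    TypeName="Datadog::Monitors::Monitor",  # example third-party type name
    SchemaHandlerPackage="s3://my-bucket/datadog-monitor-handler.zip",
)

# Once registered, the third-party type is modelled like AWS::EC2::Instance or AWS::S3::Bucket.
template = {
    "Resources": {
        "HighCpuAlert": {
            "Type": "Datadog::Monitors::Monitor",
            "Properties": {"Query": "avg(last_5m):avg:system.cpu.user{*} > 90"},
        }
    }
}
cfn.create_stack(StackName="third-party-demo", TemplateBody=json.dumps(template))
```

The point is simply that, once a provider is registered, a non-AWS resource sits in a CloudFormation template next to native resources and follows the same lifecycle.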

In the same way, AWS Config now also supports third-party resources. The same resource providers used by CloudFormation enable AWS Config to manage inventory, but also to define rules that check for compliance and to create conformance packs (i.e. collections of rules). All of this also applies to non-AWS resources.
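Again as an illustration only, here is a hedged sketch of what recording a non-AWS resource into the AWS Config inventory could look like with boto3; the resource type, schema version and configuration payload are placeholders.

```python
# Hypothetical sketch: push the configuration of a non-AWS resource into AWS
# Config so it shows up in inventory and can be evaluated by Config rules.
# The type, schema version ID and payload are placeholders.
import json
import boto3

config = boto3.client("config")

config.put_resource_config(
    ResourceType="Datadog::Monitors::Monitor",  # third-party type registered through CloudFormation
    SchemaVersionId="00000001-0000-0000-0000-000000000000",  # placeholder schema version
    ResourceId="monitor-12345",
    ResourceName="high-cpu-alert",
    Configuration=json.dumps({"Query": "avg(last_5m):avg:system.cpu.user{*} > 90"}),
)
```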

Why is This a Big Deal?

With this launch, AWS addresses one of the major shortcomings of its management tools: being limited to a single platform, the AWS cloud. From today, anyone can develop resource providers for Microsoft Azure or Google Cloud Platform resources. This possibility makes AWS CloudFormation and AWS Config de facto ready to become multicloud management tools. And we all know what AWS thinks about multicloud, don’t we?

Furthermore, AWS is now challenging the third-party management market, at least within the provisioning and orchestration, inventory and classification, and governance domains (see this Gartner framework for reference). AWS CloudFormation now incorporates more of HashiCorp Terraform’s capabilities. It can also be used to model and execute complex orchestration workflows that organizations normally handle with platforms like ServiceNow. AWS Config can now aim to become a universal CMDB that keeps track of resource inventory and configuration history from anywhere.

Both AWS CloudFormation and AWS Config are widely adopted tools. Customers could be incentivized to extend their use beyond AWS instead of selecting a new third-party tool that would require a new contract to sign and a new vendor to manage. Does this mean that AWS has issued a death sentence to the third-party management market that makes up much of its ecosystem? Certainly not. But these announcements speak to the greater ambition of AWS and will force third-party vendors to find new ways to continue to add value in the long term. Maybe the resource provider ecosystem will not develop, and customers will continue to prefer independent management vendors. Or maybe not.

In conclusion, it was disappointing not to hear this message loud and clear at re:Invent this year, especially compared to the amount of noise we heard around the launches of Google Anthos and Azure Arc. But there is certainly a trend: all the major providers are preparing their management tools to stretch beyond their respective domains. How far they want to go is yet to be determined.

Serverless, Servers and Cloud Management at AWS re:Invent 2017

This post originally appeared on the Gartner Blog Network.

In the last few days, the press has been dominated by countless interpretations of the myriad of AWS re:Invent announcements. Every article I read was trying (hard) to extract some kind of trend or direction from the overall conference, but each succeeded only in providing a single, narrow perspective. AWS has simply tried to position itself as the “everything IT” (as my colleague Lydia Leong said in a tweet). With so many announcements (61, according to AWS), across so many areas and in such a short time, it is extremely difficult for anyone to understand their impact without a more thorough analysis.

However, I won’t refrain from giving you my own perspective as well, noting down a couple of things that stood out for me.

Serverless took the driver’s seat across the conference, no doubt. But servers did not move back into the trunk as you might have expected. Lambda got a number of incremental updates. New services went serverless, such as Fargate (containers without the need to manage the orchestrator cluster) and the Aurora database. Finally, Amazon is heading towards delivering platform as a service as it should have been from day one: a fully multi-tenant abstraction layer that handles your code, and that you pay for only when your code is running.

However, we also heard about Nitro, a new lightweight hypervisor that can deliver near-metal performance. Amazon also announced bare-metal instances. These two innovations have been developed to attract more of the humongous number of workloads out there that still require traditional servers to run. Even when the future seems to be going serverless, server innovation is still relevant. Why? Because by lowering the hypervisor’s overhead, Nitro can lead to better node density, better utilization and ultimately cost benefits for end users.

With regard to my main area of research, I was disappointed that only a couple of announcements were related to cloud management. Amazon announced an incremental update to CloudTrail (related to Lambda again, by the way) and the expansion of Systems Manager to support more AWS services. Systems Manager is absolutely one step towards what should be a more integrated cloud management experience. However (disclaimer: I’ve not seen it in action yet), my first impression is that it still focuses only on gaining (some) visibility and on automating (some) operational tasks. It’s yet another tool that needs integration with many others.

My cloud management conversations with clients tell me that organizations are struggling to manage and operate their workloads in the public cloud, especially when those workloads live alongside their existing processes and environments. Amazon needs to do more in this space to feel less like just another technology silo and to deliver a more unified management experience.

When Andy Jassy and Werner Vogels were asked about multicloud, they both dismissed it. They said that most organizations stick with one primary provider for the great majority of their workloads. The reason? Because organizations don’t accept working at the least common denominator (LCD) between providers. Nor do they want to become fluent in multiple APIs.

The reality is that multicloud doesn’t necessarily mean having to accept the LCD. Multicloud doesn’t imply having a cloud management platform (CMP) for each and every management task. It doesn’t imply having to make each and every workload portable. The LCD between providers would indeed be too much of a constraint for anyone adopting public cloud services.

On the contrary, we see that many organizations are willing to learn how to operate multiple providers. They want to do that to be able to place their workloads where it makes most sense, but also as a risk mitigation technique. If they are ever forced to exit one provider, they want to be ready to transfer their workloads to another (obviously, with a certain degree of effort). Nobody wants to be constrained to work at the LCD level, but this is not a good excuse to stay single-cloud.

Amazon continues to innovate at an incredible pace, which seems to accelerate every year. AWS re:Invent 2017 was no exception. Now, organizations have more cloud services to support their business. But they also have many more choices to make. Picking the right combination of cloud services and tools is becoming a real challenge for organizations. Will Amazon do something about it? Or should we expect hundreds more service announcements at re:Invent 2018?

Insights from KubeCon EU 2016: Kubernetes vs. reality

Last week in London, the distributed systems community got together at KubeCon EU to talk container orchestration and Kubernetes. I was there too, and I would like to share with you some insights from this exciting new world.


Insights from Kubernetes

KubeCon is the official community conference of Kubernetes, even though it was not directly organised by Google, which is by far the top contributor to the open source project. Google also sent a few top-notch speakers, whose presence alone was a good reason to pay a visit. Kelsey Hightower (@kelseyhightower), first and foremost, with his charm and authentic enthusiasm, was one of the most brilliant speakers, capable of winning everyone’s sympathy and earning respect from his first spoken sentence.

Probably the most important announcement made around the Kubernetes project was its inclusion in the CNCF (Cloud Native Computing Foundation), which will govern it going forward. This was generally welcomed as a positive move, as it transferred control of the project to a wider committee, and did so only once the project was mature enough to keep its direction and mission.

Kubernetes is moving at an incredibly fast pace

Some hidden features that even the most advanced users did not know about were revealed during the talks, and the announced roadmap was simply impressive. We heard users saying “we’re happy to see that any new feature we’ve been thinking of is already somehow being considered”. This gives an idea of how much innovation is happening there and how much vendors and individual contributors are betting on Kubernetes becoming pervasive in the near future.

Its ecosystem is doing amazing things

When an open source project gets it right, it immediately develops an ecosystem that understands its value and potential and is eager to contribute to it by adding value on top. This is true for Kubernetes as well, and the exhibit area of the conference gathered some of the most talented individuals in the industry. I’ve been personally impressed by products like Rancher, which has come really far in a very short time (something that demonstrates clear vision and strong leadership), as well as tools like Datadog and Weave Scope, which have shown strong innovation in data visualization and definitely brought it to the next level.

Has it started to eat its ecosystem’s lunch?

This is unavoidable when projects are moving so fast. The border between the project’s core features and what other companies develop as add-ons is fuzzy. And it’s always changing. What some organizations see as an opportunity at first may become pointless at the next release of Kubernetes. But in the end, this is a community-driven project and it’s the community that decides what should fit within Kubernetes and what should be left to someone else. That’s why it’s so important to be involved in the community on a day-to-day basis, to know what’s being built and discussed. When I asked Shannon Williams, co-founder of Rancher Labs, how he copes with this problem, he said you have to move faster: when part of your code is no longer required, just deprecate it and move on. Sure thing, but you need to know how to move *that* fast!

Insights from reality

As a product guy, I get excited about technology, but I need to see a real need for it, in a replicable manner. That’s why my ears were all for customers, end users and use cases.

The New York Times

Luckily, we heard a few use cases at the conference, the most notable of which was the New York Times using Kubernetes in production. Eric Lewis (@ericandrewlewis) took us through their journey: from giving developers a server, to enabling developers to provision applications using Chef, to containers with Fleet and then Kubernetes. Kubernetes may look like an end point, and we all know something else is coming next, but according to them it’s definitely the best way to deliver developers’ infrastructure at present.

Not (yet) a fit for everything

What stood out the most from real use cases is how stateful workloads are not that seamless to manage using containers and Kubernetes. It was demonstrated that it is possible, but still a pain to set up and maintain. The main reason is that state requires identity: you simply can’t blow away a database node (mapped to a pod) and start a brand new one; you need to replace it with an exact copy of the one that’s gone. Every application needs to handle state, therefore every application needs to go through this. Luckily, it was said that the Kubernetes community is already working on PetSet, which should address exactly this problem. Wait and see!
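To illustrate the stable-identity idea, here is a minimal sketch using the official Kubernetes Python client and a StatefulSet, the API that PetSet later evolved into; the names, image and sizes are made up for the example.

```python
# Minimal sketch, for illustration only: a StatefulSet (the successor of PetSet)
# gives each replica a stable name (db-0, db-1, ...) and its own volume, so a
# replaced pod comes back with the same identity and data.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig pointing at a cluster

stateful_set = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db",  # headless service that provides the stable network identity
        "replicas": 3,
        "selector": {"matchLabels": {"app": "db"}},
        "template": {
            "metadata": {"labels": {"app": "db"}},
            "spec": {"containers": [{"name": "db", "image": "mysql:5.7"}]},  # env omitted for brevity
        },
        # each replica keeps its own volume, re-attached when its pod is replaced
        "volumeClaimTemplates": [
            {
                "metadata": {"name": "data"},
                "spec": {
                    "accessModes": ["ReadWriteOnce"],
                    "resources": {"requests": {"storage": "10Gi"}},
                },
            }
        ],
    },
}

client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=stateful_set)
```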

But the reality today is that Kubernetes is capable of handling only parts of an application. In fact, one end customer told me that a great orchestration tool should be able to handle both containerised and non-containerised workloads. Thumbs up to him for reminding us that the rest of the IT world still exists!

Fast pace leads to caution

This could be a real problem when you have a nascent ecosystem proposing equivalent but slightly different approaches to things. Which one to pick? Which horse to bet on? What if my chosen standard is the one that gets deprecated? And whilst competition is good, even when it comes to open innovation, it also drives a totally understandable caution in end customers. I kind of miss the time when standards came first and products were built upon them; now we tend to welcome de facto standards instead, which take some time to prove their superiority.

In the end, what really matters is having more people using Kubernetes. More use cases will drive more innovation and will bring the stabilisation required to convince even the most cautious. When people on the conference stage were asked to give some advice on Kubernetes adoption, this is what they said:

  1. Make sure you have someone who supports you business-wise. Don’t leave it as just a technology-driven decision; make sure the reasons and the opportunities it unlocks are well understood by the business owners of your organisation.
  2. Stick at it. You’ll encounter some difficulties at the beginning, but don’t give up. Stick at it and you’ll be rewarded.
  3. Focus on moving to containers. That’s the hard part of this revolution. Once you do that, adopting Kubernetes will be a no-brainer.

Right, move to containers. We’ve heard this for a while. And containers are one of those not-yet-standardised things, even though the Open Container Initiative was kicked off a while ago. Docker is trying to become the de facto standard here, but this seems to be driven by business strategy rather than by a desire to contribute to the open source community. In fact, where were the Docker representatives at KubeCon? I saw none of them.

Disclaimer: I have no personal involvement with KubeAcademy, the organizers of KubeCon, or with any of the mentioned companies and products. My employer is Flexiant and Flexiant was not an official sponsor of KubeCon. Flexiant is currently building a Kubernetes-based version of Flexiant Concerto.

If cloud can’t wait, will you?

A few days ago I participated as a panelist in the webinar “Cloud Can’t Wait” alongside Michael Coté (@cote), analyst at 451 Research; Jared Stauffer (@jaredstauffer), CEO at Brinkster; and Jim Foley, SVP Market Development at Flexiant.

We debated the cloud opportunity. Sounds old? Maybe. However, surprisingly enough, the majority of IT infrastructure buyers haven’t adopted cloud yet. Skepticism, natural resistance to change, staff self-preservation and other excuses are amongst the primary reasons for that. If you think about it, this is actually pretty normal when a technology disrupts the status quo this much.

The title of the webinar, “Cloud Can’t Wait”, may sound like a way to build hype but, with regard to cloud, I think we all concur that, by now, the hype is way over. Just as I’m sure we agree that, indeed, the cloud can’t wait. Those who’ve fully embraced it have demonstrated significant advantages over those who haven’t, and these advantages directly affect their competitiveness and even their ability to stay in business.

The opportunity is for everyone

We talked about the cloud focusing on the infrastructure side of it. We deliberately excluded SaaS consumption from the statistics and the debate, as it has a totally different adoption curve and, when put in the same basket, can easily mislead the conclusions. So rule number one: treat SaaS numbers separately.

Michael Coté presented an interesting categorisation of cloud infrastructure services, segmented as follows:

  • Infrastructure-as-a-Service (IaaS): compute, storage and network “raw” infrastructure.
  • Platform-as-a-Service (PaaS): supporting developers and the middleware integration they require.
  • Infrastructure-Software-as-a-Service (ISaaS): the applications required to manage IT infrastructure, including backup, archiving, disaster recovery (DR), capacity planning and, more generically, IT management as a service.

Seeing ISaaS as a third category was pretty interesting to me, as we all knew it existed but had never managed to label it correctly. And as Michael stated later on, expertise in this specific category is what some service providers, mostly those coming from the managed services space, can offer as value-add on top of raw infrastructure in order to win business in this space.

So what is this cloud opportunity we are referring to? Again, Michael explained it this way:

“[With a 29% year over year growth rate] there is the opportunity to get involved early and [as a vendor] participating in gathering lots of that cash. Instead, cloud buyers such as developers or enterprises, are not interested in participating in this growth, but in the innovation that comes out of this cloud space, they want to use this innovation and efficiency to really differentiate themselves in their own business”

So the opportunity is there and it is a win-win for everyone.

Why are people buying cloud, and who are they?

If you ask yourself why people are buying cloud and what they’re using it for, you may not find the answers easily. That’s where the work of 451 Research becomes really helpful. As Michael told us, from the conversations they have every day, it emerges that most organisations use the cloud because of “the agility that it brings, the speed you can deploy IT and [afterwards] that you can use IT as a differentiator. [Because cloud] speeds time to market”.

To that, I would add that cloud also speeds up the ability to deliver change, which translates into adaptability, essential for any chance of success in our rapidly transforming economy.

Michael continued on this topic:

“Over the past roughly 5 to 10 years much of the focus of IT has been on cost savings, keeping the lights on as cheaply as possible, but things are changing and qualitatively we see this in conversations we have all the time, companies are more interested in using IT to actually do something rather than just saving money, and cloud is perfectly shaped for offering that”

Great. This now seems to be well understood. The days of explaining to organisations that there is more to the cloud than a simple shift from CAPEX to OPEX are gone.

Who is buying cloud infrastructure services today? My first answer went to:

“Developers. This word returns a lot whenever we talk about cloud. They’ve been the reason of the success of AWS, for sure. That’s because they just ‘get it’, they understand the advantages of the cloud around how they can transform infrastructure into code. For them, spinning a server is just like writing any other line of code for doing anything else. They managed to take advantage of the cloud from the very early days and they contributed to make cloud what it is today under many aspects”

With regard to enterprises, I also added:

“enterprises are [currently] investing in private clouds because that’s the most natural evolution of their traditional IT departments, but eventually, as they get to provide cloud, it’s gonna be extremely easy to get them to consume cloud [services] from third parties. That’s because cloud is more of a mindset than just a technology”

How can you profit from the cloud opportunity?

So you’re a service provider and you want to participate in the cloud opportunity. How do you do that? Michael suggests using the “best execution venue” approach. That starts, as Michael explains, with understanding the type of workload or applications that you want to address. Then ask yourself: what skills, capabilities and assets do you have that you can leverage to address that specific type of workload? This will tell you what value you can bring on top of raw infrastructure in order to compete and take advantage of this fast-growing, multi-billion-dollar market.

My comment on this was:

“Eventually service providers should not consider themselves just part of one of these [IaaS, PaaS or ISaaS, Ed.] segments. Eventually I think the segmentation of this type will not be there anymore, and there will be another segmentation based more on use cases, where the service provider will specialise on something and will pick a few services to make the perfect portfolio to match a specific use case in a target market”

Yes, I’m a big fan of the use case approach, just as I’m a big fan of trying to understand exactly what the cloud is being used for. Even if the press tries to pitch the cloud as a heavily commoditised service, you should never stop asking yourself what your customers are doing with it, what applications they’re running and what else you can do to make their life easier.

In any case, whether you decide to leverage your existing capabilities or you try to learn what your customers want to do with your cloud, we all agreed on the following statement: it’s still very early days. As Michael again explained, there are still lots of ways to get involved, it’s a great time to get involved, and the doors are definitely not closed.

I’d say they’re absolutely wide open. And many have already crossed the doorway. How about you?

You can listen to the full recording of the webinar at this link.

Docker: not just containers. Thoughts from DockerCon Europe

Developers. Developers. Developers. I guarantee this was the most spoken word at DockerCon Europe 2014, the hottest software conference around, which took place in Amsterdam last week. I was lucky to get a ticket (it sold out in a couple of days!) and be part of this amazing event which, despite a few complaints about it being too much of a “marketing love fest”, offered a lot for understanding market directions, trends and opportunities for software vendors.

So what is Docker? A container technology? No. Well, yes, but there is more to it. Despite being known as a container technology, Docker is mainly a tool for packaging, shipping and running applications. A piece of infrastructure is now simply a means to do something else and requires no infrastructure skills to consume. With containers now mainstream, the industry has completed a further step towards making developers the main driver of IT infrastructure demand.

But at DockerCon, Docker employees positioned the project as a “platform” with the goal of making it easy to build and run distributed applications. A platform made of different components that are “included, but removable”. In fact, during one of the keynote sessions, Solomon Hykes (@solomonstre), creator of the Docker project, announced three new components that are now available alongside the well-known Docker engine:

  • Docker Machine
  • Docker Swarm
  • Docker Compose

As the community demanded, these three components have not been incorporated into the same binary as the container engine. But with this launch, Docker is now officially stepping into orchestration, clustering and scheduling.

Apart from the keynote, many of the breakout sessions were run by Docker partners, showing lots of interesting projects and more building blocks for creative engineers. In other sessions, organizations like ING Bank, Société Générale and the BBC explained how they use Docker and its benefits, including how Docker helps build their continuous delivery pipelines. Besides adopting the required technology stack, continuous delivery was also described as a fundamental organizational change that companies eventually need to go through. To this point, my most popular tweet during the two days was a simple quote from Henk Kolk, Chief Architect at ING Bank Netherlands (@henkkolk):

Here’s my paraphrased version of Kolk’s session: break the silos, empower engineers, build small product development teams and ship decentralized microservices. Cultural and organizational change was described as being just as important as the revolution in software architecture or cloud adoption. There can’t be one without the other. So you’d better be ready, get educated and embrace it.

Docker Machine

The project that caught most of our attention at Flexiant was Docker Machine. It enables Docker to create machines in different clouds directly from the command line. My colleague Javi (@jpgriffo), author of krane.io, has been looking at it since it was a proposal and, during the announcement of Docker Machine, we managed to send the very first pull request for the inclusion of a driver for Flexiant Concerto into the project, ahead of VMware and GCE. If the Flexiant Concerto driver is merged over the next few days, Docker users will be able to go from “Zero to Docker” (as it was pitched by its author Ben Firshman, @bfirsh) in any cloud, with a single, consistent driver model. Exciting! We’re absolutely proud of this and we believe we have much more to give to the Docker community, given our expertise in cloud orchestration. Be prepared for more pull requests to follow.

The Risk

Docker has been blowing minds since the first days of the famous video (21 months ago!). It makes so much sense that it’s been adopted with a speed we’ve never seen in any open source project before. Even those who do not understand it are trying to jump on the bandwagon just to leverage its brand and market traction. This doesn’t come without risks. With a large community, an ecosystem with important stakes and a commercial entity (Docker, Inc.) behind it, there will be conflicts of interest, with “overstepping” onto the domain of those partners that helped make Docker what it is today. We’ve already seen this with the CoreOS launch of Rocket a couple of days ago.

Docker, Inc. needs to drive revenue and, despite seeing Solomon Hykes put a lot of effort into keeping impartial and honest governance over his baby, I’m sure it’s not going to be a painless process. Good luck, Solomon!

The Opportunity

High risk usually means high potential return. The return here can be high not just for Docker, Inc., but for the whole world of IT. Learning Docker and understanding its advantages can drive the development of applications in a totally different way. Not having to create a heavy, resource-wasting virtual machine (VM) for everything will boost the rise of microservices, distributed applications and, by extension, cloud adoption. With this comes scalability, flexibility, adaptability, innovation and progress. I don’t know if Docker will still be such a protagonist in a year or two, but what I do know is that it will have fundamentally changed the way we build and deliver software.

This post originally appeared on Flexiant.com.

Virtualization no longer matters

There is no doubt. The product is there. The vision, too. At times, they leave some space for arrogance as well but, come on, they are the market leader, aware of being far ahead of anybody else in this field. A field they actually invented themselves. We almost feel like forgiving that arrogance. Don’t we?

The AWS Summit 2013 in London was, once more, confirmation that the cloud infrastructure market is there, that the potential is higher than ever and that Amazon “gets” it, drives it and dominates it quite undisturbed. All the others struggle to distinguish themselves among a huge number of technology companies, old and new, that are strongly convinced they have jumped into the cloud business but whose executives, I’m pretty sure, mostly think that cloud is just a new name for hosting services.

Before going forward, I want to thank Garret Murphy (@garrettmurphy) for transferring his AWS Summit ticket to me without even knowing who I was, simply and kindly responding to my tweeted inquiry. I wish him and his Dublin-based startup 247tech.ie the required amount of luck that, coupled with great talent, leads to success.

Now, I won’t go through the whole event because, this being a roadshow of which London wasn’t the first stop, much has already been said here and here. The general perception I had is that AWS is still focusing on presenting the advantages of cloud-based as opposed to on-premises IT infrastructure, showing off the rich toolset they have put in place and bringing MANY (I counted nearly 20) customers to testify to how they are effectively using the AWS cloud and what advantages they have gained by doing so. OK, most of them were the usual hyper-scale Internet companies, but I saw the effort to bring enterprise testimonials like ATOC (the Association of Train Operating Companies of the UK). However, they all said they were using AWS only for web-facing applications, staging environments or big data analytics. The usual stuff which we know to be cloud-friendly.

What really impressed me was the OpsWorks demo. OpsWorks was released not long ago as the nth complementary Amazon Web Service to help operate resilient, self-healing applications in the cloud. Aside from the confusion around what to use when, given the large number of tools available (not counting those from third parties, which are growing uncontrolled day by day), there is one evident trend arising from it.

For those who don’t know OpsWorks, it is an API-driven layer built on top of Chef to automate the setup, deployment and un-deployment of application stacks. An attempt at DevOps automation. How this is going to meet customers’ actual requirements while still keeping things simple (i.e. without having to offer too many options) is not clear yet.

During the session demonstrating OpsWorks, the AWS solution architect remarked that no custom AMIs (Amazon Machine Images) are available for selection while creating an application stack. Someone in the audience immediately complained on Twitter about this, probably because he wasn’t happy about having to rebuild all his customizations as Chef recipes on top of lightweight basic OS images, instead of baking them into his custom VM image.

In fact, there are several advantages to moving the actual machine setup to the post-bootstrap automation layer. For example, the ease of upgrading software versions (e.g. Apache, MySQL) simply by changing a line in a configuration file instead of having to rebuild the whole operating system image. But mostly because, by keeping OS images adherent to clean vendor releases, you will probably find the same images available at other cloud providers, making your application setup completely cross-cloud. Of course there are disadvantages too, including the delay added by operations like software downloads or configuration runs that may be necessary each time you decide to scale up your application.
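For illustration, here is a minimal, hypothetical sketch, written with today’s boto3 SDK (which post-dates this post), of creating an OpsWorks stack that relies on a clean vendor OS image plus Chef cookbooks rather than a custom AMI; the ARNs, names and cookbook repository are placeholders.

```python
# Hypothetical sketch: an OpsWorks stack built on a stock OS image plus Chef
# cookbooks instead of a custom AMI. ARNs, names and the cookbook repository
# below are placeholders, not real resources.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

stack = opsworks.create_stack(
    Name="demo-stack",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
    DefaultOs="Amazon Linux 2",       # clean vendor image, no custom AMI
    ConfigurationManager={"Name": "Chef", "Version": "12"},
    UseCustomCookbooks=True,          # machine setup lives in recipes, not in the image
    CustomCookbooksSource={
        "Type": "git",
        "Url": "https://example.com/acme/cookbooks.git",  # placeholder repository
    },
)
print("Created stack", stack["StackId"])
```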

Cross-cloud application deployment. No vendor lock-in. Cool. There is actually a Spanish startup called Besol that is building its entire (amazing) product, “Tapp into the Cloud”, on the management of cross-cloud application stacks, leveraging a rich library of Chef cookbook templates. And while I was writing this post on a flight from London, Jason Hoffman (@jasonh) was being interviewed by GigaOM and, while announcing better integration between Joyent and Chef, he mentioned compatibility between cloud environments as a major advantage of using Chef.

What we’re observing is a major shift from leveraging operating system images towards the adoption of automation layers that can quickly prepare whatever application you want your virtual server to host. That means that one of the major advantages introduced by virtualization technology, the software manipulation of OS images, which was one of the triggers of the rise of cloud computing, no longer matters.

Potentially, with the adoption of automation platforms like Chef, Puppet or CFEngine, service providers could build a complete cloud infrastructure service, without employing any kind of hypervisor. And this trend is further confirmed by facts like:

Of course there are still advantages to using a hypervisor, because certain applications require architectures made of many micro-instances for parallel computing, so it’s still necessary to slice a server into many small portions. However, with silicon processors increasing their core counts and threading capabilities, virtualization may not be so important for the cloud anymore.

In the end, I think we can no longer say that virtualization is the foundation of cloud computing. The correct statement might be that virtualization inspired cloud computing. But the future may leave even less space for that.