Insights from KubeCon EU 2016: Kubernetes vs. reality

Last week in London, the distributed systems community got together at KubeCon EU to talk container orchestration and Kubernetes. I was there too, and I would like to share some insights from this exciting new world.

(Sorry for recycling the picture, but I simply really liked it! Credit goes to Jessica Maslyn, who created it.)

Insights from Kubernetes

KubeCon is the official community conference of Kubernetes, even though it is not directly organised by Google, which is by far the top contributor to the open source project. Google also sent a few top-notch speakers, whose presence alone was a good reason to pay a visit. Kelsey Hightower (@kelseyhightower) first and foremost: with his charm and authentic enthusiasm he was one of the most brilliant speakers, winning over the whole audience and earning its respect from his very first sentence.

Probably the most important announcement made around the Kubernetes project was its inclusion in the CNCF (Cloud Native Computing Foundation), which will govern it going forward. This was generally welcomed as a positive move: it transferred control of the project to a wider committee, but only once the project was mature enough to keep its direction and mission.

Kubernetes is moving at an incredibly fast pace

Some hidden features that even the most advanced users did not know about were revealed during the talks, and the announced roadmap was simply impressive. We heard users saying, “we’re happy to see that any new feature we’ve been thinking of is already somehow being considered”. This gives an idea of how much innovation is happening there and how heavily vendors and individual contributors are betting on Kubernetes becoming pervasive in the near future.

Its eco-system is doing amazing things

When an open source project just gets it right, it immediately develops an eco-system that understands its value and potential and is eager to contribute to it by adding value on top. This is true for Kubernetes as well, and the exhibit area of the conference gathered some of the most talented individuals in the industry. I was personally impressed by products like Rancher, which has come a long way in a very short time (something that demonstrates clear vision and strong leadership), as well as by Datadog and Weave Scope, which have shown strong innovation in data visualisation and definitely brought it to the next level.

Has it started to eat its eco-system’s lunch?

This is unavoidable when projects move this fast. The border between the project’s core features and what other companies develop as add-ons is fuzzy, and it is always changing. What some organisations see as an opportunity at first may become pointless at the next release of Kubernetes. But in the end, this is a community-driven project, and it is the community that decides what should fit within Kubernetes and what should be left to someone else. That’s why it is so important to be involved in the community on a day-to-day basis, to know what is being built and discussed. When I asked Shannon Williams, co-founder of Rancher Labs, how he copes with this problem, he said you have to move faster: when part of your code is no longer required, just deprecate it and move on. Sure thing, but you need to know how to move *that* fast!

Insights from reality

As a product guy, I get excited about technology, but I need to see a real, repeatable need for it. That’s why my ears were all for customers, end users and use cases.

The New York Times

Luckily, we heard a few use cases at the conference, the most notable of which was the New York Times using Kubernetes in production. Eric Lewis (@ericandrewlewis) took us through their journey: from handing developers a server, to letting developers provision applications with Chef, to containers with Fleet, and then to Kubernetes. Kubernetes may look like an end point, and we all know something else will come next, but according to them it is definitely the best way to deliver infrastructure to developers at present.

Not (yet) a fit for everything

What stood out the most from the real use cases is how stateful workloads are still not seamless to manage with containers and Kubernetes. It was demonstrated that it is possible, but it remains a pain to set up and maintain. The main reason is that state requires identity: you can’t simply wipe out a database node (mapped to a pod) and start a brand new one, you need to replace it with an exact copy of the one that is gone. Every application needs to handle state somewhere, so every application has to go through this. Luckily, it was said that the Kubernetes community is already working on PetSet, which should address exactly this problem. Wait and see!
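For the record, PetSet later evolved into what Kubernetes now calls StatefulSet. The sketch below is not from the talks, just my own minimal illustration of the idea (stable names like db-0, db-1 and a dedicated volume per replica), written against today's Python Kubernetes client; the image, sizes and namespace are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Each replica keeps a stable identity (db-0, db-1, ...) and its own volume,
# so a replacement pod reclaims the name and data of the one that died,
# instead of coming up as an interchangeable, randomly named copy.
stateful_set = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db",
        "replicas": 3,
        "selector": {"matchLabels": {"app": "db"}},
        "template": {
            "metadata": {"labels": {"app": "db"}},
            "spec": {"containers": [{"name": "mysql", "image": "mysql:5.7"}]},
        },
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}

apps.create_namespaced_stateful_set(namespace="default", body=stateful_set)
```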

But the reality today is that Kubernetes is capable of handling only parts of an application. In fact, one end customer told me that a great orchestration product should be able to handle both containerised and non-containerised workloads. Thumbs up to him for reminding us that the rest of the IT world still exists!

Fast pace leads to caution

This could be a real problem when you have a nascent eco-system proposing equivalent but slightly different approaches to things. Which one to pick? Which horse to bet on? What if my chosen standard is the one that gets deprecated? And whilst competition is good even when it comes to open innovation, it also drives a totally understandable caution in end customers. I kind of miss the time when standards came first and products were built upon them; now we tend to welcome de facto standards instead, which take some time to prove their superiority.

In the end, what really matters is having more people using Kubernetes. More use cases will drive more innovation and will bring the stabilisation required to convince even the most cautious. When people on the conference stage were asked to give some advice on Kubernetes adoption, this is what they said:

  1. Make sure you have someone who supports you business-wise. Don’t leave it as just a technology-driven decision; make sure the reasons and the opportunities it unlocks are well understood by the business owners of your organisation.
  2. Stick at it. You’ll encounter some difficulties at the beginning, but don’t give up. Stick at it and you’ll be rewarded.
  3. Focus on moving to containers. That’s the hard part of this revolution. Once you do that, adopting Kubernetes will be a no-brainer.

Right, move to containers. We’ve been hearing this for a while. And containers are one of those not-yet-standardised things, even though the Open Container Initiative was kicked off a while ago. Docker is trying to become the de facto standard here, but this seems to be driven by business strategy rather than by a desire to contribute to the open source community. In fact, where were the Docker representatives at KubeCon? I saw none of them.

Disclaimer: I have no personal involvement with KubeAcademy, the organizers of KubeCon, or with any of the mentioned companies and products. My employer is Flexiant and Flexiant was not an official sponsor of KubeCon. Flexiant is currently building a Kubernetes-based version of Flexiant Concerto.

Docker: not just containers. Thoughts from DockerCon Europe

Developers. Developers. Developers. I guarantee this was the most spoken word at DockerCon Europe 2014, the hottest software conference around, which took place in Amsterdam last week. I was lucky to get a ticket (it sold out in a couple of days!) and to be part of this amazing event which, despite a few complaints about it being too much of a “marketing love fest”, offered a lot in terms of understanding market directions, trends and opportunities for software vendors.

So what is Docker? A container technology? No. Well, yes, but there is more to it. Despite being known as a container technology, Docker is mainly a tool for packaging, shipping and running applications. A piece of infrastructure is now a simple means to do something else and requires no infrastructure skills to consume. With containers now mainstream, the industry has completed a further step towards making developers the main driver of IT infrastructure demand.
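To make “packaging, shipping and running” concrete, here is a tiny sketch of my own (not from the conference) using the present-day Docker SDK for Python; the image tag and the Dockerfile directory are hypothetical, and at the time of DockerCon 2014 the docker-py API looked somewhat different.

```python
import docker

# Talk to the local Docker daemon (assumes one is running).
client = docker.from_env()

# Packaging: build an image from a directory containing a Dockerfile.
image, build_logs = client.images.build(path=".", tag="hello-app:latest")

# Shipping would be a client.images.push(...) to a registry.

# Running: start a container from the image and capture its output.
output = client.containers.run("hello-app:latest", remove=True)
print(output.decode())
```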

But at DockerCon, Docker employees described the project as a “platform” with the goal of making it easy to build and run distributed applications. A platform made of different components that are “included, but removable”. In fact, during one of the keynote sessions, Solomon Hykes (@solomonstre), creator of the Docker project, announced three new components that are now available alongside the well-known Docker engine:

  • Docker Machine
  • Docker Swarm
  • Docker Compose

As the community demanded, these three components have not been incorporated into the same binary as the container engine. But with this launch, Docker is now officially stepping into orchestration, clustering and scheduling.

Apart from the keynote, many of the breakout sessions were run by Docker partners, showing lots of interesting projects and more building blocks for creative engineers. In other sessions, organizations like ING Bank, Société Générale and the BBC explained how they use Docker and what benefits they get from it, including how Docker helps build their continuous delivery pipeline. Beyond adopting the required technology stack, continuous delivery was also described as a fundamental organizational change that companies will eventually need to go through. To this point, my most popular tweet of the two days was a simple quote from Henk Kolk, Chief Architect at ING Bank Netherlands (@henkkolk):

Here’s my paraphrased version of Kolk’s session: break the silos, empower engineers, build small product development teams and ship decentralized microservices. Cultural and organizational change was described as being as important as the revolution in software architecture or cloud adoption. There can’t be one without the other. So you’d better be ready, get educated and embrace it.

Docker Machine

The project that caught most of our attention at Flexiant was Docker Machine. It lets Docker create machines in different clouds directly from the command line. My colleague Javi (@jpgriffo), author of krane.io, has been looking at it since it was just a proposal, and during the announcement of Docker Machine we managed to send the very first pull request for the inclusion of a Flexiant Concerto driver in the project, ahead of VMware and GCE. If the Flexiant Concerto driver is merged over the next few days, Docker users will be able to go from “Zero to Docker” (as pitched by its author Ben Firshman – @bfirsh) in any cloud, with a single consistent driver. Exciting! We’re absolutely proud of this, and we believe we have much more to give to the Docker community, given our expertise in cloud orchestration. Be prepared for more pull requests to follow.

The Risk

Docker has been blowing minds since the first days of the famous video (21 months ago!). It makes so much sense that it has been adopted at a speed we’ve never seen in any open source project before. Even those who do not understand it are trying to jump on the bandwagon just to leverage its brand and market traction. This doesn’t come without risks. With a large community, an eco-system with important stakes and a commercial entity behind it (Docker, Inc.), there will be conflicts of interest, with “overstepping” onto the domain of the very partners that helped make Docker what it is today. We’ve already seen this with CoreOS’s launch of Rocket a couple of days ago.

Docker, Inc. needs to drive revenue and, even though I have seen Solomon Hykes make a real effort to keep impartial and honest governance over his baby, I’m sure it’s not going to be a painless process. Good luck Solomon!

The Opportunity

High risk usually means high potential return. The return here can be high, not just for Docker, Inc., but for the whole world of IT. Learning Docker and understanding its advantages can drive the development of applications in a totally different way. Not having to create a heavy, resource-wasting virtual machine (VM) for everything will boost the rise of microservices, distributed applications and, in turn, cloud adoption. With this come scalability, flexibility, adaptability, innovation and progress. I don’t know whether Docker will still be such a protagonist in a year or two, but I do know it will have fundamentally changed the way we build and deliver software.

This post originally appeared on Flexiant.com.

Why I picked Flexiant as my next challenge

Dear all, I am really happy and proud to announce that I am joining the Flexiant team this week. In the last few years, Flexiant has been building a stunning cloud management platform with the goal of enabling service providers to enter the cloud space in a few easy steps, while still being able to strongly differentiate their service.

The cloud infrastructure market landscape is far from its final configuration, and I have the ambition to actively contribute to how it will look in the next few years. I am joining Flexiant at a moment when the cloud industry is experiencing tremendous growth, with just a handful of players out there, still-immature technologies, vendors struggling to adapt their business models and a general misperception around cloud services. There is plenty of work to do!

But let me give you a little more insight into why I picked Flexiant and what great things I think we can do together.

A differentiated cloud service

I have enjoyed observing the recent signs that differentiation is becoming necessary in the cloud infrastructure market. After a large consensus around certain technologies, and with such a big (and growing) market to conquer, competition is getting tougher as more players try to come on board every day. Although price initially appears to be the main competitive driver, considering the impressive cloud services portfolio of Amazon Web Services, highly differentiated service offerings will be required by anyone who seriously aims to compete with the giant.

Why would I want to compete with that giant? Wouldn’t it be enough to offer some complementary service and exploit Amazon’s market reach, instead of going against it? Well, we all know the consequences of a market dominated by a single player; we have seen it before (Microsoft, Oracle, etc.), and we can all agree that in those times innovation was slower than ever, with abuses of dominant positions that negatively affected the customer experience. The opportunity out there is big and I don’t think we want to leave the entire market to one player again, do we? And if the goal of the cloud is to commoditize technology by offering it as a service, it is right there, on the service side, that there is the need and the opportunity to innovate.

Recently we had concrete proof of this need for differentiation. The acquisition of Enstratius by Dell was driven by the need for a highly differentiated cloud service that fills the gap between commodity infrastructure and enterprise requirements. I was lucky enough to have the opportunity to work with the Enstratius team, and I can tell you they were winning deals whenever governance and compliance, typical enterprise requirements, were at stake. But the real news was Dell dropping its previously announced OpenStack-powered cloud service, which will now never come to life. All those players betting on OpenStack wanted to make it the industry standard for building cloud infrastructure, and now what? They suddenly remembered they have to compete with each other. And the imperative is: differentiate!

On this matter, our own Tony Lucas (@tonylucas), European pioneer of cloud services and SVP Product at Flexiant (if you don’t believe me, check out this video of Tony talking about cloud with Jeff Barr of AWS back in 2007), has written an extensive white paper in which he methodically goes through why cloud federation is not the optimal model for competing in the IaaS market, with differentiation as the winning alternative. Besides suggesting everyone in this industry read it carefully, I’ll note that it reminded me of the biggest failure of cloud federation we have recently witnessed: vCloud providers. The launch of VMware’s hybrid cloud service is a clear demonstration that federating providers with the same technology but different cultures, goals and SLAs does not work. It can be a short-term opportunity for the “federatable” cloud software vendor, but a sure failure in the mid to long term. Read Tony’s paper to understand exactly why.

A matching vision

For those who know me, I am a public-cloud-only believer. “Private cloud” was just a name given by legacy vendors who didn’t want to give up their on-premises business while still exploiting the marketing hype and selling extra stuff to their rich customers. “Hybrid cloud” is how we name the period it takes to complete the journey to the public cloud.

Again, the most recent moves of the big guys confirm that public cloud is the way to go. Legacy software vendors are trying to convert themselves into service providers, mostly by acquiring companies rather than innovating from the inside (e.g. yesterday’s news of IBM’s multi-billion-dollar acquisition of SoftLayer). So should we foresee a public cloud market dominated by AWS and challenged only by a few other big whales? I don’t think so. While AWS really “gets” the cloud, the internal cultural conversion needed within traditional vendors will be painful and won’t bring anything substantial for at least the next three to five years. Their current size, and their internal resistance to giving up the recurring revenue derived from on-premises business, will not let them be a real challenge to AWS in the near term. Instead, small, agile, highly innovative and differentiated niche players are the ones that will eventually contribute to defining the next cloud infrastructure market landscape.

For more scientific evidence of why public clouds will take over the world, I can suggest another brilliant read by Alex Bligh (@alexbligh), the Internet rock star behind Nominet (the UK domain registry) and currently CTO at Flexiant. His detailed, methodical analysis led him to a conclusion:

And [so] will be for cloud computing: it’s not the technology that matters per se, it’s the consequent effect on economics. Private cloud is in essence an attempt to use cloud’s technology without gaining any of the efficiencies. It is for service providers to educate their customers and prospects, and the audience will often be financial or strategic as opposed to technical.

Alex Bligh, CTO at Flexiant

An enthusiastic choice

Visionaries like Tony and Alex, a mature product like Flexiant Cloud Orchestrator, and the guidance and business savvy of our CEO George Knox (@GeorgeKnox) are all ingredients that will eventually make a real difference in the coming months. Finding myself aligned with the company’s vision and culture, I am really enthusiastic to be on board and I foresee big things ahead of us. Stay tuned and ping me if you want to know more about Flexiant!

ABOUT FLEXIANT

Flexiant is a leading international provider of cloud orchestration software for on-demand, fully automated provisioning of cloud services. Headquartered in Europe, Flexiant’s cloud management software gives cloud service providers the business agility, freedom and flexibility to scale, deploy and configure cloud servers, simply and cost-effectively. Vendor agnostic and supporting multiple hypervisors, Flexiant Cloud Orchestrator is a cloud management software suite that is service provider ready, enabling cloud service provisioning through to granular metering, billing and reseller white-label capabilities. Used by over one hundred organizations worldwide, from hosting providers to large MSPs and telcos, Flexiant Cloud Orchestrator is simple to understand, simple to deploy and simple to use. Flexiant was named a ‘Gartner Cool Vendor’ in Cloud Management, received the Info-Tech Research Group Trendsetter Award and was called an industry double threat by 451 Group. Flexiant customers include ALVEA Services, FP7 Consortium, IS Group, ITEX, and NetGroup. Visit www.flexiant.com.

Virtualization no longer matters

There is no doubt. The product is there. The vision, too. At times they leave some space for arrogance as well but, come on, they are the market leader, aware of being far ahead of anybody else in this field. A field they actually invented themselves. We almost feel like forgiving that arrogance. Don’t we?

The AWS Summit 2013 in London was one more confirmation that the cloud infrastructure market is there, that the potential is higher than ever, and that Amazon “gets” it, drives it and dominates it largely undisturbed. All the others struggle to distinguish themselves among a huge number of technology companies, old and new, who are firmly convinced they have jumped into the cloud business but whose executives, I’m pretty sure, mostly think that cloud is just the new name for hosting services.

Before going forward, I want to thank Garret Murphy (@garrettmurphy) for transferring his AWS Summit ticket to me without even knowing who I was, simply and kindly responding to my tweeted inquiry. I wish him and his Dublin-based startup 247tech.ie the amount of luck that, coupled with great talent, leads to success.

Now, I won’t go through the whole event: this is a roadshow and London was not its first stop, so much has been said already here and here. The general perception I got is that AWS is still focused on presenting the advantages of cloud-based over on-premises IT infrastructure, showing off the rich toolset it has put in place, and bringing MANY (I counted nearly 20) customers to testify how they are using the AWS cloud and what advantages they got from doing so. OK, most of them were the usual hyper-scale Internet companies, but I did see the effort to bring enterprise testimonials like ATOC (the Association of Train Operating Companies in the UK). However, they all said they use AWS only for web-facing applications, staging environments or big data analytics. The usual stuff we know to be cloud friendly.

What really impressed me was the OpsWorks demo. OpsWorks was released not long ago as the nth complementary Amazon Web Service, this one meant to help operate resilient, self-healing applications in the cloud. Aside from the confusion around what to use when, given the large number of tools available (and not counting those from third parties, which are multiplying uncontrolled day by day), there is one evident trend arising from it.

For those who don’t know OpsWorks, it is an API-driven layer built on top of Chef to automate the setup, deployment and un-deployment of application stacks. An attempt at DevOps automation. How it is going to meet customers’ actual requirements while keeping things simple (i.e. without having to expose too many options) is not clear yet.
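To give a flavour of what “API-driven” means here, this is a minimal sketch of my own using today’s boto3 SDK rather than anything shown at the summit; the role ARNs, names and instance type below are hypothetical.

```python
import boto3

# OpsWorks is driven entirely through the AWS API: a stack groups layers,
# apps and instances, which Chef then configures after boot.
opsworks = boto3.client("opsworks", region_name="us-east-1")

stack = opsworks.create_stack(
    Name="demo-stack",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/opsworks-service-role",  # hypothetical
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/opsworks-ec2-role",  # hypothetical
)

layer = opsworks.create_layer(
    StackId=stack["StackId"],
    Type="php-app",
    Name="PHP App Server",
    Shortname="php-app",
)

instance = opsworks.create_instance(
    StackId=stack["StackId"],
    LayerIds=[layer["LayerId"]],
    InstanceType="t2.micro",
)
opsworks.start_instance(InstanceId=instance["InstanceId"])
```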
During the session demonstrating OpsWorks, the AWS solution architect remarked that custom AMIs (Amazon Machine Images) cannot be selected when creating an application stack. Someone in the audience immediately complained about this on Twitter, probably because he wasn’t happy about having to rebuild all his customisations as Chef recipes on top of lightweight basic OS images, throwing away his custom VM image.

In fact, there are several advantages to moving the actual machine setup to a post-bootstrap automation layer. For example, the ease of upgrading software versions (e.g. Apache, MySQL) simply by changing a line in a configuration file instead of rebuilding the whole operating system image. But mostly because, by keeping OS images close to the clean vendor releases, you will probably find the same images available at other cloud providers, making your application setup completely cross-cloud. Of course there are disadvantages too, including the delay added by operations like software downloads and configuration that may be necessary every time you decide to scale up your application.
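As a toy illustration of that first advantage (not how OpsWorks or Chef actually do it): keep the desired versions in a small configuration file that a clean OS image reads at boot, so an upgrade becomes a one-line edit rather than a new machine image. The file name and packages below are made up.

```python
import json
import subprocess

# versions.json holds the pinned package versions, e.g. {"mysql-server": "5.5.41-0ubuntu0.14.04.1"};
# upgrading means editing one line here rather than rebuilding a custom image.
with open("versions.json") as f:
    versions = json.load(f)

for package, version in versions.items():
    # Install the pinned version on top of a clean vendor OS image at first boot,
    # instead of baking it into a custom image that must be rebuilt for every change.
    subprocess.run(
        ["apt-get", "install", "-y", f"{package}={version}"],
        check=True,
    )
```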

Cross-cloud application deployment. No vendor lock-in. Cool. There is actually a Spanish startup called Besol that is building its entire (amazing) product, “Tapp into the Cloud”, on the management of cross-cloud application stacks, leveraging a rich library of Chef cookbook templates. And while I was writing this post on a flight from London, Jason Hoffman (@jasonh) was being interviewed by GigaOM and, while announcing better integration between Joyent and Chef, he mentioned compatibility between cloud environments as a major advantage of using Chef.

What we are observing is a major shift away from leveraging operating system images towards the adoption of automation layers that can quickly prepare whatever application you want your virtual server to host. That means one of the major advantages introduced by virtualization technology, the software manipulation of OS images, which was one of the triggers of the rise of cloud computing, no longer matters.

Potentially, with the adoption of automation platforms like Chef, Puppet or CFEngine, service providers could build a complete cloud infrastructure service without employing any kind of hypervisor. Several recent moves in the industry further confirm this trend.

Of course there are still advantages to using a hypervisor, because certain applications require architectures made of many micro-instances performing parallel computing, so it is still necessary to slice a server into many small portions. However, with silicon processors gaining ever more cores and the ability to use threads, virtualization may no longer be so important for the cloud.

In the end, I think we can no longer say that virtualization is the foundation of cloud computing. The correct statement could perhaps be that virtualization inspired cloud computing. But the future may leave even less space for it.

It’s already happening in Europe

Recently I read an article about Europe being an unfriendly environment for entrepreneurship, and specifically for startups. I liked the underlying optimism about getting a new beginning, but I think it is completely wrong to consider Europe as a whole when legislation and culture are so different from country to country. And it is unfair not to see what Europe has already been doing.

Well then, where exactly will this new beginning start? I have been trying to locate the hot spots for Internet startups in the Old Continent, and I have actually seen much more than the common perception suggests.

New technologies are arising, technologies designed specifically for the cloud and for scale. Internet and mobile application frameworks and platforms (like Node.js and MongoDB) are getting more and more popular throughout the entire continent. Just look at the growing number of conferences such as Node Dublin, Node.js Conf in Italy, JSconf EU in Berlin or Railsberry in Krakow. And then notice that they usually take place on weekends, to let developers join out of passion, leaving space for creativity and focusing on real innovation.

Moreover, it is not only about startups. There are Internet companies in Europe that are already at the next stage. They developed a business model. They got profitable. And somebody believed in them, believed in the environment they settled in, and was eventually proven right. Examples like SoundCloud (Germany), Spotify (Sweden), Wonga (UK) and JustEat (Denmark) are just a few worth mentioning.

So is this just the new beginning? No, it is much more than that: it is already happening, and I really want to be there as it does. I work for Joyent, and we run a public cloud (IaaS) that hosts many of the successful Internet companies in the United States. Many of them have chosen Joyent because our technology is designed for those who make money through the Internet, those who can’t afford to lose a single click. Because one click means money.

But I live in Europe, and I want the next success story to be European.

This is what I work for every day. I observe the evolving scenario of Internet companies in Europe, supporting conferences (I will be attending Node Dublin, the most important European Node conference, next week) and helping companies drive their business in a better way by hosting their new-generation applications on a new-generation cloud. On top of an infrastructure that runs just as fast as bare metal does, because it was built from the ground up with the cloud in mind.

It’s simply so exciting.

New blogger is around

At a time when blogs are no longer trending (except as generators for automatic social-media/SEO eco-systems), I decided to set up my own. I am neither nostalgic nor retro; I just think that now more than ever it is important to help people understand real value and bring some order to the tons of information we get daily, literally “big data” that is mainly generated and driven by marketing engines.

Today, IT decision makers are puzzled about which technology would boost their business, and most of them end up just picking mainstream products because “you won’t ever get fired for buying $marketLeader”. I have always believed technology can really help the economy by commoditizing processes and letting companies focus on their core business, but if everyone picks mainstream technology products, how can competition rise through innovation?

In this blog I will try to contribute to a better understanding of the key technology trends I am working on, aiming to help anyone take the right long-term decision, possibly not a conservative one.

Stay tuned!