Tuesday, January 8, 2019

Containers, Docker, and Kubernetes

In the last couple of months I have been back in training mode and helping people understand more about containers and how to manage them. The focus has been on Azure's container offerings, although a lot of the work is very platform agnostic, and that is really the point of containers.

For those who have never used Azure before, or who need a refresher on some of the great things you can do both in the Azure Portal and from the command line, here are a couple of short modules on the new Microsoft Learn site:

https://docs.microsoft.com/learn/modules/tour-azure-services-and-features/


I encourage everyone who uses Azure to learn how to use the command line tools. For the longest time I avoided the CLI, as I like the way the Portal allows you to discover new features through exploration, which has always been the advantage of a graphical user interface over the command line. Yet the strength of the command line tools is how quickly you can get tasks accomplished, as well as the ability to automate tasks with scripts.
Microsoft Learn has a module about automating Azure tasks here:

https://docs.microsoft.com/learn/modules/automate-azure-tasks-with-powershell/
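
As a small taste of what scripting against Azure looks like, here is a minimal sketch using the Azure CLI; the resource group name and location are just example values:

# Create a resource group to hold related resources
az group create --name demo-rg --location westeurope

# List the virtual machines in your subscription as a readable table
az vm list --output table

Once commands like these live in a script file they can be run again and again, which is where the real time saving comes from.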

In order to understand the value of containers I believe it is important to know how we (as an industry) got to this point. I find that understanding the history of technology helps to explain the current situation. (It also makes it easier to extrapolate potential futures.) From the perspective of server technologies, and of hosting applications with the intention of scaling them, virtualization and virtual machines have been the standard mechanism for a long time. If you have never used a VM (virtual machine) you can get some perspective by following this lab and setting up a VM.

https://docs.microsoft.com/en-us/learn/modules/welcome-to-azure/
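
If you prefer the command line route, creating a VM can be done with a single command once you have a resource group to put it in. This is only a sketch; the resource group, VM name, and image alias below are placeholder values:

# Create a small Ubuntu VM, with SSH keys generated for you
az vm create --resource-group demo-rg --name demo-vm --image UbuntuLTS --admin-username azureuser --generate-ssh-keys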

A Virtual Machine abstracts the physical hardware upon which it runs. When you deploy an application to a VM you should not care what the actual hardware is. A single physical server can often run multiple virtual machines, which provides a higher level of potential resource utilization. Considering that data centers are becoming a noticeable consumer of the world's power supply, it should be obvious that the more energy efficient we can be with our servers, the better we are conserving our global resources. Yes, virtualization might be economically efficient, and it could also be considered more ecologically efficient than running everything on physical hardware. A virtual machine hosts an operating system, and the software you deploy needs to run on that operating system. Virtual Machine technology is what enables most of the world's big Cloud providers to work. For a high-level understanding of how Azure works this video provides a good overview.


A container provides a host for running a software application, and it abstracts another level of concern away from the deployment. While Virtual Machines provide an abstraction from the physical hardware, containers provide an abstraction from the operating system.

Because the application is abstracted from the OS (operating system), there is no longer a need to be concerned about the setup of the OS, how it hosts your application, where it has your application installed, and numerous other issues. This enables a more agile approach to deploying applications. An application running in a container can be easily moved from one location to another: simply move the container and everything the application needs comes with it.

Containers also enable an even greater level of resource (energy) efficiency. When an application is hosted in a virtual machine, you scale the application by creating copies of that virtual machine (scale out) or by adding more compute resources to the virtual machine (scale up). The virtual machine is the unit of scaling. With a container, the unit of scaling is closer to the application: you scale out your application by creating more instances of your container. Because a virtual machine can host multiple container instances, you get more from the available resources than if the application runs only once per virtual machine.

Containers are not limited to specific types of application or programming language either. Most common languages and types of app can be containerized. A container can run on a local machine, your laptop, a big data center server, or an IoT device. This means you can build and test containers locally and then deploy them at scale with confidence that they will work the same way.

One of the most popular container runtimes is called Docker. Docker provides container compatibility between Mac, Windows, and Linux machines. Using Docker, an application can be deployed to a local container and tested, and then the container image can be deployed at scale by creating multiple instances of that image on devices anywhere you desire (as long as they support Docker).
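
As a rough illustration of that workflow, here is a sketch of building an image locally, testing it, and pushing it to a registry. The image name, tag, and registry are placeholders, and I am assuming an Azure Container Registry as the destination:

# Build an image from the Dockerfile in the current directory
docker build -t myapp:v1 .

# Run it locally and test it at http://localhost:8080
docker run -d -p 8080:80 myapp:v1

# Tag and push the image to a registry so other machines can pull it
az acr login --name myregistry
docker tag myapp:v1 myregistry.azurecr.io/myapp:v1
docker push myregistry.azurecr.io/myapp:v1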


When you create a container (or Docker) image you are defining the contents of the container. You might consider it a template definition of what the container will be running. If you are a programmer, think of a container image as a class definition. A container instance is a running version of the container image. Using the programming metaphor again, the instance would be an object instantiated from the image (class).
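
Sticking with the class/object metaphor, a single image can be 'instantiated' many times. A quick sketch (the image and container names are placeholders):

# The image is the template (the 'class')
docker image ls myapp

# Each running container is an instance (an 'object') of that image
docker run -d --name myapp-1 -p 8080:80 myapp:v1
docker run -d --name myapp-2 -p 8081:80 myapp:v1

# List the running instances
docker ps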

In order to manage your containers and make the best use of your resources you will want to perform tasks such as scheduling, monitoring, scaling, connectivity (networking), upgrades, and failure management. This is where Kubernetes comes into play. Kubernetes is an orchestration tool for containers: Docker runs the containers and Kubernetes manages them, so the two work together to deliver a great container experience.
To understand the basics of containers and Kubernetes, watch this video on Channel 9.
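
To give a flavour of what orchestration looks like in practice, here is a minimal kubectl sketch against an existing cluster; the deployment name and image are placeholders:

# Describe the desired state: a deployment running our container image
kubectl create deployment myapp --image=myregistry.azurecr.io/myapp:v1

# Expose it behind a load balancer and scale it out to three instances
kubectl expose deployment myapp --type=LoadBalancer --port=80
kubectl scale deployment myapp --replicas=3

# Kubernetes schedules the pods and keeps them running for you
kubectl get pods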


If you want to set up Kubernetes on a machine (or virtual machine) there is a fair amount of work that needs to be done: you need to set up routing tables, storage to support your applications, and so on. The great thing about using Kubernetes in Azure is that the setup is all managed for you.

There is a tutorial that you can follow to get a simple website running in a Docker container and then use AKS (Azure Kubernetes Service) to deploy and orchestrate the containers. It should take you around an hour to complete and should help you to better understand how Docker and Kubernetes work together using Azure services.
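
If you want a feel for the AKS side of that tutorial before you start, the core steps look roughly like this; the resource group and cluster names are placeholders, and some flags may vary with your CLI version:

# Create a managed Kubernetes cluster in Azure
az aks create --resource-group demo-rg --name demo-aks --node-count 2 --generate-ssh-keys

# Fetch credentials so kubectl talks to the new cluster
az aks get-credentials --resource-group demo-rg --name demo-aks
kubectl get nodes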


If you do not need the orchestration provided by Kubernetes and simply want a container running your app in the Cloud, then Azure Container Instances (ACI) simplifies the setup process even further. You can think of the ACI service as providing 'serverless' containers: you do not need to care about managing servers at all. This tutorial will help you understand how to use Azure Container Instances.
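
Here is a rough sketch of how simple that is with the Azure CLI; the resource group, container name, and DNS label are placeholders, and the image shown is one of Microsoft's public samples:

# Run a single container in the Cloud with no VM or cluster to manage
az container create --resource-group demo-rg --name demo-aci --image mcr.microsoft.com/azuredocs/aci-helloworld --dns-name-label demo-aci-example --ports 80

# Find the public address of the running container
az container show --resource-group demo-rg --name demo-aci --query ipAddress.fqdn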


If you want to find out more about how to get the most from containers in your organization, get in touch and we can organize a workshop or training session for you and your team.