Hello! We are:
Feel free to interrupt for questions at any time
Especially when you see full screen container pictures!
This was initially written by Jérôme Petazzoni to support in-person, instructor-led workshops and tutorials
Credit is also due to multiple contributors — thank you!
You can also follow along on your own, at your own pace
We included as much information as possible in these slides
We recommend having a mentor to help you ...
... Or be comfortable spending some time reading the Kubernetes documentation ...
... And looking for answers on StackOverflow and other outlets
All the content is available in a public GitHub repository:
You can get updated "builds" of the slides there:
👇 Try it! The source file will be shown and you can view it on GitHub and fork and edit it.
This slide has a little magnifying glass in the top left corner
This magnifying glass indicates slides that provide extra details
Feel free to skip them if:
you are in a hurry
you are new to this and want to avoid cognitive overload
you want only the most essential information
You can review these slides another time if you want, they'll be waiting for you ☺
(auto-generated TOC)
Pre-requirements
(automatically generated title slide)
Be comfortable with the UNIX command line
navigating directories
editing files
a little bit of bash-fu (environment variables, loops)
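If you want a quick self-check, here is a tiny snippet (with made-up names) exercising both environment variables and loops:
GREETING="Hello"
for NAME in alice bob; do
  echo "$GREETING, $NAME!"
done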
Some Docker knowledge
docker run, docker ps, docker build
ideally, you know how to write a Dockerfile and build it
(even if it's a FROM line and a couple of RUN commands)
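If you have never written one, here is a minimal sketch (the image name and packages are arbitrary examples) that you can build to check your setup:
mkdir hello && cd hello
cat > Dockerfile <<'EOF'
FROM alpine
RUN apk add --no-cache curl
RUN echo "hello from the build" > /hello.txt
EOF
docker build -t hello .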
It's totally OK if you are not a Docker expert!
Tell me and I forget.
Teach me and I remember.
Involve me and I learn.
Misattributed to Benjamin Franklin
(Probably inspired by Chinese Confucian philosopher Xunzi)
The whole workshop is hands-on
We are going to build, ship, and run containers!
You are invited to reproduce all the demos
All hands-on sections are clearly identified, like the gray rectangle below
This is the stuff you're supposed to do!
Go to http://vmware-2019-11.container.training/ to view these slides
Use arrows to move to next/previous slide
(up, down, left, right, page up, page down)
Type a slide number + ENTER to go to that slide
The slide number is also visible in the URL bar
(e.g. .../#123 for slide 123)
Slides will remain online so you can review them later if needed
Each person gets a private cluster of cloud VMs (not shared with anybody else)
They'll remain up for the duration of the workshop
You should have a little card with login+password+IP addresses
You can automatically SSH from one VM to another
The nodes have aliases: node1, node2, etc.
Installing this stuff can be hard on some machines
(32-bit CPU or OS... laptops without administrator access... etc.)
"The whole team downloaded all these container images from the WiFi!
... and it went great!" (Literally no-one ever)
All you need is a computer (or even a phone or tablet!), with:
an internet connection
a web browser
an SSH client
On Linux, OS X, FreeBSD... you are probably all set
On Windows, get one of these:
On Android, JuiceSSH (Play Store) works pretty well
Nice-to-have: Mosh instead of SSH, if your internet connection tends to lose packets
You don't have to use Mosh or even know about it to follow along.
We're just telling you about it because some of us think it's cool!
Mosh is "the mobile shell"
It is essentially SSH over UDP, with roaming features
It retransmits packets quickly, so it works great even on lossy connections
(Like hotel or conference WiFi)
It has intelligent local echo, so it works great even on high-latency connections
(Like hotel or conference WiFi)
It supports transparent roaming when your client IP address changes
(Like when you hop from hotel to conference WiFi)
To install it: (apt|yum|brew) install mosh
It has been pre-installed on the VMs that we are using
To connect to a remote machine: mosh user@host
(It is going to establish an SSH connection, then hand off to UDP)
It requires UDP ports to be open
(By default, it uses a UDP port between 60000 and 61000)
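For example, if only one specific UDP port is reachable through your firewall, you can pin Mosh to it (the port number below is arbitrary):
mosh -p 60001 user@host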
Log into the first VM (node1) with your SSH client:
ssh user@A.B.C.D
(Replace user and A.B.C.D with the user and IP address provided to you)
You should see a prompt looking like this:
[A.B.C.D] (...) user@node1 ~$
If anything goes wrong — ask for help!
Use something like Play-With-Docker or Play-With-Kubernetes
Zero setup effort; but environments are short-lived and might have limited resources
Create your own cluster (local or cloud VMs)
Small setup effort; small cost; flexible environments
Create a bunch of clusters for you and your friends (instructions)
Bigger setup effort; ideal for group training
If you are using your own Kubernetes cluster, you can use shpod
shpod provides a shell running in a pod on your own cluster
It comes with many tools pre-installed (helm, stern...)
These tools are used in many exercises in these slides
shpod also gives you completion and a fancy prompt
These remarks apply only when using multiple nodes, of course.
Unless instructed, all commands must be run from the first VM, node1
We will only check out/copy the code on node1
During normal operations, we do not need access to the other nodes
If we had to troubleshoot issues, we would use a combination of:
SSH (to access system logs, daemon status...)
Docker API (to check running containers and container engine status)
Once in a while, the instructions will say:
"Open a new terminal."
There are multiple ways to do this:
create a new window or tab on your machine, and SSH into the VM;
use screen or tmux on the VM and open a new window from there.
You are welcome to use the method that you feel the most comfortable with.
Tmux is a terminal multiplexer like screen.
You don't have to use it or even know about it to follow along.
But some of us like to use it to switch between terminals.
It has been preinstalled on your workshop nodes.
Our sample application
(automatically generated title slide)
We will clone the GitHub repository onto our node1
The repository also contains scripts and tools that we will use through the workshop
Clone the repository on node1:
git clone https://github.com/jpetazzo/container.training
(You can also fork the repository on GitHub and clone your fork if you prefer that.)
Go to the dockercoins directory, in the cloned repo:
cd ~/container.training/dockercoins
Check the files and directories:
tree
Jérôme is going to wear his developer hat ...
... start the application on his developer's machine ...
... and wait for the app to be up and running.
It is a DockerCoin miner! 💰🐳📦🚢
No, you can't buy coffee with DockerCoins
How DockerCoins works:
generate a few random bytes
hash these bytes
increment a counter (to keep track of speed)
repeat forever!
DockerCoins is not a cryptocurrency
(the only common points are "randomness," "hashing," and "coins" in the name)
DockerCoins is made of 5 services:
rng = web service generating random bytes
hasher = web service computing hash of POSTed data
worker = background process calling rng and hasher
webui = web interface to watch progress
redis = data store (holds a counter updated by worker)
These 5 services are visible in the application's Compose file, docker-compose.yml
worker invokes web service rng to generate random bytes
worker invokes web service hasher to hash these bytes
worker does this in an infinite loop
every second, worker updates redis to indicate how many loops were done
webui queries redis, and computes and exposes "hashing speed" in our browser
(See diagram on next slide!)
How does each service find out the address of the other ones?
We do not hard-code IP addresses in the code
We do not hard-code FQDNs in the code, either
We just connect to a service name, and container-magic does the rest
(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
worker/worker.py
redis = Redis("redis")def get_random_bytes(): r = requests.get("http://rng/32") return r.contentdef hash_bytes(data): r = requests.post("http://hasher/", data=data, headers={"Content-Type": "application/octet-stream"})
(Full source code available here)
You can check the GitHub repository with all the materials of this workshop:
https://github.com/jpetazzo/container.training
The application is in the dockercoins subdirectory
The Compose file (docker-compose.yml) lists all 5 services
redis is using an official image from the Docker Hub
hasher, rng, worker, webui are each built from a Dockerfile
Each service's Dockerfile and source code is in its own directory
(hasher is in the hasher directory, rng is in the rng directory, etc.)
On the left-hand side, the "rainbow strip" shows the container names
On the right-hand side, we see the output of our containers
We can see the worker service making requests to rng and hasher
For rng and hasher, we see HTTP access logs
"Logs are exciting and fun!" (No-one, ever)
The webui container exposes a web dashboard; let's view it
A drawing area should show up, and after a few seconds, a blue graph will appear.
Kubernetes concepts
(automatically generated title slide)
Kubernetes is a container management system
It runs and manages containerized applications on a cluster
What does that really mean?
Start 5 containers using image atseashop/api:v1.3
Place an internal load balancer in front of these containers
Start 10 containers using image atseashop/webfront:v1.3
Place a public load balancer in front of these containers
It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers
New release! Replace my containers with the new image atseashop/webfront:v1.4
Keep processing requests during the upgrade; update my containers one at a time
Autoscaling
(straightforward on CPU; more complex on other metrics)
Resource management and scheduling
(reserve CPU/RAM for containers; placement constraints)
Advanced rollout patterns
(blue/green deployment, canary deployment)
Batch jobs
(one-off; parallel; also cron-style periodic execution)
Fine-grained access control
(defining what can be done by whom on which resources)
Stateful services
(databases, message queues, etc.)
Automating complex tasks with operators
(e.g. database replication, failover, etc.)
Ha ha ha ha
OK, I was trying to scare you, it's much simpler than that ❤️
The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI
(Courtesy of Yongbok Kim)
The second one is a simplified representation of a Kubernetes cluster
(Courtesy of Imesh Gunaratne)
The nodes executing our containers run a collection of services:
a container Engine (typically Docker)
kubelet (the "node agent")
kube-proxy (a necessary but not sufficient network component)
Nodes were formerly called "minions"
(You might see that word in older articles or documentation)
The Kubernetes logic (its "brains") is a collection of services:
the API server (our point of entry to everything!)
core services like the scheduler and controller manager
etcd (a highly available key/value store; the "database" of Kubernetes)
Together, these services form the control plane of our cluster
The control plane is also called the "master"
It is common to reserve a dedicated node for the control plane
(Except for single-node development clusters, like when using minikube)
This node is then called a "master"
(Yes, this is ambiguous: is the "master" a node, or the whole control plane?)
Normal applications are restricted from running on this node
(By using a mechanism called "taints")
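Once we start using kubectl (introduced in a later section), one quick way to check this for ourselves is to look at the taints on the control plane node (the node name below is just the one used in this workshop):
kubectl describe node node1 | grep -i taints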
When high availability is required, each service of the control plane must be resilient
The control plane is then replicated on multiple nodes
(This is sometimes called a "multi-master" setup)
The services of the control plane can run in or out of containers
For instance: since etcd is a critical service, some people deploy it directly on a dedicated cluster (without containers)
(This is illustrated on the first "super complicated" schema)
In some hosted Kubernetes offerings (e.g. AKS, GKE, EKS), the control plane is invisible
(We only "see" a Kubernetes API endpoint)
In that case, there is no "master node"
For this reason, it is more accurate to say "control plane" rather than "master."
No!
By default, Kubernetes uses the Docker Engine to run containers
We can leverage other pluggable runtimes through the Container Runtime Interface
We could also use rkt ("Rocket") from CoreOS (now deprecated)
ctr
Yes!
In this workshop, we run our app on a single node first
We will need to build images and ship them around
We can do these things without Docker
(and get diagnosed with NIH¹ syndrome)
(¹ Not Invented Here)
Docker is still the most stable container engine today
(but other options are maturing very quickly)
On our development environments, CI pipelines ... :
Yes, almost certainly
On our production servers:
Yes (today)
Probably not (in the future)
More information about CRI on the Kubernetes blog
We will interact with our Kubernetes cluster through the Kubernetes API
The Kubernetes API is (mostly) RESTful
It allows us to create, read, update, delete resources
A few common resource types are:
node (a machine — physical or virtual — in our cluster)
pod (group of containers running together on a node)
service (stable network endpoint to connect to one or multiple containers)
How would we scale the pod shown on the previous slide?
Do create additional pods
each pod can be on a different node
each pod will have its own IP address
Do not add more NGINX containers in the pod
all the NGINX containers would be on the same node
they would all have the same IP address
(resulting in Address already in use errors)
Should we put e.g. a web application server and a cache together?
("cache" being something like e.g. Memcached or Redis)
Putting them in the same pod means:
they have to be scaled together
they can communicate very efficiently over localhost
Putting them in different pods means:
they can be scaled separately
they must communicate over remote IP addresses
(incurring more latency, lower performance)
Both scenarios can make sense, depending on our goals
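As a preview of what the "together" scenario could look like (we will meet kubectl and YAML manifests later), here is a minimal sketch of a pod running a web server and a cache side by side; the pod name and images are purely illustrative:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache
spec:
  containers:
  - name: web
    image: nginx
  - name: cache
    image: redis
EOF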
The first diagram is courtesy of Lucas Käldström, in this presentation
The second diagram is courtesy of Weave Works
a pod can have multiple containers working together
IP addresses are associated with pods, not with individual containers
Both diagrams used with permission.
First contact with kubectl
(automatically generated title slide)
kubectl
kubectl is (almost) the only tool we'll need to talk to Kubernetes
It is a rich CLI tool around the Kubernetes API
(Everything you can do with kubectl, you can do directly with the API)
On our machines, there is a ~/.kube/config file with:
the Kubernetes API address
the path to our TLS certificates used to authenticate
You can also use the --kubeconfig flag to pass a config file
Or directly --server, --user, etc.
kubectl can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"...
kubectl is the new SSH
We often start managing servers with SSH
(installing packages, troubleshooting ...)
At scale, it becomes tedious, repetitive, error-prone
Instead, we use config management, central logging, etc.
In many cases, we still need SSH:
as the underlying access method (e.g. Ansible)
to debug tricky scenarios
to inspect and poke at things
We often start managing Kubernetes clusters with kubectl
(deploying applications, troubleshooting ...)
At scale (with many applications or clusters), it becomes tedious, repetitive, error-prone
Instead, we use automated pipelines, observability tooling, etc.
In many cases, we still need kubectl:
to debug tricky scenarios
to inspect and poke at things
The Kubernetes API is always the underlying access method
kubectl get
Let's look at Node resources with kubectl get!
Look at the composition of our cluster:
kubectl get node
These commands are equivalent:
kubectl get no
kubectl get node
kubectl get nodes
kubectl get can output JSON, YAML, or be directly formatted
Give us more info about the nodes:
kubectl get nodes -o wide
Let's have some YAML:
kubectl get no -o yaml
See that kind: List at the end? It's the type of our result!
kubectl and jq
kubectl get nodes -o json | jq ".items[] | {name:.metadata.name} + .status.capacity"
We can list all available resource types by running kubectl api-resources
(In Kubernetes 1.10 and prior, this command used to be kubectl get)
We can view the definition for a resource type with:
kubectl explain type
We can view the definition of a field in a resource, for instance:
kubectl explain node.spec
Or get the full definition of all fields and sub-fields:
kubectl explain node --recursive
We can access the same information by reading the API documentation
The API documentation is usually easier to read, but:
it won't show custom types (like Custom Resource Definitions)
we need to make sure that we look at the correct version
kubectl api-resources and kubectl explain perform introspection
(they communicate with the API server and obtain the exact type definitions)
The most common resource names have three forms:
singular (e.g. node, service, deployment)
plural (e.g. nodes, services, deployments)
short (e.g. no, svc, deploy)
Some resources do not have a short name
Endpoints only have a plural form
(because even a single Endpoints resource is actually a list of endpoints)
We can use kubectl get -o yaml to see all available details
However, YAML output is often simultaneously too much and not enough
For instance, kubectl get node node1 -o yaml is:
too much information (e.g.: list of images available on this node)
not enough information (e.g.: doesn't show pods running on this node)
difficult to read for a human operator
For a comprehensive overview, we can use kubectl describe instead
kubectl describe
kubectl describe needs a resource type and (optionally) a resource name
It is possible to provide a resource name prefix
(all matching objects will be displayed)
kubectl describe will retrieve some extra information about the resource
Look at the information available for node1 with one of the following commands:
kubectl describe node/node1
kubectl describe node node1
(We should notice a bunch of control plane pods.)
Containers are manipulated through pods
A pod is a group of containers:
running together (on the same node)
sharing resources (RAM, CPU; but also network, volumes)
kubectl get pods
Where are the pods that we saw just a moment earlier?!?
kubectl get namespaces
kubectl get namespace
kubectl get ns
You know what ... This kube-system thing looks suspicious.
In fact, I'm pretty sure it showed up earlier, when we did:
kubectl describe node node1
By default, kubectl uses the default namespace
We can see resources in all namespaces with --all-namespaces
List the pods in all namespaces:
kubectl get pods --all-namespaces
Since Kubernetes 1.14, we can also use -A as a shorter version:
kubectl get pods -A
Here are our system pods!
etcd is our etcd server
kube-apiserver is the API server
kube-controller-manager and kube-scheduler are other control plane components
coredns provides DNS-based service discovery (replacing kube-dns as of 1.11)
kube-proxy is the (per-node) component managing port mappings and such
weave is the (per-node) component managing the network overlay
the READY column indicates the number of containers in each pod
(1 for most pods, but weave has 2, for instance)
We can also look at a different namespace (other than default)
List the pods in the kube-system namespace:
kubectl get pods --namespace=kube-system
kubectl get pods -n kube-system
Namespaces and kubectl commands
We can use -n/--namespace with almost every kubectl command
Example: kubectl create --namespace=X to create something in namespace X
We can use -A/--all-namespaces with most commands that manipulate multiple objects
Examples:
kubectl delete can delete resources across multiple namespaces
kubectl label can add/remove/update labels across multiple namespaces
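For instance, a quick round trip through a throwaway namespace could look like this (the namespace name is hypothetical):
kubectl create namespace sandbox
kubectl create deployment nginx --image=nginx --namespace=sandbox
kubectl get pods --namespace=sandbox
kubectl delete namespace sandbox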
What about kube-public?
List the pods in the kube-public namespace:
kubectl -n kube-public get pods
Nothing!
kube-public is created by kubeadm & used for security bootstrapping.
Exploring kube-public
The only interesting object in kube-public is a ConfigMap named cluster-info
List ConfigMap objects:
kubectl -n kube-public get configmaps
Inspect cluster-info:
kubectl -n kube-public get configmap cluster-info -o yaml
Note the selfLink URI: /api/v1/namespaces/kube-public/configmaps/cluster-info
We can use that!
Accessing cluster-info
Earlier, when trying to access the API server, we got a Forbidden message
But cluster-info is readable by everyone (even without authentication)
Retrieve cluster-info:
curl -k https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info
We were able to access cluster-info (without auth)
It contains a kubeconfig file
Retrieving kubeconfig
We can easily extract the kubeconfig file from this ConfigMap
Display the kubeconfig:
curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \
  | jq -r .data.kubeconfig
This file holds the canonical address of the API server, and the public key of the CA
This file does not hold client keys or tokens
This is not sensitive information, but allows us to establish trust
What about kube-node-lease?
Starting with Kubernetes 1.14, there is a kube-node-lease namespace
(or in Kubernetes 1.13 if the NodeLease feature gate is enabled)
That namespace contains one Lease object per node
Node leases are a new way to implement node heartbeats
(i.e. node regularly pinging the control plane to say "I'm alive!")
For more details, see KEP-0009 or the node controller documentation
A service is a stable endpoint to connect to "something"
(In the initial proposal, they were called "portals")
kubectl get services
kubectl get svc
There is already one service on our cluster: the Kubernetes API itself.
A ClusterIP service is internal, available from the cluster only
This is useful for introspection from within containers
Try to connect to the API:
curl -k https://10.96.0.1
-k is used to skip certificate verification
Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by kubectl get svc
The command above should either time out, or show an authentication error. Why?
Connections to ClusterIP services only work from within the cluster
If we are outside the cluster, the curl command will probably time out
(Because the IP address, e.g. 10.96.0.1, isn't routed properly outside the cluster)
This is the case with most "real" Kubernetes clusters
To try the connection from within the cluster, we can use shpod
This is what we should see when connecting from within the cluster:
$ curl -k https://10.96.0.1
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}
We can see kind, apiVersion, metadata
These are typical of a Kubernetes API reply
Because we are talking to the Kubernetes API
The Kubernetes API tells us "Forbidden"
(because it requires authentication)
The Kubernetes API is reachable from within the cluster
(many apps integrating with Kubernetes will use this)
Each service also gets a DNS record
The Kubernetes DNS resolver is available from within pods
(and sometimes, from within nodes, depending on configuration)
Code running in pods can connect to services using their name
(e.g. https://kubernetes/...)
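To see this for ourselves, one possible sketch is to resolve the kubernetes service from a throwaway pod (the exact output depends on the busybox version shipped in the alpine image):
kubectl run dnstest --image=alpine --restart=Never --rm -it -- nslookup kubernetes.default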
Running our first containers on Kubernetes
(automatically generated title slide)
First things first: we cannot run a container
We are going to run a pod, and in that pod there will be a single container
In that container in the pod, we are going to run a simple ping command
Then we are going to start additional copies of the pod
kubectl run
Let's ping 1.1.1.1, Cloudflare's public DNS resolver:
kubectl run pingpong --image alpine ping 1.1.1.1
(Starting with Kubernetes 1.12, we get a message telling us that kubectl run is deprecated. Let's ignore it for now.)
kubectl run
Let's look at the resources that were created by kubectl run
List most resource types:
kubectl get all
We should see the following things:
deployment.apps/pingpong (the deployment that we just created)
replicaset.apps/pingpong-xxxxxxxxxx (a replica set created by the deployment)
pod/pingpong-xxxxxxxxxx-yyyyy (a pod created by the replica set)
Note: as of 1.10.1, resource types are displayed in more detail.
A deployment is a high-level construct
allows scaling, rolling updates, rollbacks
multiple deployments can be used together to implement a canary deployment
delegates pods management to replica sets
A replica set is a low-level construct
makes sure that a given number of identical pods are running
allows scaling
rarely used directly
A replication controller is the (deprecated) predecessor of a replica set
Our pingpong deployment
kubectl run created a deployment, deployment.apps/pingpong
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pingpong   1         1         1            1           10m
That deployment created a replica set, replicaset.apps/pingpong-xxxxxxxxxx
NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/pingpong-7c8bbcd9bc   1         1         1       10m
That replica set created a pod, pod/pingpong-xxxxxxxxxx-yyyyy
NAME                            READY   STATUS    RESTARTS   AGE
pod/pingpong-7c8bbcd9bc-6c9qz   1/1     Running   0          10m
We'll see later how these folks play together for:
Let's use the kubectl logs command
We will pass either a pod name, or a type/name
(E.g. if we specify a deployment or replica set, it will get the first pod in it)
Unless specified otherwise, it will only show logs of the first container in the pod
(Good thing there's only one in ours!)
View the output of our ping command:
kubectl logs deploy/pingpong
Just like docker logs, kubectl logs supports convenient options:
-f/--follow to stream logs in real time (à la tail -f)
--tail to indicate how many lines you want to see (from the end)
--since to get logs only after a given timestamp
View the latest logs of our ping command:
kubectl logs deploy/pingpong --tail 1 --follow
Leave that command running, so that we can keep an eye on these logs
kubectl scale
Scale our pingpong deployment:
kubectl scale deploy/pingpong --replicas 3
Note that this command does exactly the same thing:
kubectl scale deployment pingpong --replicas 3
Note: what if we tried to scale replicaset.apps/pingpong-xxxxxxxxxx?
We could! But the deployment would notice it right away, and scale back to the initial level.
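If you want to see this self-healing behavior, here is a sketch (replace the placeholder with your actual replica set name):
kubectl scale replicaset pingpong-xxxxxxxxxx --replicas=5
kubectl get replicaset pingpong-xxxxxxxxxx
# after a moment, DESIRED is back to 3, enforced by the deployment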
Let's look again at the output of kubectl logs
(the one we started before scaling up)
kubectl logs shows us one line per second
We could expect 3 lines per second
(since we should now have 3 pods running ping)
Let's try to figure out what's happening!
What if we restart kubectl logs?
Interrupt kubectl logs (with Ctrl-C)
Restart it:
kubectl logs deploy/pingpong --tail 1 --follow
kubectl logs will warn us that multiple pods were found, and that it's showing us only one of them.
Let's leave kubectl logs running while we keep exploring.
The deployment pingpong watches its replica set
The replica set ensures that the right number of pods are running
What happens if pods disappear?
In a separate window, watch the list of pods:
watch kubectl get pods
Destroy the pod currently shown by kubectl logs:
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
kubectl delete pod terminates the pod gracefully
(sending it the TERM signal and waiting for it to shut down)
As soon as the pod is in "Terminating" state, the Replica Set replaces it
But we can still see the output of the "Terminating" pod in kubectl logs
Until 30 seconds later, when the grace period expires
The pod is then killed, and kubectl logs exits
What if we wanted to start a "one-shot" container that doesn't get restarted?
We could use kubectl run --restart=OnFailure or kubectl run --restart=Never
These commands would create jobs or pods instead of deployments
Under the hood, kubectl run invokes "generators" to create resource descriptions
We could also write these resource descriptions ourselves (typically in YAML),
and create them on the cluster with kubectl apply -f (discussed later)
With kubectl run --schedule=..., we can also create cronjobs
A Cron Job is a job that will be executed at specific intervals
(the name comes from the traditional cronjobs executed by the UNIX crond)
It requires a schedule, represented as five space-separated fields:
minute, hour, day of month, month, day of week
* means "all valid values"; /N means "every N"
Example: */3 * * * * means "every three minutes"
Let's create a simple job to be executed every three minutes
Cron Jobs need to terminate, otherwise they'd run forever
Create the Cron Job:
kubectl run --schedule="*/3 * * * *" --restart=OnFailure --image=alpine sleep 10
Check the resource that was created:
kubectl get cronjobs
At the specified schedule, the Cron Job will create a Job
The Job will create a Pod
The Job will make sure that the Pod completes
(re-creating another one if it fails, for instance if its node fails)
kubectl get jobs
(It will take a few minutes before the first job is scheduled.)
As we can see from the previous slide, kubectl run can do many things
The exact type of resource created is not obvious
To make things more explicit, it is better to use kubectl create:
kubectl create deployment to create a deployment
kubectl create job to create a job
kubectl create cronjob to run a job periodically (since Kubernetes 1.14)
Eventually, kubectl run will be used only to start one-shot pods
kubectl run
kubectl create <resource>
kubectl create -f foo.yaml or kubectl apply -f foo.yaml
When we specify a deployment name, only one single pod's logs are shown
We can view the logs of multiple pods by specifying a selector
A selector is a logic expression using labels
Conveniently, when you kubectl run somename, the associated objects have a run=somename label
View the logs of all pods with the run=pingpong label:
kubectl logs -l run=pingpong --tail 1
What if we want to follow the logs of all the pingpong pods?
Combine the -l and -f flags:
kubectl logs -l run=pingpong --tail 1 -f
Note: combining -l and -f is only possible since Kubernetes 1.14!
Let's try to understand why ...
Scale up our deployment:
kubectl scale deployment pingpong --replicas=8
Stream the logs:
kubectl logs -l run=pingpong --tail 1 -f
We see a message like the following one:
error: you are attempting to follow 8 log streams,
but maximum allowed concurency is 5,
use --max-log-requests to increase the limit
kubectl opens one connection to the API server per pod
For each pod, the API server opens one extra connection to the corresponding kubelet
If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server
This could easily put a lot of stress on the API server
Prior to Kubernetes 1.14, it was decided to not allow multiple connections
From Kubernetes 1.14, it is allowed, but limited to 5 connections
(this can be changed with --max-log-requests)
For more details about the rationale, see PR #67573
kubectl logs
We don't see which pod sent which log line
If pods are restarted / replaced, the log stream stops
If new pods are added, we don't see their logs
To stream the logs of multiple pods, we need to write a selector
There are external tools to address these shortcomings
(e.g.: Stern)
kubectl logs -l ... --tail N
If we run this with Kubernetes 1.12, the last command shows multiple lines
This is a regression when --tail is used together with -l/--selector
It always shows the last 10 lines of output for each container
(instead of the number of lines specified on the command line)
The problem was fixed in Kubernetes 1.13
See #70554 for details.
If you're wondering this, good question!
Don't worry, though:
APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.
It's very unlikely that our concerted pings manage to produce even a modest blip at Cloudflare's NOC!
Accessing logs from the CLI
(automatically generated title slide)
The kubectl logs command has limitations:
it cannot stream logs from multiple pods at a time
when showing logs from multiple pods, it mixes them all together
We are going to see how to do it better
We could (if we were so inclined) write a program or script that would:
take a selector as an argument
enumerate all pods matching that selector (with kubectl get -l ...)
fork one kubectl logs --follow ... command per container
annotate the logs (the output of each kubectl logs ... process) with their origin
preserve ordering by using kubectl logs --timestamps ... and merge the output
We could do it, but thankfully, others did it for us already!
Stern is an open source project by Wercker.
From the README:
Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.
The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.
Exactly what we need!
Run stern (without arguments) to check if it's installed:
$ stern
Tail multiple pods and containers from Kubernetes

Usage:
stern pod-query [flags]
If it is not installed, the easiest method is to download a binary release
The following commands will install Stern on a Linux Intel 64 bit machine:
sudo curl -L -o /usr/local/bin/stern \
  https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
On OS X, just brew install stern
There are two ways to specify the pods whose logs we want to see:
-l followed by a selector expression (like with many kubectl commands)
with a "pod query," i.e. a regex used to match pod names
These two ways can be combined if necessary
stern rng
The --tail N flag shows the last N lines for each container
(Instead of showing the logs since the creation of the container)
The -t / --timestamps flag shows timestamps
The --all-namespaces flag is self-explanatory
View the logs of the weave system containers:
stern --tail 1 --timestamps --all-namespaces weave
When specifying a selector, we can omit the value for a label
This will match all objects having that label (regardless of the value)
Everything created with kubectl run has a label run
We can use that property to view the logs of all the pods created with kubectl run
Similarly, everything created with kubectl create deployment has a label app
View the logs of everything created with kubectl create deployment:
stern -l app
vRLI
(automatically generated title slide)
Centralize logs
Compatible with syslog
Query language
Dashboards
High ingest capacity
Declarative vs imperative
(automatically generated title slide)
Our container orchestrator puts a very strong emphasis on being declarative
Declarative:
I would like a cup of tea.
Imperative:
Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.
Declarative seems simpler at first ...
... As long as you know how to brew tea
What declarative would really be:
I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.
¹An infusion is obtained by letting the object steep a few minutes in hot² water.
²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.
³Ah, finally, containers! Something we know about. Let's get to work, shall we?
Did you know there was an ISO standard specifying how to brew tea?
Imperative systems:
simpler
if a task is interrupted, we have to restart from scratch
Declarative systems:
if a task is interrupted (or if we show up to the party half-way through), we can figure out what's missing and do only what's necessary
we need to be able to observe the system
... and compute a "diff" between what we have and what we want
With Kubernetes, we cannot say: "run this container"
All we can do is write a spec and push it to the API server
(by creating a resource like e.g. a Pod or a Deployment)
The API server will validate that spec (and reject it if it's invalid)
Then it will store it in etcd
A controller will "notice" that spec and act upon it
Watch for the spec fields in the YAML files later!
The spec describes how we want the thing to be
Kubernetes will reconcile the current state with the spec
(technically, this is done by a number of controllers)
When we want to change some resource, we update the spec
Kubernetes will then converge that resource
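One way to observe this (assuming the pingpong deployment from earlier is still around) is to look at the spec before and after an update:
kubectl get deployment pingpong -o jsonpath='{.spec.replicas}{"\n"}'
kubectl scale deployment pingpong --replicas=3
kubectl get deployment pingpong -o jsonpath='{.spec.replicas}{"\n"}'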
They say, "a picture is worth one thousand words."
The following 19 slides show what really happens when we run:
kubectl run web --image=nginx --replicas=3
Kubernetes network model
(automatically generated title slide)
TL,DR:
Our cluster (nodes and pods) is one big flat IP network.
In detail:
all nodes must be able to reach each other, without NAT
all pods must be able to reach each other, without NAT
pods and nodes must be able to reach each other, without NAT
each pod is aware of its IP address (no NAT)
pod IP addresses are assigned by the network implementation
Kubernetes doesn't mandate any particular implementation
Everything can reach everything
No address translation
No port translation
No new protocol
The network implementation can decide how to allocate addresses
IP addresses don't have to be "portable" from a node to another
(We can use e.g. a subnet per node and use a simple routed topology)
The specification is simple enough to allow many different implementations
Everything can reach everything
if you want security, you need to add network policies
the network implementation that you use needs to support them
There are literally dozens of implementations out there
(15 are listed in the Kubernetes documentation)
Pods have level 3 (IP) connectivity, but services are level 4 (TCP or UDP)
(Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets)
kube-proxy is on the data path when connecting to a pod or container,
and it's not particularly fast (relies on userland proxying or iptables)
The nodes that we are using have been set up to use Weave
We don't endorse Weave in a particular way, it just Works For Us
Don't worry about the warning about kube-proxy performance
Unless you:
If necessary, there are alternatives to kube-proxy; e.g. kube-router
Most Kubernetes clusters use CNI "plugins" to implement networking
When a pod is created, Kubernetes delegates the network setup to these plugins
(it can be a single plugin, or a combination of plugins, each doing one task)
Typically, CNI plugins will:
allocate an IP address (by calling an IPAM plugin)
add a network interface into the pod's network namespace
configure the interface as well as required routes etc.
The "pod-to-pod network" or "pod network":
provides communication between pods and nodes
is generally implemented with CNI plugins
The "pod-to-service network":
provides internal communication and load balancing
is generally implemented with kube-proxy (or e.g. kube-router)
Network policies:
provide firewalling and isolation
can be bundled with the "pod network" or provided by another component
Inbound traffic can be handled by multiple components:
something like kube-proxy or kube-router (for NodePort services)
load balancers (ideally, connected to the pod network)
It is possible to use multiple pod networks in parallel
(with "meta-plugins" like CNI-Genie or Multus)
Some solutions can fill multiple roles
(e.g. kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy)
Exposing containers
(automatically generated title slide)
kubectl expose creates a service for existing pods
A service is a stable address for a pod (or a bunch of pods)
If we want to connect to our pod(s), we need to create a service
Once a service is created, CoreDNS will allow us to resolve it by name
(i.e. after creating service hello, the name hello will resolve to something)
There are different types of services, detailed on the following slides:
ClusterIP, NodePort, LoadBalancer, ExternalName
HTTP services can also use Ingress resources (more on that later)
ClusterIP
It's the default service type
A virtual IP address is allocated for the service
(in an internal, private range; e.g. 10.96.0.0/12)
This IP address is reachable only from within the cluster (nodes and pods)
Our code can connect to the service using the original port number
Perfect for internal communication, within the cluster
LoadBalancer
An external load balancer is allocated for the service
(typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...)
This is available only when the underlying infrastructure provides some kind of "load balancer as a service"
Each service of that type will typically cost a little bit of money
(e.g. a few cents per hour on AWS or GCE)
Ideally, traffic would flow directly from the load balancer to the pods
In practice, it will often flow through a NodePort first
NodePort
A port number is allocated for the service
(by default, in the 30000-32768 range)
That port is made available on all our nodes and anybody can connect to it
(we can connect to any node on that port to reach the service)
Our code needs to be changed to connect to that new port number
Under the hood: kube-proxy sets up a bunch of iptables rules on our nodes
Sometimes, it's the only available option for external traffic
(e.g. most clusters deployed with kubeadm or on-premises)
ExternalName
No load balancer (internal or external) is created
Only a DNS entry gets added to the DNS managed by Kubernetes
That DNS entry will just be a CNAME to a provided record
Example:
kubectl create service externalname k8s --external-name kubernetes.io
Creates a CNAME k8s pointing to kubernetes.io
Since ping doesn't have anything to connect to, we'll have to run something else
We could use the nginx official image, but ...
... we wouldn't be able to tell the backends from each other!
We are going to use jpetazzo/httpenv, a tiny HTTP server written in Go
jpetazzo/httpenv listens on port 8888
It serves its environment variables in JSON format
The environment variables will include HOSTNAME, which will be the pod name
(and therefore, will be different on each backend)
We could do kubectl run httpenv --image=jpetazzo/httpenv ...
But since kubectl run is being deprecated, let's see how to use kubectl create instead
In another window, watch the pods being created:
kubectl get pods -w
Create a deployment for this very lightweight HTTP server:
kubectl create deployment httpenv --image=jpetazzo/httpenv
Scale it to 10 replicas:
kubectl scale deployment httpenv --replicas=10
ClusterIP service
Expose the HTTP port of our server:
kubectl expose deployment httpenv --port 8888
Look up which IP address was allocated:
kubectl get service
You can assign IP addresses to services, but they are still layer 4
(i.e. a service is not an IP address; it's an IP address + protocol + port)
This is caused by the current implementation of kube-proxy
(it relies on mechanisms that don't support layer 3)
As a result: you have to indicate the port number for your service
Running services with arbitrary port (or port ranges) requires hacks
(e.g. host networking mode)
IP=$(kubectl get svc httpenv -o go-template --template '{{ .spec.clusterIP }}')
Send a few requests:
curl http://$IP:8888/
Too much output? Filter it with jq:
curl -s http://$IP:8888/ | jq .HOSTNAME
Try it a few times! Our requests are load balanced across multiple pods.
Sometimes, we want to access our scaled services directly:
if we want to save a tiny little bit of latency (typically less than 1ms)
if we need to connect over arbitrary ports (instead of a few fixed ones)
if we need to communicate over another protocol than UDP or TCP
if we want to decide how to balance the requests client-side
...
In that case, we can use a "headless service"
A headless service is obtained by setting the clusterIP field to None
(Either with --cluster-ip=None, or by providing a custom YAML)
As a result, the service doesn't have a virtual IP address
Since there is no virtual IP address, there is no load balancer either
CoreDNS will return the pods' IP addresses as multiple A records
This gives us an easy way to discover all the replicas for a deployment
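For instance, a headless variant of the httpenv service could be created like this (the service name is arbitrary); its endpoints are the pod addresses themselves:
kubectl expose deployment httpenv --port 8888 --name=httpenv-headless --cluster-ip=None
kubectl get service httpenv-headless
kubectl get endpoints httpenv-headless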
A service has a number of "endpoints"
Each endpoint is a host + port where the service is available
The endpoints are maintained and updated automatically by Kubernetes
Look at the endpoints of the httpenv service:
kubectl describe service httpenv
In the output, there will be a line starting with Endpoints:
That line will list a bunch of addresses in host:port format.
When we have many endpoints, our display commands truncate the list
kubectl get endpoints
If we want to see the full list, we can use one of the following commands:
kubectl describe endpoints httpenvkubectl get endpoints httpenv -o yaml
These commands will show us a list of IP addresses
These IP addresses should match the addresses of the corresponding pods:
kubectl get pods -l app=httpenv -o wide
endpoints, not endpoint
endpoints is the only resource that cannot be singular
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
This is because the type itself is plural (unlike every other resource)
There is no endpoint object: type Endpoints struct
The type doesn't represent a single endpoint, but a list of endpoints
ExternalIP
When creating a service, we can also specify an ExternalIP
(this is not a type, but an extra attribute to the service)
It will make the service available on this IP address
(if the IP address belongs to a node of the cluster)
Ingress
Ingresses are another type (kind) of resource
They are specifically for HTTP services
(not TCP or UDP)
They can also handle TLS certificates, URL rewriting ...
They require an Ingress Controller to function
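For illustration only, a minimal Ingress manifest could look like the sketch below (it assumes an ingress controller is already installed, uses the pre-1.19 networking.k8s.io/v1beta1 API, and the host name is made up):
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpenv
spec:
  rules:
  - host: httpenv.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: httpenv
          servicePort: 8888
EOF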
NSX-T
(automatically generated title slide)
Connect and secure Kubernetes Pods
Distributed firewall and micro-segmentation for VMs and Pods
Ingress and LoadBalancer Controller for Kubernetes
Traceflow for Pods and dynamic routing
Shipping images with a registry
(automatically generated title slide)
Initially, our app was running on a single node
We could build and run in the same place
Therefore, we did not need to ship anything
Now that we want to run on a cluster, things are different
The easiest way to ship container images is to use a registry
What happens when we execute docker run alpine?
If the Engine needs to pull the alpine image, it expands it into library/alpine
library/alpine is expanded into index.docker.io/library/alpine
The Engine communicates with index.docker.io to retrieve library/alpine:latest
To use something else than index.docker.io, we specify it in the image name
Examples:
docker pull gcr.io/google-containers/alpine-with-bash:1.0
docker build -t registry.mycompany.io:5000/myimage:awesome .
docker push registry.mycompany.io:5000/myimage:awesome
Create one deployment for each component
(hasher, redis, rng, webui, worker)
Expose deployments that need to accept connections
(hasher, redis, rng, webui)
For redis, we can use the official redis image
For the 4 others, we need to build images and push them to some registry
There are many options!
Manually:
build locally (with docker build or otherwise)
push to the registry
Automatically:
build and test locally
when ready, commit and push a code repository
the code repository notifies an automated build system
that system gets the code, builds it, pushes the image to the registry
There are SaaS products like Docker Hub, Quay ...
Each major cloud provider has an option as well
(ACR on Azure, ECR on AWS, GCR on Google Cloud...)
There are also commercial products to run our own registry
(Docker EE, Quay...)
And open source options, too!
When picking a registry, pay attention to its build system
(when it has one)
For everyone's convenience, we took care of building DockerCoins images
We pushed these images to the DockerHub, under the dockercoins user
These images are tagged with a version number, v0.1
The full image names are therefore:
dockercoins/hasher:v0.1
dockercoins/rng:v0.1
dockercoins/webui:v0.1
dockercoins/worker:v0.1
Running our application on Kubernetes
(automatically generated title slide)
Deploy redis:
kubectl create deployment redis --image=redis
Deploy everything else:
kubectl create deployment hasher --image=dockercoins/hasher:v0.1
kubectl create deployment rng --image=dockercoins/rng:v0.1
kubectl create deployment webui --image=dockercoins/webui:v0.1
kubectl create deployment worker --image=dockercoins/worker:v0.1
If we wanted to deploy images from another registry ...
... Or with a different tag ...
... We could use the following snippet:
REGISTRY=dockercoins
TAG=v0.1
for SERVICE in hasher rng webui worker; do
  kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
After waiting for the deployment to complete, let's look at the logs!
(Hint: use kubectl get deploy -w to watch deployment events)
kubectl logs deploy/rng
kubectl logs deploy/worker
🤔 rng is fine ... But not worker.
💡 Oh right! We forgot to expose.
Three deployments need to be reachable by others: hasher, redis, rng
worker doesn't need to be exposed
webui will be dealt with later
kubectl expose deployment redis --port 6379
kubectl expose deployment rng --port 80
kubectl expose deployment hasher --port 80
worker has an infinite loop that retries 10 seconds after an error
Stream the worker's logs:
kubectl logs deploy/worker --follow
(Give it about 10 seconds to recover)
We should now see the worker, well, working happily.
Now we would like to access the Web UI
We will expose it with a NodePort
(just like we did for the registry)
Create a NodePort service for the Web UI:
kubectl expose deploy/webui --type=NodePort --port=80
Check the port that was allocated:
kubectl get svc
On PKS, replace NodePort with LoadBalancer.
On PKS, you will have to use the EXTERNAL-IP shown on the webui line
(and you can connect to port 80, yay!)
Yes, this may take a little while to update. (Narrator: it was DNS.)
Alright, we're back to where we started, when we were running on a single node!
Deploying with YAML
(automatically generated title slide)
So far, we created resources with the following commands:
kubectl run
kubectl create deployment
kubectl expose
We can also create resources directly with YAML manifests
kubectl apply vs create
kubectl create -f whatever.yaml
creates resources if they don't exist
if resources already exist, don't alter them
(and display error message)
kubectl apply -f whatever.yaml
creates resources if they don't exist
if resources already exist, update them
(to match the definition provided by the YAML file)
stores the manifest as an annotation in the resource
---
kind: ...
apiVersion: ...
metadata:
  name: ...
  ...
---
kind: ...
apiVersion: ...
metadata:
  name: ...
  ...

apiVersion: v1
kind: List
items:
- kind: ...
  apiVersion: ...
  ...
- kind: ...
  apiVersion: ...
  ...
We provide a YAML manifest with all the resources for Dockercoins
(Deployments and Services)
We can use it if we need to deploy or redeploy Dockercoins
kubectl apply -f ~/container.training/k8s/dockercoins.yaml
(If we deployed Dockercoins earlier, we will see warning messages, because the resources that we created lack the necessary annotation. We can safely ignore them.)
Setting up Kubernetes
(automatically generated title slide)
We used kubeadm on freshly installed VM instances running Ubuntu LTS
Install Docker
Install Kubernetes packages
Run kubeadm init on the first node (it deploys the control plane on that node)
Set up Weave (the overlay network)
(that step is just one kubectl apply command; discussed later)
Run kubeadm join on the other nodes (with the token produced by kubeadm init)
Copy the configuration file generated by kubeadm init
Check the prepare VMs README for more details
kubeadm drawbacks
Doesn't set up Docker or any other container engine
Doesn't set up the overlay network
Doesn't set up multi-master (no high availability)
(At least ... not yet! Though it's experimental in 1.12.)
kubeadm drawbacks
Doesn't set up Docker or any other container engine
Doesn't set up the overlay network
Doesn't set up multi-master (no high availability)
(At least ... not yet! Though it's experimental in 1.12.)
"It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme
AKS: managed Kubernetes on Azure
GKE: managed Kubernetes on Google Cloud
kops: customizable deployments on AWS, Digital Ocean, GCE (beta), vSphere (alpha)
minikube, kubespawn, Docker Desktop, kind: for local development
kubicorn, the Cluster API: deploy your clusters declaratively, "the Kubernetes way"
If you like Ansible: kubespray
If you like Terraform: typhoon
If you like Terraform and Puppet: tarmak
You can also learn how to install every component manually, with the excellent tutorial Kubernetes The Hard Way
Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
There are also many commercial options available!
For a longer list, check the Kubernetes documentation:
it has a great guide to pick the right solution to set up Kubernetes.
PKS
(automatically generated title slide)
Automate and streamline Kubernetes cluster deployment and operations
Fully automated installation of mainstream Kubernetes
Scale up, scale down & upgrade clusters
Highly-available control plane & self-healing features
(replace nodes automatically when needed and deploy CVE patches)
Integration with VMware SDDC (Software Defined Data Center) features
(e.g. vMotion, DRS, Shared Datastore, NSX-T, vRealize Suite)
Scaling our demo app
(automatically generated title slide)
Our ultimate goal is to get more DockerCoins
(i.e. increase the number of loops per second shown on the web UI)
Let's look at the architecture again:
The loop is done in the worker; perhaps we could try adding more workers?
Monitor the worker Deployment (in another terminal or window):
kubectl get pods -w
kubectl get deployments -w
Increase the number of worker replicas:
kubectl scale deployment worker --replicas=2
After a few seconds, the graph in the web UI should go up.
Scale the worker Deployment further:
kubectl scale deployment worker --replicas=3
The graph in the web UI should go up again.
(This is looking great! We're gonna be RICH!)
Scale the worker Deployment to a bigger number:
kubectl scale deployment worker --replicas=10
The graph will peak at 10 hashes/second.
(We can add as many workers as we want: we will never go past 10 hashes/second.)
It may look like it, because the web UI shows instant speed
The instant speed can briefly exceed 10 hashes/second
The average speed cannot
The instant speed can be biased because of how it's computed
The instant speed is computed client-side by the web UI
The web UI checks the hash counter once per second
(and does a classic (h2-h1)/(t2-t1) speed computation)
The counter is updated once per second by the workers
These timings are not exact
(e.g. the web UI check interval is client-side JavaScript)
Sometimes, between two web UI counter measurements,
the workers are able to update the counter twice
During that cycle, the instant speed will appear to be much bigger
(but it will be compensated by lower instant speed before and after)
If this was high-quality, production code, we would have instrumentation
(Datadog, Honeycomb, New Relic, statsd, Sumologic, ...)
It's not!
Perhaps we could benchmark our web services?
(with tools like ab, or even simpler, httping)
We want to check hasher and rng
We are going to use httping
It's just like ping, but using HTTP GET requests
(it measures how long it takes to perform one GET request)
It's used like this:
httping [-c count] http://host:port/path
Or even simpler:
httping ip.ad.dr.ess
We will use httping on the ClusterIP addresses of our services
We can simply check the output of kubectl get services
Or do it programmatically, as in the example below
HASHER=$(kubectl get svc hasher -o go-template={{.spec.clusterIP}})
RNG=$(kubectl get svc rng -o go-template={{.spec.clusterIP}})
Now we can access the IP addresses of our services through $HASHER and $RNG.
Check the response times of hasher and rng:
httping -c 3 $HASHER
httping -c 3 $RNG
hasher is fine (it should take a few milliseconds to reply)
rng is not (it should take about 700 milliseconds if there are 10 workers)
Something is wrong with rng, but ... what?
Let's scale up the rng service, just like we scaled worker
Add more rng replicas:
kubectl scale deploy rng --replicas=2
The web UI graph should go past 10 hashes/second.
vROPS
(automatically generated title slide)
Manage Kubernetes and/or PKS clusters
Automatically add new PKS clusters after deployment
Supervision
Capacity management
Global view of infrastructure
Rolling updates
(automatically generated title slide)
By default (without rolling updates), when a scaled resource is updated:
new pods are created
old pods are terminated
... all at the same time
if something goes wrong, ¯\_(ツ)_/¯
With rolling updates, when a Deployment is updated, it happens progressively
The Deployment controls multiple Replica Sets
Each Replica Set is a group of identical Pods
(with the same image, arguments, parameters ...)
During the rolling update, we have at least two Replica Sets:
the "new" set (corresponding to the "target" version)
at least one "old" set
We can have multiple "old" sets
(if we start another update before the first one is done)
Two parameters determine the pace of the rollout: maxUnavailable and maxSurge
They can be specified in absolute number of pods, or percentage of the replicas count
At any given time ...
there will always be at least replicas - maxUnavailable pods available
there will never be more than replicas + maxSurge pods in total
there will therefore be up to maxUnavailable + maxSurge pods being updated
We have the possibility of rolling back to the previous version
(if the update fails or is unsatisfactory in any way)
We can check the current rollout parameters with kubectl and jq:
kubectl get deploy -o json | jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate"
As of Kubernetes 1.8, we can do rolling updates with:
deployments, daemonsets, statefulsets
Editing one of these resources will automatically result in a rolling update
Rolling updates can be monitored with the kubectl rollout subcommand
Monitor the worker service (in another terminal or window):
kubectl get pods -w
kubectl get replicasets -w
kubectl get deployments -w
Update worker, either with kubectl edit, or by running:
kubectl set image deploy worker worker=dockercoins/worker:v0.2
That rollout should be pretty quick. What shows in the web UI?
At first, it looks like nothing is happening (the graph remains at the same level)
According to kubectl get deploy -w, the deployment was updated really quickly
But kubectl get pods -w tells a different story
The old pods are still here, and they stay in Terminating state for a while
Eventually, they are terminated; and then the graph decreases significantly
This delay is due to the fact that our worker doesn't handle signals
Kubernetes sends a "polite" shutdown request to the worker, which ignores it
After a grace period, Kubernetes gets impatient and kills the container
(The grace period is 30 seconds, but can be changed if needed)
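If we wanted a shorter grace period for our worker, we could set it in the pod template (a sketch; the 5-second value is arbitrary):
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 5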
Update worker by specifying a non-existent image:
kubectl set image deploy worker worker=dockercoins/worker:v0.3
Check what's going on:
kubectl rollout status deploy worker
Our rollout is stuck. However, the app is not dead.
(After a minute, it will stabilize to be 20-25% slower.)
Why is our app a bit slower?
Because MaxUnavailable=25%
... So the rollout terminated 2 replicas out of 10 available
Okay, but why do we see 5 new replicas being rolled out?
Because MaxSurge=25%
... So in addition to replacing 2 replicas, the rollout is also starting 3 more
It rounded down the number of MaxUnavailable pods conservatively,
but the total number of pods being rolled out is allowed to be 25+25=50%
We start with 10 pods running for the worker deployment
Current settings: MaxUnavailable=25% and MaxSurge=25%
When we start the rollout:
2 old pods are terminated (maxUnavailable = 25% of 10, rounded down)
5 new pods are created (2 to replace them, plus 3 more, since maxSurge rounds up)
Now we have 8 replicas up and running, and 5 being deployed
Our rollout is stuck at this point!
If you didn't deploy the Kubernetes dashboard earlier, just skip this slide.
Connect to the dashboard that we deployed earlier
Check that we have failures in Deployments, Pods, and Replica Sets
Can we see the reason for the failure?
We could push some v0.3 image
(the pod retry logic will eventually catch it and the rollout will proceed)
Or we could invoke a manual rollback
kubectl rollout undo deploy worker
kubectl rollout status deploy worker
We reverted to v0.2
But this version still has a performance problem
How can we get back to the previous version?
kubectl rollout undo again?
Try it:
kubectl rollout undo deployment worker
Check the web UI, the list of pods ...
🤔 That didn't work.
If we see successive versions as a stack:
kubectl rollout undo doesn't "pop" the last element from the stack
it copies the N-1th element to the top
Multiple "undos" just swap back and forth between the last two versions!
kubectl rollout undo deployment worker
Our version numbers are easy to guess
What if we had used git hashes?
What if we had changed other parameters in the Pod spec?
kubectl rollout history
kubectl rollout history deployment worker
We don't see all revisions.
We might see something like 1, 4, 5.
(Depending on how many "undos" we did before.)
These revisions correspond to our Replica Sets
This information is stored in the Replica Set annotations
kubectl describe replicasets -l app=worker | grep -A3
The missing revisions are stored in another annotation:
deployment.kubernetes.io/revision-history
These are not shown in kubectl rollout history
We could easily reconstruct the full list with a script
(if we wanted to!)
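For instance, a one-liner along these lines could dump each Replica Set's revision and revision history (a rough sketch using the annotations mentioned above):
kubectl get rs -l app=worker -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.deployment\.kubernetes\.io/revision}{"\t"}{.metadata.annotations.deployment\.kubernetes\.io/revision-history}{"\n"}{end}'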
kubectl rollout undo can work with a revision number
Roll back to the "known good" deployment version:
kubectl rollout undo deployment worker --to-revision=1
Check the web UI or the list of pods
We want to:
roll back to v0.1
keep all replicas available during the rollout (maxUnavailable: 0)
start at most one extra pod at a time (maxSurge: 1)
give each new pod 10 seconds to settle before continuing (minReadySeconds: 10)
The corresponding changes can be expressed in the following YAML snippet:
spec:
  template:
    spec:
      containers:
      - name: worker
        image: dockercoins/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 10
We could use kubectl edit deployment worker
But we could also use kubectl patch with the exact YAML shown before
kubectl patch deployment worker -p "
spec:
  template:
    spec:
      containers:
      - name: worker
        image: dockercoins/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 10
"
kubectl rollout status deployment worker
kubectl get deploy -o json worker | jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"
Namespaces
(automatically generated title slide)
We would like to deploy another copy of DockerCoins on our cluster
We could rename all our deployments and services:
hasher → hasher2, redis → redis2, rng → rng2, etc.
That would require updating the code
There has to be a better way!
As hinted by the title of this section, we will use namespaces
We cannot have two resources with the same name
(or can we...?)
We cannot have two resources of the same kind with the same name
(but it's OK to have an rng service, an rng deployment, and an rng daemon set)
We cannot have two resources of the same kind with the same name in the same namespace
(but it's OK to have e.g. two rng services in different namespaces)
Except for resources that exist at the cluster scope
(these do not belong to a namespace)
For namespaced resources:
the tuple (kind, name, namespace) needs to be unique
For resources at the cluster scope:
the tuple (kind, name) needs to be unique
kubectl api-resources
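The --namespaced flag lets us filter by scope:
# resources that live in a namespace
kubectl api-resources --namespaced=true
# resources that exist at the cluster scope
kubectl api-resources --namespaced=false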
If we deploy a cluster with kubeadm, we have three or four namespaces:
default
(for our applications)
kube-system
(for the control plane)
kube-public
(contains one ConfigMap for cluster discovery)
kube-node-lease
(in Kubernetes 1.14 and later; contains Lease objects)
If we deploy differently, we may have different namespaces
We can use kubectl create namespace:
kubectl create namespace blue
Or we can construct a very minimal YAML snippet:
kubectl apply -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: blue
EOF
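Either way, we can check that the namespace now exists:
kubectl get namespace blue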
We can pass a -n or --namespace flag to most kubectl commands:
kubectl -n blue get svc
We can also change our current context
A context is a (user, cluster, namespace) tuple
We can manipulate contexts with the kubectl config command
kubectl config get-contexts
The current context (the only one!) is tagged with a *
What are NAME, CLUSTER, AUTHINFO, and NAMESPACE?
NAME is an arbitrary string to identify the context
CLUSTER is a reference to a cluster
(i.e. API endpoint URL, and optional certificate)
AUTHINFO is a reference to the authentication information to use
(i.e. a TLS client certificate, token, or otherwise)
NAMESPACE is the namespace
(empty string = default)
We want to use a different namespace
Solution 1: update the current context
This is appropriate if we need to change just one thing (e.g. namespace or authentication).
Solution 2: create a new context and switch to it
This is appropriate if we need to change multiple things and switch back and forth.
Let's go with solution 1!
This is done through kubectl config set-context
We can update a context by passing its name, or the current context with --current
Update the current context to use the blue namespace:
kubectl config set-context --current --namespace=blue
Check the result:
kubectl config get-contexts
Check the resources in our new namespace (it should be empty for now):
kubectl get all
The jpetazzo/kubercoins repository contains everything we need!
Clone the kubercoins repository:
cd ~
git clone https://github.com/jpetazzo/kubercoins
Create all the DockerCoins resources:
kubectl create -f kubercoins
If the argument behind -f is a directory, all the files in that directory are processed.
The subdirectories are not processed, unless we also add the -R flag.
Retrieve the port number allocated to the webui service:
kubectl get svc webui
Point our browser to http://X.X.X.X:3xxxx
If the graph shows up but stays at zero, give it a minute or two!
Namespaces do not provide isolation
A pod in the green namespace can communicate with a pod in the blue namespace
A pod in the default namespace can communicate with a pod in the kube-system namespace
CoreDNS uses a different subdomain for each namespace
Example: from any pod in the cluster, you can connect to the Kubernetes API with:
https://kubernetes.default.svc.cluster.local:443/
Actual isolation is implemented with network policies
Network policies are resources (like deployments, services, namespaces...)
Network policies specify which flows are allowed:
between pods
from pods to the outside world
and vice-versa
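As a small illustration (a sketch, not something we deploy in this workshop), a policy denying all inbound traffic to every pod of the blue namespace could look like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: blue
spec:
  podSelector: {}       # an empty selector matches all pods in the namespace
  policyTypes:
  - Ingress             # no ingress rules listed, so all inbound traffic is denied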
To leave the blue namespace and go back to the default namespace:
kubectl config set-context --current --namespace=
Note: we could have used --namespace=default for the same result.
We can also use a little helper tool called kubens:
# Switch to namespace foo
kubens foo
# Switch back to the previous namespace
kubens -
On our clusters, kubens is called kns instead
(so that it's even fewer keystrokes to switch namespaces)
kubens and kubectx
With kubens, we can switch quickly between namespaces
With kubectx, we can switch quickly between contexts
Both tools are simple shell scripts available from https://github.com/ahmetb/kubectx
On our clusters, they are installed as kns and kctx
(for brevity and to avoid completion clashes between kubectx and kubectl)
kube-ps1
It's easy to lose track of our current cluster / context / namespace
kube-ps1 makes it easy to track these, by showing them in our shell prompt
It's a simple shell script available from https://github.com/jonmosco/kube-ps1
On our clusters, kube-ps1 is installed and included in PS1:
[123.45.67.89] (kubernetes-admin@kubernetes:default) docker@node1 ~
(The highlighted part is context:namespace, managed by kube-ps1)
Highly recommended if you work across multiple contexts or namespaces!
Volumes
(automatically generated title slide)
Volumes are special directories that are mounted in containers
Volumes can have many different purposes:
share files and directories between containers running on the same machine
share files and directories between containers and their host
centralize configuration information in Kubernetes and expose it to containers
manage credentials and secrets and expose them securely to containers
store persistent data for stateful services
access storage systems (like Ceph, EBS, NFS, Portworx, and many others)
Kubernetes and Docker volumes are very similar
(the Kubernetes documentation says otherwise ...
but it refers to Docker 1.7, which was released in 2015!)
Docker volumes allow us to share data between containers running on the same host
Kubernetes volumes allow us to share data between containers in the same pod
Both Docker and Kubernetes volumes enable access to storage systems
Kubernetes volumes are also used to expose configuration and secrets
Docker has specific concepts for configuration and secrets
(but under the hood, the technical implementation is similar)
If you're not familiar with Docker volumes, you can safely ignore this slide!
Volumes and Persistent Volumes are related, but very different!
Volumes:
appear in Pod specifications (see next slide)
do not exist as API resources (cannot do kubectl get volumes)
Persistent Volumes:
are API resources (can do kubectl get persistentvolumes)
correspond to concrete volumes (e.g. on a SAN, EBS, etc.)
cannot be associated with a Pod directly; but through a Persistent Volume Claim
won't be discussed further in this section
We will start with the simplest Pod manifest we can find
We will add a volume to that Pod manifest
We will mount that volume in a container in the Pod
By default, this volume will be an emptyDir
(an empty directory)
It will "shadow" the directory where it's mounted
apiVersion: v1
kind: Pod
metadata:
  name: nginx-without-volume
spec:
  containers:
  - name: nginx
    image: nginx
This is a MVP! (Minimum Viable Pod😉)
It runs a single NGINX container.
Create the Pod:
kubectl create -f ~/container.training/k8s/nginx-1-without-volume.yaml
Get its IP address:
IPADDR=$(kubectl get pod nginx-without-volume -o jsonpath={.status.podIP})
Send a request with curl:
curl $IPADDR
(We should see the "Welcome to NGINX" page.)
We need to add the volume in two places:
at the Pod level (to declare the volume)
at the container level (to mount the volume)
We will declare a volume named www
No type is specified, so it will default to emptyDir
(as the name implies, it will be initialized as an empty directory at pod creation)
In that pod, there is also a container named nginx
That container mounts the volume www to the path /usr/share/nginx/html/
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-volume
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
Create the Pod:
kubectl create -f ~/container.training/k8s/nginx-2-with-volume.yaml
Get its IP address:
IPADDR=$(kubectl get pod nginx-with-volume -o jsonpath={.status.podIP})
Send a request with curl:
curl $IPADDR
(We should now see a "403 Forbidden" error page.)
Let's add another container to the Pod
Let's mount the volume in both containers
That container will populate the volume with static files
NGINX will then serve these static files
To populate the volume, we will clone the Spoon-Knife repository
this repository is https://github.com/octocat/Spoon-Knife
it's very popular (more than 100K stars!)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-git
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
  restartPolicy: OnFailure
We added another container to the pod
That container mounts the www volume on a different path (/www)
It uses the alpine image
When started, it installs git and clones the octocat/Spoon-Knife repository
(that repository contains a tiny HTML website)
As a result, NGINX now serves this website
This one will be time-sensitive!
We need to catch the Pod IP address as soon as it's created
Then send a request to it as fast as possible
kubectl get pods -o wide --watch
Create the pod:
kubectl create -f ~/container.training/k8s/nginx-3-with-git.yaml
As soon as we see its IP address, access it:
curl $IP
A few seconds later, the state of the pod will change; access it again:
curl $IP
The first time, we should see "403 Forbidden".
The second time, we should see the HTML file from the Spoon-Knife repository.
Both containers are started at the same time
NGINX starts very quickly
(it can serve requests immediately)
But at this point, the volume is empty
(NGINX serves "403 Forbidden")
The other container installs git and clones the repository
(this takes a bit longer)
When the other container is done, the volume holds the repository
(NGINX serves the HTML file)
The default restartPolicy is Always
This would cause our git container to run again ... and again ... and again
(with an exponential back-off delay, as explained in the documentation)
That's why we specified restartPolicy: OnFailure
There is a short period of time during which the website is not available
(because the git container hasn't done its job yet)
With a bigger website, we could get inconsistent results
(where only a part of the content is ready)
In real applications, this could cause incorrect results
How can we avoid that?
We can define containers that should execute before the main ones
They will be executed in order
(instead of in parallel)
They must all succeed before the main containers are started
This is exactly what we need here!
Let's see one in action
See Init Containers documentation for all the details.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-init
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  initContainers:
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
Repeat the same operation as earlier
(try to send HTTP requests as soon as the pod comes up)
This time, instead of "403 Forbidden" we get a "connection refused"
NGINX doesn't start until the git container has done its job
We never get inconsistent results
(a "half-ready" container)
Load content
Generate configuration (or certificates)
Database migrations
Waiting for other services to be up
(to avoid flurry of connection errors in main container)
etc.
The lifecycle of a volume is linked to the pod's lifecycle
This means that a volume is created when the pod is created
This is mostly relevant for emptyDir volumes
(other volumes, like remote storage, are not "created" but rather "attached" )
A volume survives across container restarts
A volume is destroyed (or, for remote storage, detached) when the pod is destroyed
Managing configuration
(automatically generated title slide)
Some applications need to be configured (obviously!)
There are many ways for our code to pick up configuration:
command-line arguments
environment variables
configuration files
configuration servers (getting configuration from a database, an API...)
... and more (because programmers can be very creative!)
How can we do these things with containers and Kubernetes?
There are many ways to pass configuration to code running in a container:
baking it into a custom image
command-line arguments
environment variables
injecting configuration files
exposing it over the Kubernetes API
configuration servers
Let's review these different strategies!
Put the configuration in the image
(it can be in a configuration file, but also ENV or CMD actions)
It's easy! It's simple!
Unfortunately, it also has downsides:
multiplication of images
different images for dev, staging, prod ...
minor reconfigurations require a whole build/push/pull cycle
Avoid doing it unless you don't have the time to figure out other options
Pass options to the args array in the container specification
Example (source):
args:
- "--data-dir=/var/lib/etcd"
- "--advertise-client-urls=http://127.0.0.1:2379"
- "--listen-client-urls=http://127.0.0.1:2379"
- "--listen-peer-urls=http://127.0.0.1:2380"
- "--name=etcd"
The options can be passed directly to the program that we run ...
... or to a wrapper script that will use them to e.g. generate a config file
Works great when options are passed directly to the running program
(otherwise, a wrapper script can work around the issue)
Works great when there aren't too many parameters
(to avoid a 20-line args array)
Requires documentation and/or understanding of the underlying program
("which parameters and flags do I need, again?")
Well-suited for mandatory parameters (without default values)
Not ideal when we need to pass a real configuration file anyway
Pass options through the env map in the container specification
Example:
env:
- name: ADMIN_PORT
  value: "8080"
- name: ADMIN_AUTH
  value: Basic
- name: ADMIN_CRED
  value: "admin:0pensesame!"
value must be a string! Make sure that numbers and fancy strings are quoted.
🤔 Why this weird {name: xxx, value: yyy} scheme? It will be revealed soon!
In the previous example, environment variables have fixed values
We can also use a mechanism called the downward API
The downward API allows exposing pod or container information
either through special files (we won't show that for now)
or through environment variables
The value of these environment variables is computed when the container is started
Remember: environment variables won't (can't) change after container start
Let's see a few concrete examples!
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
Useful to generate FQDN of services
(in some contexts, a short name is not enough)
For instance, the two commands should be equivalent:
curl api-backend
curl api-backend.$MY_POD_NAMESPACE.svc.cluster.local
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
Useful if we need to know our IP address
(we could also read it from eth0, but this is more solid)
- name: MY_MEM_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: test-container
      resource: limits.memory
Useful for runtimes where memory is garbage collected
Example: the JVM
(the memory available to the JVM should be set with the -Xmx flag)
Best practice: set a memory limit, and pass it to the runtime
Note: recent versions of the JVM can do this automatically
(see JDK-8146115 and this blog post for detailed examples)
This documentation page tells more about these environment variables
And this one explains the other way to use the downward API
(through files that get created in the container filesystem)
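For reference, the file-based flavor looks roughly like this (a sketch; names and paths are placeholders):
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: labels            # creates /etc/podinfo/labels in the container
      fieldRef:
        fieldPath: metadata.labels
containers:
- name: app
  image: alpine
  volumeMounts:
  - name: podinfo
    mountPath: /etc/podinfo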
Works great when the running program expects these variables
Works great for optional parameters with reasonable defaults
(since the container image can provide these defaults)
Sort of auto-documented
(we can see which environment variables are defined in the image, and their values)
Can be (ab)used with longer values ...
... You can put an entire Tomcat configuration file in an environment variable ...
... But should you?
(Do it if you really need to, we're not judging! But we'll see better ways.)
Sometimes, there is no way around it: we need to inject a full config file
Kubernetes provides a mechanism for that purpose: configmaps
A configmap is a Kubernetes resource that exists in a namespace
Conceptually, it's a key/value map
(values are arbitrary strings)
We can think about them in (at least) two different ways:
as holding entire configuration file(s)
as holding individual configuration parameters
Note: to hold sensitive information, we can use "Secrets", which are another type of resource behaving very much like configmaps. We'll cover them just after!
In this case, each key/value pair corresponds to a configuration file
Key = name of the file
Value = content of the file
There can be one key/value pair, or as many as necessary
(for complex apps with multiple configuration files)
Examples:
# Create a configmap with a single key, "app.conf"
kubectl create configmap my-app-config --from-file=app.conf
# Create a configmap with a single key, "app.conf" but another file
kubectl create configmap my-app-config --from-file=app.conf=app-prod.conf
# Create a configmap with multiple keys (one per file in the config.d directory)
kubectl create configmap my-app-config --from-file=config.d/
In this case, each key/value pair corresponds to a parameter
Key = name of the parameter
Value = value of the parameter
Examples:
# Create a configmap with two keys
kubectl create cm my-app-config \
    --from-literal=foreground=red \
    --from-literal=background=blue
# Create a configmap from a file containing key=val pairs
kubectl create cm my-app-config \
    --from-env-file=app.conf
Configmaps can be exposed as plain files in the filesystem of a container
this is achieved by declaring a volume and mounting it in the container
this is particularly effective for configmaps containing whole files
Configmaps can be exposed as environment variables in the container
this is achieved with the downward API
this is particularly effective for configmaps containing individual parameters
Let's see how to do both!
We will start a load balancer powered by HAProxy
We will use the official haproxy image
It expects to find its configuration in /usr/local/etc/haproxy/haproxy.cfg
We will provide a simple HAProxy configuration, k8s/haproxy.cfg
It listens on port 80, and load balances connections between IBM and Google
Go to the k8s directory in the repository:
cd ~/container.training/k8s
Create a configmap named haproxy holding the configuration file:
kubectl create configmap haproxy --from-file=haproxy.cfg
Check what our configmap looks like:
kubectl get configmap haproxy -o yaml
We are going to use the following pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: haproxy
spec:
  volumes:
  - name: config
    configMap:
      name: haproxy
  containers:
  - name: haproxy
    image: haproxy
    volumeMounts:
    - name: config
      mountPath: /usr/local/etc/haproxy/
Apply the pod definition (it is in k8s/haproxy.yaml):
kubectl apply -f ~/container.training/k8s/haproxy.yaml
kubectl get pod haproxy -o wide
IP=$(kubectl get pod haproxy -o json | jq -r .status.podIP)
The load balancer will send:
half of the connections to Google
the other half to IBM
curl $IP
curl $IP
curl $IP
We should see connections served by Google, and others served by IBM.
(Each server sends us a redirect page. Look at the URL that they send us to!)
We are going to run a Docker registry on a custom port
By default, the registry listens on port 5000
This can be changed by setting environment variable REGISTRY_HTTP_ADDR
We are going to store the port number in a configmap
Then we will expose that configmap as a container environment variable
Our configmap will have a single key, http.addr:
kubectl create configmap registry --from-literal=http.addr=0.0.0.0:80
Check our configmap:
kubectl get configmap registry -o yaml
We are going to use the following pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: registry
spec:
  containers:
  - name: registry
    image: registry
    env:
    - name: REGISTRY_HTTP_ADDR
      valueFrom:
        configMapKeyRef:
          name: registry
          key: http.addr
Apply the pod definition (it is in k8s/registry.yaml):
kubectl apply -f ~/container.training/k8s/registry.yaml
Check the IP address allocated to the pod:
kubectl get pod registry -o wide
IP=$(kubectl get pod registry -o json | jq -r .status.podIP)
Confirm that the registry is available on port 80:
curl $IP/v2/_catalog
For sensitive information, there is another special resource: Secrets
Secrets and Configmaps work almost the same way
(we'll expose the differences on the next slide)
The intent is different, though:
"You should use secrets for things which are actually secret like API keys, credentials, etc., and use config map for not-secret configuration data."
"In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."
(Source: the author of both features)
Secrets are base64-encoded when shown with kubectl get secrets -o yaml
keep in mind that this is just encoding, not encryption
it is very easy to automatically extract and decode secrets (see the example below)
With RBAC, we can authorize a user to access configmaps, but not secrets
(since they are two different kinds of resources)
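For instance, with a hypothetical secret created just for illustration:
kubectl create secret generic demo-secret --from-literal=apikey=letmein
kubectl get secret demo-secret -o jsonpath='{.data.apikey}' | base64 -d
(This prints letmein: the value was merely encoded, not encrypted.)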
Highly available Persistent Volumes
(automatically generated title slide)
How can we achieve true durability?
How can we store data that would survive the loss of a node?
We need to use Persistent Volumes backed by highly available storage systems
There are many ways to achieve that:
leveraging our cloud's storage APIs
using NAS/SAN systems or file servers
distributed storage systems
We are going to see one distributed storage system in action
We will set up a distributed storage system on our cluster
We will use it to deploy a SQL database (PostgreSQL)
We will insert some test data in the database
We will disrupt the node running the database
We will see how it recovers
Portworx is a commercial persistent storage solution for containers
It works with Kubernetes, but also Mesos, Swarm ...
It provides hyper-converged storage
(=storage is provided by regular compute nodes)
We're going to use it here because it can be deployed on any Kubernetes cluster
(it doesn't require any particular infrastructure)
We don't endorse or support Portworx in any particular way
(but we appreciate that it's super easy to install!)
We're installing Portworx because we need a storage system
If you are using AKS, EKS, GKE ... you already have a storage system
(but you might want another one, e.g. to leverage local storage)
If you have setup Kubernetes yourself, there are other solutions available too
on premises, you can use a good old SAN/NAS
on a private cloud like OpenStack, you can use e.g. Cinder
everywhere, you can use other systems, e.g. Gluster, StorageOS
Kubernetes cluster ✔️
Optional key/value store (etcd or Consul) ❌
At least one available block device ❌
In the current version of Portworx (1.4) it is recommended to use etcd or Consul
But Portworx also has beta support for an embedded key/value store
For simplicity, we are going to use the latter option
(but if we have deployed Consul or etcd, we can use that, too)
Block device = disk or partition on a disk
We can see block devices with lsblk
(or cat /proc/partitions if we're old school like that!)
If we don't have a spare disk or partition, we can use a loop device
A loop device is a block device actually backed by a file
These are frequently used to mount ISO (CD/DVD) images or VM disk images
We are going to create a 10 GB (empty) file on each node
Then make a loop device from it, to be used by Portworx
Create a 10 GB file on each node:
for N in $(seq 1 4); do ssh node$N sudo truncate --size 10G /portworx.blk; done
(If SSH asks to confirm host keys, enter yes each time.)
Associate the file to a loop device on each node:
for N in $(seq 1 4); do ssh node$N sudo losetup /dev/loop4 /portworx.blk; done
To install Portworx, we need to go to https://install.portworx.com/
This website will ask us a bunch of questions about our cluster
Then, it will generate a YAML file that we should apply to our cluster
Or, we can just apply that YAML file directly (it's in k8s/portworx.yaml)
kubectl apply -f ~/container.training/k8s/portworx.yaml
If you want to generate a YAML file tailored to your own needs, the easiest way is to use https://install.portworx.com/.
FYI, this is how we obtained the YAML file used earlier:
KBVER=$(kubectl version -o json | jq -r .serverVersion.gitVersion)
BLKDEV=/dev/loop4
curl "https://install.portworx.com/1.4/?kbver=$KBVER&b=true&s=$BLKDEV&c=px-workshop&stork=true&lh=true"
If you want to use an external key/value store, add one of the following:
&k=etcd://XXX:2379
&k=consul://XXX:8500
... where XXX is the name or address of your etcd or Consul server.
Check out the logs:
stern -n kube-system portworx
Wait until it gets quiet
(you should see portworx service is healthy, too)
We are going to run PostgreSQL in a Stateful set
The Stateful set will specify a volumeClaimTemplate
That volumeClaimTemplate will create Persistent Volume Claims
Kubernetes' dynamic provisioning will satisfy these Persistent Volume Claims
(by creating Persistent Volumes and binding them to the claims)
The Persistent Volumes are then available for the PostgreSQL pods
It's possible that multiple storage systems are available
Or, that a storage system offers multiple tiers of storage
(SSD vs. magnetic; mirrored or not; etc.)
We need to tell Kubernetes which system and tier to use
This is achieved by creating a Storage Class
A volumeClaimTemplate can indicate which Storage Class to use
It is also possible to mark a Storage Class as "default"
(it will be used if a volumeClaimTemplate doesn't specify one)
kubectl get storageclass
There should be a storage class showing as portworx-replicated (default).
This is our Storage Class (in k8s/storage-class.yaml):
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: portworx-replicated
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
  priority_io: "high"
It says "use Portworx to create volumes"
It tells Portworx to "keep 2 replicas of these volumes"
It marks the Storage Class as being the default one
The next slide shows k8s/postgres.yaml
It defines a Stateful set
With a volumeClaimTemplate requesting a 1 GB volume
That volume will be mounted to /var/lib/postgresql/data
There is another little detail: we enable the stork scheduler
The stork scheduler is optional (it's specific to Portworx)
It helps the Kubernetes scheduler to colocate the pod with its volume
(see this blog post for more details about that)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      schedulerName: stork
      containers:
      - name: postgres
        image: postgres:11
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres
  volumeClaimTemplates:
  - metadata:
      name: postgres
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
kubectl get events -w
kubectl apply -f ~/container.training/k8s/postgres.yaml
We will use kubectl exec to get a shell in the pod
Good to know: we need to use the postgres user in the pod
Get a shell in the pod, as the postgres user:
kubectl exec -ti postgres-0 su postgres
psql -l
(This should show us 3 lines: postgres, template0, and template1.)
We will use pgbench to create and populate a test database
Create a database named demo:
createdb demo
Populate it with pgbench:
pgbench -i -s 10 demo
The -i flag means "create tables"
The -s 10 flag means "create 10 x 100,000 rows"
The pgbench tool inserts rows in the table pgbench_accounts
Check that the demo database exists:
psql -l
Check how many rows we have in pgbench_accounts:
psql demo -c "select count(*) from pgbench_accounts"
(We should see a count of 1,000,000 rows.)
Find out which node is running the database:
kubectl get pods -o wide
kubectl get pod postgres-0 -o wide
We are going to disrupt that node.
By "disrupt" we mean: "disconnect it from the network".
We will use iptables to block all traffic exiting the node
(except SSH traffic, so we can repair the node later if needed)
SSH to the node to disrupt:
ssh nodeX
Allow SSH traffic leaving the node, but block all other traffic:
sudo iptables -I OUTPUT -p tcp --sport 22 -j ACCEPT
sudo iptables -I OUTPUT 2 -j DROP
Check that the node can't communicate with other nodes:
ping node1
Log out to go back to node1
Watch what's going on:
kubectl get events -w
kubectl get pods -w
It will take some time for Kubernetes to mark the node as unhealthy
Then it will attempt to reschedule the pod to another node
In about a minute, our pod should be up and running again
Get a shell in the pod, as the postgres user:
kubectl exec -ti postgres-0 su postgres
Check the number of rows in the pgbench_accounts table:
psql demo -c "select count(*) from pgbench_accounts"
kubectl get pod postgres-0 -o wide
SSH to the node:
ssh nodeX
Remove the iptables rule blocking traffic:
sudo iptables -D OUTPUT 2
In a real deployment, you would want to set a password
This can be done by creating a secret:
kubectl create secret generic postgres \
    --from-literal=password=$(base64 /dev/urandom | head -c16)
And then passing that secret to the container:
env:
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: postgres
      key: password
If we need to see what's going on with Portworx:
PXPOD=$(kubectl -n kube-system get pod -l name=portworx -o json | jq -r .items[0].metadata.name)
kubectl -n kube-system exec $PXPOD -- /opt/pwx/bin/pxctl status
We can also connect to Lighthouse (a web UI)
check the port with kubectl -n kube-system get svc px-lighthouse
connect to that port
the default login/password is admin/Password1
then specify portworx-service as the endpoint
Portworx provides a storage driver
It needs to place itself "above" the Kubelet
(it installs itself straight on the nodes)
To remove it, we need to do more than just deleting its Kubernetes resources
It is done by applying a special label:
kubectl label nodes --all px/enabled=remove --overwrite
Then removing a bunch of local files:
sudo chattr -i /etc/pwx/.private.json
sudo rm -rf /etc/pwx /opt/pwx
(on each node where Portworx was running)
What if we want to use Stateful sets without a storage provider?
We will have to create volumes manually
(by creating Persistent Volume objects)
These volumes will be automatically bound with matching Persistent Volume Claims
We can use local volumes (essentially bind mounts of host directories)
Of course, these volumes won't be available in case of node failure
Check this blog post for more information and gotchas
The Portworx installation tutorial, and the PostgreSQL example, were inspired by Portworx examples on Katacoda, in particular:
installing Portworx on Kubernetes
(with adaptations to use a loop device and an embedded key/value store)
persistent volumes on Kubernetes using Portworx
(with adaptations to specify a default Storage Class)
HA PostgreSQL on Kubernetes with Portworx
(with adaptations to use a Stateful Set and simplify PostgreSQL's setup)
vSAN
(automatically generated title slide)
Instantiate Stateful Pods
Compatible with CSI
Distributed storage for higher fault tolerance + performance
Available for Pods and VMs
Next steps
(automatically generated title slide)
Alright, how do I get started and containerize my apps?
Suggested containerization checklist:
And then it is time to look at orchestration!
Use a managed cluster (AKS, EKS, GKE, PKS...)
(price: $, difficulty: medium)
Hire someone to deploy it for us
(price: $$, difficulty: easy)
Do it ourselves
(price: $-$$$, difficulty: hard)
Yes, it is possible to have prod+dev in a single cluster
(and implement good isolation and security with RBAC, network policies...)
But it is not a good idea to do that for our first deployment
Start with a production cluster + at least a test cluster
Implement and check RBAC and isolation on the test cluster
(e.g. deploy multiple test versions side-by-side)
Make sure that all our devs have usable dev clusters
(whether it's a local minikube or a full-blown multi-node cluster)
Namespaces let you run multiple identical stacks side by side
Two namespaces (e.g. blue and green) can each have their own redis service
Each of the two redis services has its own ClusterIP
CoreDNS creates two entries, mapping to these two ClusterIP addresses:
redis.blue.svc.cluster.local and redis.green.svc.cluster.local
Pods in the blue namespace get a search suffix of blue.svc.cluster.local
As a result, resolving redis from a pod in the blue namespace yields the "local" redis
This does not provide isolation! That would be the job of network policies.
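For instance, from a shell inside any pod of the blue namespace (assuming the image provides nslookup), we could check it like this:
nslookup redis
(resolves to the ClusterIP of redis.blue.svc.cluster.local)
nslookup redis.green.svc.cluster.local
(resolves to the ClusterIP of the redis service in the green namespace)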
(covers permissions model, user and service accounts management ...)
As a first step, it is wiser to keep stateful services outside of the cluster
Exposing them to pods can be done with multiple solutions:
ExternalName services
(redis.blue.svc.cluster.local will be a CNAME record; see the sketch after this list)
ClusterIP services with explicit Endpoints
(instead of letting Kubernetes generate the endpoints from a selector)
Ambassador services
(application-level proxies that can provide credentials injection and more)
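Here is a minimal sketch of the first option (the external host name is a placeholder):
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: blue
spec:
  type: ExternalName
  externalName: redis.prod.example.com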
If we want to host stateful services on Kubernetes, we can use:
a storage provider
persistent volumes, persistent volume claims
stateful sets
Good questions to ask:
what's the operational cost of running this service ourselves?
what do we gain by deploying this stateful service on Kubernetes?
Relevant sections: Volumes | Stateful Sets | Persistent Volumes
Excellent blog post tackling the question: “Should I run Postgres on Kubernetes?”
Services are layer 4 constructs
HTTP is a layer 7 protocol
It is handled by ingresses (a different resource kind)
Ingresses allow routing HTTP requests to different services (e.g. based on host name or request path), handling TLS, and more
This section shows how to expose multiple HTTP apps using Træfik
Logging is delegated to the container engine
Logs are exposed through the API
Logs are also accessible through local files (/var/log/containers)
Log shipping to a central platform is usually done through these files
(e.g. with an agent bind-mounting the log directory)
This section shows how to do that with Fluentd and the EFK stack
The kubelet embeds cAdvisor, which exposes container metrics
(cAdvisor might be separated in the future for more flexibility)
It is a good idea to start with Prometheus
(even if you end up using something else)
Starting from Kubernetes 1.8, we can use the Metrics API
Heapster was a popular add-on
(but is being deprecated starting with Kubernetes 1.11)
Two constructs are particularly useful: secrets and config maps
They allow us to expose arbitrary information to our containers
Avoid storing configuration in container images
(There are some exceptions to that rule, but it's generally a Bad Idea)
Never store sensitive information in container images
(It's the container equivalent of the password on a post-it note on your screen)
This section shows how to manage app config with config maps (among others)
We learned a lot about Kubernetes, its internals, its advanced concepts
That was just the easy part
The hard challenges will revolve around culture and people
... What does that mean?
Write the app
Tests, QA ...
Ship something (more on that later)
Provision resources (e.g. VMs, clusters)
Deploy the something on the resources
Manage, maintain, monitor the resources
Manage, maintain, monitor the app
And much more
The old "devs vs ops" division has changed
In some organizations, "ops" are now called "SRE" or "platform" teams
(and they have very different sets of skills)
Do you know which team is responsible for each item on the list on the previous page?
Acknowledge that a lot of tasks are outsourced
(e.g. if we add "buy/rack/provision machines" in that list)
Some organizations embrace "you build it, you run it"
When "build" and "run" are owned by different teams, where's the line?
What does the "build" team ship to the "run" team?
Let's see a few options, and what they imply
Team "build" ships code
(hopefully in a repository, identified by a commit hash)
Team "run" containerizes that code
✔️ no extra work for developers
❌ very little advantage of using containers
Team "build" ships container images
(hopefully built automatically from a source repository)
Team "run" uses theses images to create e.g. Kubernetes resources
✔️ universal artefact (support all languages uniformly)
✔️ easy to start a single component (good for monoliths)
❌ complex applications will require a lot of extra work
❌ adding/removing components in the stack also requires extra work
❌ complex applications will run very differently between dev and prod
(Or another kind of dev-centric manifest)
Team "build" ships a manifest that works on a single node
(as well as images, or ways to build them)
Team "run" adapts that manifest to work on a cluster
✔️ all teams can start the stack in a reliable, deterministic manner
❌ adding/removing components still requires some work (but less than before)
❌ there will be some differences between dev and prod
Team "build" ships ready-to-run manifests
(YAML, Helm charts, Kustomize ...)
Team "run" adjusts some parameters and monitors the application
✔️ parity between dev and prod environments
✔️ "run" team can focus on SLAs, SLOs, and overall quality
❌ requires a lot of extra work (and new skills) from the "build" team
❌ Kubernetes is not a very convenient development platform (at least, not yet)
It depends on our teams
existing skills (do they know how to do it?)
availability (do they have the time to do it?)
potential skills (can they learn to do it?)
It depends on our culture
owning "run" often implies being on call
do we reward on-call duty without encouraging hero syndrome?
do we give people resources (time, money) to learn?
We've put this last, but it's pretty important!
How do you on-board a new developer?
What do they need to install to get a dev stack?
How does a code change make it from dev to prod?
How does someone add a component to a stack?
Links and resources
(automatically generated title slide)
All things Kubernetes:
All things Docker:
Everything else:
These slides (and future updates) are on → http://container.training/