
Kubernetes Meetup
@
VMware


shared/title.md

1/459

Intros

  • Hello! We are:

  • Feel free to interrupt for questions at any time

  • Especially when you see full screen container pictures!

logistics.md

2/459

A brief introduction

  • This was initially written by Jérôme Petazzoni to support in-person, instructor-led workshops and tutorials

  • Credit is also due to multiple contributors — thank you!

  • You can also follow along on your own, at your own pace

  • We included as much information as possible in these slides

  • We recommend having a mentor to help you ...

  • ... Or being comfortable spending some time reading the Kubernetes documentation ...

  • ... And looking for answers on StackOverflow and other outlets

k8s/intro.md

3/459

About these slides

4/459

About these slides

  • Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...

👇 Try it! The source file will be shown, and you can view it on GitHub, fork it, and edit it.

shared/about-slides.md

5/459

Extra details

  • This slide has a little magnifying glass in the top left corner

  • This magnifying glass indicates slides that provide extra details

  • Feel free to skip them if:

    • you are in a hurry

    • you are new to this and want to avoid cognitive overload

    • you want only the most essential information

  • You can review these slides another time if you want, they'll be waiting for you ☺

shared/about-slides.md

6/459

Image separating from the next chapter

11/459

Pre-requirements

(automatically generated title slide)

12/459

Pre-requirements

  • Be comfortable with the UNIX command line

    • navigating directories

    • editing files

    • a little bit of bash-fu (environment variables, loops)

  • Some Docker knowledge

    • docker run, docker ps, docker build

    • ideally, you know how to write a Dockerfile and build it
      (even if it's a FROM line and a couple of RUN commands)

  • It's totally OK if you are not a Docker expert!
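
For reference, here is a hedged mini-example of the level of Dockerfile knowledge we assume (hello-prereqs is a made-up image name):

    printf 'FROM alpine\nRUN apk add --no-cache curl\n' > Dockerfile
    docker build -t hello-prereqs .
    docker run --rm hello-prereqs curl --version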

shared/prereqs.md

13/459

Tell me and I forget.
Teach me and I remember.
Involve me and I learn.

Misattributed to Benjamin Franklin

(Probably inspired by Chinese Confucian philosopher Xunzi)

shared/prereqs.md

14/459

Hands-on sections

  • The whole workshop is hands-on

  • We are going to build, ship, and run containers!

  • You are invited to reproduce all the demos

  • All hands-on sections are clearly identified, like the gray rectangle below

shared/prereqs.md

15/459
  • Use arrows to move to next/previous slide

    (up, down, left, right, page up, page down)

  • Type a slide number + ENTER to go to that slide

  • The slide number is also visible in the URL bar

    (e.g. .../#123 for slide 123)

  • Slides will remain online so you can review them later if needed

shared/prereqs.md

16/459

Where are we going to run our containers?

shared/prereqs.md

17/459

You get a cluster of cloud VMs

  • Each person gets a private cluster of cloud VMs (not shared with anybody else)

  • They'll remain up for the duration of the workshop

  • You should have a little card with login+password+IP addresses

  • You can automatically SSH from one VM to another

  • The nodes have aliases: node1, node2, etc.

shared/prereqs.md

19/459

Why don't we run containers locally?

  • Installing this stuff can be hard on some machines

    (32-bit CPU or OS... laptops without administrator access... etc.)

  • "The whole team downloaded all these container images from the WiFi!
    ... and it went great!"
    (Literally no-one ever)

  • All you need is a computer (or even a phone or tablet!), with:

    • an internet connection

    • a web browser

    • an SSH client

shared/prereqs.md

20/459

SSH clients

shared/prereqs.md

21/459

What is this Mosh thing?

You don't have to use Mosh or even know about it to follow along.
We're just telling you about it because some of us think it's cool!

  • Mosh is "the mobile shell"

  • It is essentially SSH over UDP, with roaming features

  • It retransmits packets quickly, so it works great even on lossy connections

    (Like hotel or conference WiFi)

  • It has intelligent local echo, so it works great even in high-latency connections

    (Like hotel or conference WiFi)

  • It supports transparent roaming when your client IP address changes

    (Like when you hop from hotel to conference WiFi)

shared/prereqs.md

22/459

Using Mosh

  • To install it: (apt|yum|brew) install mosh

  • It has been pre-installed on the VMs that we are using

  • To connect to a remote machine: mosh user@host

    (It is going to establish an SSH connection, then hand off to UDP)

  • It requires UDP ports to be open

    (By default, it uses a UDP port between 60000 and 61000)
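
For example, connecting to our lab VMs would look something like this (the -p option pins the server-side UDP port, in case only specific ports are open on your network):

    mosh user@A.B.C.D
    mosh -p 60001 user@A.B.C.D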

shared/prereqs.md

23/459

Connecting to our lab environment

  • Log into the first VM (node1) with your SSH client:

    ssh user@A.B.C.D

    (Replace user and A.B.C.D with the user and IP address provided to you)

You should see a prompt looking like this:

[A.B.C.D] (...) user@node1 ~
$

If anything goes wrong — ask for help!

shared/connecting.md

24/459

Doing or re-doing the workshop on your own?

  • Use something like Play-With-Docker or Play-With-Kubernetes

    Zero setup effort; but environments are short-lived and might have limited resources

  • Create your own cluster (local or cloud VMs)

    Small setup effort; small cost; flexible environments

  • Create a bunch of clusters for you and your friends (instructions)

    Bigger setup effort; ideal for group training

shared/connecting.md

25/459

For a consistent Kubernetes experience ...

  • If you are using your own Kubernetes cluster, you can use shpod

  • shpod provides a shell running in a pod on your own cluster

  • It comes with many tools pre-installed (helm, stern...)

  • These tools are used in many exercises in these slides

  • shpod also gives you completion and a fancy prompt

shared/connecting.md

26/459

We will (mostly) interact with node1 only

These remarks apply only when using multiple nodes, of course.

  • Unless instructed, all commands must be run from the first VM, node1

  • We will only check out/copy the code on node1

  • During normal operations, we do not need access to the other nodes

  • If we had to troubleshoot issues, we would use a combination of:

    • SSH (to access system logs, daemon status...)

    • Docker API (to check running containers and container engine status)

shared/connecting.md

27/459

Terminals

Once in a while, the instructions will say:
"Open a new terminal."

There are multiple ways to do this:

  • create a new window or tab on your machine, and SSH into the VM;

  • use screen or tmux on the VM and open a new window from there.

You are welcome to use the method that you feel the most comfortable with.

shared/connecting.md

28/459

Tmux cheatsheet

Tmux is a terminal multiplexer like screen.

You don't have to use it or even know about it to follow along.
But some of us like to use it to switch between terminals.
It has been preinstalled on your workshop nodes.

  • Ctrl-b c → creates a new window
  • Ctrl-b n → go to next window
  • Ctrl-b p → go to previous window
  • Ctrl-b " → split window top/bottom
  • Ctrl-b % → split window left/right
  • Ctrl-b Alt-1 → rearrange windows in columns
  • Ctrl-b Alt-2 → rearrange windows in rows
  • Ctrl-b arrows → navigate to other windows
  • Ctrl-b d → detach session
  • tmux attach → reattach to session

shared/connecting.md

29/459

Image separating from the next chapter

30/459

Our sample application

(automatically generated title slide)

31/459

Our sample application

  • We will clone the GitHub repository onto our node1

  • The repository also contains scripts and tools that we will use through the workshop

  • Clone the repository on node1:
    git clone https://github.com/jpetazzo/container.training

(You can also fork the repository on GitHub and clone your fork if you prefer that.)

shared/sampleapp.md

32/459

Having a look at the application

  • Go to the dockercoins directory, in the cloned repo:

    cd ~/container.training/dockercoins
  • Check the files and directories:

    tree

shared/sampleapp.md

33/459

Viewing the application

  • Jérôme is going to wear his developer hat ...

  • ... start the application on his developer's machine ...

  • ... and wait for the app to be up and running.
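
If you want to reproduce this part on your own machine, something like the following should work (assuming Docker and Compose are installed):

    cd ~/container.training/dockercoins
    docker-compose up -d                      # build and start the five services in the background
    docker-compose ps                         # check that everything is up
    docker-compose logs --tail=10 --follow    # watch the logs (Ctrl-C to stop)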

shared/sampleapp.md

34/459

What's this application?

35/459

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢
36/459

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢

  • No, you can't buy coffee with DockerCoins

37/459

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢

  • No, you can't buy coffee with DockerCoins

  • How DockerCoins works:

    • generate a few random bytes

    • hash these bytes

    • increment a counter (to keep track of speed)

    • repeat forever!

38/459

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢

  • No, you can't buy coffee with DockerCoins

  • How DockerCoins works:

    • generate a few random bytes

    • hash these bytes

    • increment a counter (to keep track of speed)

    • repeat forever!

  • DockerCoins is not a cryptocurrency

    (the only common points are "randomness," "hashing," and "coins" in the name)

shared/sampleapp.md

39/459

DockerCoins in the microservices era

  • DockerCoins is made of 5 services:

    • rng = web service generating random bytes

    • hasher = web service computing hash of POSTed data

    • worker = background process calling rng and hasher

    • webui = web interface to watch progress

    • redis = data store (holds a counter updated by worker)

  • These 5 services are visible in the application's Compose file, docker-compose.yml

shared/sampleapp.md

40/459

How DockerCoins works

  • worker invokes web service rng to generate random bytes

  • worker invokes web service hasher to hash these bytes

  • worker does this in an infinite loop

  • every second, worker updates redis to indicate how many loops were done

  • webui queries redis, and computes and exposes "hashing speed" in our browser

(See diagram on next slide!)
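
To make the loop concrete, here is a rough shell rendition of one iteration (the real worker is written in Python and shown a couple of slides later; "hashes" is an assumed Redis key name, and this would only work from a shell inside the Compose network):

    curl -s http://rng/32 |
      curl -s -X POST --data-binary @- \
           -H "Content-Type: application/octet-stream" \
           http://hasher/ > /dev/null
    redis-cli -h redis incr hashes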

shared/sampleapp.md

41/459

Service discovery in container-land

How does each service find out the address of the other ones?

43/459

Service discovery in container-land

How does each service find out the address of the other ones?

  • We do not hard-code IP addresses in the code

  • We do not hard-code FQDNs in the code, either

  • We just connect to a service name, and container-magic does the rest

    (And by container-magic, we mean "a crafty, dynamic, embedded DNS server")

shared/sampleapp.md

44/459

Example in worker/worker.py

redis = Redis("redis")

def get_random_bytes():
    r = requests.get("http://rng/32")
    return r.content

def hash_bytes(data):
    r = requests.post("http://hasher/",
                      data=data,
                      headers={"Content-Type": "application/octet-stream"})
    return r.text

(Full source code available here)

shared/sampleapp.md

45/459

Show me the code!

  • You can check the GitHub repository with all the materials of this workshop:
    https://github.com/jpetazzo/container.training

  • The application is in the dockercoins subdirectory

  • The Compose file (docker-compose.yml) lists all 5 services

  • redis is using an official image from the Docker Hub

  • hasher, rng, worker, webui are each built from a Dockerfile

  • Each service's Dockerfile and source code are in its own directory

    (hasher is in the hasher directory, rng is in the rng directory, etc.)

shared/sampleapp.md

46/459

Our application at work

  • On the left-hand side, the "rainbow strip" shows the container names

  • On the right-hand side, we see the output of our containers

  • We can see the worker service making requests to rng and hasher

  • For rng and hasher, we see HTTP access logs

shared/sampleapp.md

47/459

Connecting to the web UI

  • "Logs are exciting and fun!" (No-one, ever)

  • The webui container exposes a web dashboard; let's view it

A drawing area should show up, and after a few seconds, a blue graph will appear.

shared/sampleapp.md

48/459

Image separating from the next chapter

49/459

Kubernetes concepts

(automatically generated title slide)

50/459

Kubernetes concepts

  • Kubernetes is a container management system

  • It runs and manages containerized applications on a cluster

51/459

Kubernetes concepts

  • Kubernetes is a container management system

  • It runs and manages containerized applications on a cluster

  • What does that really mean?

k8s/concepts-k8s.md

52/459

Basic things we can ask Kubernetes to do

53/459

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3
54/459

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

55/459

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

56/459

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

  • Place a public load balancer in front of these containers

57/459

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

  • Place a public load balancer in front of these containers

  • It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers

58/459

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

  • Place a public load balancer in front of these containers

  • It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers

  • New release! Replace my containers with the new image atseashop/webfront:v1.4

59/459

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

  • Place a public load balancer in front of these containers

  • It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers

  • New release! Replace my containers with the new image atseashop/webfront:v1.4

  • Keep processing requests during the upgrade; update my containers one at a time

k8s/concepts-k8s.md

60/459

Other things that Kubernetes can do for us

  • Autoscaling

    (straightforward on CPU; more complex on other metrics)

  • Resource management and scheduling

    (reserve CPU/RAM for containers; placement constraints)

  • Advanced rollout patterns

    (blue/green deployment, canary deployment)

k8s/concepts-k8s.md

61/459

More things that Kubernetes can do for us

  • Batch jobs

    (one-off; parallel; also cron-style periodic execution)

  • Fine-grained access control

    (defining what can be done by whom on which resources)

  • Stateful services

    (databases, message queues, etc.)

  • Automating complex tasks with operators

    (e.g. database replication, failover, etc.)

k8s/concepts-k8s.md

62/459

Kubernetes architecture

k8s/concepts-k8s.md

63/459

Kubernetes architecture

  • Ha ha ha ha

  • OK, I was trying to scare you, it's much simpler than that ❤️

k8s/concepts-k8s.md

65/459

Credits

  • The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI

    (Courtesy of Yongbok Kim)

  • The second one is a simplified representation of a Kubernetes cluster

    (Courtesy of Imesh Gunaratne)

k8s/concepts-k8s.md

67/459

Kubernetes architecture: the nodes

  • The nodes executing our containers run a collection of services:

    • a container engine (typically Docker)

    • kubelet (the "node agent")

    • kube-proxy (a necessary but not sufficient network component)

  • Nodes were formerly called "minions"

    (You might see that word in older articles or documentation)

k8s/concepts-k8s.md

68/459

Kubernetes architecture: the control plane

  • The Kubernetes logic (its "brains") is a collection of services:

    • the API server (our point of entry to everything!)

    • core services like the scheduler and controller manager

    • etcd (a highly available key/value store; the "database" of Kubernetes)

  • Together, these services form the control plane of our cluster

  • The control plane is also called the "master"

k8s/concepts-k8s.md

69/459

Running the control plane on special nodes

  • It is common to reserve a dedicated node for the control plane

    (Except for single-node development clusters, like when using minikube)

  • This node is then called a "master"

    (Yes, this is ambiguous: is the "master" a node, or the whole control plane?)

  • Normal applications are restricted from running on this node

    (By using a mechanism called "taints")

  • When high availability is required, each service of the control plane must be resilient

  • The control plane is then replicated on multiple nodes

    (This is sometimes called a "multi-master" setup)

k8s/concepts-k8s.md

71/459

Running the control plane outside containers

  • The services of the control plane can run in or out of containers

  • For instance: since etcd is a critical service, some people deploy it directly on a dedicated cluster (without containers)

    (This is illustrated on the first "super complicated" schema)

  • In some hosted Kubernetes offerings (e.g. AKS, GKE, EKS), the control plane is invisible

    (We only "see" a Kubernetes API endpoint)

  • In that case, there is no "master node"

For this reason, it is more accurate to say "control plane" rather than "master."

k8s/concepts-k8s.md

72/459

Do we need to run Docker at all?

No!

73/459

Do we need to run Docker at all?

No!

  • By default, Kubernetes uses the Docker Engine to run containers

  • We can leverage other pluggable runtimes through the Container Runtime Interface

  • We could also use rkt ("Rocket") from CoreOS (deprecated)

k8s/concepts-k8s.md

74/459

Some runtimes available through CRI

  • containerd

    • maintained by Docker, IBM, and community
    • used by Docker Engine, microk8s, k3s, GKE; also standalone
    • comes with its own CLI, ctr
  • CRI-O:

    • maintained by Red Hat, SUSE, and community
    • used by OpenShift and Kubic
    • designed specifically as a minimal runtime for Kubernetes
  • And more
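
A quick, hedged way to check what your own cluster uses:

    kubectl get nodes -o wide    # the CONTAINER-RUNTIME column shows e.g. docker://... or containerd://...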

k8s/concepts-k8s.md

75/459

Do we need to run Docker at all?

Yes!

76/459

Do we need to run Docker at all?

Yes!

  • In this workshop, we run our app on a single node first

  • We will need to build images and ship them around

  • We can do these things without Docker
    (and get diagnosed with NIH¹ syndrome)

  • Docker is still the most stable container engine today
    (but other options are maturing very quickly)

¹Not Invented Here

k8s/concepts-k8s.md

77/459

Do we need to run Docker at all?

  • On our development environments, CI pipelines ... :

    Yes, almost certainly

  • On our production servers:

    Yes (today)

    Probably not (in the future)

More information about CRI on the Kubernetes blog

k8s/concepts-k8s.md

78/459

Interacting with Kubernetes

  • We will interact with our Kubernetes cluster through the Kubernetes API

  • The Kubernetes API is (mostly) RESTful

  • It allows us to create, read, update, delete resources

  • A few common resource types are:

    • node (a machine — physical or virtual — in our cluster)

    • pod (group of containers running together on a node)

    • service (stable network endpoint to connect to one or multiple containers)

k8s/concepts-k8s.md

79/459

Scaling

  • How would we scale the pod shown on the previous slide?

  • Do create additional pods

    • each pod can be on a different node

    • each pod will have its own IP address

  • Do not add more NGINX containers in the pod

    • all the NGINX containers would be on the same node

    • they would all have the same IP address
      (resulting in Address already in use errors)

k8s/concepts-k8s.md

81/459

Together or separate

  • Should we put e.g. a web application server and a cache together?
    ("cache" being something like e.g. Memcached or Redis)

  • Putting them in the same pod means:

    • they have to be scaled together

    • they can communicate very efficiently over localhost

  • Putting them in different pods means:

    • they can be scaled separately

    • they must communicate over remote IP addresses
      (incurring higher latency and lower performance)

  • Both scenarios can make sense, depending on our goals

k8s/concepts-k8s.md

82/459

Credits

  • The first diagram is courtesy of Lucas Käldström, in this presentation

    • it's one of the best Kubernetes architecture diagrams available!
  • The second diagram is courtesy of Weave Works

    • a pod can have multiple containers working together

    • IP addresses are associated with pods, not with individual containers

Both diagrams used with permission.

k8s/concepts-k8s.md

83/459

Image separating from the next chapter

84/459

First contact with kubectl

(automatically generated title slide)

85/459

First contact with kubectl

  • kubectl is (almost) the only tool we'll need to talk to Kubernetes

  • It is a rich CLI tool around the Kubernetes API

    (Everything you can do with kubectl, you can do directly with the API)

  • On our machines, there is a ~/.kube/config file with:

    • the Kubernetes API address

    • the path to our TLS certificates used to authenticate

  • You can also use the --kubeconfig flag to pass a config file

  • Or directly --server, --user, etc.

  • kubectl can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"...

k8s/kubectlget.md

86/459

kubectl is the new SSH

  • We often start managing servers with SSH

    (installing packages, troubleshooting ...)

  • At scale, it becomes tedious, repetitive, error-prone

  • Instead, we use config management, central logging, etc.

  • In many cases, we still need SSH:

    • as the underlying access method (e.g. Ansible)

    • to debug tricky scenarios

    • to inspect and poke at things

k8s/kubectlget.md

87/459

The parallel with kubectl

  • We often start managing Kubernetes clusters with kubectl

    (deploying applications, troubleshooting ...)

  • At scale (with many applications or clusters), it becomes tedious, repetitive, error-prone

  • Instead, we use automated pipelines, observability tooling, etc.

  • In many cases, we still need kubectl:

    • to debug tricky scenarios

    • to inspect and poke at things

  • The Kubernetes API is always the underlying access method

k8s/kubectlget.md

88/459

kubectl get

  • Let's look at our Node resources with kubectl get!
  • Look at the composition of our cluster:

    kubectl get node
  • These commands are equivalent:

    kubectl get no
    kubectl get node
    kubectl get nodes

k8s/kubectlget.md

89/459

Obtaining machine-readable output

  • kubectl get can output JSON, YAML, or be directly formatted
  • Give us more info about the nodes:

    kubectl get nodes -o wide
  • Let's have some YAML:

    kubectl get no -o yaml

    See that kind: List at the end? It's the type of our result!
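
Two other output formats can come in handy; these are hedged examples built on fields visible in the YAML output above:

    kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.memory}{"\n"}{end}'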

k8s/kubectlget.md

90/459

(Ab)using kubectl and jq

  • It's super easy to build custom reports
  • Show the capacity of all our nodes as a stream of JSON objects:
    kubectl get nodes -o json |
    jq ".items[] | {name:.metadata.name} + .status.capacity"

k8s/kubectlget.md

91/459

Exploring types and definitions

  • We can list all available resource types by running kubectl api-resources
    (In Kubernetes 1.10 and prior, this command used to be kubectl get)

  • We can view the definition for a resource type with:

    kubectl explain type
  • We can view the definition of a field in a resource, for instance:

    kubectl explain node.spec
  • Or get the full definition of all fields and sub-fields:

    kubectl explain node --recursive

k8s/kubectlget.md

92/459

Introspection vs. documentation

  • We can access the same information by reading the API documentation

  • The API documentation is usually easier to read, but:

    • it won't show custom types (like Custom Resource Definitions)

    • we need to make sure that we look at the correct version

  • kubectl api-resources and kubectl explain perform introspection

    (they communicate with the API server and obtain the exact type definitions)

k8s/kubectlget.md

93/459

Type names

  • The most common resource names have three forms:

    • singular (e.g. node, service, deployment)

    • plural (e.g. nodes, services, deployments)

    • short (e.g. no, svc, deploy)

  • Some resources do not have a short name

  • Endpoints only have a plural form

    (because even a single Endpoints resource is actually a list of endpoints)

k8s/kubectlget.md

94/459

Viewing details

  • We can use kubectl get -o yaml to see all available details

  • However, YAML output is often simultaneously too much and not enough

  • For instance, kubectl get node node1 -o yaml is:

    • too much information (e.g.: list of images available on this node)

    • not enough information (e.g.: doesn't show pods running on this node)

    • difficult to read for a human operator

  • For a comprehensive overview, we can use kubectl describe instead

k8s/kubectlget.md

95/459

kubectl describe

  • kubectl describe needs a resource type and (optionally) a resource name

  • It is possible to provide a resource name prefix

    (all matching objects will be displayed)

  • kubectl describe will retrieve some extra information about the resource

  • Look at the information available for node1 with one of the following commands:
    kubectl describe node/node1
    kubectl describe node node1

(We should notice a bunch of control plane pods.)

k8s/kubectlget.md

96/459

Listing running containers

  • Containers are manipulated through pods

  • A pod is a group of containers:

    • running together (on the same node)

    • sharing resources (RAM, CPU; but also network, volumes)

  • List pods on our cluster:
    kubectl get pods
97/459

Listing running containers

  • Containers are manipulated through pods

  • A pod is a group of containers:

    • running together (on the same node)

    • sharing resources (RAM, CPU; but also network, volumes)

  • List pods on our cluster:
    kubectl get pods

Where are the pods that we saw just a moment earlier?!?

k8s/kubectlget.md

98/459

Namespaces

  • Namespaces allow us to segregate resources
  • List the namespaces on our cluster with one of these commands:
    kubectl get namespaces
    kubectl get namespace
    kubectl get ns
99/459

Namespaces

  • Namespaces allow us to segregate resources
  • List the namespaces on our cluster with one of these commands:
    kubectl get namespaces
    kubectl get namespace
    kubectl get ns

You know what ... This kube-system thing looks suspicious.

In fact, I'm pretty sure it showed up earlier, when we did:

kubectl describe node node1

k8s/kubectlget.md

100/459

Accessing namespaces

  • By default, kubectl uses the default namespace

  • We can see resources in all namespaces with --all-namespaces

  • List the pods in all namespaces:

    kubectl get pods --all-namespaces
  • Since Kubernetes 1.14, we can also use -A as a shorter version:

    kubectl get pods -A

Here are our system pods!

k8s/kubectlget.md

101/459

What are all these control plane pods?

  • etcd is our etcd server

  • kube-apiserver is the API server

  • kube-controller-manager and kube-scheduler are other control plane components

  • coredns provides DNS-based service discovery (replacing kube-dns as of 1.11)

  • kube-proxy is the (per-node) component managing port mappings and such

  • weave is the (per-node) component managing the network overlay

  • the READY column indicates the number of containers in each pod

    (1 for most pods, but weave has 2, for instance)

k8s/kubectlget.md

102/459

Scoping another namespace

  • We can also look at a different namespace (other than default)
  • List only the pods in the kube-system namespace:
    kubectl get pods --namespace=kube-system
    kubectl get pods -n kube-system

k8s/kubectlget.md

103/459

Namespaces and other kubectl commands

  • We can use -n/--namespace with almost every kubectl command

  • Example:

    • kubectl create --namespace=X to create something in namespace X
  • We can use -A/--all-namespaces with most commands that manipulate multiple objects

  • Examples:

    • kubectl delete can delete resources across multiple namespaces

    • kubectl label can add/remove/update labels across multiple namespaces

k8s/kubectlget.md

104/459

What about kube-public?

  • List the pods in the kube-public namespace:
    kubectl -n kube-public get pods

Nothing!

kube-public is created by kubeadm & used for security bootstrapping.

k8s/kubectlget.md

105/459

Exploring kube-public

  • The only interesting object in kube-public is a ConfigMap named cluster-info
  • List ConfigMap objects:

    kubectl -n kube-public get configmaps
  • Inspect cluster-info:

    kubectl -n kube-public get configmap cluster-info -o yaml

Note the selfLink URI: /api/v1/namespaces/kube-public/configmaps/cluster-info

We can use that!

k8s/kubectlget.md

106/459

Accessing cluster-info

  • Earlier, when trying to access the API server, we got a Forbidden message

  • But cluster-info is readable by everyone (even without authentication)

  • Retrieve cluster-info:
    curl -k https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info
  • We were able to access cluster-info (without auth)

  • It contains a kubeconfig file

k8s/kubectlget.md

107/459

Retrieving kubeconfig

  • We can easily extract the kubeconfig file from this ConfigMap
  • Display the content of kubeconfig:
    curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \
    | jq -r .data.kubeconfig
  • This file holds the canonical address of the API server, and the public key of the CA

  • This file does not hold client keys or tokens

  • This is not sensitive information, but allows us to establish trust

k8s/kubectlget.md

108/459

What about kube-node-lease?

  • Starting with Kubernetes 1.14, there is a kube-node-lease namespace

    (or in Kubernetes 1.13 if the NodeLease feature gate is enabled)

  • That namespace contains one Lease object per node

  • Node leases are a new way to implement node heartbeats

    (i.e. nodes regularly pinging the control plane to say "I'm alive!")

  • For more details, see KEP-0009 or the node controller documentation

k8s/kubectlget.md

109/459

Services

  • A service is a stable endpoint to connect to "something"

    (In the initial proposal, they were called "portals")

  • List the services on our cluster with one of these commands:
    kubectl get services
    kubectl get svc
110/459

Services

  • A service is a stable endpoint to connect to "something"

    (In the initial proposal, they were called "portals")

  • List the services on our cluster with one of these commands:
    kubectl get services
    kubectl get svc

There is already one service on our cluster: the Kubernetes API itself.

k8s/kubectlget.md

111/459

ClusterIP services

  • A ClusterIP service is internal, available from the cluster only

  • This is useful for introspection from within containers

  • Try to connect to the API:

    curl -k https://10.96.0.1
    • -k is used to skip certificate verification

    • Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by kubectl get svc

The command above should either time out, or show an authentication error. Why?

k8s/kubectlget.md

112/459

Time out

  • Connections to ClusterIP services only work from within the cluster

  • If we are outside the cluster, the curl command will probably time out

    (Because the IP address, e.g. 10.96.0.1, isn't routed properly outside the cluster)

  • This is the case with most "real" Kubernetes clusters

  • To try the connection from within the cluster, we can use shpod

k8s/kubectlget.md

113/459

Authentication error

This is what we should see when connecting from within the cluster:

$ curl -k https://10.96.0.1
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}

k8s/kubectlget.md

114/459

Explanations

  • We can see kind, apiVersion, metadata

  • These are typical of a Kubernetes API reply

  • Because we are talking to the Kubernetes API

  • The Kubernetes API tells us "Forbidden"

    (because it requires authentication)

  • The Kubernetes API is reachable from within the cluster

    (many apps integrating with Kubernetes will use this)

k8s/kubectlget.md

115/459

DNS integration

  • Each service also gets a DNS record

  • The Kubernetes DNS resolver is available from within pods

    (and sometimes, from within nodes, depending on configuration)

  • Code running in pods can connect to services using their name

    (e.g. https://kubernetes/...)
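
A hedged way to see this in action (dnstest is a made-up, throwaway pod name):

    kubectl run dnstest --image=alpine --restart=Never --rm -it -- nslookup kubernetes.default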

k8s/kubectlget.md

116/459

Image separating from the next chapter

117/459

Running our first containers on Kubernetes

(automatically generated title slide)

118/459

Running our first containers on Kubernetes

  • First things first: we cannot run a container
119/459

Running our first containers on Kubernetes

  • First things first: we cannot run a container

  • We are going to run a pod, and in that pod there will be a single container

120/459

Running our first containers on Kubernetes

  • First things first: we cannot run a container

  • We are going to run a pod, and in that pod there will be a single container

  • In that container in the pod, we are going to run a simple ping command

  • Then we are going to start additional copies of the pod

k8s/kubectlrun.md

121/459

Starting a simple pod with kubectl run

  • We need to specify at least a name and the image we want to use
  • Let's ping 1.1.1.1, Cloudflare's public DNS resolver:
    kubectl run pingpong --image alpine ping 1.1.1.1
122/459

Starting a simple pod with kubectl run

  • We need to specify at least a name and the image we want to use
  • Let's ping 1.1.1.1, Cloudflare's public DNS resolver:
    kubectl run pingpong --image alpine ping 1.1.1.1

(Starting with Kubernetes 1.12, we get a message telling us that kubectl run is deprecated. Let's ignore it for now.)

k8s/kubectlrun.md

123/459

Behind the scenes of kubectl run

  • Let's look at the resources that were created by kubectl run
  • List most resource types:
    kubectl get all
124/459

Behind the scenes of kubectl run

  • Let's look at the resources that were created by kubectl run
  • List most resource types:
    kubectl get all

We should see the following things:

  • deployment.apps/pingpong (the deployment that we just created)
  • replicaset.apps/pingpong-xxxxxxxxxx (a replica set created by the deployment)
  • pod/pingpong-xxxxxxxxxx-yyyyy (a pod created by the replica set)

Note: as of 1.10.1, resource types are displayed in more detail.

k8s/kubectlrun.md

125/459

What are these different things?

  • A deployment is a high-level construct

    • allows scaling, rolling updates, rollbacks

    • multiple deployments can be used together to implement a canary deployment

    • delegates pods management to replica sets

  • A replica set is a low-level construct

    • makes sure that a given number of identical pods are running

    • allows scaling

    • rarely used directly

  • A replication controller is the (deprecated) predecessor of a replica set

k8s/kubectlrun.md

126/459

Our pingpong deployment

  • kubectl run created a deployment, deployment.apps/pingpong
    NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/pingpong   1         1         1            1           10m
  • That deployment created a replica set, replicaset.apps/pingpong-xxxxxxxxxx
    NAME                                  DESIRED   CURRENT   READY   AGE
    replicaset.apps/pingpong-7c8bbcd9bc   1         1         1       10m
  • That replica set created a pod, pod/pingpong-xxxxxxxxxx-yyyyy
    NAME                            READY   STATUS    RESTARTS   AGE
    pod/pingpong-7c8bbcd9bc-6c9qz   1/1     Running   0          10m
  • We'll see later how these folks play together for:

    • scaling, high availability, rolling updates

k8s/kubectlrun.md

127/459

Viewing container output

  • Let's use the kubectl logs command

  • We will pass either a pod name, or a type/name

    (E.g. if we specify a deployment or replica set, it will get the first pod in it)

  • Unless specified otherwise, it will only show logs of the first container in the pod

    (Good thing there's only one in ours!)

  • View the result of our ping command:
    kubectl logs deploy/pingpong

k8s/kubectlrun.md

128/459

Streaming logs in real time

  • Just like docker logs, kubectl logs supports convenient options:

    • -f/--follow to stream logs in real time (à la tail -f)

    • --tail to indicate how many lines you want to see (from the end)

    • --since to get logs only after a given timestamp

  • View the latest logs of our ping command:

    kubectl logs deploy/pingpong --tail 1 --follow
  • Leave that command running, so that we can keep an eye on these logs

k8s/kubectlrun.md

129/459

Scaling our application

  • We can create additional copies of our container (I mean, our pod) with kubectl scale
  • Scale our pingpong deployment:

    kubectl scale deploy/pingpong --replicas 3
  • Note that this command does exactly the same thing:

    kubectl scale deployment pingpong --replicas 3

Note: what if we tried to scale replicaset.apps/pingpong-xxxxxxxxxx?

We could! But the deployment would notice it right away, and scale back to the initial level.

k8s/kubectlrun.md

130/459

Log streaming

  • Let's look again at the output of kubectl logs

    (the one we started before scaling up)

  • kubectl logs shows us one line per second

  • We could expect 3 lines per second

    (since we should now have 3 pods running ping)

  • Let's try to figure out what's happening!

k8s/kubectlrun.md

131/459

Streaming logs of multiple pods

  • What happens if we restart kubectl logs?
  • Interrupt kubectl logs (with Ctrl-C)

  • Restart it:

    kubectl logs deploy/pingpong --tail 1 --follow

kubectl logs will warn us that multiple pods were found, and that it's showing us only one of them.

Let's leave kubectl logs running while we keep exploring.

k8s/kubectlrun.md

132/459

Resilience

  • The deployment pingpong watches its replica set

  • The replica set ensures that the right number of pods are running

  • What happens if pods disappear?

  • In a separate window, watch the list of pods:

    watch kubectl get pods
  • Destroy the pod currently shown by kubectl logs:

    kubectl delete pod pingpong-xxxxxxxxxx-yyyyy

k8s/kubectlrun.md

133/459

What happened?

  • kubectl delete pod terminates the pod gracefully

    (sending it the TERM signal and waiting for it to shut down)

  • As soon as the pod is in "Terminating" state, the Replica Set replaces it

  • But we can still see the output of the "Terminating" pod in kubectl logs

  • Until 30 seconds later, when the grace period expires

  • The pod is then killed, and kubectl logs exits

k8s/kubectlrun.md

134/459

What if we wanted something different?

  • What if we wanted to start a "one-shot" container that doesn't get restarted?

  • We could use kubectl run --restart=OnFailure or kubectl run --restart=Never

  • These commands would create jobs or pods instead of deployments

  • Under the hood, kubectl run invokes "generators" to create resource descriptions

  • We could also write these resource descriptions ourselves (typically in YAML),
    and create them on the cluster with kubectl apply -f (discussed later)

  • With kubectl run --schedule=..., we can also create cronjobs

k8s/kubectlrun.md

135/459

Scheduling periodic background work

  • A Cron Job is a job that will be executed at specific intervals

    (the name comes from the traditional cronjobs executed by the UNIX crond)

  • It requires a schedule, represented as five space-separated fields:

    • minute [0,59]
    • hour [0,23]
    • day of the month [1,31]
    • month of the year [1,12]
    • day of the week ([0,6] with 0=Sunday)
  • * means "all valid values"; /N means "every N"

  • Example: */3 * * * * means "every three minutes"

k8s/kubectlrun.md

136/459

Creating a Cron Job

  • Let's create a simple job to be executed every three minutes

  • Cron Jobs need to terminate, otherwise they'd run forever

  • Create the Cron Job:

    kubectl run --schedule="*/3 * * * *" --restart=OnFailure --image=alpine sleep 10
  • Check the resource that was created:

    kubectl get cronjobs
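
If you prefer a more explicit form, a roughly equivalent command uses kubectl create cronjob, available from Kubernetes 1.14 as discussed a couple of slides later (sleepy is a made-up name):

    kubectl create cronjob sleepy --schedule="*/3 * * * *" --image=alpine -- sleep 10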

k8s/kubectlrun.md

137/459

Cron Jobs in action

  • At the specified schedule, the Cron Job will create a Job

  • The Job will create a Pod

  • The Job will make sure that the Pod completes

    (re-creating another one if it fails, for instance if its node fails)

  • Check the Jobs that are created:
    kubectl get jobs

(It will take a few minutes before the first job is scheduled.)

k8s/kubectlrun.md

138/459

What about that deprecation warning?

  • As we can see from the previous slide, kubectl run can do many things

  • The exact type of resource created is not obvious

  • To make things more explicit, it is better to use kubectl create:

    • kubectl create deployment to create a deployment

    • kubectl create job to create a job

    • kubectl create cronjob to run a job periodically
      (since Kubernetes 1.14)

  • Eventually, kubectl run will be used only to start one-shot pods

    (see https://github.com/kubernetes/kubernetes/pull/68132)

k8s/kubectlrun.md

139/459

Various ways of creating resources

  • kubectl run

    • easy way to get started
    • versatile
  • kubectl create <resource>

    • explicit, but lacks some features
    • can't create a CronJob before Kubernetes 1.14
    • can't pass command-line arguments to deployments
  • kubectl create -f foo.yaml or kubectl apply -f foo.yaml

    • all features are available
    • requires writing YAML
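
One common way to ease into the YAML route is to let kubectl generate the manifest for you; this is a hedged sketch (web and web.yaml are made-up names):

    kubectl create deployment web --image=nginx --dry-run -o yaml > web.yaml
    # tweak web.yaml (replicas, labels, resources...), then create or update it declaratively:
    kubectl apply -f web.yaml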

k8s/kubectlrun.md

140/459

Viewing logs of multiple pods

  • When we specify a deployment name, only one single pod's logs are shown

  • We can view the logs of multiple pods by specifying a selector

  • A selector is a logic expression using labels

  • Conveniently, when you kubectl run somename, the associated objects have a run=somename label

  • View the last line of log from all pods with the run=pingpong label:
    kubectl logs -l run=pingpong --tail 1

k8s/kubectlrun.md

141/459

Streaming logs of multiple pods

  • Can we stream the logs of all our pingpong pods?
  • Combine -l and -f flags:
    kubectl logs -l run=pingpong --tail 1 -f

Note: combining -l and -f is only possible since Kubernetes 1.14!

Let's try to understand why ...

k8s/kubectlrun.md

142/459

Streaming logs of many pods

  • Let's see what happens if we try to stream the logs for more than 5 pods
  • Scale up our deployment:

    kubectl scale deployment pingpong --replicas=8
  • Stream the logs:

    kubectl logs -l run=pingpong --tail 1 -f

We see a message like the following one:

error: you are attempting to follow 8 log streams,
but maximum allowed concurency is 5,
use --max-log-requests to increase the limit

k8s/kubectlrun.md

143/459

Why can't we stream the logs of many pods?

  • kubectl opens one connection to the API server per pod

  • For each pod, the API server opens one extra connection to the corresponding kubelet

  • If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server

  • This could easily put a lot of stress on the API server

  • Prior to Kubernetes 1.14, it was decided not to allow multiple connections

  • From Kubernetes 1.14, it is allowed, but limited to 5 connections

    (this can be changed with --max-log-requests)

  • For more details about the rationale, see PR #67573
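
For instance, raising the limit for our 8 pingpong pods could look like this:

    kubectl logs -l run=pingpong --tail 1 --follow --max-log-requests 10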

k8s/kubectlrun.md

144/459

Shortcomings of kubectl logs

  • We don't see which pod sent which log line

  • If pods are restarted / replaced, the log stream stops

  • If new pods are added, we don't see their logs

  • To stream the logs of multiple pods, we need to write a selector

  • There are external tools to address these shortcomings

    (e.g.: Stern)

k8s/kubectlrun.md

145/459

kubectl logs -l ... --tail N

  • If we run this with Kubernetes 1.12, the last command shows multiple lines

  • This is a regression when --tail is used together with -l/--selector

  • It always shows the last 10 lines of output for each container

    (instead of the number of lines specified on the command line)

  • The problem was fixed in Kubernetes 1.13

See #70554 for details.

k8s/kubectlrun.md

146/459

Aren't we flooding 1.1.1.1?

  • If you're wondering this, good question!

  • Don't worry, though:

    APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.

    (Source: https://blog.cloudflare.com/announcing-1111/)

  • It's very unlikely that our concerted pings manage to produce even a modest blip at Cloudflare's NOC!

k8s/kubectlrun.md

147/459

Image separating from the next chapter

148/459

Accessing logs from the CLI

(automatically generated title slide)

149/459

Accessing logs from the CLI

  • The kubectl logs command has limitations:

    • it cannot stream logs from multiple pods at a time

    • when showing logs from multiple pods, it mixes them all together

  • We are going to see how to do it better

k8s/logs-cli.md

150/459

Doing it manually

  • We could (if we were so inclined) write a program or script that would:

    • take a selector as an argument

    • enumerate all pods matching that selector (with kubectl get -l ...)

    • fork one kubectl logs --follow ... command per container

    • annotate the logs (the output of each kubectl logs ... process) with their origin

    • preserve ordering by using kubectl logs --timestamps ... and merge the output

151/459

Doing it manually

  • We could (if we were so inclined) write a program or script that would:

    • take a selector as an argument

    • enumerate all pods matching that selector (with kubectl get -l ...)

    • fork one kubectl logs --follow ... command per container

    • annotate the logs (the output of each kubectl logs ... process) with their origin

    • preserve ordering by using kubectl logs --timestamps ... and merge the output

  • We could do it, but thankfully, others did it for us already!

k8s/logs-cli.md

152/459

Stern

Stern is an open source project by Wercker.

From the README:

Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.

The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.

Exactly what we need!

k8s/logs-cli.md

153/459

Installing Stern

  • Run stern (without arguments) to check if it's installed:

    $ stern
    Tail multiple pods and containers from Kubernetes
    Usage:
    stern pod-query [flags]
  • If it is not installed, the easiest method is to download a binary release

  • The following commands will install Stern on a Linux Intel 64-bit machine:

    sudo curl -L -o /usr/local/bin/stern \
    https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64
    sudo chmod +x /usr/local/bin/stern
  • On OS X, just brew install stern

k8s/logs-cli.md

154/459

Using Stern

  • There are two ways to specify the pods whose logs we want to see:

    • -l followed by a selector expression (like with many kubectl commands)

    • with a "pod query," i.e. a regex used to match pod names

  • These two ways can be combined if necessary

  • View the logs for all the rng containers:
    stern rng

k8s/logs-cli.md

155/459

Stern convenient options

  • The --tail N flag shows the last N lines for each container

    (Instead of showing the logs since the creation of the container)

  • The -t / --timestamps flag shows timestamps

  • The --all-namespaces flag is self-explanatory

  • View what's up with the weave system containers:
    stern --tail 1 --timestamps --all-namespaces weave

k8s/logs-cli.md

156/459

Using Stern with a selector

  • When specifying a selector, we can omit the value for a label

  • This will match all objects having that label (regardless of the value)

  • Everything created with kubectl run has a label run

  • We can use that property to view the logs of all the pods created with kubectl run

  • Similarly, everything created with kubectl create deployment has a label app

  • View the logs for all the things started with kubectl create deployment:
    stern -l app

k8s/logs-cli.md

157/459

Image separating from the next chapter

158/459

vRLI

(automatically generated title slide)

159/459

vRLI

Centralize logs

  • Compatible with syslog

  • Query language

  • Dashboards

  • High ingest capacity

vmware/vrli.md

160/459

Image separating from the next chapter

161/459

Declarative vs imperative

(automatically generated title slide)

162/459

Declarative vs imperative

  • Our container orchestrator puts a very strong emphasis on being declarative

  • Declarative:

    I would like a cup of tea.

  • Imperative:

    Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.

163/459

Declarative vs imperative

  • Our container orchestrator puts a very strong emphasis on being declarative

  • Declarative:

    I would like a cup of tea.

  • Imperative:

    Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.

  • Declarative seems simpler at first ...

164/459

Declarative vs imperative

  • Our container orchestrator puts a very strong emphasis on being declarative

  • Declarative:

    I would like a cup of tea.

  • Imperative:

    Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.

  • Declarative seems simpler at first ...

  • ... As long as you know how to brew tea

shared/declarative.md

165/459

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

166/459

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

167/459

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

    ²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.

168/459

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

    ²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.

    ³Ah, finally, containers! Something we know about. Let's get to work, shall we?

169/459

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

    ²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.

    ³Ah, finally, containers! Something we know about. Let's get to work, shall we?

Did you know there was an ISO standard specifying how to brew tea?

shared/declarative.md

170/459

Declarative vs imperative

  • Imperative systems:

    • simpler

    • if a task is interrupted, we have to restart from scratch

  • Declarative systems:

    • if a task is interrupted (or if we show up to the party half-way through), we can figure out what's missing and do only what's necessary

    • we need to be able to observe the system

    • ... and compute a "diff" between what we have and what we want

shared/declarative.md

171/459

Declarative vs imperative in Kubernetes

  • With Kubernetes, we cannot say: "run this container"

  • All we can do is write a spec and push it to the API server

    (by creating a resource like e.g. a Pod or a Deployment)

  • The API server will validate that spec (and reject it if it's invalid)

  • Then it will store it in etcd

  • A controller will "notice" that spec and act upon it

k8s/declarative.md

172/459

Reconciling state

  • Watch for the spec fields in the YAML files later!

  • The spec describes how we want the thing to be

  • Kubernetes will reconcile the current state with the spec
    (technically, this is done by a number of controllers)

  • When we want to change some resource, we update the spec

  • Kubernetes will then converge that resource
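
A small, hedged illustration of that reconciliation, reusing our pingpong deployment:

    kubectl scale deployment pingpong --replicas=5                                       # update the spec
    kubectl get deployment pingpong -o jsonpath='{.spec.replicas} {.status.replicas}{"\n"}'
    watch kubectl get pods -l run=pingpong                                               # watch the status converge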

k8s/declarative.md

173/459

19,000 words

They say, "a picture is worth one thousand words."

The following 19 slides show what really happens when we run:

kubectl run web --image=nginx --replicas=3

k8s/deploymentslideshow.md

174/459

Image separating from the next chapter

194/459

Kubernetes network model

(automatically generated title slide)

195/459

Kubernetes network model

  • TL;DR:

    Our cluster (nodes and pods) is one big flat IP network.

196/459

Kubernetes network model

  • TL;DR:

    Our cluster (nodes and pods) is one big flat IP network.

  • In detail:

    • all nodes must be able to reach each other, without NAT

    • all pods must be able to reach each other, without NAT

    • pods and nodes must be able to reach each other, without NAT

    • each pod is aware of its IP address (no NAT)

    • pod IP addresses are assigned by the network implementation

  • Kubernetes doesn't mandate any particular implementation
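
A quick way to see that flat address space on your own cluster:

    kubectl get pods -o wide --all-namespaces    # the IP column shows each pod's address
    kubectl get nodes -o wide                    # INTERNAL-IP shows each node's address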

k8s/kubenet.md

197/459

Kubernetes network model: the good

  • Everything can reach everything

  • No address translation

  • No port translation

  • No new protocol

  • The network implementation can decide how to allocate addresses

  • IP addresses don't have to be "portable" from one node to another

    (We can use e.g. a subnet per node and use a simple routed topology)

  • The specification is simple enough to allow many different implementations

k8s/kubenet.md

198/459

Kubernetes network model: the less good

  • Everything can reach everything

    • if you want security, you need to add network policies

    • the network implementation that you use needs to support them

  • There are literally dozens of implementations out there

    (15 are listed in the Kubernetes documentation)

  • Pods have layer 3 (IP) connectivity, but services are layer 4 (TCP or UDP)

    (Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets)

  • kube-proxy is on the data path when connecting to a pod or container,
    and it's not particularly fast (relies on userland proxying or iptables)

k8s/kubenet.md

199/459

Kubernetes network model: in practice

  • The nodes that we are using have been set up to use Weave

  • We don't endorse Weave in a particular way, it just Works For Us

  • Don't worry about the warning about kube-proxy performance

  • Unless you:

    • routinely saturate 10G network interfaces
    • count packet rates in millions per second
    • run high-traffic VOIP or gaming platforms
    • do weird things that involve millions of simultaneous connections
      (in which case you're already familiar with kernel tuning)
  • If necessary, there are alternatives to kube-proxy; e.g. kube-router

k8s/kubenet.md

200/459

The Container Network Interface (CNI)

  • Most Kubernetes clusters use CNI "plugins" to implement networking

  • When a pod is created, Kubernetes delegates the network setup to these plugins

    (it can be a single plugin, or a combination of plugins, each doing one task)

  • Typically, CNI plugins will:

    • allocate an IP address (by calling an IPAM plugin)

    • add a network interface into the pod's network namespace

    • configure the interface as well as required routes etc.
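  • If you're curious, on nodes set up with kubeadm and a CNI plugin, the plugin configuration typically lives under /etc/cni/net.d/ (this is informational; exact file names and contents depend on the plugin in use):

    ls /etc/cni/net.d/
    cat /etc/cni/net.d/*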

k8s/kubenet.md

201/459

Multiple moving parts

  • The "pod-to-pod network" or "pod network":

    • provides communication between pods and nodes

    • is generally implemented with CNI plugins

  • The "pod-to-service network":

    • provides internal communication and load balancing

    • is generally implemented with kube-proxy (or e.g. kube-router)

  • Network policies:

    • provide firewalling and isolation

    • can be bundled with the "pod network" or provided by another component

k8s/kubenet.md

202/459

Even more moving parts

  • Inbound traffic can be handled by multiple components:

    • something like kube-proxy or kube-router (for NodePort services)

    • load balancers (ideally, connected to the pod network)

  • It is possible to use multiple pod networks in parallel

    (with "meta-plugins" like CNI-Genie or Multus)

  • Some solutions can fill multiple roles

    (e.g. kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy)

k8s/kubenet.md

203/459

Image separating from the next chapter

204/459

Exposing containers

(automatically generated title slide)

205/459

Exposing containers

  • kubectl expose creates a service for existing pods

  • A service is a stable address for a pod (or a bunch of pods)

  • If we want to connect to our pod(s), we need to create a service

  • Once a service is created, CoreDNS will allow us to resolve it by name

    (i.e. after creating service hello, the name hello will resolve to something)

  • There are different types of services, detailed on the following slides:

    ClusterIP, NodePort, LoadBalancer, ExternalName

  • HTTP services can also use Ingress resources (more on that later)

k8s/kubectlexpose.md

206/459

ClusterIP

  • It's the default service type

  • A virtual IP address is allocated for the service

    (in an internal, private range; e.g. 10.96.0.0/12)

  • This IP address is reachable only from within the cluster (nodes and pods)

  • Our code can connect to the service using the original port number

  • Perfect for internal communication, within the cluster
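  • For illustration, here is a rough sketch of the kind of manifest that kubectl expose generates for a ClusterIP service (the httpenv name, app=httpenv selector, and port 8888 are the values used a few slides later):

    apiVersion: v1
    kind: Service
    metadata:
      name: httpenv
    spec:
      type: ClusterIP
      selector:
        app: httpenv
      ports:
      - port: 8888
        targetPort: 8888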

k8s/kubectlexpose.md

207/459

LoadBalancer

  • An external load balancer is allocated for the service

    (typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...)

  • This is available only when the underlying infrastructure provides some kind of "load balancer as a service"

  • Each service of that type will typically cost a little bit of money

    (e.g. a few cents per hour on AWS or GCE)

  • Ideally, traffic would flow directly from the load balancer to the pods

  • In practice, it will often flow through a NodePort first
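  • As a hedged example (this only works on clusters whose infrastructure provides load balancers), exposing a deployment that way could look like:

    kubectl expose deployment httpenv --port 8888 --type=LoadBalancer
    kubectl get svc httpenv
    (then wait for the EXTERNAL-IP column to be populated)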

k8s/kubectlexpose.md

208/459

NodePort

  • A port number is allocated for the service

    (by default, in the 30000-32768 range)

  • That port is made available on all our nodes and anybody can connect to it

    (we can connect to any node on that port to reach the service)

  • Our code needs to be changed to connect to that new port number

  • Under the hood: kube-proxy sets up a bunch of iptables rules on our nodes

  • Sometimes, it's the only available option for external traffic

    (e.g. most clusters deployed with kubeadm or on-premises)
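  • For instance, a sketch using the httpenv deployment introduced a few slides later (3XXXX is a placeholder for whatever port gets allocated):

    kubectl expose deployment httpenv --port 8888 --type=NodePort
    kubectl get svc httpenv
    curl http://node1:3XXXX/
    (replace 3XXXX with the port shown in the PORT(S) column)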

k8s/kubectlexpose.md

209/459

ExternalName

  • No load balancer (internal or external) is created

  • Only a DNS entry gets added to the DNS managed by Kubernetes

  • That DNS entry will just be a CNAME to a provided record

Example:

kubectl create service externalname k8s --external-name kubernetes.io

Creates a CNAME k8s pointing to kubernetes.io

k8s/kubectlexpose.md

210/459

Running containers with open ports

  • Since ping doesn't have anything to connect to, we'll have to run something else

  • We could use the nginx official image, but ...

    ... we wouldn't be able to tell the backends from each other!

  • We are going to use jpetazzo/httpenv, a tiny HTTP server written in Go

  • jpetazzo/httpenv listens on port 8888

  • It serves its environment variables in JSON format

  • The environment variables will include HOSTNAME, which will be the pod name

    (and therefore, will be different on each backend)
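  • Optionally, if you want to check the image locally first (assuming Docker is available on your node), you could run:

    docker run --rm -d -p 8888:8888 --name httpenv jpetazzo/httpenv
    curl localhost:8888
    docker rm -f httpenv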

k8s/kubectlexpose.md

211/459

Creating a deployment for our HTTP server

  • We could do kubectl run httpenv --image=jpetazzo/httpenv ...

  • But since kubectl run is being deprecated, let's see how to use kubectl create instead

  • In another window, watch the pods (to see when they are created):
    kubectl get pods -w
  • Create a deployment for this very lightweight HTTP server:

    kubectl create deployment httpenv --image=jpetazzo/httpenv
  • Scale it to 10 replicas:

    kubectl scale deployment httpenv --replicas=10

k8s/kubectlexpose.md

212/459

Exposing our deployment

  • We'll create a default ClusterIP service
  • Expose the HTTP port of our server:

    kubectl expose deployment httpenv --port 8888
  • Look up which IP address was allocated:

    kubectl get service

k8s/kubectlexpose.md

213/459

Services are layer 4 constructs

  • You can assign IP addresses to services, but they are still layer 4

    (i.e. a service is not an IP address; it's an IP address + protocol + port)

  • This is caused by the current implementation of kube-proxy

    (it relies on mechanisms that don't support layer 3)

  • As a result: you have to indicate the port number for your service

  • Running services with arbitrary port (or port ranges) requires hacks

    (e.g. host networking mode)

k8s/kubectlexpose.md

214/459

Testing our service

  • We will now send a few HTTP requests to our pods
  • Let's obtain the IP address that was allocated for our service, programmatically:
    IP=$(kubectl get svc httpenv -o go-template --template '{{ .spec.clusterIP }}')
  • Send a few requests:

    curl http://$IP:8888/
  • Too much output? Filter it with jq:

    curl -s http://$IP:8888/ | jq .HOSTNAME
215/459

Testing our service

  • We will now send a few HTTP requests to our pods
  • Let's obtain the IP address that was allocated for our service, programmatically:
    IP=$(kubectl get svc httpenv -o go-template --template '{{ .spec.clusterIP }}')
  • Send a few requests:

    curl http://$IP:8888/
  • Too much output? Filter it with jq:

    curl -s http://$IP:8888/ | jq .HOSTNAME

Try it a few times! Our requests are load balanced across multiple pods.

k8s/kubectlexpose.md

216/459

If we don't need a load balancer

  • Sometimes, we want to access our scaled services directly:

    • if we want to save a tiny little bit of latency (typically less than 1ms)

    • if we need to connect over arbitrary ports (instead of a few fixed ones)

    • if we need to communicate over another protocol than UDP or TCP

    • if we want to decide how to balance the requests client-side

    • ...

  • In that case, we can use a "headless service"

k8s/kubectlexpose.md

217/459

Headless services

  • A headless service is obtained by setting the clusterIP field to None

    (Either with --cluster-ip=None, or by providing a custom YAML)

  • As a result, the service doesn't have a virtual IP address

  • Since there is no virtual IP address, there is no load balancer either

  • CoreDNS will return the pods' IP addresses as multiple A records

  • This gives us an easy way to discover all the replicas for a deployment
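  • A minimal sketch of a headless service for the httpenv pods (assuming the app=httpenv label used in this chapter):

    apiVersion: v1
    kind: Service
    metadata:
      name: httpenv-headless
    spec:
      clusterIP: None
      selector:
        app: httpenv
      ports:
      - port: 8888

  • Resolving httpenv-headless from any pod would then return one A record per replica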

k8s/kubectlexpose.md

218/459

Services and endpoints

  • A service has a number of "endpoints"

  • Each endpoint is a host + port where the service is available

  • The endpoints are maintained and updated automatically by Kubernetes

  • Check the endpoints that Kubernetes has associated with our httpenv service:
    kubectl describe service httpenv

In the output, there will be a line starting with Endpoints:.

That line will list a bunch of addresses in host:port format.

k8s/kubectlexpose.md

219/459

Viewing endpoint details

  • When we have many endpoints, our display commands truncate the list

    kubectl get endpoints
  • If we want to see the full list, we can use one of the following commands:

    kubectl describe endpoints httpenv
    kubectl get endpoints httpenv -o yaml
  • These commands will show us a list of IP addresses

  • These IP addresses should match the addresses of the corresponding pods:

    kubectl get pods -l app=httpenv -o wide

k8s/kubectlexpose.md

220/459

endpoints not endpoint

  • endpoints is the only resource type whose name cannot be used in the singular
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
  • This is because the type itself is plural (unlike every other resource)

  • There is no endpoint object: type Endpoints struct

  • The type doesn't represent a single endpoint, but a list of endpoints

k8s/kubectlexpose.md

221/459

ExternalIP

  • When creating a service, we can also specify an ExternalIP

    (this is not a type, but an extra attribute to the service)

  • It will make the service available on this IP address

    (if the IP address belongs to a node of the cluster)
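  • A minimal sketch of what that looks like in the service spec (10.10.0.2 is a hypothetical node address):

    spec:
      externalIPs:
      - 10.10.0.2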

k8s/kubectlexpose.md

222/459

Ingress

  • Ingresses are another type (kind) of resource

  • They are specifically for HTTP services

    (not TCP or UDP)

  • They can also handle TLS certificates, URL rewriting ...

  • They require an Ingress Controller to function
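  • A hedged sketch of an Ingress resource (this assumes an Ingress Controller is running and a webui service exists; the exact API version and fields vary across Kubernetes versions):

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: webui
    spec:
      rules:
      - host: webui.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: webui
              servicePort: 80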

k8s/kubectlexpose.md

223/459

Image separating from the next chapter

224/459

NSX-T

(automatically generated title slide)

225/459

NSX-T

Connect and secure Kubernetes Pods

  • Distributed firewall and micro-segmentation for VMs and Pods

  • Ingress and LoadBalancer Controller for Kubernetes

  • Traceflow for Pods and dynamic routing

vmware/nsxt.md

226/459

Image separating from the next chapter

227/459

Shipping images with a registry

(automatically generated title slide)

228/459

Shipping images with a registry

  • Initially, our app was running on a single node

  • We could build and run in the same place

  • Therefore, we did not need to ship anything

  • Now that we want to run on a cluster, things are different

  • The easiest way to ship container images is to use a registry

k8s/shippingimages.md

229/459

How Docker registries work (a reminder)

  • What happens when we execute docker run alpine ?

  • If the Engine needs to pull the alpine image, it expands it into library/alpine

  • library/alpine is expanded into index.docker.io/library/alpine

  • The Engine communicates with index.docker.io to retrieve library/alpine:latest

  • To use something other than index.docker.io, we specify it in the image name

  • Examples:

    docker pull gcr.io/google-containers/alpine-with-bash:1.0
    docker build -t registry.mycompany.io:5000/myimage:awesome .
    docker push registry.mycompany.io:5000/myimage:awesome

k8s/shippingimages.md

230/459

Running DockerCoins on Kubernetes

  • Create one deployment for each component

    (hasher, redis, rng, webui, worker)

  • Expose deployments that need to accept connections

    (hasher, redis, rng, webui)

  • For redis, we can use the official redis image

  • For the 4 others, we need to build images and push them to some registry

k8s/shippingimages.md

231/459

Building and shipping images

  • There are many options!

  • Manually:

    • build locally (with docker build or otherwise)

    • push to the registry

  • Automatically:

    • build and test locally

    • when ready, commit and push to a code repository

    • the code repository notifies an automated build system

    • that system gets the code, builds it, pushes the image to the registry

k8s/shippingimages.md

232/459

Which registry do we want to use?

  • There are SAAS products like Docker Hub, Quay ...

  • Each major cloud provider has an option as well

    (ACR on Azure, ECR on AWS, GCR on Google Cloud...)

  • There are also commercial products to run our own registry

    (Docker EE, Quay...)

  • And open source options, too!

  • When picking a registry, pay attention to its build system

    (when it has one)

k8s/shippingimages.md

233/459

Using images from the Docker Hub

  • For everyone's convenience, we took care of building DockerCoins images

  • We pushed these images to the DockerHub, under the dockercoins user

  • These images are tagged with a version number, v0.1

  • The full image names are therefore:

    • dockercoins/hasher:v0.1

    • dockercoins/rng:v0.1

    • dockercoins/webui:v0.1

    • dockercoins/worker:v0.1

k8s/buildshiprun-dockerhub.md

234/459

Image separating from the next chapter

235/459

Running our application on Kubernetes

(automatically generated title slide)

236/459

Running our application on Kubernetes

  • We can now deploy our code (as well as a redis instance)
  • Deploy redis:

    kubectl create deployment redis --image=redis
  • Deploy everything else:

    kubectl create deployment hasher --image=dockercoins/hasher:v0.1
    kubectl create deployment rng --image=dockercoins/rng:v0.1
    kubectl create deployment webui --image=dockercoins/webui:v0.1
    kubectl create deployment worker --image=dockercoins/worker:v0.1

k8s/ourapponkube.md

237/459

Deploying other images

  • If we wanted to deploy images from another registry ...

  • ... Or with a different tag ...

  • ... We could use the following snippet:

REGISTRY=dockercoins
TAG=v0.1
for SERVICE in hasher rng webui worker; do
  kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done

k8s/ourapponkube.md

238/459

Is this working?

  • After waiting for the deployment to complete, let's look at the logs!

    (Hint: use kubectl get deploy -w to watch deployment events)

  • Look at some logs:
    kubectl logs deploy/rng
    kubectl logs deploy/worker
239/459

Is this working?

  • After waiting for the deployment to complete, let's look at the logs!

    (Hint: use kubectl get deploy -w to watch deployment events)

  • Look at some logs:
    kubectl logs deploy/rng
    kubectl logs deploy/worker

🤔 rng is fine ... But not worker.

240/459

Is this working?

  • After waiting for the deployment to complete, let's look at the logs!

    (Hint: use kubectl get deploy -w to watch deployment events)

  • Look at some logs:
    kubectl logs deploy/rng
    kubectl logs deploy/worker

🤔 rng is fine ... But not worker.

💡 Oh right! We forgot to expose.

k8s/ourapponkube.md

241/459

Connecting containers together

  • Three deployments need to be reachable by others: hasher, redis, rng

  • worker doesn't need to be exposed

  • webui will be dealt with later

  • Expose each deployment, specifying the right port:
    kubectl expose deployment redis --port 6379
    kubectl expose deployment rng --port 80
    kubectl expose deployment hasher --port 80

k8s/ourapponkube.md

242/459

Is this working yet?

  • The worker has an infinite loop that retries 10 seconds after an error
  • Stream the worker's logs:

    kubectl logs deploy/worker --follow

    (Give it about 10 seconds to recover)

243/459

Is this working yet?

  • The worker has an infinite loop that retries 10 seconds after an error
  • Stream the worker's logs:

    kubectl logs deploy/worker --follow

    (Give it about 10 seconds to recover)

We should now see the worker, well, working happily.

k8s/ourapponkube.md

244/459

Exposing services for external access

  • Now we would like to access the Web UI

  • We will expose it with a NodePort

    (just like we did for the registry)

  • Create a NodePort service for the Web UI:

    kubectl expose deploy/webui --type=NodePort --port=80
  • Check the port that was allocated:

    kubectl get svc

On PKS, replace NodePort with LoadBalancer.

k8s/ourapponkube.md

245/459

Accessing the web UI

  • We can now connect to any node, on the allocated node port, to view the web UI
  • On PKS, you will have to use the EXTERNAL-IP shown on the webui line

    (and you can connect to port 80, yay!)

246/459

Accessing the web UI

  • We can now connect to any node, on the allocated node port, to view the web UI
  • On PKS, you will have to use the EXTERNAL-IP shown on the webui line

    (and you can connect to port 80, yay!)

Yes, this may take a little while to update. (Narrator: it was DNS.)

247/459

Accessing the web UI

  • We can now connect to any node, on the allocated node port, to view the web UI
  • On PKS, you will have to use the EXTERNAL-IP shown on the webui line

    (and you can connect to port 80, yay!)

Yes, this may take a little while to update. (Narrator: it was DNS.)

Alright, we're back to where we started, when we were running on a single node!

k8s/ourapponkube.md

248/459

Image separating from the next chapter

249/459

Deploying with YAML

(automatically generated title slide)

250/459

Deploying with YAML

  • So far, we created resources with the following commands:

    • kubectl run

    • kubectl create deployment

    • kubectl expose

  • We can also create resources directly with YAML manifests

k8s/yamldeploy.md

251/459

kubectl apply vs create

  • kubectl create -f whatever.yaml

    • creates resources if they don't exist

    • if resources already exist, don't alter them
      (and display error message)

  • kubectl apply -f whatever.yaml

    • creates resources if they don't exist

    • if resources already exist, update them
      (to match the definition provided by the YAML file)

    • stores the manifest as an annotation in the resource
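  • To see that annotation, a hedged example (mydeployment is a placeholder; the annotation is only present on resources managed with kubectl apply):

    kubectl get deployment mydeployment -o json |
      jq -r '.metadata.annotations."kubectl.kubernetes.io/last-applied-configuration"'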

k8s/yamldeploy.md

252/459

Creating multiple resources

  • The manifest can contain multiple resources separated by ---
kind: ...
apiVersion: ...
metadata:
  name: ...
  ...
---
kind: ...
apiVersion: ...
metadata:
  name: ...
  ...

k8s/yamldeploy.md

253/459

Creating multiple resources

  • The manifest can also contain a list of resources
apiVersion: v1
kind: List
items:
- kind: ...
  apiVersion: ...
  ...
- kind: ...
  apiVersion: ...
  ...

k8s/yamldeploy.md

254/459

Deploying dockercoins with YAML

  • We provide a YAML manifest with all the resources for Dockercoins

    (Deployments and Services)

  • We can use it if we need to deploy or redeploy Dockercoins

  • Deploy or redeploy Dockercoins:
    kubectl apply -f ~/container.training/k8s/dockercoins.yaml

(If we deployed Dockercoins earlier, we will see warning messages, because the resources that we created lack the necessary annotation. We can safely ignore them.)

k8s/yamldeploy.md

255/459

Image separating from the next chapter

256/459

Setting up Kubernetes

(automatically generated title slide)

257/459

Setting up Kubernetes

  • How did we set up these Kubernetes clusters that we're using?
258/459

Setting up Kubernetes

  • How did we set up these Kubernetes clusters that we're using?
  • We used kubeadm on freshly installed VM instances running Ubuntu LTS

    1. Install Docker

    2. Install Kubernetes packages

    3. Run kubeadm init on the first node (it deploys the control plane on that node)

    4. Set up Weave (the overlay network)
      (that step is just one kubectl apply command; discussed later)

    5. Run kubeadm join on the other nodes (with the token produced by kubeadm init)

    6. Copy the configuration file generated by kubeadm init

  • Check the prepare VMs README for more details
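  • As a rough sketch of those steps (exact commands, versions, and the Weave manifest URL may differ; the IP, token, and hash below are placeholders printed by kubeadm init):

    # on node1:
    sudo kubeadm init
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
    # on node2, node3, etc.:
    sudo kubeadm join <node1-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>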

k8s/setup-k8s.md

259/459

kubeadm drawbacks

  • Doesn't set up Docker or any other container engine

  • Doesn't set up the overlay network

  • Doesn't set up multi-master (no high availability)

260/459

kubeadm drawbacks

  • Doesn't set up Docker or any other container engine

  • Doesn't set up the overlay network

  • Doesn't set up multi-master (no high availability)

    (At least ... not yet! Though it's experimental in 1.12.)

261/459

kubeadm drawbacks

  • Doesn't set up Docker or any other container engine

  • Doesn't set up the overlay network

  • Doesn't set up multi-master (no high availability)

    (At least ... not yet! Though it's experimental in 1.12.)

  • "It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme

k8s/setup-k8s.md

262/459

Other deployment options

k8s/setup-k8s.md

263/459

Even more deployment options

  • If you like Ansible: kubespray

  • If you like Terraform: typhoon

  • If you like Terraform and Puppet: tarmak

  • You can also learn how to install every component manually, with the excellent tutorial Kubernetes The Hard Way

    Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.

  • There are also many commercial options available!

  • For a longer list, check the Kubernetes documentation:
    it has a great guide to pick the right solution to set up Kubernetes.

k8s/setup-k8s.md

264/459

Image separating from the next chapter

265/459

PKS

(automatically generated title slide)

266/459

PKS

Automate and streamline Kubernetes cluster deployment and operations

  • Fully automated installation of mainstream Kubernetes

  • Scale up, scale down & upgrade clusters

  • Highly-available control plane & self-healing features

    (replace nodes automatically when needed and deploy CVE patches)

  • Integration with VMware SDDC (Software Defined Data Center) features

    (e.g. vMotion, DRS, Shared Datastore, NSX-T, vREALIZE Suite)

vmware/pks.md

267/459

Image separating from the next chapter

268/459

Scaling our demo app

(automatically generated title slide)

269/459

Scaling our demo app

  • Our ultimate goal is to get more DockerCoins

    (i.e. increase the number of loops per second shown on the web UI)

  • Let's look at the architecture again:

    DockerCoins architecture

  • The loop is done in the worker; perhaps we could try adding more workers?

k8s/scalingdockercoins.md

270/459

Adding another worker

  • All we have to do is scale the worker Deployment
  • Open two new terminals to check what's going on with pods and deployments:
    kubectl get pods -w
    kubectl get deployments -w
  • Now, create more worker replicas:
    kubectl scale deployment worker --replicas=2

After a few seconds, the graph in the web UI should go up.

k8s/scalingdockercoins.md

271/459

Adding more workers

  • If 2 workers give us 2x speed, what about 3 workers?
  • Scale the worker Deployment further:
    kubectl scale deployment worker --replicas=3

The graph in the web UI should go up again.

(This is looking great! We're gonna be RICH!)

k8s/scalingdockercoins.md

272/459

Adding even more workers

  • Let's see if 10 workers give us 10x speed!
  • Scale the worker Deployment to a bigger number:
    kubectl scale deployment worker --replicas=10
273/459

Adding even more workers

  • Let's see if 10 workers give us 10x speed!
  • Scale the worker Deployment to a bigger number:
    kubectl scale deployment worker --replicas=10

The graph will peak at 10 hashes/second.

(We can add as many workers as we want: we will never go past 10 hashes/second.)

k8s/scalingdockercoins.md

274/459

Didn't we briefly exceed 10 hashes/second?

  • It may look like it, because the web UI shows instant speed

  • The instant speed can briefly exceed 10 hashes/second

  • The average speed cannot

  • The instant speed can be biased because of how it's computed

k8s/scalingdockercoins.md

275/459

Why instant speed is misleading

  • The instant speed is computed client-side by the web UI

  • The web UI checks the hash counter once per second
    (and does a classic (h2-h1)/(t2-t1) speed computation)

  • The counter is updated once per second by the workers

  • These timings are not exact
    (e.g. the web UI check interval is client-side JavaScript)

  • Sometimes, between two web UI counter measurements,
    the workers are able to update the counter twice

  • During that cycle, the instant speed will appear to be much bigger
    (but it will be compensated by lower instant speed before and after)

k8s/scalingdockercoins.md

276/459

Why are we stuck at 10 hashes per second?

  • If this was high-quality, production code, we would have instrumentation

    (Datadog, Honeycomb, New Relic, statsd, Sumologic, ...)

  • It's not!

  • Perhaps we could benchmark our web services?

    (with tools like ab, or even simpler, httping)

k8s/scalingdockercoins.md

277/459

Benchmarking our web services

  • We want to check hasher and rng

  • We are going to use httping

  • It's just like ping, but using HTTP GET requests

    (it measures how long it takes to perform one GET request)

  • It's used like this:

    httping [-c count] http://host:port/path
  • Or even simpler:

    httping ip.ad.dr.ess
  • We will use httping on the ClusterIP addresses of our services

k8s/scalingdockercoins.md

278/459

Obtaining ClusterIP addresses

  • We can simply check the output of kubectl get services

  • Or do it programmatically, as in the example below

  • Retrieve the IP addresses:
    HASHER=$(kubectl get svc hasher -o go-template={{.spec.clusterIP}})
    RNG=$(kubectl get svc rng -o go-template={{.spec.clusterIP}})

Now we can access the IP addresses of our services through $HASHER and $RNG.

k8s/scalingdockercoins.md

279/459

Checking hasher and rng response times

  • Check the response times for both services:
    httping -c 3 $HASHER
    httping -c 3 $RNG
  • hasher is fine (it should take a few milliseconds to reply)

  • rng is not (it should take about 700 milliseconds if there are 10 workers)

  • Something is wrong with rng, but ... what?

k8s/scalingdockercoins.md

280/459

Scaling rng

  • Let's scale the rng service just like we scaled worker
  • Scale rng:
    kubectl scale deploy rng --replicas=2

The web UI graph should go past 10 hashes/second.

kube-fullday.yml

281/459

Image separating from the next chapter

282/459

vROPS

(automatically generated title slide)

283/459

vROPS

Manage Kubernetes and/or PKS clusters

  • Automatically add new PKS clusters after deployment

  • Supervision

  • Capacity management

  • Global view of infrastructure

vmware/vrops.md

284/459

Image separating from the next chapter

285/459

Rolling updates

(automatically generated title slide)

286/459

Rolling updates

  • By default (without rolling updates), when a scaled resource is updated:

    • new pods are created

    • old pods are terminated

    • ... all at the same time

    • if something goes wrong, ¯\_(ツ)_/¯

k8s/rollout.md

287/459

Rolling updates

  • With rolling updates, when a Deployment is updated, it happens progressively

  • The Deployment controls multiple Replica Sets

  • Each Replica Set is a group of identical Pods

    (with the same image, arguments, parameters ...)

  • During the rolling update, we have at least two Replica Sets:

    • the "new" set (corresponding to the "target" version)

    • at least one "old" set

  • We can have multiple "old" sets

    (if we start another update before the first one is done)

k8s/rollout.md

288/459

Update strategy

  • Two parameters determine the pace of the rollout: maxUnavailable and maxSurge

  • They can be specified in absolute number of pods, or percentage of the replicas count

  • At any given time ...

    • there will always be at least replicas-maxUnavailable pods available

    • there will never be more than replicas+maxSurge pods in total

    • there will therefore be up to maxUnavailable+maxSurge pods being updated

  • We have the possibility of rolling back to the previous version
    (if the update fails or is unsatisfactory in any way)

k8s/rollout.md

289/459

Checking current rollout parameters

  • Recall how we build custom reports with kubectl and jq:
  • Show the rollout plan for our deployments:
    kubectl get deploy -o json |
    jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate"

k8s/rollout.md

290/459

Rolling updates in practice

  • As of Kubernetes 1.8, we can do rolling updates with:

    deployments, daemonsets, statefulsets

  • Editing one of these resources will automatically result in a rolling update

  • Rolling updates can be monitored with the kubectl rollout subcommand

k8s/rollout.md

291/459

Rolling out the new worker service

  • Let's monitor what's going on by opening a few terminals, and run:
    kubectl get pods -w
    kubectl get replicasets -w
    kubectl get deployments -w
  • Update worker either with kubectl edit, or by running:
    kubectl set image deploy worker worker=dockercoins/worker:v0.2
292/459

Rolling out the new worker service

  • Let's monitor what's going on by opening a few terminals, and run:
    kubectl get pods -w
    kubectl get replicasets -w
    kubectl get deployments -w
  • Update worker either with kubectl edit, or by running:
    kubectl set image deploy worker worker=dockercoins/worker:v0.2

That rollout should be pretty quick. What shows in the web UI?

k8s/rollout.md

293/459

Give it some time

  • At first, it looks like nothing is happening (the graph remains at the same level)

  • According to kubectl get deploy -w, the deployment was updated really quickly

  • But kubectl get pods -w tells a different story

  • The old pods are still here, and they stay in Terminating state for a while

  • Eventually, they are terminated; and then the graph decreases significantly

  • This delay is due to the fact that our worker doesn't handle signals

  • Kubernetes sends a "polite" shutdown request to the worker, which ignores it

  • After a grace period, Kubernetes gets impatient and kills the container

    (The grace period is 30 seconds, but can be changed if needed)
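  • If we wanted to shorten that grace period instead of fixing the worker, a minimal sketch would be to set terminationGracePeriodSeconds in the pod template (5 seconds here is an arbitrary example value):

    spec:
      template:
        spec:
          terminationGracePeriodSeconds: 5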

k8s/rollout.md

294/459

Rolling out something invalid

  • What happens if we make a mistake?
  • Update worker by specifying a non-existent image:

    kubectl set image deploy worker worker=dockercoins/worker:v0.3
  • Check what's going on:

    kubectl rollout status deploy worker
295/459

Rolling out something invalid

  • What happens if we make a mistake?
  • Update worker by specifying a non-existent image:

    kubectl set image deploy worker worker=dockercoins/worker:v0.3
  • Check what's going on:

    kubectl rollout status deploy worker

Our rollout is stuck. However, the app is not dead.

(After a minute, it will stabilize to be 20-25% slower.)

k8s/rollout.md

296/459

What's going on with our rollout?

  • Why is our app a bit slower?

  • Because MaxUnavailable=25%

    ... So the rollout terminated 2 replicas out of 10 available

  • Okay, but why do we see 5 new replicas being rolled out?

  • Because MaxSurge=25%

    ... So in addition to replacing 2 replicas, the rollout is also starting 3 more

  • It rounded down the number of MaxUnavailable pods conservatively,
    but the total number of pods being rolled out is allowed to be 25%+25%=50% of the replicas

k8s/rollout.md

297/459

The nitty-gritty details

  • We start with 10 pods running for the worker deployment

  • Current settings: MaxUnavailable=25% and MaxSurge=25%

  • When we start the rollout:

    • two replicas are taken down (as per MaxUnavailable=25%)
    • two others are created (with the new version) to replace them
    • three more are created (with the new version, per MaxSurge=25%)
  • Now we have 8 replicas up and running, and 5 being deployed

  • Our rollout is stuck at this point!

k8s/rollout.md

298/459

Checking the dashboard during the bad rollout

If you didn't deploy the Kubernetes dashboard earlier, just skip this slide.

  • Connect to the dashboard that we deployed earlier

  • Check that we have failures in Deployments, Pods, and Replica Sets

  • Can we see the reason for the failure?

k8s/rollout.md

299/459

Recovering from a bad rollout

  • We could push some v0.3 image

    (the pod retry logic will eventually catch it and the rollout will proceed)

  • Or we could invoke a manual rollback

  • Cancel the deployment and wait for the dust to settle:
    kubectl rollout undo deploy worker
    kubectl rollout status deploy worker

k8s/rollout.md

300/459

Rolling back to an older version

  • We reverted to v0.2

  • But this version still has a performance problem

  • How can we get back to the previous version?

k8s/rollout.md

301/459

Multiple "undos"

  • What happens if we try kubectl rollout undo again?
  • Try it:

    kubectl rollout undo deployment worker
  • Check the web UI, the list of pods ...

🤔 That didn't work.

k8s/rollout.md

302/459

Multiple "undos" don't work

  • If we see successive versions as a stack:

    • kubectl rollout undo doesn't "pop" the last element from the stack

    • it copies the N-1th element to the top

  • Multiple "undos" just swap back and forth between the last two versions!

  • Go back to v0.2 again:
    kubectl rollout undo deployment worker

k8s/rollout.md

303/459

In this specific scenario

  • Our version numbers are easy to guess

  • What if we had used git hashes?

  • What if we had changed other parameters in the Pod spec?

k8s/rollout.md

304/459

Listing versions

  • We can list successive versions of a Deployment with kubectl rollout history
  • Look at our successive versions:
    kubectl rollout history deployment worker

We don't see all revisions.

We might see something like 1, 4, 5.

(Depending on how many "undos" we did before.)

k8s/rollout.md

305/459

Explaining deployment revisions

  • These revisions correspond to our Replica Sets

  • This information is stored in the Replica Set annotations

  • Check the annotations for our replica sets:
    kubectl describe replicasets -l app=worker | grep -A3 Annotations

k8s/rollout.md

306/459

What about the missing revisions?

  • The missing revisions are stored in another annotation:

    deployment.kubernetes.io/revision-history

  • These are not shown in kubectl rollout history

  • We could easily reconstruct the full list with a script

    (if we wanted to!)

k8s/rollout.md

307/459

Rolling back to an older version

  • kubectl rollout undo can work with a revision number
  • Roll back to the "known good" deployment version:

    kubectl rollout undo deployment worker --to-revision=1
  • Check the web UI or the list of pods

k8s/rollout.md

308/459

Changing rollout parameters

  • We want to:

    • revert to v0.1
    • be conservative on availability (always have desired number of available workers)
    • go slow on rollout speed (update only one pod at a time)
    • give some time to our workers to "warm up" before starting more

The corresponding changes can be expressed in the following YAML snippet:

spec:
  template:
    spec:
      containers:
      - name: worker
        image: dockercoins/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 10

k8s/rollout.md

309/459

Applying changes through a YAML patch

  • We could use kubectl edit deployment worker

  • But we could also use kubectl patch with the exact YAML shown before

  • Apply all our changes and wait for them to take effect:
    kubectl patch deployment worker -p "
      spec:
        template:
          spec:
            containers:
            - name: worker
              image: dockercoins/worker:v0.1
        strategy:
          rollingUpdate:
            maxUnavailable: 0
            maxSurge: 1
        minReadySeconds: 10
      "
    kubectl rollout status deployment worker
    kubectl get deploy -o json worker |
      jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"

k8s/rollout.md

310/459

Image separating from the next chapter

311/459

Namespaces

(automatically generated title slide)

312/459

Namespaces

  • We would like to deploy another copy of DockerCoins on our cluster

  • We could rename all our deployments and services:

    hasher → hasher2, redis → redis2, rng → rng2, etc.

  • That would require updating the code

  • There has to be a better way!

313/459

Namespaces

  • We would like to deploy another copy of DockerCoins on our cluster

  • We could rename all our deployments and services:

    hasher → hasher2, redis → redis2, rng → rng2, etc.

  • That would require updating the code

  • There has to be a better way!

  • As hinted by the title of this section, we will use namespaces

k8s/namespaces.md

314/459

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

315/459

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

  • We cannot have two resources of the same kind with the same name

    (but it's OK to have an rng service, an rng deployment, and an rng daemon set)

316/459

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

  • We cannot have two resources of the same kind with the same name

    (but it's OK to have an rng service, an rng deployment, and an rng daemon set)

  • We cannot have two resources of the same kind with the same name in the same namespace

    (but it's OK to have e.g. two rng services in different namespaces)

317/459

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

  • We cannot have two resources of the same kind with the same name

    (but it's OK to have an rng service, an rng deployment, and an rng daemon set)

  • We cannot have two resources of the same kind with the same name in the same namespace

    (but it's OK to have e.g. two rng services in different namespaces)

  • Except for resources that exist at the cluster scope

    (these do not belong to a namespace)

k8s/namespaces.md

318/459

Uniquely identifying a resource

  • For namespaced resources:

    the tuple (kind, name, namespace) needs to be unique

  • For resources at the cluster scope:

    the tuple (kind, name) needs to be unique

  • List resource types again, and check the NAMESPACED column:
    kubectl api-resources

k8s/namespaces.md

319/459

Pre-existing namespaces

  • If we deploy a cluster with kubeadm, we have three or four namespaces:

    • default (for our applications)

    • kube-system (for the control plane)

    • kube-public (contains one ConfigMap for cluster discovery)

    • kube-node-lease (in Kubernetes 1.14 and later; contains Lease objects)

  • If we deploy differently, we may have different namespaces

k8s/namespaces.md

320/459

Creating namespaces

  • Let's see two identical methods to create a namespace
  • We can use kubectl create namespace:

    kubectl create namespace blue
  • Or we can construct a very minimal YAML snippet:

    kubectl apply -f- <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: blue
    EOF
  • Some tools like Helm will create namespaces automatically when needed

k8s/namespaces.md

321/459

Using namespaces

  • We can pass a -n or --namespace flag to most kubectl commands:

    kubectl -n blue get svc
  • We can also change our current context

  • A context is a (user, cluster, namespace) tuple

  • We can manipulate contexts with the kubectl config command

k8s/namespaces.md

322/459

Viewing existing contexts

  • On our training environments, at this point, there should be only one context
  • View existing contexts to see the cluster name and the current user:
    kubectl config get-contexts
  • The current context (the only one!) is tagged with a *

  • What are NAME, CLUSTER, AUTHINFO, and NAMESPACE?

k8s/namespaces.md

323/459

What's in a context

  • NAME is an arbitrary string to identify the context

  • CLUSTER is a reference to a cluster

    (i.e. API endpoint URL, and optional certificate)

  • AUTHINFO is a reference to the authentication information to use

    (i.e. a TLS client certificate, token, or otherwise)

  • NAMESPACE is the namespace

    (empty string = default)

k8s/namespaces.md

324/459

Switching contexts

  • We want to use a different namespace

  • Solution 1: update the current context

    This is appropriate if we need to change just one thing (e.g. namespace or authentication).

  • Solution 2: create a new context and switch to it

    This is appropriate if we need to change multiple things and switch back and forth.

  • Let's go with solution 1!

k8s/namespaces.md

325/459

Updating a context

  • This is done through kubectl config set-context

  • We can update a context by passing its name, or the current context with --current

  • Update the current context to use the blue namespace:

    kubectl config set-context --current --namespace=blue
  • Check the result:

    kubectl config get-contexts

k8s/namespaces.md

326/459

Using our new namespace

  • Let's check that we are in our new namespace, then deploy a new copy of Dockercoins
  • Verify that the new context is empty:
    kubectl get all

k8s/namespaces.md

327/459

Deploying DockerCoins with YAML files

  • The GitHub repository jpetazzo/kubercoins contains everything we need!
  • Clone the kubercoins repository:

    cd ~
    git clone https://github.com/jpetazzo/kubercoins
  • Create all the DockerCoins resources:

    kubectl create -f kubercoins

If the argument behind -f is a directory, all the files in that directory are processed.

The subdirectories are not processed, unless we also add the -R flag.

k8s/namespaces.md

328/459

Viewing the deployed app

  • Let's see if this worked correctly!
  • Retrieve the port number allocated to the webui service:

    kubectl get svc webui
  • Point our browser to http://X.X.X.X:3xxxx

If the graph shows up but stays at zero, give it a minute or two!

k8s/namespaces.md

329/459

Namespaces and isolation

  • Namespaces do not provide isolation

  • A pod in the green namespace can communicate with a pod in the blue namespace

  • A pod in the default namespace can communicate with a pod in the kube-system namespace

  • CoreDNS uses a different subdomain for each namespace

  • Example: from any pod in the cluster, you can connect to the Kubernetes API with:

    https://kubernetes.default.svc.cluster.local:443/

k8s/namespaces.md

330/459

Isolating pods

  • Actual isolation is implemented with network policies

  • Network policies are resources (like deployments, services, namespaces...)

  • Network policies specify which flows are allowed:

    • between pods

    • from pods to the outside world

    • and vice-versa
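  • As a hedged illustration, a minimal "deny all inbound traffic" policy for every pod in the current namespace looks like this (it only takes effect if the network plugin supports network policies):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-all-ingress
    spec:
      podSelector: {}
      policyTypes:
      - Ingress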

k8s/namespaces.md

331/459

Switch back to the default namespace

  • Let's make sure that we don't run future exercises in the blue namespace
  • Switch back to the original context:
    kubectl config set-context --current --namespace=

Note: we could have used --namespace=default for the same result.

k8s/namespaces.md

332/459

Switching namespaces more easily

  • We can also use a little helper tool called kubens:

    # Switch to namespace foo
    kubens foo
    # Switch back to the previous namespace
    kubens -
  • On our clusters, kubens is called kns instead

    (so that it's even fewer keystrokes to switch namespaces)

k8s/namespaces.md

333/459

kubens and kubectx

  • With kubens, we can switch quickly between namespaces

  • With kubectx, we can switch quickly between contexts

  • Both tools are simple shell scripts available from https://github.com/ahmetb/kubectx

  • On our clusters, they are installed as kns and kctx

    (for brevity and to avoid completion clashes between kubectx and kubectl)

k8s/namespaces.md

334/459

kube-ps1

  • It's easy to lose track of our current cluster / context / namespace

  • kube-ps1 makes it easy to track these, by showing them in our shell prompt

  • It's a simple shell script available from https://github.com/jonmosco/kube-ps1

  • On our clusters, kube-ps1 is installed and included in PS1:

    [123.45.67.89] (kubernetes-admin@kubernetes:default) docker@node1 ~

    (The highlighted part is context:namespace, managed by kube-ps1)

  • Highly recommended if you work across multiple contexts or namespaces!

k8s/namespaces.md

335/459

Image separating from the next chapter

336/459

Volumes

(automatically generated title slide)

337/459

Volumes

  • Volumes are special directories that are mounted in containers

  • Volumes can have many different purposes:

    • share files and directories between containers running on the same machine

    • share files and directories between containers and their host

    • centralize configuration information in Kubernetes and expose it to containers

    • manage credentials and secrets and expose them securely to containers

    • store persistent data for stateful services

    • access storage systems (like Ceph, EBS, NFS, Portworx, and many others)

k8s/volumes.md

338/459

Kubernetes volumes vs. Docker volumes

  • Kubernetes and Docker volumes are very similar

    (the Kubernetes documentation says otherwise ...
    but it refers to Docker 1.7, which was released in 2015!)

  • Docker volumes allow us to share data between containers running on the same host

  • Kubernetes volumes allow us to share data between containers in the same pod

  • Both Docker and Kubernetes volumes enable access to storage systems

  • Kubernetes volumes are also used to expose configuration and secrets

  • Docker has specific concepts for configuration and secrets
    (but under the hood, the technical implementation is similar)

  • If you're not familiar with Docker volumes, you can safely ignore this slide!

k8s/volumes.md

339/459

Volumes ≠ Persistent Volumes

  • Volumes and Persistent Volumes are related, but very different!

  • Volumes:

    • appear in Pod specifications (see next slide)

    • do not exist as API resources (cannot do kubectl get volumes)

  • Persistent Volumes:

    • are API resources (can do kubectl get persistentvolumes)

    • correspond to concrete volumes (e.g. on a SAN, EBS, etc.)

    • cannot be associated with a Pod directly; but through a Persistent Volume Claim

    • won't be discussed further in this section

k8s/volumes.md

340/459

Adding a volume to a Pod

  • We will start with the simplest Pod manifest we can find

  • We will add a volume to that Pod manifest

  • We will mount that volume in a container in the Pod

  • By default, this volume will be an emptyDir

    (an empty directory)

  • It will "shadow" the directory where it's mounted

k8s/volumes.md

341/459

Our basic Pod

apiVersion: v1
kind: Pod
metadata:
  name: nginx-without-volume
spec:
  containers:
  - name: nginx
    image: nginx

This is an MVP! (Minimum Viable Pod 😉)

It runs a single NGINX container.

k8s/volumes.md

342/459

Trying the basic pod

  • Create the Pod:

    kubectl create -f ~/container.training/k8s/nginx-1-without-volume.yaml
  • Get its IP address:

    IPADDR=$(kubectl get pod nginx-without-volume -o jsonpath={.status.podIP})
  • Send a request with curl:

    curl $IPADDR

(We should see the "Welcome to NGINX" page.)

k8s/volumes.md

343/459

Adding a volume

  • We need to add the volume in two places:

    • at the Pod level (to declare the volume)

    • at the container level (to mount the volume)

  • We will declare a volume named www

  • No type is specified, so it will default to emptyDir

    (as the name implies, it will be initialized as an empty directory at pod creation)

  • In that pod, there is also a container named nginx

  • That container mounts the volume www to path /usr/share/nginx/html/

k8s/volumes.md

344/459

The Pod with a volume

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-volume
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/

k8s/volumes.md

345/459

Trying the Pod with a volume

  • Create the Pod:

    kubectl create -f ~/container.training/k8s/nginx-2-with-volume.yaml
  • Get its IP address:

    IPADDR=$(kubectl get pod nginx-with-volume -o jsonpath={.status.podIP})
  • Send a request with curl:

    curl $IPADDR

(We should now see a "403 Forbidden" error page.)

k8s/volumes.md

346/459

Populating the volume with another container

  • Let's add another container to the Pod

  • Let's mount the volume in both containers

  • That container will populate the volume with static files

  • NGINX will then serve these static files

  • To populate the volume, we will clone the Spoon-Knife repository

k8s/volumes.md

347/459

Sharing a volume between two containers

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-git
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
  restartPolicy: OnFailure

k8s/volumes.md

348/459

Sharing a volume, explained

  • We added another container to the pod

  • That container mounts the www volume on a different path (/www)

  • It uses the alpine image

  • When started, it installs git and clones the octocat/Spoon-Knife repository

    (that repository contains a tiny HTML website)

  • As a result, NGINX now serves this website

k8s/volumes.md

349/459

Trying the shared volume

  • This one will be time-sensitive!

  • We need to catch the Pod IP address as soon as it's created

  • Then send a request to it as fast as possible

  • Watch the pods (so that we can catch the Pod IP address)
    kubectl get pods -o wide --watch
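  • Alternatively (a small convenience, mirroring the jsonpath trick used earlier), we can capture the address programmatically as soon as the pod exists:

    IP=$(kubectl get pod nginx-with-git -o jsonpath={.status.podIP})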

k8s/volumes.md

350/459

Shared volume in action

  • Create the pod:

    kubectl create -f ~/container.training/k8s/nginx-3-with-git.yaml
  • As soon as we see its IP address, access it:

    curl $IP
  • A few seconds later, the state of the pod will change; access it again:

    curl $IP

The first time, we should see "403 Forbidden".

The second time, we should see the HTML file from the Spoon-Knife repository.

k8s/volumes.md

351/459

Explanations

  • Both containers are started at the same time

  • NGINX starts very quickly

    (it can serve requests immediately)

  • But at this point, the volume is empty

    (NGINX serves "403 Forbidden")

  • The other container installs git and clones the repository

    (this takes a bit longer)

  • When the other container is done, the volume holds the repository

    (NGINX serves the HTML file)

k8s/volumes.md

352/459

The devil is in the details

  • The default restartPolicy is Always

  • This would cause our git container to run again ... and again ... and again

    (with an exponential back-off delay, as explained in the documentation)

  • That's why we specified restartPolicy: OnFailure

k8s/volumes.md

353/459

Inconsistencies

  • There is a short period of time during which the website is not available

    (because the git container hasn't done its job yet)

  • With a bigger website, we could get inconsistent results

    (where only a part of the content is ready)

  • In real applications, this could cause incorrect results

  • How can we avoid that?

k8s/volumes.md

354/459

Init Containers

  • We can define containers that should execute before the main ones

  • They will be executed in order

    (instead of in parallel)

  • They must all succeed before the main containers are started

  • This is exactly what we need here!

  • Let's see one in action

See Init Containers documentation for all the details.

k8s/volumes.md

355/459

Defining Init Containers

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-init
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  initContainers:
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/

k8s/volumes.md

356/459

Trying the init container

  • Repeat the same operation as earlier

    (try to send HTTP requests as soon as the pod comes up)

  • This time, instead of "403 Forbidden" we get a "connection refused"

  • NGINX doesn't start until the git container has done its job

  • We never get inconsistent results

    (a "half-ready" container)

k8s/volumes.md

357/459

Other uses of init containers

  • Load content

  • Generate configuration (or certificates)

  • Database migrations

  • Waiting for other services to be up

    (to avoid flurry of connection errors in main container)

  • etc.

k8s/volumes.md

358/459

Volume lifecycle

  • The lifecycle of a volume is linked to the pod's lifecycle

  • This means that a volume is created when the pod is created

  • This is mostly relevant for emptyDir volumes

    (other volumes, like remote storage, are not "created" but rather "attached" )

  • A volume survives across container restarts

  • A volume is destroyed (or, for remote storage, detached) when the pod is destroyed

k8s/volumes.md

359/459

Image separating from the next chapter

360/459

Managing configuration

(automatically generated title slide)

361/459

Managing configuration

  • Some applications need to be configured (obviously!)

  • There are many ways for our code to pick up configuration:

    • command-line arguments

    • environment variables

    • configuration files

    • configuration servers (getting configuration from a database, an API...)

    • ... and more (because programmers can be very creative!)

  • How can we do these things with containers and Kubernetes?

k8s/configuration.md

362/459

Passing configuration to containers

  • There are many ways to pass configuration to code running in a container:

    • baking it into a custom image

    • command-line arguments

    • environment variables

    • injecting configuration files

    • exposing it over the Kubernetes API

    • configuration servers

  • Let's review these different strategies!

k8s/configuration.md

363/459

Baking custom images

  • Put the configuration in the image

    (it can be in a configuration file, but also ENV or CMD actions)

  • It's easy! It's simple!

  • Unfortunately, it also has downsides:

    • multiplication of images

    • different images for dev, staging, prod ...

    • minor reconfigurations require a whole build/push/pull cycle

  • Avoid doing it unless you don't have the time to figure out other options

k8s/configuration.md

364/459

Command-line arguments

  • Pass options to args array in the container specification

  • Example (source):

    args:
    - "--data-dir=/var/lib/etcd"
    - "--advertise-client-urls=http://127.0.0.1:2379"
    - "--listen-client-urls=http://127.0.0.1:2379"
    - "--listen-peer-urls=http://127.0.0.1:2380"
    - "--name=etcd"
  • The options can be passed directly to the program that we run ...

    ... or to a wrapper script that will use them to e.g. generate a config file

k8s/configuration.md

365/459

Command-line arguments, pros & cons

  • Works great when options are passed directly to the running program

    (otherwise, a wrapper script can work around the issue)

  • Works great when there aren't too many parameters

    (to avoid a 20-lines args array)

  • Requires documentation and/or understanding of the underlying program

    ("which parameters and flags do I need, again?")

  • Well-suited for mandatory parameters (without default values)

  • Not ideal when we need to pass a real configuration file anyway

k8s/configuration.md

366/459

Environment variables

  • Pass options through the env map in the container specification

  • Example:

    env:
    - name: ADMIN_PORT
      value: "8080"
    - name: ADMIN_AUTH
      value: Basic
    - name: ADMIN_CRED
      value: "admin:0pensesame!"

value must be a string! Make sure that numbers and fancy strings are quoted.

🤔 Why this weird {name: xxx, value: yyy} scheme? It will be revealed soon!

k8s/configuration.md

367/459

The downward API

  • In the previous example, environment variables have fixed values

  • We can also use a mechanism called the downward API

  • The downward API allows exposing pod or container information

    • either through special files (we won't show that for now)

    • or through environment variables

  • The value of these environment variables is computed when the container is started

  • Remember: environment variables won't (can't) change after container start

  • Let's see a few concrete examples!

k8s/configuration.md

368/459

Exposing the pod's namespace

- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
  • Useful to generate FQDN of services

    (in some contexts, a short name is not enough)

  • For instance, the two commands should be equivalent:

    curl api-backend
    curl api-backend.$MY_POD_NAMESPACE.svc.cluster.local

k8s/configuration.md

369/459

Exposing the pod's IP address

- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
  • Useful if we need to know our IP address

    (we could also read it from eth0, but this is more solid)

k8s/configuration.md

370/459

Exposing the container's resource limits

- name: MY_MEM_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: test-container
      resource: limits.memory
  • Useful for runtimes where memory is garbage collected

  • Example: the JVM

    (the memory available to the JVM should be set with the -Xmx flag)

  • Best practice: set a memory limit, and pass it to the runtime

  • Note: recent versions of the JVM can do this automatically

    (see JDK-8146115 and this blog post for detailed examples)

k8s/configuration.md

371/459

More about the downward API

  • This documentation page tells more about these environment variables

  • And this one explains the other way to use the downward API

    (through files that get created in the container filesystem)
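
To give a rough idea of that second flavor, here is a minimal, hypothetical sketch (pod name, image, and mount path are all illustrative): the pod's labels are written to a file under /etc/podinfo, where the application can read them.

apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
  labels:
    app: demo
spec:
  containers:
  - name: main
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
      readOnly: true
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels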

k8s/configuration.md

372/459

Environment variables, pros and cons

  • Works great when the running program expects these variables

  • Works great for optional parameters with reasonable defaults

    (since the container image can provide these defaults)

  • Sort of auto-documented

    (we can see which environment variables are defined in the image, and their values)

  • Can be (ab)used with longer values ...

  • ... You can put an entire Tomcat configuration file in an environment variable ...

  • ... But should you?

(Do it if you really need to, we're not judging! But we'll see better ways.)

k8s/configuration.md

373/459

Injecting configuration files

  • Sometimes, there is no way around it: we need to inject a full config file

  • Kubernetes provides a mechanism for that purpose: configmaps

  • A configmap is a Kubernetes resource that exists in a namespace

  • Conceptually, it's a key/value map

    (values are arbitrary strings)

  • We can think about them in (at least) two different ways:

    • as holding entire configuration file(s)

    • as holding individual configuration parameters

Note: to hold sensitive information, we can use "Secrets", which are another type of resource behaving very much like configmaps. We'll cover them just after!

k8s/configuration.md

374/459

Configmaps storing entire files

  • In this case, each key/value pair corresponds to a configuration file

  • Key = name of the file

  • Value = content of the file

  • There can be one key/value pair, or as many as necessary

    (for complex apps with multiple configuration files)

  • Examples:

    # Create a configmap with a single key, "app.conf"
    kubectl create configmap my-app-config --from-file=app.conf
    # Create a configmap with a single key, "app.conf" but another file
    kubectl create configmap my-app-config --from-file=app.conf=app-prod.conf
    # Create a configmap with multiple keys (one per file in the config.d directory)
    kubectl create configmap my-app-config --from-file=config.d/

k8s/configuration.md

375/459

Configmaps storing individual parameters

  • In this case, each key/value pair corresponds to a parameter

  • Key = name of the parameter

  • Value = value of the parameter

  • Examples:

    # Create a configmap with two keys
    kubectl create cm my-app-config \
        --from-literal=foreground=red \
        --from-literal=background=blue
    # Create a configmap from a file containing key=val pairs
    kubectl create cm my-app-config \
        --from-env-file=app.conf

k8s/configuration.md

376/459

Exposing configmaps to containers

  • Configmaps can be exposed as plain files in the filesystem of a container

    • this is achieved by declaring a volume and mounting it in the container

    • this is particularly effective for configmaps containing whole files

  • Configmaps can be exposed as environment variables in the container

    • this is achieved with valueFrom and configMapKeyRef in the env section (using the same syntax as the downward API)

    • this is particularly effective for configmaps containing individual parameters

  • Let's see how to do both!

k8s/configuration.md

377/459

Passing a configuration file with a configmap

  • We will start a load balancer powered by HAProxy

  • We will use the official haproxy image

  • It expects to find its configuration in /usr/local/etc/haproxy/haproxy.cfg

  • We will provide a simple HAProxy configuration, k8s/haproxy.cfg

  • It listens on port 80, and load balances connections between IBM and Google

k8s/configuration.md

378/459

Creating the configmap

  • Go to the k8s directory in the repository:

    cd ~/container.training/k8s
  • Create a configmap named haproxy and holding the configuration file:

    kubectl create configmap haproxy --from-file=haproxy.cfg
  • Check what our configmap looks like:

    kubectl get configmap haproxy -o yaml

k8s/configuration.md

379/459

Using the configmap

We are going to use the following pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: haproxy
spec:
  volumes:
  - name: config
    configMap:
      name: haproxy
  containers:
  - name: haproxy
    image: haproxy
    volumeMounts:
    - name: config
      mountPath: /usr/local/etc/haproxy/

k8s/configuration.md

380/459

Using the configmap

  • The resource definition from the previous slide is in k8s/haproxy.yaml
  • Create the HAProxy pod:
    kubectl apply -f ~/container.training/k8s/haproxy.yaml
  • Check the IP address allocated to the pod:
    kubectl get pod haproxy -o wide
    IP=$(kubectl get pod haproxy -o json | jq -r .status.podIP)

k8s/configuration.md

381/459

Testing our load balancer

  • The load balancer will send:

    • half of the connections to Google

    • the other half to IBM

  • Access the load balancer a few times:
    curl $IP
    curl $IP
    curl $IP

We should see connections served by Google, and others served by IBM.
(Each server sends us a redirect page. Look at the URL that they send us to!)

k8s/configuration.md

382/459

Exposing configmaps as environment variables

  • We are going to run a Docker registry on a custom port

  • By default, the registry listens on port 5000

  • This can be changed by setting environment variable REGISTRY_HTTP_ADDR

  • We are going to store the port number in a configmap

  • Then we will expose that configmap as a container environment variable

k8s/configuration.md

383/459

Creating the configmap

  • Our configmap will have a single key, http.addr:

    kubectl create configmap registry --from-literal=http.addr=0.0.0.0:80
  • Check our configmap:

    kubectl get configmap registry -o yaml

k8s/configuration.md

384/459

Using the configmap

We are going to use the following pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: registry
spec:
  containers:
  - name: registry
    image: registry
    env:
    - name: REGISTRY_HTTP_ADDR
      valueFrom:
        configMapKeyRef:
          name: registry
          key: http.addr

k8s/configuration.md

385/459

Using the configmap

  • The resource definition from the previous slide is in k8s/registry.yaml
  • Create the registry pod:
    kubectl apply -f ~/container.training/k8s/registry.yaml
  • Check the IP address allocated to the pod:

    kubectl get pod registry -o wide
    IP=$(kubectl get pod registry -o json | jq -r .status.podIP)
  • Confirm that the registry is available on port 80:

    curl $IP/v2/_catalog

k8s/configuration.md

386/459

Passwords, tokens, sensitive information

  • For sensitive information, there is another special resource: Secrets

  • Secrets and Configmaps work almost the same way

    (we'll expose the differences on the next slide)

  • The intent is different, though:

    "You should use secrets for things which are actually secret like API keys, credentials, etc., and use config map for not-secret configuration data."

    "In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."

    (Source: the author of both features)
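
As a quick, hedged illustration (the secret name and value are made up), creating and inspecting a secret feels very much like working with a configmap; note that the stored values are merely base64-encoded, not encrypted:

    # Create a secret holding a single key
    kubectl create secret generic api-creds --from-literal=token=s3cr3t
    # The data shows up base64-encoded (encoded, not encrypted!)
    kubectl get secret api-creds -o yaml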

k8s/configuration.md

387/459

Differences between configmaps and secrets

k8s/configuration.md

388/459

Image separating from the next chapter

389/459

Highly available Persistent Volumes

(automatically generated title slide)

390/459

Highly available Persistent Volumes

  • How can we achieve true durability?

  • How can we store data that would survive the loss of a node?

391/459

Highly available Persistent Volumes

  • How can we achieve true durability?

  • How can we store data that would survive the loss of a node?

  • We need to use Persistent Volumes backed by highly available storage systems

  • There are many ways to achieve that:

    • leveraging our cloud's storage APIs

    • using NAS/SAN systems or file servers

    • distributed storage systems

392/459

Highly available Persistent Volumes

  • How can we achieve true durability?

  • How can we store data that would survive the loss of a node?

  • We need to use Persistent Volumes backed by highly available storage systems

  • There are many ways to achieve that:

    • leveraging our cloud's storage APIs

    • using NAS/SAN systems or file servers

    • distributed storage systems

  • We are going to see one distributed storage system in action

k8s/portworx.md

393/459

Our test scenario

  • We will set up a distributed storage system on our cluster

  • We will use it to deploy a SQL database (PostgreSQL)

  • We will insert some test data in the database

  • We will disrupt the node running the database

  • We will see how it recovers

k8s/portworx.md

394/459

Portworx

  • Portworx is a commercial persistent storage solution for containers

  • It works with Kubernetes, but also Mesos, Swarm ...

  • It provides hyper-converged storage

    (=storage is provided by regular compute nodes)

  • We're going to use it here because it can be deployed on any Kubernetes cluster

    (it doesn't require any particular infrastructure)

  • We don't endorse or support Portworx in any particular way

    (but we appreciate that it's super easy to install!)

k8s/portworx.md

395/459

A useful reminder

  • We're installing Portworx because we need a storage system

  • If you are using AKS, EKS, GKE ... you already have a storage system

    (but you might want another one, e.g. to leverage local storage)

  • If you have set up Kubernetes yourself, there are other solutions available too

    • on premises, you can use a good old SAN/NAS

    • on a private cloud like OpenStack, you can use e.g. Cinder

    • everywhere, you can use other systems, e.g. Gluster, StorageOS

k8s/portworx.md

396/459

Portworx requirements

  • Kubernetes cluster ✔️

  • Optional key/value store (etcd or Consul) ❌

  • At least one available block device ❌

k8s/portworx.md

397/459

The key-value store

  • In the current version of Portworx (1.4) it is recommended to use etcd or Consul

  • But Portworx also has beta support for an embedded key/value store

  • For simplicity, we are going to use the latter option

    (but if we have deployed Consul or etcd, we can use that, too)

k8s/portworx.md

398/459

One available block device

  • Block device = disk or partition on a disk

  • We can see block devices with lsblk

    (or cat /proc/partitions if we're old school like that!)

  • If we don't have a spare disk or partition, we can use a loop device

  • A loop device is a block device actually backed by a file

  • These are frequently used to mount ISO (CD/DVD) images or VM disk images

k8s/portworx.md

399/459

Setting up a loop device

  • We are going to create a 10 GB (empty) file on each node

  • Then make a loop device from it, to be used by Portworx

  • Create a 10 GB file on each node:

    for N in $(seq 1 4); do ssh node$N sudo truncate --size 10G /portworx.blk; done

    (If SSH asks to confirm host keys, enter yes each time.)

  • Associate the file to a loop device on each node:

    for N in $(seq 1 4); do ssh node$N sudo losetup /dev/loop4 /portworx.blk; done

k8s/portworx.md

400/459

Installing Portworx

  • To install Portworx, we need to go to https://install.portworx.com/

  • This website will ask us a bunch of questions about our cluster

  • Then, it will generate a YAML file that we should apply to our cluster

401/459

Installing Portworx

  • To install Portworx, we need to go to https://install.portworx.com/

  • This website will ask us a bunch of questions about our cluster

  • Then, it will generate a YAML file that we should apply to our cluster

  • Or, we can just apply that YAML file directly (it's in k8s/portworx.yaml)

  • Install Portworx:
    kubectl apply -f ~/container.training/k8s/portworx.yaml

k8s/portworx.md

402/459

Generating a custom YAML file

If you want to generate a YAML file tailored to your own needs, the easiest way is to use https://install.portworx.com/.

FYI, this is how we obtained the YAML file used earlier:

KBVER=$(kubectl version -o json | jq -r .serverVersion.gitVersion)
BLKDEV=/dev/loop4
curl "https://install.portworx.com/1.4/?kbver=$KBVER&b=true&s=$BLKDEV&c=px-workshop&stork=true&lh=true"

If you want to use an external key/value store, add one of the following:

&k=etcd://XXX:2379
&k=consul://XXX:8500

... where XXX is the name or address of your etcd or Consul server.

k8s/portworx.md

403/459

Waiting for Portworx to be ready

  • The installation process will take a few minutes
  • Check out the logs:

    stern -n kube-system portworx
  • Wait until it gets quiet

    (you should see portworx service is healthy, too)

k8s/portworx.md

404/459

Dynamic provisioning of persistent volumes

  • We are going to run PostgreSQL in a Stateful set

  • The Stateful set will specify a volumeClaimTemplate

  • That volumeClaimTemplate will create Persistent Volume Claims

  • Kubernetes' dynamic provisioning will satisfy these Persistent Volume Claims

    (by creating Persistent Volumes and binding them to the claims)

  • The Persistent Volumes are then available for the PostgreSQL pods
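
Once the Stateful set is running (we'll create it in a moment), we can observe that chain for ourselves; this is plain kubectl, nothing specific to Portworx:

    # The claim created from the volumeClaimTemplate ...
    kubectl get persistentvolumeclaims
    # ... and the volume dynamically provisioned to satisfy it
    kubectl get persistentvolumes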

k8s/portworx.md

405/459

Storage Classes

  • It's possible that multiple storage systems are available

  • Or, that a storage system offers multiple tiers of storage

    (SSD vs. magnetic; mirrored or not; etc.)

  • We need to tell Kubernetes which system and tier to use

  • This is achieved by creating a Storage Class

  • A volumeClaimTemplate can indicate which Storage Class to use

  • It is also possible to mark a Storage Class as "default"

    (it will be used if a volumeClaimTemplate doesn't specify one)

k8s/portworx.md

406/459

Check our default Storage Class

  • The YAML manifest applied earlier should define a default storage class
  • Check that we have a default storage class:
    kubectl get storageclass

There should be a storage class showing as portworx-replicated (default).

k8s/portworx.md

407/459

Our default Storage Class

This is our Storage Class (in k8s/storage-class.yaml):

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: portworx-replicated
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
  priority_io: "high"
  • It says "use Portworx to create volumes"

  • It tells Portworx to "keep 2 replicas of these volumes"

  • It marks the Storage Class as being the default one

k8s/portworx.md

408/459

Our Postgres Stateful set

  • The next slide shows k8s/postgres.yaml

  • It defines a Stateful set

  • With a volumeClaimTemplate requesting a 1 GB volume

  • That volume will be mounted to /var/lib/postgresql/data

  • There is another little detail: we enable the stork scheduler

  • The stork scheduler is optional (it's specific to Portworx)

  • It helps the Kubernetes scheduler to colocate the pod with its volume

    (see this blog post for more details about that)

k8s/portworx.md

409/459
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      schedulerName: stork
      containers:
      - name: postgres
        image: postgres:11
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres
  volumeClaimTemplates:
  - metadata:
      name: postgres
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

k8s/portworx.md

410/459

Creating the Stateful set

  • Before applying the YAML, watch what's going on with kubectl get events -w
  • Apply that YAML:
    kubectl apply -f ~/container.training/k8s/postgres.yaml

k8s/portworx.md

411/459

Testing our PostgreSQL pod

  • We will use kubectl exec to get a shell in the pod

  • Good to know: we need to use the postgres user in the pod

  • Get a shell in the pod, as the postgres user:
    kubectl exec -ti postgres-0 su postgres
  • Check that default databases have been created correctly:
    psql -l

(This should show us 3 lines: postgres, template0, and template1.)

k8s/portworx.md

412/459

Inserting data in PostgreSQL

  • We will create a database and populate it with pgbench
  • Create a database named demo:

    createdb demo
  • Populate it with pgbench:

    pgbench -i -s 10 demo
  • The -i flag means "create tables"

  • The -s 10 flag means "create 10 x 100,000 rows"

k8s/portworx.md

413/459

Checking how much data we have now

  • The pgbench tool inserts rows in table pgbench_accounts
  • Check that the demo database exists:

    psql -l
  • Check how many rows we have in pgbench_accounts:

    psql demo -c "select count(*) from pgbench_accounts"

(We should see a count of 1,000,000 rows.)

k8s/portworx.md

414/459

Find out which node is hosting the database

  • We can find that information with kubectl get pods -o wide
  • Check the node running the database:
    kubectl get pod postgres-0 -o wide

We are going to disrupt that node.

415/459

Find out which node is hosting the database

  • We can find that information with kubectl get pods -o wide
  • Check the node running the database:
    kubectl get pod postgres-0 -o wide

We are going to disrupt that node.

By "disrupt" we mean: "disconnect it from the network".

k8s/portworx.md

416/459

Disconnect the node

  • We will use iptables to block all traffic exiting the node

    (except SSH traffic, so we can repair the node later if needed)

  • SSH to the node to disrupt:

    ssh nodeX
  • Allow SSH traffic leaving the node, but block all other traffic:

    sudo iptables -I OUTPUT -p tcp --sport 22 -j ACCEPT
    sudo iptables -I OUTPUT 2 -j DROP

k8s/portworx.md

417/459

Check that the node is disconnected

  • Check that the node can't communicate with other nodes:

    ping node1
  • Log out to go back to node1

  • Watch the events unfolding with kubectl get events -w and kubectl get pods -w
  • It will take some time for Kubernetes to mark the node as unhealthy

  • Then it will attempt to reschedule the pod to another node

  • In about a minute, our pod should be up and running again

k8s/portworx.md

418/459

Check that our data is still available

  • We are going to reconnect to the (new) pod and check
  • Get a shell on the pod:
    kubectl exec -ti postgres-0 su postgres
  • Check the number of rows in the pgbench_accounts table:
    psql demo -c "select count(*) from pgbench_accounts"

k8s/portworx.md

419/459

Double-check that the pod has really moved

  • Just to make sure the system is not bluffing!
  • Look at which node the pod is now running on:
    kubectl get pod postgres-0 -o wide

k8s/portworx.md

420/459

Re-enable the node

  • Let's fix the node that we disconnected from the network
  • SSH to the node:

    ssh nodeX
  • Remove the iptables rule blocking traffic:

    sudo iptables -D OUTPUT 2

k8s/portworx.md

421/459

A few words about this PostgreSQL setup

  • In a real deployment, you would want to set a password

  • This can be done by creating a secret:

    kubectl create secret generic postgres \
      --from-literal=password=$(base64 /dev/urandom | head -c16)
  • And then passing that secret to the container:

    env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: postgres
          key: password

k8s/portworx.md

422/459

Troubleshooting Portworx

  • If we need to see what's going on with Portworx:

    PXPOD=$(kubectl -n kube-system get pod -l name=portworx -o json |
            jq -r .items[0].metadata.name)
    kubectl -n kube-system exec $PXPOD -- /opt/pwx/bin/pxctl status
  • We can also connect to Lighthouse (a web UI)

    • check the port with kubectl -n kube-system get svc px-lighthouse

    • connect to that port

    • the default login/password is admin/Password1

    • then specify portworx-service as the endpoint

k8s/portworx.md

423/459

Removing Portworx

  • Portworx provides a storage driver

  • It needs to place itself "above" the Kubelet

    (it installs itself straight on the nodes)

  • To remove it, we need to do more than just deleting its Kubernetes resources

  • It is done by applying a special label:

    kubectl label nodes --all px/enabled=remove --overwrite
  • Then removing a bunch of local files:

    sudo chattr -i /etc/pwx/.private.json
    sudo rm -rf /etc/pwx /opt/pwx

    (on each node where Portworx was running)

k8s/portworx.md

424/459

Dynamic provisioning without a provider

  • What if we want to use Stateful sets without a storage provider?

  • We will have to create volumes manually

    (by creating Persistent Volume objects)

  • These volumes will be automatically bound with matching Persistent Volume Claims

  • We can use local volumes (essentially bind mounts of host directories)

  • Of course, these volumes won't be available in case of node failure

  • Check this blog post for more information and gotchas
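
For reference, here is a minimal, hypothetical sketch of such a manually created volume using the local volume type (name, size, path, and node are illustrative); the nodeAffinity section pins the volume to the node that actually holds the data:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node1
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /mnt/data/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]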

k8s/portworx.md

425/459

Acknowledgements

The Portworx installation tutorial, and the PostgreSQL example, were inspired by Portworx examples on Katacoda, in particular:

k8s/portworx.md

426/459

Image separating from the next chapter

427/459

vSAN

(automatically generated title slide)

428/459

vSAN

Instantiate Stateful Pods

  • Compatible with CSI

  • Distributed storage for higher fault tolerance + performance

  • Available for Pods and VMs

vmware/vsan.md

429/459

Image separating from the next chapter

430/459

Next steps

(automatically generated title slide)

431/459

Next steps

Alright, how do I get started and containerize my apps?

432/459

Next steps

Alright, how do I get started and containerize my apps?

Suggested containerization checklist:

  • write a Dockerfile for one service in one app
  • write Dockerfiles for the other (buildable) services
  • write a Compose file for that whole app
  • make sure that devs are empowered to run the app in containers
  • set up automated builds of container images from the code repo
  • set up a CI pipeline using these container images
  • set up a CD pipeline (for staging/QA) using these images

And then it is time to look at orchestration!

k8s/whatsnext.md

433/459

Options for our first production cluster

  • Use a managed cluster (AKS, EKS, GKE, PKS...)

    (price: $, difficulty: medium)

  • Hire someone to deploy it for us

    (price: $$, difficulty: easy)

  • Do it ourselves

    (price: $-$$$, difficulty: hard)

k8s/whatsnext.md

434/459

One big cluster vs. multiple small ones

  • Yes, it is possible to have prod+dev in a single cluster

    (and implement good isolation and security with RBAC, network policies...)

  • But it is not a good idea to do that for our first deployment

  • Start with a production cluster + at least a test cluster

  • Implement and check RBAC and isolation on the test cluster

    (e.g. deploy multiple test versions side-by-side)

  • Make sure that all our devs have usable dev clusters

    (whether it's a local minikube or a full-blown multi-node cluster)

k8s/whatsnext.md

435/459

Namespaces

  • Namespaces let you run multiple identical stacks side by side

  • Two namespaces (e.g. blue and green) can each have their own redis service

  • Each of the two redis services has its own ClusterIP

  • CoreDNS creates two entries, mapping to these two ClusterIP addresses:

    redis.blue.svc.cluster.local and redis.green.svc.cluster.local

  • Pods in the blue namespace get a search suffix of blue.svc.cluster.local

  • As a result, resolving redis from a pod in the blue namespace yields the "local" redis

This does not provide isolation! That would be the job of network policies.
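
A quick, hedged way to see the above in action (the redis deployment is purely illustrative):

    kubectl create namespace blue
    kubectl create namespace green
    # Run the same redis in both namespaces
    kubectl -n blue create deployment redis --image=redis
    kubectl -n blue expose deployment redis --port=6379
    kubectl -n green create deployment redis --image=redis
    kubectl -n green expose deployment redis --port=6379
    # Two distinct ClusterIP addresses, two DNS entries
    kubectl -n blue get service redis
    kubectl -n green get service redis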

k8s/whatsnext.md

436/459

Relevant sections

k8s/whatsnext.md

437/459

Stateful services (databases etc.)

  • As a first step, it is wiser to keep stateful services outside of the cluster

  • Exposing them to pods can be done with multiple solutions:

    • ExternalName services
      (redis.blue.svc.cluster.local will be a CNAME record)

    • ClusterIP services with explicit Endpoints
      (instead of letting Kubernetes generate the endpoints from a selector)

    • Ambassador services
      (application-level proxies that can provide credentials injection and more)
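
As a hedged example of the ExternalName option above, the following service lets pods in the blue namespace keep using the name redis while the actual database lives outside the cluster (the external hostname is made up):

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: blue
spec:
  type: ExternalName
  externalName: redis.prod.example.com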

k8s/whatsnext.md

438/459

Stateful services (second take)

  • If we want to host stateful services on Kubernetes, we can use:

    • a storage provider

    • persistent volumes, persistent volume claims

    • stateful sets

  • Good questions to ask:

    • what's the operational cost of running this service ourselves?

    • what do we gain by deploying this stateful service on Kubernetes?

  • Relevant sections: Volumes | Stateful Sets | Persistent Volumes

  • Excellent blog post tackling the question: “Should I run Postgres on Kubernetes?”

k8s/whatsnext.md

439/459

HTTP traffic handling

  • Services are layer 4 constructs

  • HTTP is a layer 7 protocol

  • It is handled by ingresses (a different resource kind)

  • Ingresses allow:

    • virtual host routing
    • session stickiness
    • URI mapping
    • and much more!
  • This section shows how to expose multiple HTTP apps using Træfik
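
For a rough idea, here is a minimal, hedged Ingress sketch doing virtual host routing (host and service names are made up; on older clusters the apiVersion is extensions/v1beta1 with a slightly different backend syntax):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80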

k8s/whatsnext.md

440/459

Logging

  • Logging is delegated to the container engine

  • Logs are exposed through the API

  • Logs are also accessible through local files (/var/log/containers)

  • Log shipping to a central platform is usually done through these files

    (e.g. with an agent bind-mounting the log directory)

  • This section shows how to do that with Fluentd and the EFK stack
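
For instance, the same log lines can be read through the API or straight from the node's filesystem (using the haproxy pod from the configuration chapter as an example):

    # Through the API, from anywhere with cluster access
    kubectl logs haproxy
    # As files, on the node actually running the pod
    ls -l /var/log/containers/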

k8s/whatsnext.md

441/459

Metrics

  • The kubelet embeds cAdvisor, which exposes container metrics

    (cAdvisor might be separated in the future for more flexibility)

  • It is a good idea to start with Prometheus

    (even if you end up using something else)

  • Starting from Kubernetes 1.8, we can use the Metrics API

  • Heapster was a popular add-on

    (but is being deprecated starting with Kubernetes 1.11)
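
If the Metrics API is available on the cluster (e.g. metrics-server, or Heapster on older clusters), basic usage numbers are one command away:

    kubectl top nodes
    kubectl top pods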

k8s/whatsnext.md

442/459

Managing the configuration of our applications

  • Two constructs are particularly useful: secrets and config maps

  • They let us expose arbitrary information to our containers

  • Avoid storing configuration in container images

    (There are some exceptions to that rule, but it's generally a Bad Idea)

  • Never store sensitive information in container images

    (It's the container equivalent of the password on a post-it note on your screen)

  • This section shows how to manage app config with config maps (among others)

k8s/whatsnext.md

443/459

Congratulations!

  • We learned a lot about Kubernetes, its internals, its advanced concepts
444/459

Congratulations!

  • We learned a lot about Kubernetes, its internals, its advanced concepts

  • That was just the easy part

  • The hard challenges will revolve around culture and people

445/459

Congratulations!

  • We learned a lot about Kubernetes, its internals, its advanced concepts

  • That was just the easy part

  • The hard challenges will revolve around culture and people

  • ... What does that mean?

k8s/whatsnext.md

446/459

Running an app involves many steps

  • Write the app

  • Tests, QA ...

  • Ship something (more on that later)

  • Provision resources (e.g. VMs, clusters)

  • Deploy the something on the resources

  • Manage, maintain, monitor the resources

  • Manage, maintain, monitor the app

  • And much more

k8s/whatsnext.md

447/459

Who does what?

  • The old "devs vs ops" division has changed

  • In some organizations, "ops" are now called "SRE" or "platform" teams

    (and they have very different sets of skills)

  • Do you know which team is responsible for each item on the list on the previous page?

  • Acknowledge that a lot of tasks are outsourced

    (e.g. if we add "buy/rack/provision machines" in that list)

k8s/whatsnext.md

448/459

What do we ship?

  • Some organizations embrace "you build it, you run it"

  • When "build" and "run" are owned by different teams, where's the line?

  • What does the "build" team ship to the "run" team?

  • Let's see a few options, and what they imply

k8s/whatsnext.md

449/459

Shipping code

  • Team "build" ships code

    (hopefully in a repository, identified by a commit hash)

  • Team "run" containerizes that code

✔️ no extra work for developers

❌ very little advantage of using containers

k8s/whatsnext.md

450/459

Shipping container images

  • Team "build" ships container images

    (hopefully built automatically from a source repository)

  • Team "run" uses theses images to create e.g. Kubernetes resources

✔️ universal artefact (support all languages uniformly)

✔️ easy to start a single component (good for monoliths)

❌ complex applications will require a lot of extra work

❌ adding/removing components in the stack also requires extra work

❌ complex applications will run very differently between dev and prod

k8s/whatsnext.md

451/459

Shipping Compose files

(Or another kind of dev-centric manifest)

  • Team "build" ships a manifest that works on a single node

    (as well as images, or ways to build them)

  • Team "run" adapts that manifest to work on a cluster

✔️ all teams can start the stack in a reliable, deterministic manner

❌ adding/removing components still requires some work (but less than before)

❌ there will be some differences between dev and prod

k8s/whatsnext.md

452/459

Shipping Kubernetes manifests

  • Team "build" ships ready-to-run manifests

    (YAML, Helm charts, Kustomize ...)

  • Team "run" adjusts some parameters and monitors the application

✔️ parity between dev and prod environments

✔️ "run" team can focus on SLAs, SLOs, and overall quality

❌ requires a lot of extra work (and new skills) from the "build" team

❌ Kubernetes is not a very convenient development platform (at least, not yet)

k8s/whatsnext.md

453/459

What's the right answer?

  • It depends on our teams

    • existing skills (do they know how to do it?)

    • availability (do they have the time to do it?)

    • potential skills (can they learn to do it?)

  • It depends on our culture

    • owning "run" often implies being on call

    • do we reward on-call duty without encouraging hero syndrome?

    • do we give people resources (time, money) to learn?

k8s/whatsnext.md

454/459

Developer experience

We've put this last, but it's pretty important!

  • How do you on-board a new developer?

  • What do they need to install to get a dev stack?

  • How does a code change make it from dev to prod?

  • How does someone add a component to a stack?

k8s/whatsnext.md

455/459

Image separating from the next chapter

456/459

Links and resources

All things Kubernetes:

All things Docker:

Everything else:

These slides (and future updates) are on → http://container.training/

k8s/links.md

458/459

That's all, folks!
Questions?

end

shared/thankyou.md

459/459
