A quick overview of Docker Swarm

Docker Swarm is native clustering for Docker. Its strength: the API you already know and use every day is the same one you’ll use with Swarm. No need to learn a new set of commands or a new way to work.

There are a few ways to set up a swarm cluster. I like it simple, yet close to reality. My setup is ever so slightly more complicated than a straight docker-machine creation with boot2docker: I use a CentOS 7 base (4 Vagrant VMs) and docker-machine to install the Docker Engine. Very easy. Use boot2docker or whatever box you like.

Here’s my Vagrantfile. If you take it as is, you should at least change the BOX name (the one in the file is a home-made one, not available on the interwebs).
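For reference, a minimal sketch of what such a Vagrantfile could look like. The box name, the 192.168.69.21-24 addressing and the private network are assumptions based on the rest of this post; adapt them to your environment.

    # Vagrantfile (sketch) - 4 CentOS 7 VMs with static IPs, keeping Vagrant's insecure key
    Vagrant.configure("2") do |config|
      config.vm.box = "centos7-custom"        # replace with your own box
      config.ssh.insert_key = false           # keep the generic insecure key (lab only)

      machines = {
        "swarm-master" => "192.168.69.21",
        "swarm-01"     => "192.168.69.22",
        "swarm-02"     => "192.168.69.23",
        "swarm-03"     => "192.168.69.24",
      }

      machines.each do |name, ip|
        config.vm.define name do |node|
          node.vm.hostname = name
          node.vm.network "private_network", ip: ip
        end
      end
    end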

Once the VMs are created, we’ll install the Docker Engine on each of them with docker-machine and its generic driver, and we’ll create a swarm master and the swarm nodes.

Before we can do this, we need a swarm token. Again, we’ll keep it simple and use the discovery service hosted on Docker Hub. There’s a catch: your virtual hosts will need an internet connection. But this should not be a problem (NAT or bridge the hosts). If you’re familiar with other discovery backends and have them readily available internally, do your thing.

Our cluster is waiting, and we need to generate that token for the discovery service. Any docker host will do: I’m sure you have plenty. Execute a docker run --rm swarm create and save the generated token. Done! Forget about this docker host now, we won’t need it anymore.
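A minimal sketch of that step (the token shown is the one used later in this post; yours will differ):

    # run on any Docker host: the swarm image generates a cluster token
    docker run --rm swarm create
    # -> 1a90da6257a81049afa00a90ff1370b9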

With that token, we’ll be able to provision each host in our cluster with docker-machine. Start with the swarm master.
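Something along these lines, assuming the master sits on 192.168.69.21 and is reachable as the vagrant user with the generic insecure key (adapt to your setup):

    docker-machine create -d generic \
      --generic-ip-address 192.168.69.21 \
      --generic-ssh-user vagrant \
      --generic-ssh-key ~/.vagrant.d/insecure_private_key \
      --swarm --swarm-master \
      --swarm-discovery token://1a90da6257a81049afa00a90ff1370b9 \
      swarm-master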

Did you notice the ssh key is the generic one? That is possible because of the ssh.insert_key = false entry in the Vagrantfile. This is only to avoid dealing with a specific key for each host every time. This is just a lab after all.

Anyway … export the environment variables we need to talk to our master, and do not forget --swarm or you’ll just be talking to a docker host and not a swarm master. Big difference!
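In practice it’s a one-liner:

    eval "$(docker-machine env --swarm swarm-master)"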

Check that your master is clearly identified:
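A quick way to check is docker-machine ls; trimmed, the output should look roughly like this:

    docker-machine ls
    # NAME           DRIVER    URL                         SWARM
    # swarm-master   generic   tcp://192.168.69.21:2376    swarm-master (master)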

LGTM, I guess we can continue by adding our swarm nodes. Here’s node 01. The 2 other nodes are created exactly the same way by incrementing the IP and the name.
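A sketch of the node creation, same assumptions as above (192.168.69.22 for swarm-01; note there is no --swarm-master this time):

    docker-machine create -d generic \
      --generic-ip-address 192.168.69.22 \
      --generic-ssh-user vagrant \
      --generic-ssh-key ~/.vagrant.d/insecure_private_key \
      --swarm \
      --swarm-discovery token://1a90da6257a81049afa00a90ff1370b9 \
      swarm-01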

Once everything is created, “talking” to the master (I insist), we can verify that we have 4 available nodes.
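docker info against the swarm endpoint lists the nodes; heavily trimmed, it should show something like this:

    docker info
    # Nodes: 4
    #  swarm-master: 192.168.69.21:2376
    #  swarm-01: 192.168.69.22:2376
    #  swarm-02: 192.168.69.23:2376
    #  swarm-03: 192.168.69.24:2376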

Our cluster seems to be ready to handle anything we’ll throw at it 😉 But there’s one more thing I’d like to do: remove the agent running on the master. I don’t want the master to be used as a standard execution node.

To see what is running where:
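docker ps against the swarm shows every running container with its node name prefixed, roughly like this (trimmed):

    docker ps
    # CONTAINER ID   IMAGE          COMMAND             ...   NAMES
    # d693c78d4aee   swarm:latest   "/swarm join ..."         swarm-master/swarm-agent
    # ...            swarm:latest   "/swarm join ..."         swarm-01/swarm-agent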

So the container I want to stop is d693c78d4aee: this is the swarm join ... container running on the master. If you’d prefer the master to be able to run stuff, leave things as they are. If you remove the agent from the master, your available nodes will shrink by one (the master) and you’ll end up with 3 nodes in the list.

In my case:
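A sketch of what that boils down to, still pointing at the swarm master (d693c78d4aee being the agent container identified above):

    docker stop d693c78d4aee
    docker rm d693c78d4aee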

Next. Let’s run busybox … somewhere …
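For example an interactive shell (a sketch; any image would do):

    docker run -it --rm busybox sh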

Somewhere in your swarm cluster, a node has downloaded the busybox image (from Docker Hub in my case), started it, and you should be in a shell right now. That’s cool, but where? Is it just random? Naaaah …

If you log on to the master with docker-machine ssh swarm-master and inspect the master container, you should see something along the following lines:
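Roughly like this (paths abridged, token illustrative; prefix docker with sudo if your user is not in the docker group):

    docker-machine ssh swarm-master
    docker ps --no-trunc
    # /swarm manage --tlsverify --tlscacert=/etc/docker/ca.pem \
    #   --tlscert=/etc/docker/server.pem --tlskey=/etc/docker/server-key.pem \
    #   -H tcp://0.0.0.0:3376 --strategy spread token://1a90da6257a81049afa00a90ff1370b9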

What do we see?

  • manage: oh boy, he’s the boss, hide!
  • docker-machine has taken care of the certificates and all communications are encrypted
  • we can reach the API on port 3376
  • the default strategy to handle containers in this cluster is spread

In fact, you can pick one of 3 strategies:

  • spread: the node’s rank is computed according to its available CPU, its RAM and the number of containers it is running
  • binpack: this strategy uses the same criteria but causes swarm to optimize for the node which is most packed
  • random: no computation is done. A node is selected at random

Forget about “random”, which is used primarily for debugging. Keep this in mind:

  • spread = distribute containers on all swarm nodes
  • binpack = pack as many containers as possible on a node, then continue to another one, etc …
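If spread is not what you want, the strategy is chosen when the master is started. A hedged sketch of the two usual ways to set it (docker-machine exposes it as --swarm-strategy, the swarm image as --strategy):

    # at creation time, through docker-machine (other flags elided):
    docker-machine create -d generic ... --swarm --swarm-master --swarm-strategy binpack swarm-master

    # or when running the manage container yourself:
    docker run -d -p 3376:2375 swarm manage --strategy binpack token://<your-token>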

Now we know that things are not random 😉

That’s all fine and dandy but you need moarrr! All right, we need to look at filters then.

An example with a constraint is going to shed some light on what can be done with these filters. Let’s create 2 labels we’ll use as constraints in a moment: QA and PROD. What? What? … On the same cluster? Huh? Sure! Why not?

Start by adding these shiny labels on swarm-01 and swarm-02. To do this, on swarm-01, edit the /etc/systemd/system/docker.service file and add --label environment=QA to the ExecStart line. The same applies to swarm-02 with --label environment=PROD.
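A sketch of the relevant line on swarm-01; the rest of the ExecStart line depends on how docker-machine provisioned the engine, hence the ellipsis:

    # /etc/systemd/system/docker.service (excerpt, sketch)
    ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock ... --label environment=QA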

Before we continue, make sure you understand how to manually restart the swarm agent and the docker service. It’s quite important, because while you can use docker-machine to insert labels at creation, knowing what is going on under the hood will give you much more flexibility. And right now, docker machine is not going to be of any help. You’re at the helm!

The swarm agent command we will need is:
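Based on the standalone swarm documentation, something like this (adjust IP, port and token to your node; docker-machine sets the engines up with TLS on 2376):

    docker run -d swarm join --advertise=192.168.69.22:2376 token://1a90da6257a81049afa00a90ff1370b9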

To restart the docker service (depending on your distribution), issue a systemctl daemon-reload && systemctl restart docker

We are all set: stop the agent on swarm-01, remove it, and restart the docker service. And sure enough, swarm-01 disappears from the list of swarm nodes: docker run --rm swarm list token://1a90da6257a81049afa00a90ff1370b9.
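On swarm-01, that sequence would look roughly like this (the agent container ID is a placeholder; spot yours with docker ps):

    docker-machine ssh swarm-01
    docker ps                                   # find the swarm join container
    docker stop <agent-container-id>
    docker rm <agent-container-id>
    sudo systemctl daemon-reload && sudo systemctl restart docker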

If you do a docker info on swarm-01, you’ll see the label has appeared in the output. Start a new agent to join the cluster again. Talking to the swarm master, you should see your labels have magically appeared. Do the same for swarm-02.
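In the swarm master’s docker info, the node detail lines should carry the label, roughly like this (heavily trimmed):

    docker info
    #  swarm-01: 192.168.69.22:2376
    #   └ Labels: environment=QA, storagedriver=..., ...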

Cool, let’s move on.

To run a container in QA: docker run -e constraint:environment==QA hello-world
That container has not been removed, so you can go check where it executed with docker ps -a and … yep, swarm-01.

Let’s run the same command with the constraint changed to PROD: docker run -e constraint:environment==PROD hello-world.
Check again … swarm-02.

Imagine you have 40 hosts in your cluster; you could now have 20 in QA and 20 in PROD: you have effectively split your cluster into 2 sub-clusters.

Great, we know constraints are one way to filter what goes where. There are built-in filters as well. One example is the port filter: you cannot expose the same port twice on the same swarm host. Thus, if you start the same service several times on the same port(s), it’ll start each time on a different host. Let’s say we want to monitor our swarm hosts with cAdvisor. What happens if we start cAdvisor 3 times on port 8080?

That’s right! A cAdvisor image will be retrieved from Docker Hub and started on each host. If you try again, you’ll get this error: Error response from daemon: unable to find a node with port 8080 available. Self-explanatory, hey?

Give it a try:
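A sketch based on cAdvisor’s standard docker run invocation, executed three times against the swarm endpoint (unique names, so the scheduler only has the port to argue about):

    for i in 1 2 3; do
      docker run -d --name cadvisor-$i -p 8080:8080 \
        --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw \
        --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro \
        google/cadvisor
    done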

If you check out http://192.168.69.22:8080 (and the same on .23 and .24), you’ll land on each node’s respective cAdvisor page.

There you go, a quick and easy overview: have fun with your own swarm.

Olivier Robert