Dec 30, 2017

Docker-Swarm: Running with more than one manager-node / How to add a secondary manager or multiple managers

Adding additional managers to a Docker swarm is very easy. I just followed the documentation:

On the manager I ran the following command:
alpine:~# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-5nrni6ksqnkljvqpp59m5gfh1 192.168.178.46:2377
And on the node:
node03:~# docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-5nrni6ksqnkljvqpp59m5gfh1 192.168.178.46:2377

This node joined a swarm as a manager.
The new state of the cluster shows:
alpine:~# docker  info | grep -A 23 Swarm
Swarm: active
 NodeID: wy1z8jxmr1cyupdqgkm6lxhe2
 Is Manager: true
 ClusterID: wkbjyxbcuohgdqc3amhl9umlq
 Managers: 2
 Nodes: 4

 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.178.46
 Manager Addresses:
  192.168.178.46:2377
  192.168.178.50:2377
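
With two managers the Raft quorum is two, so losing either of them makes the swarm unmanageable; for real fault tolerance an odd number of managers (three or more) is recommended. To see which manager currently is the leader, docker node ls can be used (not part of the output above; on a second manager the MANAGER STATUS column shows "Reachable" instead of "Leader"):
alpine:~# docker node ls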
If you want to use a new token, just run:
alpine:~# docker  swarm join-token --rotate  manager
Successfully rotated manager join token.

To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-cgy143mwghbfoozt0b8li2587 192.168.178.46:2377
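
An alternative to the join token: an existing worker can also be promoted to a manager directly from the current manager. Just a sketch (assuming node03 is the hostname shown in docker node ls):
alpine:~# docker node promote node03
# or, equivalently:
alpine:~# docker node update --role manager node03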


Dec 13, 2017

Docker-CE: Setting up a tomcat in less than a minute and running your JSP...

Last time I wrote about the processes and files of a Docker container hosting the Docker example myapp.py.
The next step was to run a Tomcat with a small application inside.

This can be done with these commands:
  1. Get tomcat from the docker library:
    # docker pull tomcat
    Using default tag: latest
    latest: Pulling from library/tomcat
    3e17c6eae66c: Pull complete
    fdfb54153de7: Pull complete
    a4ca6e73242a: Pull complete
    5161d2a139e2: Pull complete
    7659b327f9ec: Pull complete
    ce47e69f11ad: Pull complete
    7d946df3a3d8: Pull complete
    a57cba73d797: Pull complete
    7e6f56cdb523: Pull complete
    06e4787b3ca5: Pull complete
    c760cb7e43cb: Pull complete
    ad6d0815df5c: Pull complete
    d7e1da09fc22: Pull complete
    Digest: sha256:a069d49c414bad0d98f5a4d7f9b7fdd318ccc451dc535084480c8aead68272d2
    Status: Downloaded newer image for tomcat:latest
  2. Test the tomcat:
    # docker run -p 4000:8080 tomcat
    20-Nov-2017 20:38:11.754 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.23
    20-Nov-2017 20:38:11.762 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Sep 28 2017 10:30:11 UTC
    20-Nov-2017 20:38:11.762 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number:         8.5.23.0
    ....
    ....
    org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["http-nio-8080"]
    20-Nov-2017 20:41:59.928 INFO [Thread-5] org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["ajp-nio-8009"]
     

This was easy.
Now create your JSP and run it:
  1. create a directory
    mkdir tomcatstatus
  2. create a JSP inside this directory
    vi tomcatstatus/index.jsp
    and insert the following content:
    <%@ page language="java" import="java.util.*" %>


    Host name : <%=java.net.InetAddress.getLocalHost().getHostName() %>

    Server Version: <%= application.getServerInfo() %>

    Servlet Version: <%= application.getMajorVersion() %>.<%= application.getMinorVersion() %>
    JSP Version: <%= JspFactory.getDefaultFactory().getEngineInfo().getSpecificationVersion() %>
  3. Run docker
    docker run -v /home/schroff/tomcatstatus:/usr/local/tomcat/webapps/status -p 4000:8080 tomcat
  4. Connect to port 4000:
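    For example (just a sketch, assuming the container from step 3 is still running and Tomcat deploys the mounted directory as the webapp "status"):
    # curl http://localhost:4000/status/
    This should return the index.jsp output with host name, server version, servlet version and JSP version.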

Wow - I am really stunned how fast the Tomcat was set up and the JSP was launched. No installation of Java (ok, this is only an apt install) and no setup procedure for Apache Tomcat (ok, this is just a tar -zxvf). But if I want to run more than one installation, Docker is faster than repeating the installation or copying files. Really cool!
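
If the JSP should travel with the image instead of being bind-mounted, a small Dockerfile on top of the tomcat image should do the trick (just a sketch, not part of the original setup; the image name statusjsp is made up):
FROM tomcat
# deploy the JSP as a webapp named "status"
COPY tomcatstatus/ /usr/local/tomcat/webapps/status/
Build and run it:
# docker build -t statusjsp .
# docker run -p 4000:8080 statusjsp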


(One thing I forgot: the installation of Docker onto your server.)


Dec 10, 2017

Docker-Swarm: Running a minimal webserver in a swarm

In my last posting about Docker swarm I created a swarm on VirtualBox with Alpine Linux and an HDD footprint of 290MB per node.
There are some tutorials out there running an nginx or a Java webserver in a container, but >100MB for each node seems far too much for my tests.

So I decided to create an application which listens on port 8080 with netcat. I created a directory ncweb with ncweb.sh:
ncweb# cat ncweb.sh
#!/bin/bash
sed -i  's/Hostname:.*/Hostname: '$HOSTNAME'/g' index.html
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html;}  | nc  -l -p 8080; done >> logfile 2>&1
and an index.html:
ncweb# cat index.html
<html>
  <head>
    <title>"Hello, World"</title>
  </head>
  <body bgcolor=white>
    <table border="0" cellpadding="10">
      <tr>
        <td>
          <h1>"Hello, World"</h1>
        </td>
      </tr>
    </table>
  </body>
</html>
The Dockerfile looks like this:
# cat Dockerfile
FROM alpine
WORKDIR /tmp
RUN mkdir ncweb
ADD .  /tmp
ENTRYPOINT [ "/tmp/ncweb.sh" ]
After that I built the image:
ncweb# docker build -t ncweb:0.2 .
Sending build context to Docker daemon  4.096kB
Step 1/5 : FROM alpine
 ---> 053cde6e8953
Step 2/5 : WORKDIR /tmp
 ---> Using cache
 ---> c3e11ac3773b
Step 3/5 : RUN mkdir ncweb
 ---> Using cache
 ---> d9e634c03cd1
Step 4/5 : ADD .  /tmp
 ---> 95f022aacc1c
Step 5/5 : ENTRYPOINT [ "/tmp/ncweb.sh" ]
 ---> Running in c1a9e8cee248
 ---> 6880521f68e4
Removing intermediate container c1a9e8cee248
Successfully built 6880521f68e4
Successfully tagged ncweb:0.2
And let's do a test without docker swarm:
ncweb# docker run -p 8080:8080  ncweb:0.2
But running this as a service fails:
# docker service create --replicas=1 --name myweb ncweb:0.2
image ncweb:0.2 could not be accessed on a registry to record
its digest. Each node will access ncweb:0.2 independently,
possibly leading to different nodes running different
versions of the image.
n0himwum38bqzd8ob1vf8zhip
overall progress: 0 out of 1 tasks
1/1: No such image: ncweb:0.2
^COperation continuing in background.
Use `docker service ps n0himwum38bqzd8ob1vf8zhip` to check progress.
and:
# docker service ps n0himwum38bqzd8ob1vf8zhip
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                 ERROR                        PORTS
8tjsuae9jv8o        myweb.1             ncweb:0.2           node01              Ready               Rejected 3 seconds ago        "No such image: ncweb:0.2"  
qp24ssxb5bl5         \_ myweb.1         ncweb:0.2           alpine              Shutdown            Failed 36 seconds ago         "task: non-zero exit (2)"   
zwfgcatk7zyi         \_ myweb.1         ncweb:0.2           node01              Shutdown            Rejected about a minute ago   "No such image: ncweb:0.2"  
v4a7zkb85yd4         \_ myweb.1         ncweb:0.2           node01              Shutdown            Rejected about a minute ago   "No such image: ncweb:0.2"  
ycjftjusv484         \_ myweb.1         ncweb:0.2           node01              Shutdown            Rejected about a minute ago   "No such image: ncweb:0.2"  
 
# docker service rm n0himwum38bqzd8ob1vf8zhip
n0himwum38bqzd8ob1vf8zhip
The error "No such image..." is happening, because the container ncweb is only in the repository of my master.
The easiest way for my test environment is to distribute the local image to all nodes:
# docker save ncweb:0.3 | ssh 192.168.178.47 docker load
The authenticity of host '192.168.178.47 (192.168.178.47)' can't be established.
ECDSA key fingerprint is SHA256:2/8O/SE1fGJ4f5bAQls5txrKMbqZfMmiZ+Tha/WFKxA.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.178.47' (ECDSA) to the list of known hosts.
root@192.168.178.47's password:
Loaded image: ncweb:0.3
(I have to distribute an SSH key to all nodes.)
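
With the SSH keys in place, the save/load trick can be put into a small loop over all nodes (just a sketch; the IP list is an example, only .47 appears above):
for node in 192.168.178.47 192.168.178.48; do
  docker save ncweb:0.3 | ssh root@$node docker load
done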

and then:
alpine:~/ncweb# docker service create --replicas=1 --name myweb ncweb:0.3
image ncweb:0.3 could not be accessed on a registry to record
its digest. Each node will access ncweb:0.3 independently,
possibly leading to different nodes running different
versions of the image.
# docker service ps myweb
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
in97xlc7azcw        myweb.1             ncweb:0.3           node01              Running             Running 8 seconds ago      
So my nc webserver runs on node01, but I cannot access it there because I did not define any port mappings ;-(

But finally this command did the job:
# docker service create --replicas=1 --name myweb --publish 8080:8080  ncweb:0.3
image ncweb:0.3 could not be accessed on a registry to record
its digest. Each node will access ncweb:0.3 independently,
possibly leading to different nodes running different
versions of the image.
runf8u9r8719sk13mkf8hh8ec
overall progress: 1 out of 1 tasks
1/1: running  
verify: Service converged
The hostname corresponds to the docker container id on node01:
node01:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
6c9434b08082        ncweb:0.3           "/tmp/ncweb.sh"     37 minutes ago      Up 37 minutes                           myweb.1.lqiyb34cuxxme2141ahsg8neu
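
Because the port is published via the swarm routing mesh, port 8080 should now be reachable on every node of the swarm, not only on node01 where the container runs. A quick check from the manager (sketch):
alpine:~# curl http://192.168.178.46:8080/
This should return the "Hello, World" page served by the nc loop.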


Remaining open points:
  • Is it possible to do a failback or to limit the number of instances of a service per node?
  • How to get a load-balancing mechanism for a server application?
    (Is a load balancer needed?)
  • What happens if the manager fails or is shut down?
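
A cleaner follow-up to the registry warning above would be to run a local registry and let all nodes pull the image from there instead of copying it around. Roughly like this (untested sketch; since there is no TLS, each node would probably also need the registry listed under insecure-registries in its daemon configuration):
# start a registry on the manager and push the image
docker run -d -p 5000:5000 --name registry registry:2
docker tag ncweb:0.3 192.168.178.46:5000/ncweb:0.3
docker push 192.168.178.46:5000/ncweb:0.3
# create the service from the registry image
docker service create --replicas=1 --name myweb --publish 8080:8080 192.168.178.46:5000/ncweb:0.3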

Dec 5, 2017

Docker-Swarm: One manager, two nodes with Alpine Linux

After creating an Alpine Linux VM inside VirtualBox (chosen because of its small disk footprint: Alpine Linux 170MB, with Docker 280MB) and adding Docker, I performed the following steps to create a Docker swarm:
  • cloning the VM twice
  • assigning a static IP to the manager node (see the sketch after this list)
  • creating new MACs for the network interface cards on the nodes
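
The static IP on Alpine can be set in /etc/network/interfaces; a minimal sketch for the manager address used below (interface name, netmask and gateway are assumptions):
auto eth0
iface eth0 inet static
    address 192.168.178.46
    netmask 255.255.255.0
    gateway 192.168.178.1
followed by a restart of the networking service (/etc/init.d/networking restart).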


Then I followed the tutorial https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/ but without running the docker-machine commands, because I have 3 VMs and do not want to run the nodes on top of Docker.

manager:
alpine:~# docker swarm init --advertise-addr 192.168.178.46
Swarm initialized: current node (wy1z8jxmr1cyupdqgkm6lxhe2) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-0yfr1eu5u66z8pinweisltmci 192.168.178.46:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
nodes:
# docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-0yfr1eu5u66z8pinweisltmci 192.168.178.46:2377
This node joined a swarm as a worker.

And then a check on the manager:
alpine:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wy1z8jxmr1cyupdqgkm6lxhe2 *   alpine              Ready               Active              Leader
pusf5o5buetjqrsmx3kzusbyt     node01              Ready               Active             
io3z3b6nf8xbzkyzjq6sa7cuc     node02              Ready               Active             
Run a first job:
alpine:~# docker service create --replicas 1 --name helloworld alpine ping 192.168.178.1
rsn6igby4f6d7uuy8eny7sbfb
overall progress: 1 out of 1 tasks
1/1: running  
verify: Service converged
On my manager I get no output for "docker ps", but this is because the service is not running here:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
wrrobalt4oe7        helloworld.1        alpine:latest       node01              Running             Running 2 minutes ago                      
node01 shows:
node01:~# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
40c5e9b2ffbc        alpine:latest       "ping 192.168.178.1"   3 minutes ago       Up 3 minutes                            helloworld.1.wrrobalt4oe7mrbhxjlweuxgk
If I kill the ping process, it is immediately restarted:
node01:~# ps aux|grep ping
 2457 root       0:00 ping 192.168.178.1
 2597 root       0:00 grep ping
node01:~# kill 2597
node01:~# ps aux|grep ping
 2457 root       0:00 ping 192.168.178.1
 2600 root       0:00 grep ping

A scale up is no problem:
alpine:~# docker service create --replicas 2 --name helloworld alpine ping 192.168.178.1
3lrdqdpjuqml6creswdcqpn2p
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
616scw68s8bv        helloworld.1        alpine:latest       node02              Running             Running 8 seconds ago                      
n8ovvsw0m4id        helloworld.2        alpine:latest       node01              Running             Running 8 seconds ago                      
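
(Scaling an already existing service without recreating it should also work, e.g. with:
alpine:~# docker service scale helloworld=2
or docker service update --replicas 2 helloworld.)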
And a shutdown of node02 is no problem:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
bne2enbkabfo        helloworld.1        alpine:latest       alpine              Ready               Ready 2 seconds ago                             
616scw68s8bv         \_ helloworld.1    alpine:latest       node02              Shutdown            Running 17 seconds ago                          
n8ovvsw0m4id        helloworld.2        alpine:latest       node01              Running             Running about a minute ago          


After switching off node01, both services are running on the remaining manager:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
bne2enbkabfo        helloworld.1        alpine:latest       alpine              Running             Running about a minute ago                      
616scw68s8bv         \_ helloworld.1    alpine:latest       node02              Shutdown            Running about a minute ago                      
pd8dfp4133yw        helloworld.2        alpine:latest       alpine              Running             Running 2 seconds ago                           
n8ovvsw0m4id         \_ helloworld.2    alpine:latest       node01              Shutdown            Running 2 minutes ago              
So failover is working.
But failback does not occur. After switching on node01 again, the services remain on the manager:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE               ERROR                         PORTS
bne2enbkabfo        helloworld.1        alpine:latest       alpine              Running             Running 4 minutes ago                                    
616scw68s8bv         \_ helloworld.1    alpine:latest       node02              Shutdown            Running 4 minutes ago                                    
pd8dfp4133yw        helloworld.2        alpine:latest       alpine              Running             Running 2 minutes ago                                    
n8ovvsw0m4id         \_ helloworld.2    alpine:latest       node01              Shutdown            Failed about a minute ago   "task: non-zero exit (255)"  
alpine:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wy1z8jxmr1cyupdqgkm6lxhe2 *   alpine              Ready               Active              Leader
pusf5o5buetjqrsmx3kzusbyt     node01              Ready               Active             
io3z3b6nf8xbzkyzjq6sa7cuc     node02              Down                Active             
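
As far as I know swarm does not move running tasks back on its own; one thing to try (not tested here) is forcing a rolling update, which should redistribute the tasks over the nodes that are available again:
alpine:~# docker service update --force helloworld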


Last thing: How to stop the service?
alpine:~# docker service rm  helloworld
helloworld
alpine:~# docker service ps helloworld
no such service: helloworld

Remaining open points:
  • Is it possible to do a failback or to limit the number of instances of a service per node?
  • How to do this with a server application?
    (Is a load balancer needed?)
  • What happens if the manager fails or is shut down?