
May 26, 2022

Review: Securing containers & cloud for dummies

Securing Containers & Cloud (provided by Sysdig) is a booklet with 42 pages and 7 chapters. Like most of the "for dummies" series, the last chapter is a summary with ten considerations.

But let's start from the beginning:
Chapter one, "understanding cloud security", is a really nice abstract. Here are some of the topics you should be aware of: "overprivileged identities", "visibility over cloud assets", "leaving out IT", "former employees, one-time users and guest accounts that are left active", ... Knowing that, the following proposal is made: "to detect and stop cyber threats [..] first step is to see them". Therefore a single event store should be used, and an open-source solution, because of validation and transparency.
The second chapter is named "securing infrastructure as code (IaC)". The typical arguments for IaC are speed, scalability, resilience and reproducibility, but what about security? IaC is created by the developers, and this code has to be checked just like the application sources. And even if IaC is checked, configuration templates in a CI/CD pipeline will suffer from drift. "Policy as code (PaC) allows you to leverage a shared policy model across multiple IaC, cloud, and Kubernetes environments. Not only does PaC provide consistency and strengthen security, but also it saves time and allows you to scale faster."
"Preventing Vulnerabilites" is the third chapter. Many images in production contain patchable vulnerabilites, which should be patched. So the selecting of container images from every source (including DockerHub) without scanning them is not a good idea. One subsection here is "Automate vulnerability scanning in the CI/CD pipeline". I think this is something you should read in the booklet in detail.
After scanning for vulnerabilities, the next chapter is about detecting and responding to threats. This chapter is only about 3 pages, and it is more of an appetizer for Falco, which is a solution from Sysdig.
The sixth chapter, named "Targeting monitoring and troubleshooting issues", is a plea for open source. "Avoiding Vendor Lock-In" is key to success, at least from the perspective of the authors.
As mentioned at the beginning, the last chapter is a ten point summary of the topic. This is a quick checklist you can use.
 

All in all a very good high-level introduction to "Securing Containers & Cloud". I recommend that all DevOps engineers and developers spend half an hour reading this booklet.

Feb 6, 2021

Microk8s: Running KUARD (Kubernetes Up And Running Demo) on a small cluster

There is a cool demo application which you can use to check your Kubernetes settings. This application is called kuard (https://github.com/kubernetes-up-and-running/kuard):

To get it running in a way that you can uninstall it easily, run the following commands:

# kubectl create namespace kuard
namespace/kuard created
You can deploy it via "kubectl run", or create a YAML with "kubectl run ... --dry-run=client --output=yaml" and deploy it via "kubectl apply":

# kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080 --dry-run=client --output=yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kuard
  name: kuard
  namespace: kuard
spec:
  containers:
  - image: gcr.io/kuar-demo/kuard-arm64:3
    name: kuard
    ports:
    - containerPort: 8080
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
or 

# kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080

To expose it in your cluster run:

# kubectl expose pod kuard --type=NodePort --port=8080 -n kuard
service/kuard exposed

And then check the port via

# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   5          3d20h

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d20h

The number after "8080:" is the port you can use from outside (http://zigbee:32047/).

With kuard you can run DNS checks from within the pod or browse the filesystem to check things... You can even set the status for the liveness and readiness probes.
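And since everything was created inside its own namespace, you can remove the whole demo again with a single command:

# kubectl delete namespace kuard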



Dec 23, 2020

MicroK8s: more problems - log flooding

After getting my Kubernetes nodes running on Ubuntu's MicroK8s, I got thousands of these messages in my syslog:

Dec 22 21:15:00 ubuntu microk8s.daemon-kubelet[10978]: W1122 21:15:00.735176   10978 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///var/snap/microk8s/common/run/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///var/snap/microk8s/common/run/containerd.sock: timeout". Reconnecting...

Dec 22 21:15:00 ubuntu microk8s.daemon-kubelet[10978]: W1122 21:15:00.737524   10978 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///var/snap/microk8s/common/run/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///var/snap/microk8s/common/run/containerd.sock: timeout". Reconnecting...

Really annoying, and I found no real solution for this problem. But there is an easy way to work around it:

snap disable microk8s
snap enable microk8s
Run this on both nodes and the problem is gone (I think rebooting will do the same job).
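To verify that the flood has really stopped, you can watch the syslog for a while (assuming the default syslog location):

tail -f /var/log/syslog | grep createTransport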



Dec 12, 2020

My start to a local kubernetes cluster: microK8s @ubuntu

After playing around with Zigbee on a Raspberry Pi, I decided to build up my own Kubernetes cluster at home. I have two Raspberry Pis running Ubuntu Server, so I wanted to go in this direction:


The start is very easy. Just follow the steps shown here:

https://microk8s.io/docs

But after adding the second node I got the following result:

root@zigbee:/home/ubuntu/kubernetes# microk8s kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
ubuntu   NotReady   <none>   98s   v1.19.3-34+b9e8e732a07cb6
zigbee   NotReady   <none>   37m   v1.19.3-34+b9e8e732a07cb6
Hmmm.

The best way to debug this problem is

# microk8s inspect
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-control-plane-kicker is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting juju
  Inspect Juju
Inspecting kubeflow
  Inspect Kubeflow

# Warning: iptables-legacy tables present, use iptables-legacy to see them
WARNING:  Docker is installed.
File "/etc/docker/daemon.json" does not exist.
You should create it and add the following lines:
{
    "insecure-registries" : ["localhost:32000"]
}
and then restart docker with: sudo systemctl restart docker
WARNING:  The memory cgroup is not enabled.
The cluster may not be functioning properly. Please ensure cgroups are enabled
See for example: https://microk8s.io/docs/install-alternatives#heading--arm
Building the report tarball
  Report tarball is at /var/snap/microk8s/1794/inspection-report-20201212_194335.tar.gz
And as you can see: this contains the solution!
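For reference, the fix is a direct transcription of the warning above - on each node:

# cat > /etc/docker/daemon.json <<'EOF'
{
    "insecure-registries" : ["localhost:32000"]
}
EOF
# systemctl restart docker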

After adding the /etc/docker/daemon.json, everything went fine:

root@zigbee:~# kubectl get nodes 
NAME     STATUS   ROLES    AGE    VERSION
ubuntu   Ready    <none>   46h    v1.19.3-34+b9e8e732a07cb6
zigbee   Ready    <none>   2d3h   v1.19.3-34+b9e8e732a07cb6

Dec 11, 2020

MicroK8s: Dashboard & RBAC

If you want to access your dashboard and you have enabled RBAC (as shown here), you will get these errors if you follow the default manual (https://microk8s.io/docs/addon-dashboard):

secrets is forbidden: User "system:serviceaccount:default:default" cannot list resource "secrets" in API group "" in the namespace "default"
persistentvolumeclaims is forbidden: User "system:serviceaccount:default:default" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "default"
configmaps is forbidden: User "system:serviceaccount:default:default" cannot list resource "configmaps" in API group "" in the namespace "default"
services is forbidden: User "system:serviceaccount:default:default" cannot list resource "services" in API group "" in the namespace "default"
statefulsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "statefulsets" in API group "apps" in the namespace "default"
ingresses.extensions is forbidden: User "system:serviceaccount:default:default" cannot list resource "ingresses" in API group "extensions" in the namespace "default"
replicationcontrollers is forbidden: User "system:serviceaccount:default:default" cannot list resource "replicationcontrollers" in API group "" in the namespace "default"
jobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list resource "jobs" in API group "batch" in the namespace "default"
replicasets.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "replicasets" in API group "apps" in the namespace "default"
deployments.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "deployments" in API group "apps" in the namespace "default"
events is forbidden: User "system:serviceaccount:default:default" cannot list resource "events" in API group "" in the namespace "default"
pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "default"
daemonsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "daemonsets" in API group "apps" in the namespace "default"
cronjobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list resource "cronjobs" in API group "batch" in the namespace "default"
namespaces is forbidden: User "system:serviceaccount:default:default" cannot list resource "namespaces" in API group "" at the cluster scope
To get the right bearer token you have to do this:

export K8S_USER="system:serviceaccount:default:default"
export NAMESPACE="default"
export BINDING="defaultbinding"
export ROLE="defaultrole"
kubectl create clusterrole $ROLE  --verb="*"  --resource="*.*"    
kubectl create rolebinding $BINDING --clusterrole=$ROLE --user=$K8S_USER -n $NAMESPACE
kubectl -n ${NAMESPACE} describe secret $(kubectl -n ${NAMESPACE} get secret | grep default-token | awk '{print $1}') | grep token: | awk '{print $2}'

(create a role, add a role binding and then get the token)

But there is still one error:

To fix this, you have to add the cluster-admin role to this account (if you really want cluster-wide permissions):

kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=$K8S_USER
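You can verify the permissions directly with impersonation, without reloading the dashboard:

kubectl auth can-i list secrets -n default --as=$K8S_USER
kubectl auth can-i list namespaces --as=$K8S_USER

Both should answer "yes" once the bindings are in place.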

Dec 2, 2020

Kubernetes: Rights & Roles with kubectl and RBAC - How to restrict kubectl for a user to a namespace

Playing around with my MicroK8s, I was thinking about restricting access to the default namespace. Why?

Every command adds something, so your default namespace gets polluted more and more, and cleaning up might be a lot of work.

But:

There is neither a HOWTO nor a quickstart for this. Everything you can find is:

https://kubernetes.io/docs/reference/access-authn-authz/rbac/

After reading this very detailed article you know a lot of things, but for restricting kubectl you are as smart as before.

One thing I learned from this article: you do not have to use YAML files - everything can be done with commands and their options (I do not like YAML, so this was a very important insight for me).

In the end it is very easy:

export K8S_USER="ateamuser"
export NAMESPACE="ateam"
export BINDING="ateambinding"
export ROLE="ateamrole"
kubectl create namespace $NAMESPACE
kubectl label namespaces $NAMESPACE team=a
kubectl create clusterrole $ROLE  --verb="*"  --resource="*.*"
kubectl create rolebinding $BINDING --clusterrole=$ROLE --user=$K8S_USER -n $NAMESPACE
kubectl create serviceaccount $K8S_USER -n $NAMESPACE
kubectl describe sa $K8S_USER -n $NAMESPACE
and just test it with:

root@zigbee:/home/ubuntu/kubernetes# kubectl get pods -n ateam  --as=ateamuser
NAME                  READY   STATUS    RESTARTS   AGE
web-96d5df5c8-cc9jv   1/1     Running   0          14m
root@zigbee:/home/ubuntu/kubernetes# kubectl get pods -n default  --as=ateamuser
Error from server (Forbidden): pods is forbidden: User "ateamuser" cannot list resource "pods" in API group "" in the namespace "default"
So there is no big script needed - but figuring out these commands was really a hard job...
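Cleaning up afterwards is short as well - the role binding and the service account live inside the namespace and disappear with it; only the clusterrole has to be deleted separately:

kubectl delete namespace $NAMESPACE
kubectl delete clusterrole $ROLE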

If you want to know how to restrict kubectl on a remote computer, please write a comment.

One last remark: In microK8s you enable RBAC with the command

microk8s.enable rbac

Check this with

microk8s.status
microk8s is running
high-availability: no
  datastore master nodes: 192.168.178.57:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    ha-cluster           # Configure high availability on the current node
    ingress              # Ingress controller for external access
    metrics-server       # K8s Metrics Server for API access to service metrics
    rbac                 # Role-Based Access Control for authorisation
  disabled:
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory



Nov 27, 2020

Kubernetes with microK8s: First steps to expose a service externally

At home I wanted to have my own Kubernetes cluster. I own two Raspberry Pis running Ubuntu, so I decided to install MicroK8s:

--> https://ubuntu.com/blog/what-can-you-do-with-microk8s

The installation is very well explained here:

https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#1-overview

 

BUT: I found no tutorial anywhere on how to run a container and expose its port in a way that it is reachable from other PCs, like on localhost.

So here we go:

kubectl create deployment web --image=nginx
kubectl expose deployment web --type=NodePort --port=80

After that just do:

# kubectl get all
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-96d5df5c8-5xvfc   1/1     Running   0          112s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP        2d5h
service/web          NodePort    10.152.183.66   <none>        80:32665/TCP   105s

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   1/1     1            1           112s

NAME                            DESIRED   CURRENT   READY   AGE
replicaset.apps/web-96d5df5c8   1         1         1       112s

On your Kubernetes node you can reach the service at 10.152.183.66:80.

To reach the nginx from another PC just use:

<yourkuberneteshost>:32665
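A quick check from any other PC (the host name is a placeholder, of course):

curl http://<yourkuberneteshost>:32665

If everything works, you get the nginx welcome page back.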



Sep 4, 2020

Review: Container Storage for Dummies

After reading Running Containers in Production for Dummies, this booklet fell into my hands:

Container Storage for Dummies is promoted by RedHat and consists of 5 chapters with 35 pages. 

The first chapter gives a short summary about containers. I liked this statement very much: "For example, a VM is like a heavy hammer. It assumes you’re running a server and that the server can host multiple applications. [...] And the container can run just about anywhere, even on a bare metal machine or in a VM — the container doesn’t care." The chapter ends with a motivation why containers need persistent storage: ephemeral containers are transient....
Chapter 2 has the title "Looking at Storage for and in Containers". The key argument here is: "Software-defined storage (SDS) separates storage hardware from storage controller software, enabling seamless portability across multiple forms of storage hardware. You can’t slice and dice storage using appliances or typical SAN/NAS as easily, readily, or quickly as you can with SDS." Both terms (storage for containers and storage in containers) are given a definition (just take a look inside the book ;-)).
In chapter 3 the authors want to convince the reader of the coolness of container-native storage with phrases like "Container-Native Storage Is the Next Sliced Bread". I think the main argument in this section is that RedHat contributes substantial parts to open source Kubernetes, so that RedHat's OpenShift container storage fits in there easily. And this is done by introducing the Container Storage Interface, which can be used by all storage providers.
Chapter 4 motivates why developers like container-native storage: because it can be easily managed without SAN administrators...
The last chapter closes with ten reasons to move to container-native storage: simplified management, more automation, scalability, ...

In summary, I think this book is a nice starting point on the problems of and possible solutions for container storage. It is a little bit disappointing that OpenShift is not really explained - but within only 35 pages this is simply impossible.
If you are working or starting to work with containers, I recommend you read this booklet - it is a good start into the container world!



Aug 25, 2020

Review: Running Containers in Production for dummies

Last evening I read the following booklet:

Here my review:

Chapter one gives, within 7 pages, an excellent introduction to "Containers & Orchestration Platforms". From Kubernetes and OpenShift/Docker Swarm to Amazon EKS - many services are described. In my opinion Azure AKS is missing, but it is clear that every hyperscaler will provide you with its managed Kubernetes environment. At the end even Apache Mesos is listed - which is out of scope for most of us.
"Building & Deploying Containers" is the headline of chapter 2, and a brief, solid description of these topics is given. If you want to know what all the buzzwords like CI/CD/CS, pipelines and container registries are about: read that chapter and you have a good starting point.

Nearly 33% of the book(let) is about Monitoring Containers (chapter 3). This points in the right direction. You have to know what your containers are doing and what you have to change with continuous delivery and continuous deployment. If you are running tens or hundreds of containers, the monitoring has to be automated as well - or you are lost. "A best practice for using containers is to isolate workloads by running only a single process per container. Placing a monitoring agent — which amounts to a second process or service — in each container to get visibility risks destroying a key value of containers: simplicity." - So building up monitoring is not as easy as it was on full-stack servers...

Chapter 4 is about security. It focuses on the following topics: implementing container limits against resource abuse, how to avoid outdated container images, management of secrets, and image authenticity.

The last chapter closes with "Ten Container Takeaways".

 

Within 43 pages, a really nice starting point to learn about the world of Docker and container orchestration.

Aug 10, 2019

Review @amazon: Skalierbare Container-Infrastrukturen: Das Handbuch für Administratoren und DevOps-Teams. Inkl. Container-Orchestrierung mit Docker, Rocket, Kubernetes, Rancher & Co.

Last week I read

Skalierbare Container-Infrastrukturen: Das Handbuch für Administratoren und DevOps-Teams. Inkl. Container-Orchestrierung mit Docker, Rocket, Kubernetes, Rancher & Co.

 

This book offers an all-round look at really all topics: from Docker (300 pages) to registries to alternative container platforms; then container clusters (including Pacemaker setups), and via Docker Swarm to the popular topic Kubernetes (also 300 pages). Finally there is Rancher, Mesosphere and Ceph. The sections on Docker and Kubernetes could each be separated out as their own books, as the descriptions are so extensive and worth knowing.

If you are interested, take a look at my review at amazon.de (like all my reviews: written in German ;-).

Jul 9, 2018

Docker: Networking with docker swarm: creating new subnets/gateways/...

In this posting I explained how to configure the network for a container on a single Docker machine.
If you want to do this for a Docker swarm, you have to change the commands, because the network driver "bridge" does not work in swarm mode
(for how to run a container inside a swarm, take a look here):

docker service create  --network mybrigde --name helloworld alpine ping 192.168.178.1

Error: No such network: mybrigde
You get this error even if you create your bridge on every node.

You have to create an overlay network instead:
alpine:~# docker network create --driver=overlay myoverlay
And then you can deploy your service like this:

alpine:~# docker service create --replicas 2 --network myoverlay  --name helloworld alpine ping 10.200.0.1
ij613sb26sfrgqknq8nnscqeg
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged


Verification:

alpine:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
6193ebb361fa        alpine:latest       "ping 10.200.0.1"   12 seconds ago      Up 11 seconds                           helloworld.1.9zoyocdpsdthuqmlk4efk96wz

alpine:~# docker logs 6193ebb361fa
PING 10.200.0.1 (10.200.0.1): 56 data bytes
64 bytes from 10.200.0.1: seq=0 ttl=64 time=0.344 ms
64 bytes from 10.200.0.1: seq=1 ttl=64 time=0.205 ms
64 bytes from 10.200.0.1: seq=2 ttl=64 time=0.184 ms
On each Docker swarm node you can now find:
node2:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5019841c7e25        bridge              bridge              local
6e795c964251        docker_gwbridge     bridge              local
9d9fa338a975        host                host                local
273dc1ddbc57        mybrigde            bridge              local
siiyo60iaojs        myoverlay           overlay             swarm
9ff819cf7ddb        none                null                local

and after removing the service (docker service rm helloworld) the overlay network "myoverlay" is gone again:
node2:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5019841c7e25        bridge              bridge              local
6e795c964251        docker_gwbridge     bridge              local
9d9fa338a975        host                host                local
273dc1ddbc57        mybrigde            bridge              local
9ff819cf7ddb        none                null                local


Jul 1, 2018

Docker: Network configuration: How to customize the network bridge and use my own subnet / netmask / CIDR

In my last posting I described how to configure the network settings of a container via the Docker command line:
--net none
--net bridge
Now I want to change the subnet from the standard 172.17.0.0/16 to another IP range.

There are some tutorials out there which say:

docker run -it  --net bridge  --fixed-cidr "10.100.0.0/24"  alpine /bin/ash
unknown flag: --fixed-cidr
but this does not work any more.

First you have to create a new network:
docker network create --driver=bridge --subnet=10.100.0.0/24  --gateway=10.100.0.1 mybrigde
6249c9a5f6c6f7e36e7e61009b9bde7ac338173d8e222e214a65b9793d36ad6c
Just do a verification:
docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
a00386e6a5bc        bridge              bridge              local
9365e4a966d0        docker_gwbridge     bridge              local
9d9fa338a975        host                host                local
6249c9a5f6c6        mybrigde            bridge              local
9ff819cf7ddb        none                null                local
and here we go:

alpine:~# docker run -it  --network  mybrigde  alpine /bin/ash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:64:00:02  
          inet addr:10.100.0.2  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1156 (1.1 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Removing the network bridge is easy:
docker network rm mybrigde


and narrowing the IP range can be done like this:
alpine:~# docker network create --driver=bridge --subnet=10.100.0.0/24  --ip-range=10.100.0.128/25 --gateway=10.100.0.1 mybrigde
b0ba1d963a6ca3097d083d4f5fd979e0fb0f91f81f1279132ae773c06f821396
Just do a check:
alpine:~# docker run -it  --network  mybrigde  alpine /bin/ash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:64:00:80  
          inet addr:10.100.0.128  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1016 (1016.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
The IP address of the container is set to 10.100.0.128, as configured with --ip-range 10.100.0.128/25.
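You can also pin a container to a fixed address inside such a user-defined network (a small sketch, the address is just an example; --ip does not work on the default bridge):

alpine:~# docker run -it --network mybrigde --ip 10.100.0.200 alpine /bin/ash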

If you are not familiar with the CIDR notation, just use this nice online tool (http://www.subnet-calculator.com/cidr.php):







Jun 19, 2018

Docker: Network configuration - none / bridge / hostname / dns entries

When starting your Docker container you can add some network configuration details via the command line.
Let's start with the easiest network setting:
docker run -it  --net none alpine /bin/ash
This setting starts the container without any connectivity to the network:
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
The default is --net bridge:
docker run -it  --net bridge alpine /bin/ash
With this setting your network access goes via a bridge on your host:
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
This is the docker0 interface on your docker server machine:
alpine:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:72:ae:ef brd ff:ff:ff:ff:ff:ff
    inet 192.168.178.46/24 brd 192.168.178.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe72:aeef/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:ba:e9:4d:6a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:baff:fee9:4d6a/64 scope link 
       valid_lft forever preferred_lft forever
4: docker_gwbridge: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:62:f0:92:82 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 scope global docker_gwbridge
       valid_lft forever preferred_lft forever
Finally you can configure your hostname and manipulate DNS entries:
# docker run -it  --net bridge  --hostname myhostname --add-host mygoogle.com:8.8.8.8  alpine /bin/ash
/ # hostname
myhostname
/ # nslookup mygoogle.com
nslookup: can't resolve '(null)': Name does not resolve

Name:      mygoogle.com
Address 1: 8.8.8.8 mygoogle.com
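In the same way you can override the resolver the container uses (8.8.8.8 here is just an example DNS server):

# docker run -it --net bridge --dns 8.8.8.8 alpine /bin/ash
/ # cat /etc/resolv.conf
nameserver 8.8.8.8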

Jun 16, 2018

Docker: How to limit memory usage

When starting your container you can limit the RAM usage simply by adding
-m 4M

(this limits the memory to 4 megabytes).

To check this simply run:

docker run -it -m=4M  --rm alpine /bin/ash

and on your docker machine check the following entry:

alpine:~# cat /sys/fs/cgroup/memory/docker/4ce0403caf667e7a6d446eac3820373aefafe4e73463357f680d7b38a392ba62/memory.limit_in_bytes 
4194304
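You can also ask Docker itself for the configured limit (in bytes; the container id is a placeholder):

alpine:~# docker inspect -f '{{.HostConfig.Memory}}' <container id>
4194304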


May 22, 2018

Docker: Lessons learned - Logging

After some time working with Docker, here are my experiences:

Some days ago I created my own container with a minimal web service.

Here is the ncweb.sh:
#!/bin/ash
# insert the container's hostname into the delivered page
sed -i  's/Hostname:.*/Hostname: '$HOSTNAME'/g' index.html
# serve index.html on port 8080 forever, appending each request to a local logfile
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html;}  | nc  -l -p 8080  2>&1 >> logfile; done
This is the Dockerfile:
FROM alpine

WORKDIR /tmp

RUN mkdir ncweb

ADD .  /tmp

ENTRYPOINT [ "/tmp/ncweb.sh" ]

After building the image
docker build -t ncweb:0.4 .
And starting the container:
docker run -d --name ncweb0.4 -p 8080:8080 ncweb:0.4
I was able to connect to the container and view the log:

To get the right command:
docker ps  |grep  ncweb:0.4 |awk '{print "docker exec -it "$1" ash"}'
and then use the output:
docker exec -it e4f9960fc8e5 ash
alpine:~/ncweb# docker exec -it e4f9960fc8e5 ash
/tmp # ls
Dockerfile  hexdump     index.html  logfile     ncweb       ncweb.sh
/tmp # cat logfile 
GET / HTTP/1.1
Host: 192.168.178.46:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en,de;q=0.7,en-US;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Cache-Control: max-age=0

Thu May 10 10:01:23 UTC 2018 request done
But this is not the right way.
If I change the ncweb.sh to
#!/bin/ash
sed -i  's/Hostname:.*/Hostname: '$HOSTNAME'/g' index.html
# same loop as before, but the requests now go to STDOUT instead of a logfile
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html;}  | nc  -l -p 8080 ;done
then you can do the following (after building a new container version):

alpine:~/ncweb# docker run -d -p 8080:8080 ncweb:0.5 --name ncweb0.5
9589f77fc289a3713354a365f8f08098279e6d0e893de99a0431d8fbd62c834a
alpine:~/ncweb# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
9589f77fc289        ncweb:0.5           "/tmp/ncweb.sh --n..."   8 seconds ago       Up 7 seconds        0.0.0.0:8080->8080/tcp   gracious_archimedes
(A side note: the COMMAND and NAMES columns show what happens when --name is placed after the image name - Docker passes it as an argument to the entrypoint and assigns a random name like gracious_archimedes instead. The flag belongs before the image.) To get the logs (which are written to STDOUT):

alpine:~/ncweb# docker logs -f 9589f77fc289
GET / HTTP/1.1
Host: 192.168.178.46:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en,de;q=0.7,en-US;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Cache-Control: max-age=0


Conclusion: It is better to use STDOUT than local logfiles. Or even better: use syslog or other central logging mechanisms.
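Docker supports central logging directly via logging drivers. A small sketch (the syslog address is just an example for a central syslog server):

docker run -d -p 8080:8080 --log-driver=syslog --log-opt syslog-address=udp://192.168.178.1:514 ncweb:0.5

Note that docker logs does not work for containers using the syslog driver - the output goes to the remote syslog only.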


Apr 17, 2018

Docker: How to build your own container with your own application

There are many tutorials out there on how to create a Docker container with an Apache web server or an nginx inside.
But you can hardly find a manual on how to build your own Docker container without pulling everything from a foreign repository.
Why should you not pull everything from foreign repositories?

You should read this article or this one:
But since each phase of the development pipeline is built at a different time, …
…you can’t be sure that the same version of each dependency in the development version also got into your production version.
That is a good point.

As considered in this article, you can put some more constraints into your Dockerfile:
FROM ubuntu:14.04
or even pin the exact image digest:
FROM ubuntu@sha256:0bf3461984f2fb18d237995e81faa657aff260a52a795367e6725f0617f7a56c
And that is the point where I tell you: create a process to build your own Docker containers from scratch and distribute them with your own repository, or copy them to all your Docker nodes (see here).

So here the steps to create your own container from a local directory (here ncweb):

# ls -l ncweb/
total 12
-rw-r--r--    1 root     root            90 Nov 26 10:06 Dockerfile
-rw-r--r--    1 root     root           255 Nov 26 11:29 index.html
-rw-r--r--    1 root     root             0 Nov 26 11:29 logfile
-rwxr--r--    1 root     root           176 Nov 26 11:29 ncweb.sh  
The Dockerfile contains the following:

alpine:~# cat ncweb/Dockerfile 
FROM alpine
WORKDIR /tmp
RUN mkdir ncweb
ADD .  /tmp
ENTRYPOINT [ "/tmp/ncweb.sh" ]
Into this directory you have to put everything you need, e.g. a complete JDK or your binaries or ...

And then change into this directory and build your container:

ncweb# docker build -t ncweb:0.2 .
The distribution to other docker nodes can be done like this:

# docker save ncweb:0.3 | ssh 192.168.178.47 docker load 
For more details read this posting.
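Alternatively, if you run your own registry (for example the MicroK8s registry addon on localhost:32000 - used here just as an assumed example), distribution is a tag and a push:

docker tag ncweb:0.3 localhost:32000/ncweb:0.3
docker push localhost:32000/ncweb:0.3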





Mar 22, 2018

Java 10 released: java with some enhancements for running inside docker

After the release of Java 9 in October 2017 with its new features, Oracle has now released Java 10.
A short summary of the new features can be found at https://blogs.oracle.com/java-platform-group/introducing-java-se-10
or you can take a look at the release notes:
http://www.oracle.com/technetwork/java/javase/10-relnote-issues-4108729.html#NewFeature

My favourites are:
  • JEP 307 Parallel Full GC for G1: improves G1 worst-case latencies by making the full GC parallel. The G1 garbage collector is designed to avoid full collections, but when the concurrent collections can't reclaim memory fast enough a fallback full GC will occur. The old implementation of the full GC for G1 used a single-threaded mark-sweep-compact algorithm. With JEP 307 the full GC has been parallelized and now uses the same number of parallel worker threads as the young and mixed collections.
and the docker enhancements:
  • JDK-8146115 Improve docker container detection and resource configuration usage
The JVM has been modified to be aware that it is running in a Docker container and will extract container specific configuration information instead of querying the operating system. The information being extracted is the number of CPUs and total memory that have been allocated to the container. The total number of CPUs available to the Java process is calculated from any specified cpu sets, cpu shares or cpu quotas. This support is only available on Linux-based platforms. This new support is enabled by default and can be disabled in the command line with the JVM option:
-XX:-UseContainerSupport
In addition, this change adds a JVM option that provides the ability to specify the number of CPUs that the JVM will use:
-XX:ActiveProcessorCount=count
This count overrides any other automatic CPU detection logic in the JVM.
  • JDK-8186248 Allow more flexibility in selecting Heap % of available RAM
Three new JVM options have been added to allow Docker container users to gain more fine grained control over the amount of system memory that will be used for the Java Heap:
-XX:InitialRAMPercentage
-XX:MaxRAMPercentage
-XX:MinRAMPercentage
These options replace the deprecated Fraction forms (-XX:InitialRAMFraction, -XX:MaxRAMFraction, and -XX:MinRAMFraction).
  • JDK-8179498 attach in Linux should be relative to /proc/pid/root and namespace aware
This bug fix corrects the attach mechanism when trying to attach from a host process to a Java process that is running in a Docker container.
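A quick way to see the container support in action (a sketch, assuming the openjdk:10 image is available):

docker run --rm -m 512M openjdk:10 java -XshowSettings:vm -version

The printed VM settings reflect the 512M container limit instead of the host's total memory.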

Mar 10, 2018

Docker-CE on Ubuntu 17.10 (Artful Aardvark) (2)

Three months ago I installed Docker on my Ubuntu 17.10. In those days there was no straightforward howto on docker.com.

Now the installation is listed on docker.com:


The installation manual can be found here.

root@zerberus:~# apt-get install apt-transport-https ca-certificates curl   software-properties-common
root@zerberus:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  OK
root@zerberus:~# apt-key fingerprint 0EBFCD88
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid        [ unknown ] Docker Release (CE deb) <docker@docker.com>
sub   rsa4096 2017-02-22 [S]
root@zerberus:~# add-apt-repository \
>    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
>    $(lsb_release -cs) \
>    stable"
root@zerberus:~# apt update

root@zerberus:~# apt install docker-ce
And then a check:
root@zerberus:~# docker version
Client:
 Version:    17.12.0-ce
 API version:    1.35
 Go version:    go1.9.2
 Git commit:    c97c6d6
 Built:    Wed Dec 27 20:11:14 2017
 OS/Arch:    linux/amd64

Server:
 Engine:
  Version:    17.12.0-ce
  API version:    1.35 (minimum version 1.12)
  Go version:    go1.9.2
  Git commit:    c97c6d6
  Built:    Wed Dec 27 20:09:47 2017
  OS/Arch:    linux/amd64
  Experimental:    false


Feb 11, 2018

Docker-Machine: how to create a docker vm on a remote virtualbox server

After doing some first steps with Docker, I wanted to test docker-swarm. Because of the limited resources of my notebook, I was looking for a Linux with a minimal footprint. In the context of setting up VMs for docker-swarm I found a lot of articles about doing that with the tool docker-machine.
It sounds like this tool can create VMs with just one command (here is the documentation).

So let's give it a try:
(You have to install docker-machine first, but you do not need to install docker itself)
~$ docker-machine create --driver virtualbox test
Creating CA: /home/schroff/.docker/machine/certs/ca.pem
Creating client certificate: /home/schroff/.docker/machine/certs/cert.pem
Running pre-create checks...
(test) Image cache directory does not exist, creating it at /home/schroff/.docker/machine/cache...
(test) No default Boot2Docker ISO found locally, downloading the latest release...
(test) Latest release for github.com/boot2docker/boot2docker is v17.11.0-ce
(test) Downloading /home/schroff/.docker/machine/cache/boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v17.11.0-ce/boot2docker.iso...
(test) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
Creating machine...
(test) Copying /home/schroff/.docker/machine/cache/boot2docker.iso to /home/schroff/.docker/machine/machines/test/boot2docker.iso...
(test) Creating VirtualBox VM...
(test) Creating SSH key...
(test) Starting the VM...
(test) Check network to re-create if needed...
(test) Found a new host-only adapter: "vboxnet0"
(test) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env test
Wow.
After this command a new machine shows up inside my VirtualBox with 1 GB RAM, 20 GB HDD (dynamically allocated) and 2 network adapters (1x NAT, 1x host-only).
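The last line of the output already tells you how to use the new VM - point your local Docker client at it:

~$ eval $(docker-machine env test)
~$ docker ps

(docker-machine env prints the DOCKER_HOST and certificate variables; eval loads them into the current shell.)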




But it is not possible to create VMs on a remote VirtualBox server: the CLI does not allow specifying a remote server IP.

For some other environments, however, it is possible to deploy VMs on a remote site, e.g.:

--vmwarevsphere-vcenter: IP/hostname for vCenter (or ESXi if connecting directly to a single host)

Whether your preferred virtualization engine supports remote servers you can check here:

Nevertheless docker-machine is an excellent tool. If you are interested in creating a swarm, read this tutorial.
The homepage of the OS boot2docker can be found here.

