Feb 28, 2021

Kubernetes: Building Kuard for Raspberry Pi (microk8s / ARM64)

 

In one of my last posts (click here) I used KUARD (Kubernetes Up And Running Demo) to check Kubernetes livenessProbes.

In that post I pulled the image from gcr.io/kuar-demo/kuard-arm64:3.

But what about building this image myself?

First step: get the sources:

git clone https://github.com/kubernetes-up-and-running/kuard.git

Second step: run docker build:

cd kuard/
docker build . -t kuard:localbuild

But this fails with:

Step 13/14 : COPY --from=build /go/bin/kuard /kuard
COPY failed: stat /var/lib/docker/overlay2/60ba596c03e23fdfbca2216f495504fa2533a2f2e8cadd81a764a200c271de86/merged/go/bin/kuard: no such file or directory

What is going wrong here?

Inside the Dockerfile(s) the architecture is hard-coded as ARCH=amd64.

Just correct that with "sed -i 's/amd/arm/g' Dockerfile*":
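The effect of the sed command is roughly this (excerpt; the surrounding lines may differ in newer versions of the repository):

# before (excerpt from the Dockerfile)
ENV ARCH=amd64

# after running sed -i 's/amd/arm/g' Dockerfile*
ENV ARCH=arm64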

After that the image is built without any problem:

Sending build context to Docker daemon  3.379MB
Step 1/14 : FROM golang:1.12-alpine AS build
 ---> 9d993b748f32
Step 2/14 : RUN apk update && apk upgrade && apk add --no-cache git nodejs bash npm
 ---> Using cache
 ---> 54400a0a06c5
Step 3/14 : RUN go get -u github.com/jteeuwen/go-bindata/...
 ---> Using cache
 ---> afe4c54a86c3
Step 4/14 : WORKDIR /go/src/github.com/kubernetes-up-and-running/kuard
 ---> Using cache
 ---> a51084750556
Step 5/14 : COPY . .
 ---> 568ef8c90354
Step 6/14 : ENV VERBOSE=0
 ---> Running in 0b7100c53ab0
Removing intermediate container 0b7100c53ab0
 ---> f22683c1c167
Step 7/14 : ENV PKG=github.com/kubernetes-up-and-running/kuard
 ---> Running in 8a0f880ea2ca
Removing intermediate container 8a0f880ea2ca
 ---> 49374a5b3802
Step 8/14 : ENV ARCH=arm64
 ---> Running in c6a08b2057d0
Removing intermediate container c6a08b2057d0
 ---> dd871e379a96
Step 9/14 : ENV VERSION=test
 ---> Running in 07e7c373ece7
Removing intermediate container 07e7c373ece7
 ---> 9dabd61d9cd0
Step 10/14 : RUN build/build.sh
 ---> Running in 66471550192c
Verbose: 0

> webpack-cli@3.2.1 postinstall /go/src/github.com/kubernetes-up-and-running/kuard/client/node_modules/webpack-cli
> lightercollective


     *** Thank you for using webpack-cli! ***

Please consider donating to our open collective
     to help us maintain this package.

  https://opencollective.com/webpack/donate

                    ***

added 819 packages from 505 contributors and audited 887 packages in 86.018s
found 683 vulnerabilities (428 low, 4 moderate, 251 high)
  run `npm audit fix` to fix them, or `npm audit` for details

> client@1.0.0 build /go/src/github.com/kubernetes-up-and-running/kuard/client
> webpack --mode=production

Browserslist: caniuse-lite is outdated. Please run next command `npm update caniuse-lite browserslist`
Hash: 52ca742bfd1307531486
Version: webpack 4.28.4
Time: 39644ms
Built at: 02/05/2021 6:48:35 PM
    Asset     Size  Chunks                    Chunk Names
bundle.js  333 KiB       0  [emitted]  [big]  main
Entrypoint main [big] = bundle.js
 [26] (webpack)/buildin/global.js 472 bytes {0} [built]
[228] (webpack)/buildin/module.js 497 bytes {0} [built]
[236] (webpack)/buildin/amd-options.js 80 bytes {0} [built]
[252] ./src/index.jsx + 12 modules 57.6 KiB {0} [built]
      | ./src/index.jsx 285 bytes [built]
      | ./src/app.jsx 7.79 KiB [built]
      | ./src/env.jsx 5.42 KiB [built]
      | ./src/mem.jsx 5.81 KiB [built]
      | ./src/probe.jsx 7.64 KiB [built]
      | ./src/dns.jsx 5.1 KiB [built]
      | ./src/keygen.jsx 7.69 KiB [built]
      | ./src/request.jsx 3.01 KiB [built]
      | ./src/highlightlink.jsx 1.37 KiB [built]
      | ./src/disconnected.jsx 3.6 KiB [built]
      | ./src/memq.jsx 6.33 KiB [built]
      | ./src/fetcherror.js 122 bytes [built]
      | ./src/markdown.jsx 3.46 KiB [built]
    + 249 hidden modules
go: finding github.com/prometheus/client_golang v0.9.2
go: finding github.com/spf13/pflag v1.0.3
go: finding github.com/miekg/dns v1.1.6
go: finding github.com/pkg/errors v0.8.1
go: finding github.com/elazarl/go-bindata-assetfs v1.0.0
go: finding github.com/BurntSushi/toml v0.3.1
go: finding github.com/felixge/httpsnoop v1.0.0
go: finding github.com/julienschmidt/httprouter v1.2.0
go: finding github.com/dustin/go-humanize v1.0.0
go: finding golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: finding github.com/spf13/viper v1.3.2
go: finding github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: finding github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: finding github.com/matttproud/golang_protobuf_extensions v1.0.1
go: finding github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: finding github.com/golang/protobuf v1.2.0
go: finding github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: finding golang.org/x/sync v0.0.0-20181108010431-42b317875d0f
go: finding golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: finding golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: finding github.com/hashicorp/hcl v1.0.0
go: finding github.com/spf13/afero v1.1.2
go: finding github.com/coreos/go-semver v0.2.0
go: finding golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9
go: finding github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8
go: finding github.com/fsnotify/fsnotify v1.4.7
go: finding github.com/spf13/jwalterweatherman v1.0.0
go: finding github.com/coreos/etcd v3.3.10+incompatible
go: finding gopkg.in/yaml.v2 v2.2.2
go: finding golang.org/x/text v0.3.0
go: finding github.com/pelletier/go-toml v1.2.0
go: finding github.com/magiconair/properties v1.8.0
go: finding github.com/mitchellh/mapstructure v1.1.2
go: finding github.com/stretchr/testify v1.2.2
go: finding github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6
go: finding golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a
go: finding github.com/coreos/go-etcd v2.0.0+incompatible
go: finding github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77
go: finding github.com/spf13/cast v1.3.0
go: finding github.com/davecgh/go-spew v1.1.1
go: finding gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405
go: finding github.com/pmezard/go-difflib v1.0.0
go: downloading github.com/julienschmidt/httprouter v1.2.0
go: downloading github.com/pkg/errors v0.8.1
go: downloading github.com/miekg/dns v1.1.6
go: downloading github.com/spf13/viper v1.3.2
go: downloading github.com/felixge/httpsnoop v1.0.0
go: downloading github.com/spf13/pflag v1.0.3
go: downloading github.com/prometheus/client_golang v0.9.2
go: extracting github.com/pkg/errors v0.8.1
go: extracting github.com/julienschmidt/httprouter v1.2.0
go: extracting github.com/felixge/httpsnoop v1.0.0
go: extracting github.com/spf13/viper v1.3.2
go: downloading github.com/elazarl/go-bindata-assetfs v1.0.0
go: extracting github.com/elazarl/go-bindata-assetfs v1.0.0
go: extracting github.com/spf13/pflag v1.0.3
go: downloading gopkg.in/yaml.v2 v2.2.2
go: downloading github.com/dustin/go-humanize v1.0.0
go: extracting github.com/miekg/dns v1.1.6
go: downloading github.com/fsnotify/fsnotify v1.4.7
go: downloading github.com/hashicorp/hcl v1.0.0
go: extracting github.com/dustin/go-humanize v1.0.0
go: downloading github.com/magiconair/properties v1.8.0
go: downloading github.com/spf13/afero v1.1.2
go: extracting github.com/fsnotify/fsnotify v1.4.7
go: downloading golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: downloading github.com/spf13/jwalterweatherman v1.0.0
go: downloading github.com/spf13/cast v1.3.0
go: extracting github.com/spf13/jwalterweatherman v1.0.0
go: extracting gopkg.in/yaml.v2 v2.2.2
go: extracting github.com/spf13/afero v1.1.2
go: extracting github.com/magiconair/properties v1.8.0
go: extracting github.com/prometheus/client_golang v0.9.2
go: downloading github.com/mitchellh/mapstructure v1.1.2
go: extracting github.com/spf13/cast v1.3.0
go: downloading golang.org/x/text v0.3.0
go: downloading golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: extracting github.com/mitchellh/mapstructure v1.1.2
go: extracting github.com/hashicorp/hcl v1.0.0
go: downloading github.com/pelletier/go-toml v1.2.0
go: downloading golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: downloading github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: downloading github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: extracting github.com/pelletier/go-toml v1.2.0
go: downloading github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: extracting github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: extracting github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: downloading github.com/golang/protobuf v1.2.0
go: downloading github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: extracting github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: extracting github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.1
go: extracting github.com/matttproud/golang_protobuf_extensions v1.0.1
go: extracting github.com/golang/protobuf v1.2.0
go: extracting golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: extracting golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: extracting golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: extracting golang.org/x/text v0.3.0
Removing intermediate container 66471550192c
 ---> 236f3050bc93
Step 11/14 : FROM alpine
 ---> 1fca6fe4a1ec
Step 12/14 : USER nobody:nobody
 ---> Using cache
 ---> cabde1f6b77c
Step 13/14 : COPY --from=build /go/bin/kuard /kuard
 ---> 39e8b0af8cef
Step 14/14 : CMD [ "/kuard" ]
 ---> Running in ca867aeb43ba
Removing intermediate container ca867aeb43ba
 ---> e1cb3fd58eb4
Successfully built e1cb3fd58eb4
Successfully tagged kuard:localbuild
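To quickly verify the freshly built image before using it in the cluster, you can run it directly with Docker on the Raspberry Pi. The container name is just an example, and /healthy is the endpoint that is also used for the livenessProbe later - it should answer as long as the pod is healthy:

docker run --rm -d --name kuard-test -p 8080:8080 kuard:localbuild
curl http://localhost:8080/healthy
docker stop kuard-test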

Feb 24, 2021

Kubernetes: Run a docker image as pod or deployment?

If you want to run a Docker image inside Kubernetes, you can choose at least two ways:

  1. pod
  2. deployment

The first is done with these commands:

kubectl create namespace kuard
kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080
kubectl expose pod kuard --type=NodePort --port=8080 -n kuard

To run the image inside a deployment, the commands look very similar:

kubectl create namespace kuard2
kubectl create deployment kuard2 --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard2
kubectl expose deployment kuard2 -n kuard2 --type=NodePort --port=8080

Both are done with three commands, but what is the difference?

# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   5          3d21h

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d20h

 versus

# kubectl get all -n kuard2
NAME                        READY   STATUS    RESTARTS   AGE
pod/kuard2-f8fd6497-4f7bc   1/1     Running   0          5m38s

NAME             TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard2   NodePort   10.152.183.233   <none>        8080:32627/TCP   4m32s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kuard2   1/1     1            1           5m39s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/kuard2-f8fd6497   1         1         1       5m38s

So as you can clearly see, the deployment approach also creates a Deployment and a ReplicaSet in addition to the pod. But this is not really a deployment you want to run in such an unconfigured way (remember: livenessProbes & readinessProbes can only be configured with kubectl apply + YAML). But you can get a template via

kubectl get deployments kuard2 -n kuard2 -o yaml

which you can use as a starting point for configuring all parameters - this is easier than writing the complete YAML manually.
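A minimal sketch of that workflow, assuming you save the template to a file (the file name is just an example):

kubectl get deployment kuard2 -n kuard2 -o yaml > kuard2-deployment.yaml
# edit kuard2-deployment.yaml, e.g. add a livenessProbe under
# spec.template.spec.containers[0], then re-apply:
kubectl apply -f kuard2-deployment.yaml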

Feb 13, 2021

Kubernetes: LivenessProbes - first check

One key feature of Kubernetes is that unhealthy pods will be restarted. How can this be tested?

First you should deploy KUARD (Kubernetes Up And Running Demo). With this Docker image you can check the restart feature easily.

(To deploy kuard read this posting, but there are some small differences.)

# kubectl create namespace kuard
namespace/kuard created

But then you cannot use kubectl run, because there is no command-line parameter to add the livenessProbe configuration. So you have to write a YAML file:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kuard
  name: kuard
  namespace: kuard
spec:
  containers:
  - image: gcr.io/kuar-demo/kuard-arm64:3
    name: kuard
    livenessProbe:
      httpGet:
        path: /healthy
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 1
      periodSeconds: 10
      failureThreshold: 3
    ports:
    - containerPort: 8080
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
and then run

# kubectl apply -f kuard.yaml -n kuard

The exposed port (see this posting) stays untouched, so you can reach your kuard over HTTP.

So go to the tab "liveness probe" and you will see:

Now click on "Fail" and the livenessProbe will get an HTTP 500:

And after 3 retries you will see:

and the command line will show 1 restart:

# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   1          118s

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d21h
Really cool - but really annoying that this cannot be configured via the CLI, only via YAML.
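If you prefer the command line over the browser, you can also watch the restart happen there (the exact event wording may differ between Kubernetes versions):

# watch the RESTARTS counter increase
kubectl get pods -n kuard -w

# the Events section lists the failed liveness probes and the restart
kubectl describe pod kuard -n kuard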



Feb 6, 2021

Microk8s: Running KUARD (Kubernetes Up And Running Demo) on a small cluster

There is a cool demo application which you can use to check your Kubernetes settings. This application is called kuard (https://github.com/kubernetes-up-and-running/kuard):

To get it running in a way that you can uninstall it easily, run the following commands:

# kubectl create namespace kuard
namespace/kuard created
You can deploy it via "kubectl run" or create a YAML with "kubectl run ... --dry-run=client --output=yaml" and deploy it via "kubectl apply":

# kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080 --dry-run=client --output=yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kuard
  name: kuard
  namespace: kuard
spec:
  containers:
  - image: gcr.io/kuar-demo/kuard-arm64:3
    name: kuard
    ports:
    - containerPort: 8080
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
or 

# kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080

To expose it in your cluster run:

# kubectl expose pod kuard --type=NodePort --port=8080 -n kuard
service/kuard exposed

And then check the port via

# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   5          3d20h

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d20h

The number after "8080:" is the NodePort you can use (http://zigbee:32047/).
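If you want to read the assigned NodePort directly instead of picking it out of the table, a jsonpath query like this should do it:

kubectl get service kuard -n kuard -o jsonpath='{.spec.ports[0].nodePort}'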

With kuard you can run DNS checks from inside the pod or browse the filesystem to check things... You can even set the status for the liveness and readiness probes.
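Because everything lives in its own namespace, cleanup is a single command that removes the pod and the service together:

kubectl delete namespace kuard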



Feb 4, 2021

Kubernetes: publishing services - clusterip vs. nodeport vs. loadbalancer and connecting to the services

In my posting http://dietrichschroff.blogspot.com/2020/11/kubernetes-with-microk8s-first-steps-to.html I described how to expose an NGINX on a Kubernetes cluster, so that I was able to open the NGINX page with a browser that was not located on one of the Kubernetes nodes.

After reading around, here are the fundamentals of why this worked and what alternatives can be used.

The command

kubectl expose deployment web --type=NodePort --port=80

can be used with the following types:

https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

So exposing via a ClusterIP only exposes your service internally. If you want to access it from the outside, follow this tutorial: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/, but port-forwarding is only a temporary solution.
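For the "web" deployment from above, such a temporary port-forward would look roughly like this (it runs in the foreground until you stop it with Ctrl+C):

kubectl port-forward deployment/web 8080:80
# then open http://localhost:8080 on the machine running kubectl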

Exposing it externally without any additional component: just use NodePort (e.g. follow my posting: http://dietrichschroff.blogspot.com/2020/11/kubernetes-with-microk8s-first-steps-to.html )

LoadBalancer uses a load balancer from Azure or AWS or ... (take a look here: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/ )

ExternalName maps the service to an external DNS name via a CNAME record (instead of routing to pods via selectors).





Feb 3, 2021

Windows 10 & Office 2010: File Explorer crashes when clicking on an Office document

After an upgrade from whatever version to Windows 10, Microsoft installs the Office 365 apps. This leads to a crashing File Explorer (including a restart of the taskbar).

 

The funny thing: if you start Word/Excel/... first and then use "Open File" inside your Office application, everything works. The file chooser has no problem and does not freeze.

The solution is very easy:
Just uninstall the Office 365 apps.

All the other solutions proposed on the internet do not work, like:

  • Do a repair of your Office application via System Settings --> Programs
  • Disable the preview feature in File Explorer
  • Disable custom shell extensions for File Explorer
  • Do a clean Windows reinstall (<-- this was really a tip)