Jan 31, 2021

Office 365: Enable mail forwarding to external email domains...

For an association I do some IT administration things - and now something really new: an Office 365 tenant.

The first task was to enable mail forwarding to external email accounts. Sounds easy - hmmm, not really.

Configuring the forwarding in outlook.com is quite easy:

But this does not work:

Remote Server returned '550 5.7.520 Access denied, Your organization does not allow external forwarding. Please contact your administrator for further assistance. AS(7555)'

To change this behaviour you have to go to the admin settings:


Now click "Policy":

Then choose "Anti-spam":

Then choose the third entry and click "Edit policy":

And the last step: change "Automatic forwarding" to "On".

After you click "Save", emails will now be forwarded to external domains...
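If you prefer the command line, the same setting can presumably be changed via Exchange Online PowerShell as well - a sketch, assuming the default outbound spam filter policy is the one in use (I only verified the portal way described above):

```powershell
# Connect to Exchange Online (requires the ExchangeOnlineManagement module)
Connect-ExchangeOnline

# Allow automatic forwarding in the default outbound spam filter policy
Set-HostedOutboundSpamFilterPolicy -Identity Default -AutoForwardingMode On
```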

Jan 30, 2021

Microk8s: rejoining a node after a reinstall of microk8s


If you are running a microk8s Kubernetes cluster, you can hit the scenario that you lose one node and have to reinstall the complete OS or just microk8s.

In this case you want to join this node to your cluster again. But removing the old node entry does not work, because the rest of the cluster cannot reach the node (it is gone...):

root@zigbee:/home/ubuntu# microk8s.remove-node ubuntu
Removal failed. Node ubuntu is registered with dqlite. Please, run first 'microk8s leave' on the departing node.
If the node is not available anymore and will never attempt to join the cluster in the future use the '--force' flag
to unregister the node while removing it.

The solution is given in the error message itself: just add "--force":

root@zigbee:/home/ubuntu# microk8s.remove-node ubuntu --force
root@zigbee:/home/ubuntu# microk8s.add-node
From the node you wish to join to this cluster, run the following:
microk8s join
If the node you are adding is not reachable through the default interface you can use one of the following:
 microk8s join
 microk8s join
 microk8s join
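The join commands printed by microk8s.add-node have the general form below - here `<ip>`, `<port>` and `<token>` are placeholders for the values from your own add-node output:

```
microk8s join <ip>:<port>/<token>
```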

And then the join works without any problem:

root@ubuntu:/home/ubuntu# microk8s join
Contacting cluster at
Waiting for this node to finish joining the cluster. ..  

Jan 27, 2021

Signal: Data backup of newer signal versions cannot be imported


I switched from WhatsApp to Signal (in the sense that many conversations are now on Signal, but some are still left on WhatsApp) and afterwards I moved to a new smartphone.

But while doing the restore procedure for the backup (take a look here) I got this error:

Data backup of newer signal versions cannot be imported

or in german

Datensicherungen neuerer Signal-Versionen können nicht importiert werden


I checked the version numbers in the Android Play Store and both were 5.2.3.

On my new smartphone the Android OS was not on the latest release (there were still some pending OS updates to install).

But nothing did the job - I asked the Signal support, so let's see what they tell me...

EDIT: Even uninstalling Signal on my old smartphone and reinstalling it showed this error message...

Jan 26, 2021

MicroK8s: kubectl get componentstatus deprecated - etcd status missing


If you want to check the health of the basic components with

kubectl get componentstatuses 
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok        
scheduler            Healthy   ok       

Then etcd is missing from the output.

This is caused by a change in the Kubernetes API: https://kubernetes.io/docs/setup/release/notes/#deprecation-5

The command to check etcd is:

kubectl get --raw='/readyz?verbose'
[+]ping ok
[+]log ok
[+]etcd ok
[+]informer-sync ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]shutdown ok
readyz check passed
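If you want to use this check in a script, you can grep the verbose output for failed checks - a minimal sketch (the sample output is inlined here; in practice you would pipe the output of `kubectl get --raw='/readyz?verbose'` into it):

```shell
# Shortened sample of the verbose readyz output; a failed check would start with "[-]".
readyz_output='[+]ping ok
[+]log ok
[+]etcd ok
readyz check passed'

if printf '%s\n' "$readyz_output" | grep -q '^\[-\]'; then
  echo "readyz: some checks failed"
else
  echo "readyz: all checks ok"
fi
```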

Jan 23, 2021

Microk8s: publishing the dashboard (reachable from remote/internet)


If you enable the dashboard on a microk8s cluster (or single node) you can follow this tutorial: https://microk8s.io/docs/addon-dashboard

The problem is that the command

microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

has to be re-executed every time you restart the node from which you access the dashboard.

A better configuration can be done this way: run

kubectl -n kube-system edit service kubernetes-dashboard

and change

type: ClusterIP -->   type: NodePort

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
  creationTimestamp: "2021-01-22T21:19:24Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "3599"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 19496d44-c454-4f55-967c-432504e0401b
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
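Instead of the interactive edit, the service type can also be switched non-interactively with kubectl patch - a sketch using the same service and namespace as above:

```
kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'
```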
Then run

root@ubuntu:/home/ubuntu# kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   <none>        443:30713/TCP   4m14s

After that you can access the dashboard via the port shown after the "443:" - in my case https://zigbee:30713



Jan 22, 2021

Microk8s: No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml' while joining a cluster

 Kubernetes cluster with microk8s on raspberry pi

If you want to join a node and you get the following error:

microk8s join

Contacting cluster at
Traceback (most recent call last):
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 967, in <module>
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 900, in join_dqlite
    update_dqlite(info["cluster_cert"], info["cluster_key"], info["voters"], hostname_override)
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 818, in update_dqlite
    with open("{}/info.yaml".format(cluster_backup_dir)) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml'

This error happens if you have not enabled DNS on your nodes.

So just run "microk8s.enable dns" on every machine:

microk8s.enable dns

Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
Adding argument --cluster-domain to nodes.
Configuring node
Adding argument --cluster-dns to nodes.
Configuring node
Restarting nodes.
Configuring node
DNS is enabled

And after that the join will work like expected:

root@ubuntu:/home/ubuntu# microk8s join
Contacting cluster at
Waiting for this node to finish joining the cluster. ..  
root@ubuntu:/home/ubuntu# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
ubuntu   Ready    <none>   3m35s   v1.20.1-34+97978f80232b01
zigbee   Ready    <none>   37m     v1.20.1-34+97978f80232b01

Jan 20, 2021

MicroK8s: Kubernetes on Raspberry Pi - "get nodes" shows NotReady

On my little Kubernetes cluster with MicroK8s I got this problem:

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
zigbee   NotReady   <none>   59d   v1.19.5-34+b1af8fc278d3ef
ubuntu   Ready      <none>   59d   v1.19.6-34+e6d0076d2a0033

To find the cause, I ran:

kubectl describe node zigbee

and in the output I found:

  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 18m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 14m                kubelet     Starting kubelet.
  Warning  SystemOOM                14m                kubelet     System OOM encountered, victim process: influx, pid: 3256628
  Warning  InvalidDiskCapacity      14m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientMemory

Hmmm - so running additional databases and processes outside of Kubernetes is not such a good idea.

But as a fast solution: I ejected the SD card, resized the partition and added swap using my laptop, and put the SD card back into the Raspberry Pi...
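For reference, a swap file can also be added directly on the running system - a sketch for Ubuntu, with size and path as example values:

```
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```

To make this survive a reboot, the swap file additionally has to be added to /etc/fstab.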

Jan 6, 2021

Review: Kafka: The Definitive Guide

Last week I read the book "Kafka: The Definitive Guide" with the subtitle "Real-Time Data and Stream Processing at Scale", which was provided by confluent.io:

The book contains 11 chapters on 288 pages - let's take a look at the content:

Chapter 1 "Meet Kafka" starts with a motivation why moving data is important and why you should spend your effort not on moving data but on your business. In addition there is an introduction to messaging concepts like publish/subscribe, queues, messages, batches, schemas, topics, partitions, ... Many technical terms are defined there; some are specific to Kafka and some are more general definitions. One additional bit of info: Kafka was built at LinkedIn - the complete story is told in the last section of this chapter.

The second chapter is about installing Kafka. Nothing special: OS, Java, ZooKeeper (if clustered), Kafka.

Chapter 3 is called "Kafka Producers: Writing Messages to Kafka". Like the title indicates, all configuration details about sending messages are listed and explained.

Chapter 4 is the same as the previous chapter, but for reading messages. Both chapters contain many Java example listings.

Chapters 5 & 6 are about clusters and reliability. Here the nifty details are explained, like high water marks, message replication, timeouts, indices, ... If you want to run a highly available Kafka system you should read these chapters, and in case of failures you will know what to do.

Chapter 7 introduces Kafka Connect. Here is a citation on when you should use Connect (it is not possible to summarize this better):

You will use Connect to connect Kafka to datastores that you did not write and whose code you cannot or will not modify. Connect will be used to pull data from the external datastore into Kafka or push data from Kafka to an external store. For datastores where a connector already exists, Connect can be used by nondevelopers, who will only need to configure the connectors.

"Cross-Cluster Data Mirroring" is the title of chapter 8 - I do not understand why this chapter is not placed before chapter 7...

In chapters 9 and 10 administration and monitoring are explained. Very impressive is the number of CLI examples. If you have a question, you will find here the CLI command that provides the answer.

The last chapter, "Stream Processing", is one of the longest (>40 pages). Two APIs are presented for doing processing based on the messages. One example is a stream that processes stock quotes: with stream processing it is possible to calculate the number of trades or the average ask price for every five-second window. Of course this chapter shows much more, but I think this gives the best impression ;-)

All in all an excellent book - even if you are not implementing Kafka ;-)