Dec 27, 2021

Running a movie on an external DVD drive on a Chromebook (like HP x360)

At first glance this task sounds very easy:

  • watch a DVD on a chromebook

But...

What are the problems?

  1. Using an external drive to access the DVD
  2. No appropriate app available in play store or chrome web store

There are different solutions out there. 

  1. Convert the DVD to an mp4 and watch that
  2. Use VLC from play store --> does not recognize the DVD
  3. Use VLC from chrome web store --> does not start at all
  4. Use linux development environment

Option 4 seemed to me the most promising way to go.

Setting up linux is very easy: just enable the Linux development environment in the chromebook's settings (Settings --> Advanced --> Developers).

After that you have a Debian bullseye running inside a container. Become root with "sudo bash", open /etc/apt/sources.list and append "contrib" to the line "deb https://debian.org/debian bullseye main". Then 

apt update
apt upgrade
apt install vlc libdvd-pkg
dpkg-reconfigure libdvd-pkg

After that VLC is configured, including libdvdcss, which takes care of the CSS decryption and region code handling of the DVD.

One last problem is accessing the DVD inside this linux container. This can be done via a two-finger tap (right click) on the drive inside the file-manager on the chromebook; in the context menu choose "Share with Linux" ("Mit Linux teilen").
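
Once shared, the disc's content shows up below /mnt/chromeos/removable inside the container, so you can point VLC directly at it (MY_DVD is just an example label - use whatever the file-manager shows for your disc):

vlc dvd:///mnt/chromeos/removable/MY_DVD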

This last step has to be done each time a DVD is inserted. 

So watching DVDs on a chromebook is not impossible, but it is not really user friendly...


Dec 4, 2021

influxdb: copying data with SELECT INTO - pay attention to the TAGS (or they are transformed to fields)

If you are using influxdb, one use case could be copying the data from one measurement ("table") to another.

This can be done with this statement:

select * into testtable2 from testtable1

By the way: the CLI is opened with

/usr/bin/influx -unsafeSsl -ssl -database telegraf
(if your database is named telegraf)

In my case (zigbee / mqtt / telegraf) the layout of mqtt_consumer measurement was like this:

> show tag keys from mqtt_consumer
name: mqtt_consumer
tagKey
------
host
topic
> show field keys from mqtt_consumer
name: mqtt_consumer
fieldKey    fieldType
--------    ---------
battery     float
contact     boolean
current     float
...
But after copying this to a testtable, the tags were gone and everything was a field.

This is not a big problem - you can work with that data without a problem. BUT if you want to copy it back or merge it into the original table, you will get a table with the additional columns host_1 and topic_1.

This is because influx already had a tag column host there. So it added a field column host_1.

If a query on this merged table (with host + host_1) spans a time range where both of these columns occur, you only get the rows with the tag host. If the range contains only entries with host_1, that column is shown as host and you get your data. A really unpredictable way to retrieve data.

What is the solution? Easy:

select * into table1 from mqtt_consumer group by host,topic

The "group by" does not group anything here. It just tells influx: host & topic are tags and not fields. Please do not transform them...
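
If you do not want to list every tag key explicitly, InfluxQL also accepts a wildcard in the GROUP BY clause, which preserves all tags while copying:

select * into testtable2 from mqtt_consumer group by *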


Nov 26, 2021

Raspberry PI on Ubuntu: yarn: Cannot find module 'worker_threads'

This evening I tried to install a nodejs application with yarn on my Raspberry Pi. This failed with:

/usr/local/bin/yarn install
internal/modules/cjs/loader.js:638
    throw err;
    ^
Error: Cannot find module 'worker_threads'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
    at Function.Module._load (internal/modules/cjs/loader.js:562:25)
    at Module.require (internal/modules/cjs/loader.js:692:17)
    at require (internal/modules/cjs/helpers.js:25:18)
    at /opt/zwavejs2mqtt/.yarn/releases/yarn-3.1.0-rc.8.cjs:287:2642
    at Object.<anonymous> (/opt/zwavejs2mqtt/.yarn/releases/yarn-3.1.0-rc.8.cjs:585:7786)
    at Module._compile (internal/modules/cjs/loader.js:778:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)

This error occurs because the nodejs version delivered by Ubuntu is v10.19.0 - and worker_threads only became a stable module with nodejs v12 (in v10 it is hidden behind an experimental flag).

You have to download the ARMv8 package from https://nodejs.org/en/download/
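
A minimal sketch of installing the downloaded tarball system-wide (version string and paths are examples - adjust them to the current release):

wget https://nodejs.org/dist/v16.13.0/node-v16.13.0-linux-arm64.tar.xz
sudo tar -xJf node-v16.13.0-linux-arm64.tar.xz -C /usr/local --strip-components=1
node --version    # should now print v16.13.0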

With version v16.13.0 the error was gone...

 

Nov 20, 2021

AZ-900 achieved: Microsoft Azure Fundamentals

Yesterday evening I passed Microsoft's AZ-900 exam:

Taking the exam on site was no option because of COVID-19, so I tried the online option for the first time. Nice thing: there are many time slots - I chose 20:45.

As examinee you have to start your online session half an hour earlier - and you really need this time for the onboarding:

  1. Download the software to your PC and do some checks (audio, network, ...)
    This is an .exe - so only Windows PCs are possible
  2. Install the app "Pearson VUE" on your smartphone to provide
    1. selfie
    2. passport/driver license/...
    3. photos of your room
  3. Talking to a proctor
    You are not allowed to wear a headset - even a watch is not allowed

After that, the exam itself is about 40 questions in 45 minutes - quite fair.

 The questions are about these topics:

  • Describe cloud concepts (20-25%)
  • Describe core Azure services (15-20%)
  • Describe core solutions and management tools on Azure (10-15%)
  • Describe general security and network security features (10-15%)
  • Describe identity, governance, privacy, and compliance features (15-20%)
  • Describe Azure cost management and Service Level Agreements (10-15%)

More information can be found here: https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3VwUY

Nov 6, 2021

Fritz!Box monitoring with grafana, influx, collectd and fritzcollectd

A nice way to monitor your Fritz!Box is a grafana dashboard fed by collectd, fritzcollectd and influxdb.

How you can achieve this is described here:

https://fetzerch.github.io/2014/08/23/fritzcollectd/

and 

https://github.com/fetzerch/fritzcollectd

Here is a list of the software packages you have to install:

apt install -y collectd python3-pip libxml2 libxml2-dev libxslt1-dev influxdb nodejs git make g++ gcc npm net-tools certbot mosquitto mosquitto-clients grafana-server

For grafana-server and influxdb you have to add their vendor repositories, because these packages are not included in Ubuntu.

To tell collectd that it should write to influxdb, you have to uncomment the following in collectd.conf:

<Plugin network>
        Server "localhost" "25826"
</Plugin>

and in influxdb.conf:

[[collectd]]
  enabled = true
  bind-address = "localhost:25826"
  database = "collectd"
  retention-policy = ""
  typesdb = "/usr/share/collectd/types.db"
  parse-multivalue-plugin = "split"

And of course inside collectd.conf you have to add the fritzcollectd config from the github link above (see the sketch below).
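
The python module itself comes via pip (that is why python3-pip is in the package list above), and the collectd.conf block looks roughly like this - the values are placeholders, the github README documents all options:

pip3 install fritzcollectd

<Plugin python>
    Import "fritzcollectd"
    <Module fritzcollectd>
        Address "fritz.box"
        Port 49000
        User "dslf-config"
        Password "XXXXX"
    </Module>
</Plugin>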

But when starting collectd you might get the error:

dlopen("/usr/lib/collectd/python.so") failed: /usr/lib/collectd/python.so: undefined symbol: PyFloat_Type

This can be solved by adding the following line to /etc/default/collectd:

LD_PRELOAD=/usr/lib/python3.8/config-3.8-aarch64-linux-gnu/libpython3.8.so


Zigbee: Setup zigbee2mqtt with usbstick conbee II & influxdb on a raspberry pi

Just a short walkthrough of all steps which are necessary:

1.) insert the USB stick and check whether this device shows up: /dev/ttyACM0

If this device is not showing up, it might be that your kernel does not support usbserial. In my case I had to downgrade from Ubuntu server 21.10 to 21.04.

2.) follow these steps: https://www.zigbee2mqtt.io/guide/installation/01_linux.html#installing

apt-get install -y nodejs git make g++ gcc npm
git clone https://github.com/Koenkk/zigbee2mqtt.git /opt/zigbee2mqtt
cd /opt/zigbee2mqtt
npm ci

if you get 

prebuild-install WARN install EACCES: permission denied, access '/root/.npm/_cacache'

then you should not use root for running this command.

cd /opt/zigbee2mqtt
chown -R ubuntu node_modules
rm -rf node_modules/*
npm ci

3.) install the mqtt broker mosquitto


apt install mosquitto mosquitto-clients

4.) add to /etc/mosquitto/mosquitto.conf the line

listener 1883 127.0.0.1

and restart mosquitto (systemctl restart mosquitto)
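
A quick check that the broker is listening can be done with the bundled clients (the topic here is arbitrary) - subscribe in one terminal and publish from a second one:

mosquitto_sub -h 127.0.0.1 -t 'test/#' -v
mosquitto_pub -h 127.0.0.1 -t 'test/hello' -m 'world'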

 

5.) then start the zigbee2mqtt:

cd /opt/zigbee2mqtt
npm start

 if you get

Zigbee2MQTT:error 2021-11-06 09:05:23: Error: Error while opening serialport 'Error: Error: No such device or address, cannot open /dev/ttyACM0' 

then you did not really check step 1.): verify that /dev/ttyACM0 is missing - if yes: in my case the kernel module usbserial was missing (list loaded modules with lsmod). It seems that Ubuntu dropped it in 21.10 - so I reinstalled 21.04...

if you get

zigbee2MQTT:error 2021-11-06 14:54:11: MQTT failed to connect: connect ECONNREFUSED 127.0.0.1:1883

then you did not get mosquitto running. Check with systemctl status mosquitto and follow steps 3 and 4.

6.) configure telegraf, so that the data from mosquitto is transferred to influxdb. You have to add the following to telegraf.conf:

[[inputs.mqtt_consumer]]
   servers = ["tcp://127.0.0.1:1883"]
   topics = [
     "zigbee2mqtt/sensor/#",
   ]
   data_format = "json"

[[outputs.influxdb]]
   urls = ["unix:///var/run/influxdb/influxdb.sock"]
   username = "admin"
   password = "XXXXX"

7.) add this user to influxdb:

influx -ssl -unsafeSsl   (just "influx" if you have not enabled SSL)

create user admin with password 'XXXXXXX' with all privileges

8.) if you have joined a device with zigbee2mqtt, then you have to give it a friendly name inside /opt/zigbee2mqtt/data/configuration.yaml:

   friendly_name: 'sensor/t1'
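
Finally you can verify inside the influx CLI that the sensor data arrives - the measurement mqtt_consumer is created by the telegraf plugin from step 6:

> show measurements
> select * from mqtt_consumer order by time desc limit 5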

Oct 30, 2021

Review: Mastering Azure Machine Learning

Last week I stumbled upon this book and this weekend there was enough time to walk through it:

 

The book contains 14 chapters on 409 pages - but due to the layout, I think it would fit on 200 pages in a book with "default rendering".
The book is additionally divided into 4 sections: 1 - Azure Machine Learning / 2 - Experimentation and Data Preparation / 3 - Training Machine Learning Models / 4 - Optimization and Deployment of Machine Learning Models

Chapter 1 is named "Building an end-to-end machine learning pipeline in Azure". I struggled with this title, but in the first section it is explained: "You can see it as an overview of the book". The subsections cover data exploration, data preparation, choosing the model, optimization and deploying/operating models. The chapter is a teaser with many graphs, examples, strategies - a fast end-to-end walkthrough.

"Choosing a machine learning service in Azure" is the title of the second chapter. Here everything about ML vs. AI is discussed, along with the Azure services which provide these techniques (e.g. Data Science Virtual Machine, Azure Batch, Azure Databricks, Azure Functions, Azure IoT Edge, Custom Vision, Azure Machine Learning Designer, Machine Learning Studio, ...). This chapter contains many screenshots and code snippets - from my point of view too much at this point.

In chapter three (Data experimentation and visualization using Azure) it is shown how to set up your environment via the Azure CLI, so that you are able to perform these steps again and again for new projects. In addition it is presented how to run everything on the local machine and track the metrics and artifacts to the Azure workspace. After that, visualization is explained including code examples: pairplots, principal component analysis, quadratic discriminant analysis, stochastic neighbor embedding - really cool.

Chapter 4 is about "ETL, data preparation and feature extraction". Some nice Azure CLI commands are provided here: how to batch-upload data to Azure storage accounts and attach them to the ML workspace, and how to access this data via python.

Chapter 5, "Azure Machine Learning Pipelines", is about making the content of chapter 4 reusable. Nothing much to note here - a nice reference for the python code which is needed.

"Advanced feature extraction with NLP" is chapter 6 (NLP = natural language processing). Nothing more to say here.

Chapters 7 to 9 are about training machine learning models. I will not describe each of them, but here is a short summary: it starts with decision trees as explanation and then does a deep dive into how to use LightGBM, including the python code. Then the same for convolutional neural networks (CNN): explanation/motivation + coding. This is followed by the description of Azure Hyperdrive: tuning and optimizing the machine learning process. The concept of hyperparameters (e.g. number of neurons in a layer) is introduced, and how to choose them with grid sampling on an elastic cloud infrastructure. And last but not least it is described how Azure provides "a service to users that automatically preprocesses your data, selects an ML model, and trains and optimizes the model to optimally fit your training data [...]".

Chapter 10 is about using clusters. This is a nice introduction to partitioning data and workloads and synchronizing worker nodes.
"Building a recommendation engine in Azure" is the title of chapter 11. Just some catchwords from the content: non-personalized, content-based, rating-based, hybrid recommendations. After this chapter you will know why Amazon's recommendations are the way they are ;-)

In chapters 12 & 13 it is described how to register, deploy and operate a recommendation engine or machine learning model, up to MLOps.

The book closes with chapter 14, "What's next?". The most important point, like everywhere: automation...

Summary: I liked this book very much, because every topic starts with an excellent introduction and there are many code examples, so you can use this book as a reference as well. The basic understanding of the author is best described with the following quote:
"the most important tasks [are]: Data acquisition,  Data cleansing, Data labeling, Selecting an error metric. We don't want to blame anyone, but some machine learning engineers love to simply skip these topics and dive right into the fun parts, namely feature engineering, model selection, parameterization, and tuning." 

That hits the bull's eye.

(The review can be found on amazon as well)

Sep 11, 2021

Review: Intent based networking for dummies

I found the book "Intent-Based Networking for Dummies" on LinkedIn, posted by Juniper:

The book contains 5 chapters on 44 pages.



Chapter one (expressing intent and seeing the basics of IBN) tries to give a motivation for intent based networking. And the story goes like this: "humans are slow, expensive, error prone, and inconsistent. [...] the systems are vulnerable to small mistakes that can have enormous costs to business."
In addition we have "inadequate automation", "data overload", and "stale documentation". (At this point I think we are generally doomed and should stop networking at all.)
BUT with IBN "you can manage what requires automation, make your system standardized and reliable, and ensure you're free to move and adjust heading into the future." The promise of IBN is a change from node-to-node management to an autonomic system. "The system self-operates, self-adjusts, and self-corrects within the parameters of your expressed technical objectives."
So everything should work like this: you express your intent - this intent is translated, and then orchestration/configuration rolls out the changes onto your network.
I think one good phrase for IBN is: "You say what, it says how"


The second chapter is named "Looking at the characteristics of IBN". This chapter does not give any helpful information at all. One nice concept is mentioned here, the "Single Pane of Glass": "It's an important concept and a valuable benefit of having a single source of truth: You can see your entire network from a single, consistent perspective." But I think this is not possible for networks: depending on your perspective (ethernet, vlans, ips, mpls, ...) the view is completely different. Just think about hardware ports vs. virtual ports...
 

"Detailing the IBN architecture" is the title of chapter 3. With 9 pages it is the biggest chapter inside the booklet. In this chapter an example is drilled through: the intent "I want a VLAN connecting servers A, B, C, and D." is analyzed, and the steps define, translate, verify, deploy and monitor are shown.
In addition there are some subsections where the reference design, abstractions and inventory are put into relation to each other. This is illustrated with very nice figures. Really a good chapter!
 

In chapter four the book moves forward from fulfillment to assurance. "This chapter shows you why your IBN system (IBNS) requires sophisticated, deep analytics that can detect when a deployed service is drifting out of spec and either automatically make the adjustments to bring it back into compliance or alert you to the problem."
It starts with differentiating uncontrolled changes from controlled changes. This is nothing special to IBN - I think this is useful for any kind of operation in IT.
 

Chapter 5 is, as always in this "dummies" series, a recap of the chapters before.


All in all a nice booklet which gives a good introduction to this new kind of network management system. But whether IBN can keep its promises - let's see...
 
May 19, 2021

Microsoft Teams: How to prevent Teams echo bot from constantly disturbing phone conferences

Some people have found a new hobby: Blowing up Teams meetings.

How do they achieve this?

Very easy. If you are inside a Teams meeting just go to "add members" and type in "Teams echo":

The annoying things about this: 

  • This can be done by anyone who was invited and is not limited to your organization
  • On Linux you are not able to invite the Teams Echo
  • The lobby does not work for Teams Echo - that means it will join and you have no chance to get rid of it
  • You cannot mute Teams Echo

Then click on this suggestion and you will get the full echo experience in your meeting.

 

There is one hint I found:

https://docs.microsoft.com/en-us/answers/questions/284720/can-we-block-or-remove-39teams-echo39-bot-from-ent.html 

Microsoft itself does not really understand the issue:

https://answers.microsoft.com/en-us/msteams/forum/all/teams-echo-entering-into-meetings/3418d131-8619-4785-9ab4-0aed6acbb8c2?auth=1

But this does not work, because there is no "Teams echo" app inside https://admin.teams.microsoft.com/policies/manage-apps

The problem is known - but a real fix is still missing.

If you know how to prevent this: Please leave a comment...

Apr 5, 2021

Microsoft Ignite: Book of News - March 2021 (Azure et al.)

If you are interested in the new features of Azure, Office 365 and other Microsoft topics, read the Book of News:

https://news.microsoft.com/ignite-march-2021-book-of-news/

 


In my opinion chapter 5.4 is one of the most important ones:

https://news.microsoft.com/ignite-march-2021-book-of-news/#a-541-new-security-compliance-and-identity-certifications-and-content-aim-to-close-security-skills-gap

To help address the security skills gap, Microsoft has added four new Security, Compliance and Identity certifications with supporting training and has made several updates to the Microsoft Security Technical Content Library. These certifications and content are intended to help cybersecurity professionals increase their skilling knowledge and keep up with complex cybersecurity threats.

These new certifications with supporting training are tailored to specific roles and needs, regardless of where customers are in their skilling journey:

  • The Microsoft Certified: Security, Compliance, and Identity Fundamentals certification will help individuals get familiar with the fundamentals of security, compliance and identity across cloud-based and related Microsoft services.
  • The Microsoft Certified: Information Protection Administrator Associate certification focuses on planning and implementing controls that meet organizational compliance needs.
  • The Microsoft Certified: Security Operations Analyst Associate certification helps security operational professionals design threat protection and response systems.
  • The Microsoft Certified: Identity and Access Administrator Associate certification helps individuals design, implement and operate an organization’s identity and access management systems by using Azure Active Directory (Azure AD).

In addition, the Microsoft Security Technical Content Library contains new technical content and resources.

 

Mar 13, 2021

metallb on microk8s: loadbalancer ip not reachable from clients /arp issue

 

In my last posting I wrote how to configure and use metallb on a microk8s kubernetes cluster. This worked fine - but on the next day I was no longer able to reach the loadbalancer ip from clients outside the kubernetes cluster.

So what happened?

Just two things in advance:

  • metallb does not create interfaces on the node
    That means the loadbalancer ip is not announced to the network by the OS
  • metallb has to use its own arp mechanism

If a client (on the same network as the kubernetes cluster) cannot reach the loadbalancer ip, you have to check the arp tables.

On all kubernetes nodes (except the master) you will find an arp entry for the loadbalancer ip:

arp 192.168.178.230
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.178.230          ether   dc:a6:32:65:c4:ee   C                     eth0

On the node which runs the metallb controller you will find nothing.

The controller can be located with this command:

kubectl get all -o wide -n metallb-system
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/speaker-hgf7l                 1/1     Running   1          21h   192.168.178.53   ubuntu   <none>           <none>
pod/controller-559b68bfd8-tgmv7   1/1     Running   1          21h   10.1.243.224     ubuntu   <none>           <none>
pod/speaker-d9d7z                 1/1     Running   1          21h   192.168.178.57   zigbee   <none>           <none>
and on this node:

arp 192.168.178.230
192.168.178.230 (192.168.178.230) -- no entry

On the client you are using, you get the same result: no arp entry for this ip. 

Option 1: the quick fix

run arp -s 192.168.178.230 dc:a6:32:65:c4:ee on your client, and after that you can reach 192.168.178.230, because your client now knows which NIC (MAC) it has to address.

Option 2: switch the interface on the controller node to promiscuous mode.

Without running the interface in promiscuous mode, metallb cannot announce the ip via arp. So run ifconfig wlan0 promisc. (https://github.com/metallb/metallb/issues/284)
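
On systems without the legacy ifconfig, the iproute2 equivalent should do the same (wlan0 is the interface from my setup - adjust it to yours):

ip link set wlan0 promisc on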

Mar 12, 2021

microk8s: Using the integrated loadbalancer metallb for an application/container

 

Microk8s comes with an internal loadbalancer: metallb (https://microk8s.io/docs/addons)

For project status and documentation: https://metallb.universe.tf/

My problem with this addon: it is very easy to install - but I found nearly nothing about the configuration that is needed to make it work...

The only source was https://opensource.com/article/20/7/homelab-metallb

So here is everything from the beginning:

# microk8s.enable metallb

You have to add an ip range after you hit enter. These should be ips which are not in use and which your DHCP server will not assign to other devices.
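
By the way: the range can also be passed directly with the enable call, which avoids the interactive prompt (the range below is the one from my network):

# microk8s enable metallb:192.168.178.230-192.168.178.240
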
You can check this range afterwards via:

# kubectl describe configmaps -n metallb-system
Name:         kube-root-ca.crt
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>

Data
====
ca.crt:
----
-----BEGIN CERTIFICATE-----
MIIDA..........=
-----END CERTIFICATE-----

Events:  <none>


Name:         config
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>

Data
====
config:
----
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.178.230-192.168.178.240

Events:  <none>

After this you have to write this yaml to connect your application to the metallb:

apiVersion: v1
kind: Service
metadata:
  name: kuard2
  namespace: kuard2
spec:
  selector:
    app: kuard2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer

Fairly easy - but if you do not know where to start, this is almost impossible. The next step is to deploy this yaml:

# kubectl apply -f loadbalancer.yaml -n kuard2

To get the loadbalancer ip you have to issue this command:

# kubectl describe service kuard2 -n kuard2
Name:                     kuard2
Namespace:                kuard2
Labels:                   <none>
Annotations:              <none>
Selector:                 app=kuard2
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.152.183.119
IPs:                      10.152.183.119
LoadBalancer Ingress:     192.168.178.230
Port:                     <unset>  80/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31298/TCP
Endpoints:                10.1.243.220:8080,10.1.243.221:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age    From                Message
  ----    ------        ----   ----                -------
  Normal  IPAllocated   6m31s  metallb-controller  Assigned IP "192.168.178.230"
  Normal  nodeAssigned  6m31s  metallb-speaker     announcing from node "ubuntu"

And then your service is reachable with wget http://192.168.178.230:80 or any browser which can connect to this ip.


Feb 28, 2021

Kubernetes: Building Kuard for Raspberry Pi (microk8s / ARM64)

 

In one of my last posts (click here) I used KUARD (kubernetes up and running demo) to check the livenessProbes of kubernetes.

In my posting I pulled the image from gcr.io/kuar-demo/kuard-arm64:3.

But what about building this image myself?

First step: get the sources:

git clone https://github.com/kubernetes-up-and-running/kuard.git

Second step: run docker build:

cd kuard/
docker build . -t kuard:localbuild

But this fails with:

Step 13/14 : COPY --from=build /go/bin/kuard /kuard
COPY failed: stat /var/lib/docker/overlay2/60ba596c03e23fdfbca2216f495504fa2533a2f2e8cadd81a764a200c271de86/merged/go/bin/kuard: no such file or directory

What is going wrong here?

Inside the Dockerfile(s) there is ARCH=amd64

Just correct that with "sed -i 's/amd/arm/g' Dockerfile*"

After that the image is built without any problem:

Sending build context to Docker daemon  3.379MB
Step 1/14 : FROM golang:1.12-alpine AS build
 ---> 9d993b748f32
Step 2/14 : RUN apk update && apk upgrade && apk add --no-cache git nodejs bash npm
 ---> Using cache
 ---> 54400a0a06c5
Step 3/14 : RUN go get -u github.com/jteeuwen/go-bindata/...
 ---> Using cache
 ---> afe4c54a86c3
Step 4/14 : WORKDIR /go/src/github.com/kubernetes-up-and-running/kuard
 ---> Using cache
 ---> a51084750556
Step 5/14 : COPY . .
 ---> 568ef8c90354
Step 6/14 : ENV VERBOSE=0
 ---> Running in 0b7100c53ab0
Removing intermediate container 0b7100c53ab0
 ---> f22683c1c167
Step 7/14 : ENV PKG=github.com/kubernetes-up-and-running/kuard
 ---> Running in 8a0f880ea2ca
Removing intermediate container 8a0f880ea2ca
 ---> 49374a5b3802
Step 8/14 : ENV ARCH=arm64
 ---> Running in c6a08b2057d0
Removing intermediate container c6a08b2057d0
 ---> dd871e379a96
Step 9/14 : ENV VERSION=test
 ---> Running in 07e7c373ece7
Removing intermediate container 07e7c373ece7
 ---> 9dabd61d9cd0
Step 10/14 : RUN build/build.sh
 ---> Running in 66471550192c
Verbose: 0

> webpack-cli@3.2.1 postinstall /go/src/github.com/kubernetes-up-and-running/kuard/client/node_modules/webpack-cli
> lightercollective


     *** Thank you for using webpack-cli! ***

Please consider donating to our open collective
     to help us maintain this package.

  https://opencollective.com/webpack/donate

                    ***

added 819 packages from 505 contributors and audited 887 packages in 86.018s
found 683 vulnerabilities (428 low, 4 moderate, 251 high)
  run `npm audit fix` to fix them, or `npm audit` for details

> client@1.0.0 build /go/src/github.com/kubernetes-up-and-running/kuard/client
> webpack --mode=production

Browserslist: caniuse-lite is outdated. Please run next command `npm update caniuse-lite browserslist`
Hash: 52ca742bfd1307531486
Version: webpack 4.28.4
Time: 39644ms
Built at: 02/05/2021 6:48:35 PM
    Asset     Size  Chunks                    Chunk Names
bundle.js  333 KiB       0  [emitted]  [big]  main
Entrypoint main [big] = bundle.js
 [26] (webpack)/buildin/global.js 472 bytes {0} [built]
[228] (webpack)/buildin/module.js 497 bytes {0} [built]
[236] (webpack)/buildin/amd-options.js 80 bytes {0} [built]
[252] ./src/index.jsx + 12 modules 57.6 KiB {0} [built]
      | ./src/index.jsx 285 bytes [built]
      | ./src/app.jsx 7.79 KiB [built]
      | ./src/env.jsx 5.42 KiB [built]
      | ./src/mem.jsx 5.81 KiB [built]
      | ./src/probe.jsx 7.64 KiB [built]
      | ./src/dns.jsx 5.1 KiB [built]
      | ./src/keygen.jsx 7.69 KiB [built]
      | ./src/request.jsx 3.01 KiB [built]
      | ./src/highlightlink.jsx 1.37 KiB [built]
      | ./src/disconnected.jsx 3.6 KiB [built]
      | ./src/memq.jsx 6.33 KiB [built]
      | ./src/fetcherror.js 122 bytes [built]
      | ./src/markdown.jsx 3.46 KiB [built]
    + 249 hidden modules
go: finding github.com/prometheus/client_golang v0.9.2
go: finding github.com/spf13/pflag v1.0.3
go: finding github.com/miekg/dns v1.1.6
go: finding github.com/pkg/errors v0.8.1
go: finding github.com/elazarl/go-bindata-assetfs v1.0.0
go: finding github.com/BurntSushi/toml v0.3.1
go: finding github.com/felixge/httpsnoop v1.0.0
go: finding github.com/julienschmidt/httprouter v1.2.0
go: finding github.com/dustin/go-humanize v1.0.0
go: finding golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: finding github.com/spf13/viper v1.3.2
go: finding github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: finding github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: finding github.com/matttproud/golang_protobuf_extensions v1.0.1
go: finding github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: finding github.com/golang/protobuf v1.2.0
go: finding github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: finding golang.org/x/sync v0.0.0-20181108010431-42b317875d0f
go: finding golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: finding golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: finding github.com/hashicorp/hcl v1.0.0
go: finding github.com/spf13/afero v1.1.2
go: finding github.com/coreos/go-semver v0.2.0
go: finding golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9
go: finding github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8
go: finding github.com/fsnotify/fsnotify v1.4.7
go: finding github.com/spf13/jwalterweatherman v1.0.0
go: finding github.com/coreos/etcd v3.3.10+incompatible
go: finding gopkg.in/yaml.v2 v2.2.2
go: finding golang.org/x/text v0.3.0
go: finding github.com/pelletier/go-toml v1.2.0
go: finding github.com/magiconair/properties v1.8.0
go: finding github.com/mitchellh/mapstructure v1.1.2
go: finding github.com/stretchr/testify v1.2.2
go: finding github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6
go: finding golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a
go: finding github.com/coreos/go-etcd v2.0.0+incompatible
go: finding github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77
go: finding github.com/spf13/cast v1.3.0
go: finding github.com/davecgh/go-spew v1.1.1
go: finding gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405
go: finding github.com/pmezard/go-difflib v1.0.0
go: downloading github.com/julienschmidt/httprouter v1.2.0
go: downloading github.com/pkg/errors v0.8.1
go: downloading github.com/miekg/dns v1.1.6
go: downloading github.com/spf13/viper v1.3.2
go: downloading github.com/felixge/httpsnoop v1.0.0
go: downloading github.com/spf13/pflag v1.0.3
go: downloading github.com/prometheus/client_golang v0.9.2
go: extracting github.com/pkg/errors v0.8.1
go: extracting github.com/julienschmidt/httprouter v1.2.0
go: extracting github.com/felixge/httpsnoop v1.0.0
go: extracting github.com/spf13/viper v1.3.2
go: downloading github.com/elazarl/go-bindata-assetfs v1.0.0
go: extracting github.com/elazarl/go-bindata-assetfs v1.0.0
go: extracting github.com/spf13/pflag v1.0.3
go: downloading gopkg.in/yaml.v2 v2.2.2
go: downloading github.com/dustin/go-humanize v1.0.0
go: extracting github.com/miekg/dns v1.1.6
go: downloading github.com/fsnotify/fsnotify v1.4.7
go: downloading github.com/hashicorp/hcl v1.0.0
go: extracting github.com/dustin/go-humanize v1.0.0
go: downloading github.com/magiconair/properties v1.8.0
go: downloading github.com/spf13/afero v1.1.2
go: extracting github.com/fsnotify/fsnotify v1.4.7
go: downloading golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: downloading github.com/spf13/jwalterweatherman v1.0.0
go: downloading github.com/spf13/cast v1.3.0
go: extracting github.com/spf13/jwalterweatherman v1.0.0
go: extracting gopkg.in/yaml.v2 v2.2.2
go: extracting github.com/spf13/afero v1.1.2
go: extracting github.com/magiconair/properties v1.8.0
go: extracting github.com/prometheus/client_golang v0.9.2
go: downloading github.com/mitchellh/mapstructure v1.1.2
go: extracting github.com/spf13/cast v1.3.0
go: downloading golang.org/x/text v0.3.0
go: downloading golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: extracting github.com/mitchellh/mapstructure v1.1.2
go: extracting github.com/hashicorp/hcl v1.0.0
go: downloading github.com/pelletier/go-toml v1.2.0
go: downloading golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: downloading github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: downloading github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: extracting github.com/pelletier/go-toml v1.2.0
go: downloading github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: extracting github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: extracting github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: downloading github.com/golang/protobuf v1.2.0
go: downloading github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: extracting github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: extracting github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.1
go: extracting github.com/matttproud/golang_protobuf_extensions v1.0.1
go: extracting github.com/golang/protobuf v1.2.0
go: extracting golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: extracting golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: extracting golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: extracting golang.org/x/text v0.3.0
Removing intermediate container 66471550192c
 ---> 236f3050bc93
Step 11/14 : FROM alpine
 ---> 1fca6fe4a1ec
Step 12/14 : USER nobody:nobody
 ---> Using cache
 ---> cabde1f6b77c
Step 13/14 : COPY --from=build /go/bin/kuard /kuard
 ---> 39e8b0af8cef
Step 14/14 : CMD [ "/kuard" ]
 ---> Running in ca867aeb43ba
Removing intermediate container ca867aeb43ba
 ---> e1cb3fd58eb4
Successfully built e1cb3fd58eb4
Successfully tagged kuard:localbuild
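
To actually run the locally built image on the cluster, one way (a sketch, assuming the microk8s registry addon is enabled on its default port 32000) is to push it into the built-in registry and point a pod at it:

docker tag kuard:localbuild localhost:32000/kuard:localbuild
docker push localhost:32000/kuard:localbuild
kubectl run kuard-local --image=localhost:32000/kuard:localbuild -n kuard --port 8080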

Feb 24, 2021

Kubernetes: Run a docker image as pod or deployment?

If you want to run a docker image inside kubernetes, you can choose between at least two ways:

  1. pod
  2. deployment

The first is done with these commands:

kubectl create namespace kuard
kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080
kubectl expose pod kuard --type=NodePort --port=8080 -n kuard

To run the image inside a deployment, the commands look very similar:

kubectl create namespace kuard2
kubectl create deployment kuard2 --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard2
kubectl expose deployment kuard2 -n kuard2 --type=NodePort --port=8080

Both are done with three commands, but what is the difference?

# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   5          3d21h

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d20h

 versus

# kubectl get all -n kuard2
NAME                        READY   STATUS    RESTARTS   AGE
pod/kuard2-f8fd6497-4f7bc   1/1     Running   0          5m38s

NAME             TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard2   NodePort   10.152.183.233   <none>        8080:32627/TCP   4m32s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kuard2   1/1     1            1           5m39s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/kuard2-f8fd6497   1         1         1       5m38s

So as you can clearly see, the deployment variant additionally creates a deployment object and a replicaset. But this is not really a deployment you want to run in such an unconfigured way (remember: livenessProbes & readinessProbes can only be configured with kubectl apply + YAML). But you can get a template via 

kubectl get deployments kuard2 -n kuard2 -o yaml

which you can use for configuring all parameters - so this is easier than writing the complete YAML manually.
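
A typical workflow (a sketch) is to dump the template to a file, add the missing pieces and re-apply it:

kubectl get deployment kuard2 -n kuard2 -o yaml > kuard2.yaml

Then edit kuard2.yaml (e.g. add a livenessProbe under the container spec) and deploy it:

kubectl apply -f kuard2.yaml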

Feb 13, 2021

Kubernetes: LivenessProbes - first check

One key feature of kubernetes is that unhealthy pods will be restarted. How can this be tested?

First you should deploy KUARD (kubernetes up and running demo). With this docker image you can check the restart feature easily:

(To deploy kuard read this posting - but there are some small differences)

# kubectl create namespace kuard
namespace/kuard created

But you cannot use plain kubectl run here, because there is no command line parameter to add the livenessProbe configuration. So you have to write a yaml file:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kuard
  name: kuard
  namespace: kuard
spec:
  containers:
  - image: gcr.io/kuar-demo/kuard-arm64:3
    name: kuard
    livenessProbe:
      httpGet:
        path: /healthy
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 1
      periodSeconds: 10
      failureThreshold: 3

    ports:
    - containerPort: 8080
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
and then run

# kubectl apply -f kuard.yaml -n kuard

The exposed port (see this posting) will stay untouched, so you can reach your kuard over http.

So go to the tab "liveness probe" and you will see:

Now click on "Fail" and the livenessProbe will get a http 500:

 And after 3 retries you will see:

and the command line will show 1 restart:

# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   1          118s

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d21h
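
The probe failures also show up in the pod's events, which is handy if no browser is at hand:

# kubectl describe pod kuard -n kuard

The Events section at the bottom then lists the failed liveness probes and the restart of the container.
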
Really cool - but really annoying that this cannot be configured via CLI but only via YAML.



Feb 6, 2021

Microk8s: Running KUARD (Kubernetes Up And Running Demo) on a small cluster

There is a cool demo application, which you can use to check your kubernetes settings. This application is called kuard (https://github.com/kubernetes-up-and-running/kuard):

To get it running in a way that you can uninstall it easily, run the following commands:

# kubectl create namespace kuard
namespace/kuard created
You can deploy it via "kubectl run", or create a YAML with "kubectl run ... --dry-run=client --output=yaml" and deploy it via "kubectl apply":

#kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080 --dry-run=client --output=yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kuard
  name: kuard
  namespace: kuard
spec:
  containers:
  - image: gcr.io/kuar-demo/kuard-arm64:3
    name: kuard
    ports:
    - containerPort: 8080
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
or 

# kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080

To expose it in your cluster run:

# kubectl expose pod kuard --type=NodePort --port=8080 -n kuard
service/kuard exposed

And then check the port via

# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   5          3d20h

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d20h

The number after 8080: is the port you can use (http://zigbee:32047/) 

With kuard you can run DNS checks from the pods or browse the filesystem to check things... You can even set the status for the liveness and readiness probes.
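
And because everything lives in its own namespace, removing the demo again is a one-liner:

# kubectl delete namespace kuard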



Feb 4, 2021

Kubernetes: publishing services - clusterip vs. nodeport vs. loadbalancer and connecting to the services

In my posting http://dietrichschroff.blogspot.com/2020/11/kubernetes-with-microk8s-first-steps-to.html I described how to expose an NGINX on a kubernetes cluster, so that I was able to open the NGINX page with a browser which was not located on one of the kubernetes nodes.

After reading around, here are the fundamentals of why this worked and which alternatives can be used.

The command

kubectl expose deployment web --type=NodePort --port=80

can be used with the following types:

https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

So exposing via a clusterip only exposes your service internally. If you want to access it from the outside, follow this tutorial: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/ - but this is only a temporary solution.
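
The temporary access from that tutorial boils down to a single command (web is the deployment from my NGINX posting, the local port 8080 is arbitrary):

kubectl port-forward service/web 8080:80

As long as this command is running, the service is reachable on localhost:8080.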

Exposing to the outside without any additional component: just use nodeport (e.g. following my posting: http://dietrichschroff.blogspot.com/2020/11/kubernetes-with-microk8s-first-steps-to.html )

Loadbalancer uses a loadbalancer from Azure or AWS or ... (take a look here: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/ )

ExternalName maps a service to a DNS name (via a CNAME record) - e.g. to point at a service running outside the cluster.





Feb 3, 2021

Windows 10 & Office 2010: File Explorer crashes by clicking on a Office document

After an upgrade from whatever version to Windows 10, Microsoft installs the Office 365 apps. In combination with the already installed Office 2010 this leads to a crashing explorer (including a restart of the desktop's taskbar).

 

The funny thing: if you start Word/Excel/... first and then use "open file" inside your office application, everything works. The file chooser has no problem and does not freeze.

The solution is very easy:
Just uninstall the Office 365 apps.

All the other solutions proposed on the internet do not work, like:

  • Do a repair on your office application via system settings --> programs
  • Disable the preview feature for the file explorer
  • Disable custom shell extensions for the file explorer
  • Do a clean windows reinstall (<-- this was really a tip)

Jan 31, 2021

Office 365: Enable mail forwarding to external email domains...

For a society I do some IT administration things - and now something really new: an Office 365 tenant.

First thing was to enable mail forwarding to external email accounts. Sounds easy - hmmm, not really.

Configuring the forwarding in outlook.com is quite easy:

But this does not work:

Remote Server returned '550 5.7.520 Access denied, Your organization does not allow external forwarding. Please contact your administrator for further assistance. AS(7555)'

To change this behaviour you have to go to the admin settings:

https://protection.office.com/antispam

Now click on "Policy":

Then choose "Anti-spam":

Then choose the third entry (the outbound spam filter policy) and click "Edit policy":

And the last step: change "Automatic forwarding" to "On".
After you click save, emails will now be forwarded to external domains...
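
If you prefer a shell over clicking, the same switch can be flipped via Exchange Online PowerShell (a sketch - "Default" is the policy name in an unmodified tenant):

Connect-ExchangeOnline
Set-HostedOutboundSpamFilterPolicy -Identity Default -AutoForwardingMode On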


Jan 30, 2021

Microk8s: rejoining a node after a reinstall of microk8s

 

If you are running a microk8s kubernetes cluster, you can hit the scenario that you lose one node and have to reinstall the complete OS or just microk8s.

In this case you want to join this node to your cluster once again. But removing the old node entry does not work, because the rest of the cluster cannot reach the node (it is gone...):

root@zigbee:/home/ubuntu# microk8s.remove-node ubuntu
Removal failed. Node ubuntu is registered with dqlite. Please, run first 'microk8s leave' on the departing node.
If the node is not available anymore and will never attempt to join the cluster in the future use the '--force' flag
to unregister the node while removing it.

The solution is given in the error message: just add "--force"

root@zigbee:/home/ubuntu# microk8s.remove-node ubuntu --force
root@zigbee:/home/ubuntu# microk8s.add-node
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.178.57:25000/de0736090ce0055e45aff1c5897deba0
If the node you are adding is not reachable through the default interface you can use one of the following:
 microk8s join 192.168.178.57:25000/de0736090ce0055e45aff1c5897deba0
 microk8s join 172.17.0.1:25000/de0736090ce0055e45aff1c5897deba0
 microk8s join 10.1.190.192:25000/de0736090ce0055e45aff1c5897deba0

And then the join works without any problem:

root@ubuntu:/home/ubuntu# microk8s join 192.168.178.57:25000/de0736090ce0055e45aff1c5897deba0
Contacting cluster at 192.168.178.57
Waiting for this node to finish joining the cluster. ..  
 

Jan 27, 2021

Signal: Data backup of newer signal versions cannot be imported

 

I switched from Whatsapp to Signal (meaning: many conversations are now on Signal, but some are still left on Whatsapp) and afterwards I moved to a new smartphone.

But while doing the restore procedure for the backup (take a look here) I got this error:

Data backup of newer signal versions cannot be imported

or in German

Datensicherungen neuerer Signal-Versionen können nicht importiert werden


 

I checked the version numbers in the Android Play Store: both were 5.2.3.

On my new smartphone the Android OS was not on the latest release (there were still some outstanding OS updates to install).

But nothing did the job - I asked Signal support, so let's see what they tell me...

EDIT: Even uninstalling Signal on my old smartphone and reinstalling it led to the same error message...

Jan 26, 2021

MicroK8s: kubectl get componentstatus deprecated - etcd status missing


 

If you want to check the health of the basic components with

kubectl get componentstatuses 
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok        
scheduler            Healthy   ok       

Then etcd is missing.

This is due to a change in the kubernetes api: https://kubernetes.io/docs/setup/release/notes/#deprecation-5


The command to check etcd is:

kubectl get --raw='/readyz?verbose'
[+]ping ok
[+]log ok
[+]etcd ok
[+]informer-sync ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]shutdown ok
readyz check passed
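
The same pattern works for the liveness endpoint, if you want to script both checks:

kubectl get --raw='/livez?verbose'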


Jan 23, 2021

Microk8s: publishing the dashboard (reachable from remote/internet)

 

If you enable the dashboard on a microk8s cluster (or single node) you can follow this tutorial: https://microk8s.io/docs/addon-dashboard

The problem is, the command

microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

has to be re-executed every time you restart the node which you use to access the dashboard.

A better configuration can be done this way: run the following command and change "type: ClusterIP" to "type: NodePort":

kubectl -n kube-system edit service kubernetes-dashboard

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2021-01-22T21:19:24Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "3599"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 19496d44-c454-4f55-967c-432504e0401b
spec:
  clusterIP: 10.152.183.81
  clusterIPs:
  - 10.152.183.81
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Then run

root@ubuntu:/home/ubuntu# kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.152.183.81   <none>        443:30713/TCP   4m14s

After that you can access the dashboard via the port given behind the "443:" - in my case https://zigbee:30713
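
If you want to skip the interactive editor, the same change can be applied with a one-line patch (a sketch of the non-interactive variant):

kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'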