Dec 31, 2020

Samsung A50: boot loop problem after last Samsung OS update

I used a Samsung A50 for nearly 1.5 years and was very satisfied with the device. 128 GB internal storage and dual SIM - I do not need more :)

But last week the monthly "security" update was installed by Samsung, and after booting the new OS everything seemed to be fine. Only a few hours later (I did not install any new software - I was just browsing the web on my favourite news page) the smartphone froze, and since then it keeps showing the boot screen for hours:

By pressing "Volume Up" and "Power" I was able to enter recovery mode, but even after a factory reset the boot screen is still shown...

Anyone else with this problem? Please leave a comment!


Dec 30, 2020

My son started at blogspot.com

My son started his own blog:

https://holzgeschenkebasteln.blogspot.com/


Of course the blog is in German, but it is nice to see that he managed to get everything running and configured.

I am curious whether he will write some more postings...

Dec 23, 2020

MicroK8s: more problems - log flooding

After getting my Kubernetes nodes running on Ubuntu's MicroK8s, I got thousands of these messages in my syslog:

Dec 22 21:15:00 ubuntu microk8s.daemon-kubelet[10978]: W1122 21:15:00.735176   10978 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///var/snap/microk8s/common/run/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///var/snap/microk8s/common/run/containerd.sock: timeout". Reconnecting...

Dec 22 21:15:00 ubuntu microk8s.daemon-kubelet[10978]: W1122 21:15:00.737524   10978 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///var/snap/microk8s/common/run/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///var/snap/microk8s/common/run/containerd.sock: timeout". Reconnecting...

Really annoying - I found no root cause for this problem. But there is an easy way to get rid of it:

snap disable microk8s
snap enable microk8s
Run this on both nodes and the problem is gone (I think rebooting would do the same job).
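To verify the flooding has really stopped, you can watch the kubelet service directly (the service name is the same one microk8s inspect reports):

journalctl -u snap.microk8s.daemon-kubelet -f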



Dec 18, 2020

Review: AIOps for Dummies - the newest buzzword in town...

Today I came across an article on LinkedIn where this book was announced:


The next big thing after DevOps is AIOps?

Moogsoft says about itself: "Moogsoft is a pioneer and leading provider of AIOps solutions that help IT teams work faster and smarter. With patented AI analyzing billions of events daily across the world's most complex IT environments, the Moogsoft AIOps platform helps the world's top enterprises avoid outages, automate service assurance, and accelerate digital transformation initiatives...."

So let's take a look inside this book with 43 pages and 7 chapters:

Chapter one starts by stating the problem: DevOps & reliability teams need improvements in incident resolution, meeting SLAs and accelerating digital transformation. Very nice is the short case study that is provided there.

Chapter 2 starts with this sentence: "AI is technology used to create machines that imitate intelligent human behaviour." YES! They are not talking about the almighty AI - this sounds very promising. For Moogsoft, AI is statistics, probabilities, calculations and algebra - as a physicist I strongly agree with that "legacy" approach. The book then covers the AI learning techniques very briefly.

In chapter 3 the AIOps workflow is presented. Without going into any details here: Moogsoft uses a very nice iconic design which explains their procedure well. At this point I would recommend you take a look at that...

Chapter 4 provides some more use cases for AIOps. Nice - but nothing really new.

Chapter 5 claims that AIOps provides a unified view of monitoring, observability and change data. Sounds good - but I think digging into the details will show the limits of that promise. Page 32, however, shows a list of systems which are already integrated - a really impressive list.

In chapter 6 Moogsoft advertises its small entry-level solution "Moogsoft Express".

The last chapter closes with the typical "ten tips".

 

All in all a nice idea - let's see how this solution performs on the market!

Dec 16, 2020

zigbee: moving data from mqtt to influxdb - transforming strings to integers

After some first steps with ZigBee devices and storing the data in an InfluxDB, I noticed that string values are suboptimal for building graphs.

Moving the data from MQTT to InfluxDB was done with Telegraf:

https://www.influxdata.com/time-series-platform/telegraf/

I was wondering how I could change strings to integers, but this is very easy with Telegraf's enum processor:

  [[processors.enum]]
    order = 2
    [[processors.enum.mapping]]
      field = "state"
      [processors.enum.mapping.value_mappings]
        "ON" = 1
        "OFF" = 0
    [[processors.enum.mapping]]
      field = "contact"
      [processors.enum.mapping.value_mappings]
        "true" = 2
        "false" = 1
    [[processors.enum.mapping]]
      field = "tamper"
      [processors.enum.mapping.value_mappings]
        "true" = 1
        "false" = 0
    [[processors.enum.mapping]]
      field = "water_leak"
      [processors.enum.mapping.value_mappings]
        "true" = 1
        "false" = 0
Next problem: if the field "water_leak" was already written to your InfluxDB as a string, you cannot add numbers to it - so you have to drop the measurement and lose your data...

(This is not the full truth: you can export the data via a SELECT to a file and insert it again afterwards - with the appropriate numbers...)
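A rough sketch of that export/re-import, assuming the database is called "telegraf" and the measurement "mqtt_consumer" (Telegraf's default) - adjust the names to your setup:

influx -database telegraf -format csv -execute 'SELECT * FROM mqtt_consumer' > backup.csv
# edit backup.csv and replace "true"/"false" with the numbers you want
influx -database telegraf -execute 'DROP MEASUREMENT mqtt_consumer'
# then write the corrected rows back, e.g. as line protocol via "influx -import"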
 


Dec 12, 2020

My start to a local kubernetes cluster: microK8s @ubuntu

After playing around with ZigBee on a Raspberry Pi, I decided to build up my own Kubernetes cluster at home. I have two Raspberry Pis running Ubuntu Server, so I wanted to go in this direction:


The start is very easy. Just follow the steps shown here:

https://microk8s.io/docs

But after adding the second node I got the following result:

root@zigbee:/home/ubuntu/kubernetes# microk8s kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
ubuntu   NotReady   <none>   98s   v1.19.3-34+b9e8e732a07cb6
zigbee   NotReady   <none>   37m   v1.19.3-34+b9e8e732a07cb6
Hmmm.

The best way to debug this problem is:

# microk8s inspect
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-control-plane-kicker is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting juju
  Inspect Juju
Inspecting kubeflow
  Inspect Kubeflow

# Warning: iptables-legacy tables present, use iptables-legacy to see them
WARNING:  Docker is installed.
File "/etc/docker/daemon.json" does not exist.
You should create it and add the following lines:
{
    "insecure-registries" : ["localhost:32000"]
}
and then restart docker with: sudo systemctl restart docker
WARNING:  The memory cgroup is not enabled.
The cluster may not be functioning properly. Please ensure cgroups are enabled
See for example: https://microk8s.io/docs/install-alternatives#heading--arm
Building the report tarball
  Report tarball is at /var/snap/microk8s/1794/inspection-report-20201212_194335.tar.gz
And as you can see: this contains the solution!

After adding the /etc/docker/daemon.json everything went fine:

root@zigbee:~# kubectl get nodes 
NAME     STATUS   ROLES    AGE    VERSION
ubuntu   Ready    <none>   46h    v1.19.3-34+b9e8e732a07cb6
zigbee   Ready    <none>   2d3h   v1.19.3-34+b9e8e732a07cb6
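For reference, the fix boils down to the two warnings from the inspection report (a sketch; the cgroup kernel parameters are what the linked install-alternatives page suggests for ARM):

cat > /etc/docker/daemon.json <<'EOF'
{
    "insecure-registries" : ["localhost:32000"]
}
EOF
systemctl restart docker
# for the memory cgroup warning on a Raspberry Pi, append to /boot/firmware/cmdline.txt:
#   cgroup_enable=memory cgroup_memory=1
# and reboot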

Dec 11, 2020

MicroK8s: Dashboard & RBAC

If you want to access your dashboard and you have enabled RBAC (like shown here), you will get these errors if you follow the default manual (https://microk8s.io/docs/addon-dashboard):

secrets is forbidden: User "system:serviceaccount:default:default" cannot list resource "secrets" in API group "" in the namespace "default"
error
persistentvolumeclaims is forbidden: User "system:serviceaccount:default:default" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "default"
error
configmaps is forbidden: User "system:serviceaccount:default:default" cannot list resource "configmaps" in API group "" in the namespace "default"
error
services is forbidden: User "system:serviceaccount:default:default" cannot list resource "services" in API group "" in the namespace "default"
error
statefulsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "statefulsets" in API group "apps" in the namespace "default"
error
ingresses.extensions is forbidden: User "system:serviceaccount:default:default" cannot list resource "ingresses" in API group "extensions" in the namespace "default"
error
replicationcontrollers is forbidden: User "system:serviceaccount:default:default" cannot list resource "replicationcontrollers" in API group "" in the namespace "default"
error
jobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list resource "jobs" in API group "batch" in the namespace "default"
error
replicasets.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "replicasets" in API group "apps" in the namespace "default"
error
deployments.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "deployments" in API group "apps" in the namespace "default"
error
events is forbidden: User "system:serviceaccount:default:default" cannot list resource "events" in API group "" in the namespace "default"
error
pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "default"
error
daemonsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list resource "daemonsets" in API group "apps" in the namespace "default"
error
cronjobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list resource "cronjobs" in API group "batch" in the namespace "default"
error
namespaces is forbidden: User "system:serviceaccount:default:default" cannot list resource "namespaces" in API group "" at the cluster scope
 
To get the right bearer token you have to do this:

export K8S_USER="system:serviceaccount:default:default"
export NAMESPACE="default"
export BINDING="defaultbinding"
export ROLE="defaultrole"
kubectl create clusterrole $ROLE  --verb="*"  --resource="*.*"    
kubectl create rolebinding $BINDING --clusterrole=$ROLE --user=$K8S_USER -n $NAMESPACE
kubectl -n ${NAMESPACE} describe secret $(kubectl -n ${NAMESPACE} get secret | grep default-token | awk '{print $1}') | grep token: | awk '{print $2}'

(create role, add a role binding and then get the token)

But there is still one error:

To fix this, you have to add the cluster-admin role to this account (if you really want cluster-wide permissions):

kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=$K8S_USER

Dec 5, 2020

Securing InfluxDB

In my monitoring setup I am heavily using InfluxDB. I started with one Linux server running Grafana, which loads the data from its local InfluxDB; now I wanted to set up a second Linux server.

My options:

  1. new telegraf, new influxdb, new grafana
    but then I have two URLs (because of two Grafanas) and I cannot copy graphs from one dashboard to the other
  2. new telegraf, new influxdb, but grafana from first server
    Grafana has to get the data over the network
  3. new telegraf, influxdb & grafana from first server
    what happens if telegraf cannot reach influxdb because of a network problem? what if the first server is down?
  4. completely remote monitoring
    what happens if telegraf cannot reach the other server? what if the first server is down?

As you can see, option 2 is the favorite here.

For that, InfluxDB has to be secured: SSL + user/password.

So let's start with creating some certificates:

openssl req -new -x509 -nodes -out cert.pem -days 3650 -keyout key.pem

So that you get:

zigbee:/etc/influxdb# ls -lrt *pem
-rw-r--r-- 1 influxdb root  1704 Nov  7 09:48 key.pem
-rw-r--r-- 1 influxdb root  1411 Nov  7 09:48 cert.pem

Then add this in /etc/influxdb/influxdb.conf (in the [http] section):

 https-enabled = true
 https-certificate = "/etc/influxdb/cert.pem"
 https-private-key = "/etc/influxdb/key.pem"

But a user is still missing, so we have to create one (via bash):

influx -ssl -unsafeSsl

create user admin with password 'XXXXXXX' with all privileges
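Note: the user/password is only enforced if authentication is switched on in the [http] section as well (and InfluxDB restarted) - a detail that is easy to miss:

 auth-enabled = true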

After that you can test this with

root@zigbee:# influx -ssl -unsafeSsl  
Connected to https://localhost:8086 version 1.6.4
InfluxDB shell version: 1.6.4
> show databases
ERR: unable to parse authentication credentials
Warning: It is possible this error is due to not setting a database.
Please set a database with the command "use <database>".
> auth
username: admin
password:
> show databases
name: databases
name
----
_internal
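One consequence of enabling SSL and authentication: every Telegraf that writes into this InfluxDB now needs the https URL and the credentials in its output section (a sketch; insecure_skip_verify is required because of the self-signed certificate):

[[outputs.influxdb]]
  urls = ["https://yourserver:8086"]
  username = "admin"
  password = "XXXXXXX"
  insecure_skip_verify = true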

 


 

Dec 4, 2020

AVM Fritz.Box: how to do an automatic login and get the active WLAN devices

The AVM Fritz.Box is really a great device - but the possibilities to get monitoring data are very limited. (Please read this posting)

Which data do I want?


I want the data which is presented in the networking tab:

If I trace the network traffic with the developer tools, I see the following:

To reproduce this on the command line, I have to enter this into my bash:

curl 'http://fritz.box/data.lua' \
-H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:82.0) Gecko/20100101 Firefox/82.0' \
-H 'Accept: */*' \
-H 'Accept-Language: de,en;q=0.7,en-US;q=0.3' --compressed \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'Origin: http://fritz.box' -H 'Connection: keep-alive' \
-H 'Referer: http://fritz.box/' -H 'Pragma: no-cache' \
-H 'Cache-Control: no-cache' \
--data-raw 'xhr=1&sid=cb......SID&lang=de&page=netDev&xhrId=cleanup&useajax=1&no_sidrenew='

(the line continuations are escaped with backslashes; you have to insert your own SID in the last line)

Then you will get a JSON object beginning with these lines:

{
  "pid": "netDev",
  "hide": {
    "ssoEmail": true,
    "shareUsb": true,
    "liveTv": true,
    "faxSet": true,
    "dectMoniEx": true,
    "rss": true,
    "mobile": true,
and all the other information.

The problem: How to get this SID?

If you trace the login, it is not as simple as the password just being sent to the Fritz.Box. They use a PBKDF2-based challenge-response scheme, so the password itself is never transmitted to the Fritz.Box.

You can find some information about that here:

https://avm.de/fileadmin/user_upload/Global/Service/Schnittstellen/AVM%20Technical%20Note%20-%20Session%20ID_EN%20-%20Nov2020.pdf


Inside this document a PHP program is included which does the login (not really: I think it did the job years ago, but nowadays it falls back to MD5 authentication. I fixed this - just post a comment if you want the PBKDF2-enabled PHP script). I wrote a small JavaScript which I execute with Node.js, and after that I was able to log the data into my InfluxDB and show it inside Grafana:
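For the curious, here is a minimal bash sketch of the MD5 fallback login as described in the AVM document (user and password are placeholders; the PBKDF2 variant works analogously via login_sid.lua?version=2):

CHALLENGE=$(curl -s http://fritz.box/login_sid.lua | sed -n 's/.*<Challenge>\(.*\)<\/Challenge>.*/\1/p')
# the MD5 response is built over "challenge-password" encoded as UTF-16LE
HASH=$(echo -n "$CHALLENGE-mypassword" | iconv -t UTF-16LE | md5sum | awk '{print $1}')
SID=$(curl -s "http://fritz.box/login_sid.lua?username=myuser&response=$CHALLENGE-$HASH" | sed -n 's/.*<SID>\(.*\)<\/SID>.*/\1/p')
echo $SID   # 0000000000000000 means the login failed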


If you are interested in the configuration, the JS script and the collect commands, then post me a comment...

Dec 2, 2020

Kubernetes: Rights & Roles with kubectl and RBAC - How to restrict kubectl for a user to a namespace

Playing around with my MicroK8s I was thinking about restricting access to the default namespace. Why?

Every command adds something, so your default namespace gets polluted more and more, and cleaning up might be a lot of work.

But:

There is neither a HOWTO nor some quickstart into this. Everything you can find is:

https://kubernetes.io/docs/reference/access-authn-authz/rbac/

After reading this very detailed article you know a lot of things, but for restricting kubectl you are as smart as before.

One thing I learned from this article:

You do not have to use all these YAML files - everything can be done with commands and their options (I do not like YAML, so this was a very important insight for me).

At the end it is very easy:

export K8S_USER="ateamuser"
export NAMESPACE="ateam"
export BINDING="ateambinding"
export ROLE="ateamrole"
kubectl create namespace $NAMESPACE
kubectl label namespaces $NAMESPACE team=a
kubectl create clusterrole $ROLE  --verb="*"  --resource="*.*"
kubectl create rolebinding $BINDING --clusterrole=$ROLE --user=$K8S_USER -n $NAMESPACE
kubectl create serviceaccount $K8S_USER -n $NAMESPACE
kubectl describe sa $K8S_USER -n $NAMESPACE
and just test it with:

root@zigbee:/home/ubuntu/kubernetes# kubectl get pods -n ateam  --as=ateamuser
NAME                  READY   STATUS    RESTARTS   AGE
web-96d5df5c8-cc9jv   1/1     Running   0          14m
root@zigbee:/home/ubuntu/kubernetes# kubectl get pods -n default  --as=ateamuser
Error from server (Forbidden): pods is forbidden: User "ateamuser" cannot list resource "pods" in API group "" in the namespace "default"
So no big script is needed - but figuring out these commands was really hard work...

If you want to know how to restrict kubectl on a remote computer, please write a comment.

One last remark: In microK8s you enable RBAC with the command

microk8s.enable rbac

Check this with

microk8s.status
microk8s is running
high-availability: no
  datastore master nodes: 192.168.178.57:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    ha-cluster           # Configure high availability on the current node
    ingress              # Ingress controller for external access
    metrics-server       # K8s Metrics Server for API access to service metrics
    rbac                 # Role-Based Access Control for authorisation
  disabled:
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory



Nov 27, 2020

Kubernetes with microK8s: First steps to expose a service to external

At home I wanted to have my own Kubernetes cluster. I own two Raspberry Pis running Ubuntu, so I decided to install MicroK8s:

--> https://ubuntu.com/blog/what-can-you-do-with-microk8s

The installation is very well explained here:

https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#1-overview

 

BUT: nowhere did I find a tutorial on how to run a container and expose its port so that it is reachable from other PCs, not just from localhost.

So here we go:

kubectl create deployment web --image=nginx
kubectl expose deployment web --type=NodePort --port=80

After that just do:

# kubectl get all
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-96d5df5c8-5xvfc   1/1     Running   0          112s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP        2d5h
service/web          NodePort    10.152.183.66   <none>        80:32665/TCP   105s

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   1/1     1            1           112s

NAME                            DESIRED   CURRENT   READY   AGE
replicaset.apps/web-96d5df5c8   1         1         1       112s

On your Kubernetes node you can reach the service at 10.152.183.66:80.

To get the nginx from another PC just use:

<yourkuberneteshost>:32665
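A quick test from another machine (a sketch using my node name and the NodePort from the output above - yours will differ):

curl http://zigbee:32665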

For me this returned the nginx welcome page.



 


Nov 20, 2020

ZigBee@Linux: Getting Data from ZigBee Devices via MQTT to InfluxDB and Grafana

After getting ZigBee sensors integrated with my Linux Raspberry Pi, I did some monitoring tasks on the Raspberry Pi itself.

  1. Monitoring my raspberry pi:
    There is a very nice tutorial:
    https://medium.com/@andreea.sonda31/monitor-raspberry-pi-resources-and-parameters-with-grafana-board-part-1-ab0567303e8
    Or even better: Just use this from grafana:
    https://grafana.com/grafana/dashboards/10578
    1. add "deb https://packages.grafana.com/oss/deb stable main" to a file in /etc/apt/sources.list.d/
    2. apt install grafana telegraf influxdb
    3. configure telegraf for your influxdb
    4. import the json from the grafana.com-link above



  2. Monitoring my Fritz.Box with Grafana:
    https://grafana.com/grafana/dashboards/713 
    and follow the given tutorial https://fetzerch.github.io/2014/08/23/fritzcollectd/
After these steps I have the following infrastructures running:
  1. zigbee2mqtt --> MQTT --> FHEM


  2. Fritz.box --> collectd --> InfluxDB --> Grafana

  3. raspberry --> telegraf --> InfluxDB --> Grafana


For 2 and 3 it is very easy to create graphics, and the presentation looks a little bit prettier than 1 (imho).

AND there is only one frontend to configure. So what about the following chain for my ZigBee sensors:

  1. zigbee2mqtt --> MQTT --> telegraf --> InfluxDB --> Grafana

Looks like some more steps, but the telegraf --> InfluxDB --> Grafana chain is already there for monitoring my Raspberry Pi.

So I only had to add the following in /etc/telegraf/telegraf.conf:

[[inputs.mqtt_consumer]]
   servers = ["tcp://127.0.0.1:1883"]
   topics = [
     "zigbee2mqtt/0x00158d000542239e",
     "zigbee2mqtt/0x00158d00044a6378",
     "zigbee2mqtt/0x00158d0003f0faad",
     "zigbee2mqtt/0x00158d00044a72a2",
   ]
   data_format = "json"
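After changing the config, restart Telegraf and check that the measurements arrive (a sketch, assuming the default database name "telegraf"):

systemctl restart telegraf
influx -database telegraf -execute 'SHOW MEASUREMENTS'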

And after that I was able to use the data in Grafana:


 


Nov 15, 2020

ZigBee@Linux: Securing zigbee2mqtt & MQTT@FHEM & FHEM


Now that my setup is running, just some words about securing the whole thing.

The web GUI of FHEM was already set up with SSL/HTTPS, but the MQTT server is listening on all IPs.

The easiest way to secure this is to change the listener to localhost, so that no connections from outside can be made. Just change this in /opt/fhem/fhem.cfg:

define MQTT2_FHEM_Server MQTT2_SERVER 1883 127.0.0.1
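You can verify that the listener is now bound to localhost only (netstat as used in an earlier posting; 1883 is the MQTT default port):

netstat -ltnp | grep 1883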

Just a checklist to see whether we secured everything:
  • FHEM
  • zigbee2mqtt
    • add permit_join: false to configuration.yaml




Nov 14, 2020

ZigBee@Linux: Integration of zigbee2mqtt with FHEM (mqtt server) on ubuntu server

After the setup of FHEM and zigbee2mqtt the integration of both components has to be done.

What has to be done?

After reading the excellent FHEM documentation it is very easy - FHEM can be configured so that it provides an MQTT server.

First you have to add the following line in /opt/zigbee2mqtt/data/configuration.yaml inside the "mqtt:" section:

  client_id: 'zigbee_pi'

Then go to the command prompt of the FHEM webgui and enter the following:

define MQTT2_FHEM_Server MQTT2_SERVER 1883 global
defmod MQTT2_zigbee_pi MQTT2_DEVICE zigbee_pi
attr MQTT2_zigbee_pi IODev MQTT2_FHEM_Server
attr MQTT2_zigbee_pi bridgeRegexp zigbee2mqtt/([A-Za-z0-9]*)[/]?.*:.* "zigbee_$1"
After that you should see something like this:

(you can change the style of the page via "select style" on the left column)

Then you should save:


To create a graph just click on the file which is created for your zigbee device:


and then there should be something like:

Here you can click on "Create SVG plot", then on "Write .gplot file", and your first graph is there... Repeat this and you can get:

Zigbee@Linux: Infrastructure - Setup

On my way to home automation with ZigBee@Linux my decision (as I wrote in this posting) was:

  • Hardware
  • OS
    • Ubuntu server
  • Software
    • FHEM (which is the acronym for Freundliche Hausautomation und Energie-Messung = Friendly home automation and energy metering)
      This includes the server with MQTT infrastructure & webserver & GUI, based on Perl
    • zigbee2mqtt
      The server which handles the communication with the USB ZigBee stick and talks to the MQTT infrastructure, based on Node.js

 



The installation of FHEM was quite easy (see here) and the installation of zigbee2mqtt just worked like described here.

  1. Problem:
    FHEM is installed by default without SSL/HTTPS and without user authentication
  2. Problem:
    The communication between both components has to be set up

Here the solution for problem 1:

Log in to your Raspberry Pi and type the following commands:

cd /opt/fhem
chown fhem:dialout certs
cd certs/
openssl req -new -x509 -nodes -out server-cert.pem -days 3650 -keyout server-key.pem
chown fhem:dialout *
apt install libio-socket-ssl-perl

After that move to the web GUI (something like http://yourraspberry:8083) and submit the following commands at the prompt:

attr WEB sslVersion TLSv12:!SSLv3
attr WEB HTTPS 1

And then open your webfrontend with https://yourraspberry:8083.

To add a user:

@bash

echo -n fhem:MYPASSWD | base64

@Webfrontend:

attr WEB basicAuth BASE64String

The second problem will be solved in a future posting. Just wait...



Nov 8, 2020

Home automation with linux: How to use zigbee sensors on an ubuntu raspberry pi...

Towards the end of the year I wanted to start a new project: home automation...

I decided to use a Linux system (of course) on a Raspberry Pi (see the OS installation here) and the ZigBee protocol.

The main problem: What packages are needed 

  • to get a communication with zigbee components?
  • to get a website or app to get the data / visualize the data?
  • to set up a daemon/server which controls the devices?

Let's start with the third point: I will try FHEM.

The installation is described here:

https://debian.fhem.de/

wget -qO - http://debian.fhem.de/archive.key | apt-key add -
echo "deb http://debian.fhem.de/nightly/ /" >> /etc/apt/sources.list
apt update
apt upgrade
apt install fhem

After following these steps you can check whether FHEM is running with:

root@zigbee:/home/ubuntu# netstat -ltnup | grep 8083
tcp 0 0 0.0.0.0:8083 0.0.0.0:* LISTEN 19446/perl

or just connect to your Raspberry Pi via browser: http://zigbee:8083

 
 
And here a screenshot of the goal I want to achieve (maybe with some graphs added):


Here is a list of the supported hardware:

https://wiki.fhem.de/wiki/Kategorie:Hardware

and a list of all supported protocols:

https://wiki.fhem.de/wiki/System%C3%BCbersicht#Protokolle

Nov 6, 2020

Raspberry PI: Installing OS with a linux pc/laptop (ubuntu)

If you want to run a Raspberry Pi and you are wondering how to install the OS onto the SD card, please try this:

https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#1-overview

What is the benefit of this approach?

On page 2 of the tutorial, Ubuntu provides a package for the rpi-imager:
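The installation itself is a one-liner (as a snap, if I remember the tutorial correctly):

sudo snap install rpi-imager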

and with this tool (started with rpi-imager on the CLI) you will get this:



As you can see: you can choose from many different OSes, and the imager will do the download and the installation for you, including resizing the partition to the complete SD card...

And of course you can select your own image (e.g. in *.xz format) from your disk...

After writing, the image will be verified, and then you can start your Raspberry Pi...


 


Oct 25, 2020

Review: Terraform Up & Running

Because I am doing many projects in the cloud, Terraform is a tool I use regularly. And to get better, I decided to read this book:

If you are working with the cloud of one of the hyperscalers, then you should take a look at Terraform - and perhaps you should read this book ;-)

If you are interested, take a look at my review at amazon.de (like all my reviews: written in German ;-).


Oct 14, 2020

InfluxDB: collectd database not created...

Yesterday I followed a tutorial for building a dashboard for my Fritz.Box with Grafana.


Following the tutorial I had to set up collectd and InfluxDB.

My problem: I did not copy and paste the collectd config for influxdb.conf - I just uncommented the lines provided by the Ubuntu package.

And there was /usr/local/share/collectd/types.db mentioned.

This caused the following problem:

zerberus influxd[16239]: run: open server: open service: Stat(): stat /usr/local/share/collectd/types.db: no such file or directory

So I just touched this file, because I thought it was somewhere InfluxDB wants to store data.

But this was wrong, and in /var/log/syslog I saw the following errors:

unable to find "current" in TypesDB
unable to find "if_octets" in TypesDB
unable to find "if_errors" in TypesDB
unable to find "if_dropped" in TypesDB
unable to find "if_packets" in TypesDB
unable to find "if_octets" in TypesDB
unable to find "if_errors" in TypesDB
unable to find "if_dropped" in TypesDB
unable to find "if_packets" in TypesDB
unable to find "if_octets" in TypesDB
unable to find "if_errors" in TypesDB
unable to find "disk_octets" in TypesDB
unable to find "disk_ops" in TypesDB
unable to find "disk_time" in TypesDB
unable to find "disk_io_time" in TypesDB
unable to find "disk_octets" in TypesDB
unable to find "disk_ops" in TypesDB
unable to find "disk_time" in TypesDB
unable to find "disk_io_time" in TypesDB
unable to find "disk_octets" in TypesDB
unable to find "disk_ops" in TypesDB

?

The solution: search for types.db in /usr and use that as the entry for

typesdb = "/usr/share/collectd/types.db"

inside the section [[collectd]] in influxdb.conf...
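The search itself is a one-liner:

find /usr -name types.db 2>/dev/null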

 

 

Sep 24, 2020

Mission accomplished: OpenHack: Migrating Microsoft Workloads to Azure

After three days of hard work I got my first OpenHack badge:


Authorized by Microsoft

Here are the details from Microsoft:

Earners of the OpenHack: Migration badge understand how to execute an end-to-end migration through optimization. They have shown that they can utilize Azure Migrate to migrate virtual machines to Microsoft Azure and can modernize legacy applications by migrating to PaaS services such as Azure SQL Database and Azure App Service. They also have a foundational understanding of Azure identity, including hybrid identity with Azure AD and how to leverage Azure RBAC to govern and secure workloads.

It was really a great challenge to discuss and implement all the goals. Thanks to the excellent coaches and for providing the infrastructure!

Sep 5, 2020

Cloning my dual boot ubuntu to a larger SSD

After working a while with my laptop I reached the disk limit of my SSD (256GB). First impression: oh no - how to migrate onto a new, larger SSD...

But the prices have dropped so i bought a 1TB SSD and an external SSD box:

Now I was thinking: copy the partitions with dd from the original disk to the other - or better do a dd of the complete disk?

A friend offered me Acronis, but the software refused to start on my laptop...

I googled a bit around and found the following solution (inspired by http://www.geekyprojects.com/storage/how-to-clone-hard-drive-to-smaller-drive/):

clonezilla.org

And this worked excellently.
After cloning my old SSD to the new one, I removed the CD with Clonezilla and my laptop immediately booted from the new SSD, which was still inside the SSD box.

Really cool!

I checked Windows without replacing the SSD inside my laptop, and it worked just as well as the Ubuntu.

The next step was to boot a GParted ISO (can be found here) and resize the Linux partition up to the new limits.

So the last step was to open up my laptop and insert the 1TB SSD...

(Totally amazing that I did not have to touch GRUB or change the UEFI settings.)

Sep 4, 2020

Review: Container Storage for Dummies

After reading Running Containers in Production for Dummies this book fell into my hands:


 
 

Container Storage for Dummies is promoted by Red Hat and consists of 5 chapters on 35 pages.

The first chapter gives a short summary about containers. I liked this statement very much: "For example, a VM is like a heavy hammer. It assumes you’re running a server and that the server can host multiple applications. [...] And the container can run just about anywhere, even on a bare metal machine or in a VM — the container doesn’t care." The chapter ends with a motivation why containers need persistent storage: ephemeral containers are transient....
Chapter 2 has the title "Looking at Storage for and in Containers". The key argument here is: "Software-defined storage (SDS) separates storage hardware from storage controller software, enabling seamless portability across multiple forms of storage hardware. You can’t slice and dice storage using appliances or typical SAN/NAS as easily, readily, or quickly as you can with SDS." Both terms (storage for containers + storage in containers) are given a definition (just take a look inside the book ;-)).
In chapter 3 the authors want to convince the reader of the coolness of container-native storage, with phrases like "Container-Native Storage Is the Next Sliced Bread". I think the main argument in this section is that Red Hat contributes substantial parts to open source Kubernetes, so that Red Hat's OpenShift Container Storage fits in there easily. And this is done by introducing the Container Storage Interface, which can be used by all storage providers.
Chapter 4 motivates why developers like container-native storage: because it can be easily managed without SAN administrators...
The last chapter closes with ten reasons to move to container-native storage: simplified management, more automation, scalability, ...

In summary I think this book is a nice starting point on the problems of and possible solutions for container storage. It is a little bit disappointing that OpenShift is not really explained - but within only 35 pages this is really impossible.
If you are working or starting to work with containers, I recommend you read this booklet - it is a good start into the container world!



Aug 25, 2020

Review: Running Containers in Production for dummies

Last evening I read the following booklet:

Here is my review:

Chapter one gives, within 7 pages, an excellent introduction into "Containers & Orchestration Platforms". From Kubernetes via OpenShift/Docker Swarm up to Amazon EKS - many services are described. In my opinion Azure AKS is missing, but it is clear that every hyperscaler will provide you with its managed Kubernetes environment. At the end even Apache Mesos is listed - which is out of scope for most of us.
Building & Deploying Containers is the headline of chapter 2, and a brief, solid description of these topics is given. If you want to know what all the buzzwords like CI/CD/CS, pipelines and container registries are about: read that chapter and you have a good starting point.

Nearly 33% of the book(let) is about Monitoring Containers (chapter 3). This points in the right direction. You have to know what your containers are doing and what you have to change with continuous delivery and continuous deployment. If you are running tens or hundreds of containers, the monitoring has to be automatic as well - or you are lost. "A best practice for using containers is to isolate workloads by running only a single process per container. Placing a monitoring agent — which amounts to a second process or service — in each container to get visibility risks destroying a key value of containers: simplicity." - So building up monitoring is not as easy as it was on full-stack servers...

Chapter 4 is about Security. This focuses on the following topics: Implementing container limits against resource abuse, how to avoid outdated container images, management of secrets and image authenticity.

The last chapter closes with "Ten Container Takeaways".

 

Within 43 pages, a really nice starting point to learn about the world of Docker and container orchestration.

Aug 7, 2020

openssl: strange error.... (at first glance) error:2008F002:BIO

Some days ago I wanted to check the certificate of an IP address. No big deal - so I did:

schroff@zerberus:~$ openssl s_client -showcerts  -connect 82.165.229.87.87:443

140011908769088:error:2008F002:BIO routines:BIO_lookup_ex:system lib:../crypto/bio/b_addr.c:726:Name or service not known
connect:errno=22
So I opened Google to find a solution.
But Google turned up nothing really helpful.

?

The answer was very easy:
If I had read the command line carefully, I would have spotted my error:

THE IP ADDRESS WAS INVALID

I had written an IPv4 address with 5 octets instead of 4...

After using a correct IPv4 address the command worked as expected:
schroff@zerberus:~$ openssl s_client -showcerts -connect 82.165.229.87:443
CONNECTED(00000003)
Can't use SSL_get_servername
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = GeoTrust RSA CA 2018
verify return:1
depth=0 C = DE, ST = Rheinland-Pfalz, L = Montabaur, O = 1&1 Mail & Media GmbH, CN = gmx.net
verify return:1
---

Jun 13, 2020

Google GSI: Generic System Images for Smartphones

After building my own ROM I ran into some problems with the device drivers for the modem (the dual SIM was not recognized).
I discussed this with a few very skilled Android developers: the device drivers are the biggest problem when building ROMs.

But there is something called GSI: Generic System Images.

(see https://source.android.com/setup/build/gsi)

and:

The good news is that for my Samsung J530 a developer built a project which allows installing GSIs:


With this plus Havoc 3.5



And here the steps to Android 10 (which were provided to me by Micro[ice]):
  1. install TWRP 3.3.0
  2. install create vendor 2.0
  3. reboot recovery
  4. install project spaget x
    (if you get symlink error 7, flash revert vendor 2.0 and repeat from step 1 without revert vendor 2.0)
  5. install the GSI (Havoc-OS) to the system partition
    (don't reboot after you flash project spaget x)
  6. if you need to flash gapps, first you need to go to
    Wipe -> Advanced Wipe -> tick System -> Repair -> Resize
    (if you get error 1, resize again and it will be successful), then flash gapps
  7. (optional) flash areskernel rc2
  8. (optional) flash magisk
  9. reboot
  10. enjoy
And after that I have Android 10 running on my Samsung J5... (without any Samsung bloatware)