Report and chart TensorFlow Keras Metrics into Dynatrace

Artificial-intelligence and machine-learning models are trained, tested and used with their accuracy in mind, and therefore it's crucial to closely observe and monitor your AI model during the design phase.

TensorFlow offers a convenient way to attach a TensorBoard callback hook to your machine-learning model to receive and visualise the training and test performance of your model.

Now, TensorBoard is a really great tool and I use it a lot, but for production systems this approach is less useful, as TensorBoard is built for the design and development stage of your AI model.

In production, you want to attach a stable monitoring platform, such as Dynatrace, to closely watch the drift of your prediction models over time when they are confronted with new or changing input data.

Last week I came across a convenient way to write your own callback listener that automatically receives the metric logs of your model during all stages and forwards this information to Dynatrace.

Find the necessary TensorFlow to Dynatrace callback class on GitHub.
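For illustration, the following is a minimal, simplified sketch of what such a Keras callback can look like. The published class on GitHub is the reference implementation; the sketch below only assumes the Dynatrace metrics ingest v2 line protocol:

import requests
import tensorflow as tf

class DynatraceKerasCallback(tf.keras.callbacks.Callback):
    def __init__(self, metricprefix, modelname, url, apitoken):
        super().__init__()
        self.metricprefix = metricprefix
        self.modelname = modelname
        self.url = url
        self.apitoken = apitoken

    def on_epoch_end(self, epoch, logs=None):
        # Turn every Keras log entry (loss, accuracy, ...) into one
        # Dynatrace metric ingest line and push the batch via HTTP POST.
        lines = []
        for name, value in (logs or {}).items():
            lines.append('%s.%s,model=%s %f' % (self.metricprefix, name, self.modelname, value))
        requests.post(self.url,
                      data='\n'.join(lines),
                      headers={'Authorization': 'Api-Token ' + self.apitoken,
                               'Content-Type': 'text/plain'})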

It’s extremely simple to register such a Dynatrace callback listener, as it is shown below:

dt_callback = DynatraceKerasCallback(metricprefix='tensorflow',
                                     modelname='model',
                                     url='https://your.live.dynatrace.com/api/v2/metrics/ingest',
                                     apitoken='yoursecret')

model.fit(x=train_texts, y=target, epochs=100, callbacks=[dt_callback])

Then start training your model. As the Dynatrace callback was registered before the training stage, the TensorFlow metrics now show up in Dynatrace.

Integrate Dynatrace Software Intelligence into your GitHub CI/CD Pipeline

It’s common knowledge today that seamless monitoring and observability of all your production software stacks is key to successful service delivery. Along with a tight integration into your CI/CD pipeline, service and software monitoring offers a lot of insight into what goes wrong during your build, test and release workflows and how to quickly remediate outages.

As a Cloud Native SaaS platform, GitHub represents the home of most of the popular Open Source projects worldwide. It offers all the important features that are necessary to support the entire software lifecycle of your project.

GitHub Actions is one of those priceless features, as it allows you to choose from more than 6,000 individual CI/CD steps to automatically build, test and release your projects on virtual machines.

Dynatrace, on the other hand, represents the leading software observability and intelligence platform according to analysts such as Gartner. A Dynatrace monitoring environment allows you to closely observe the production behaviour of your software in realtime and to get notified about anomalous incidents that could lead to outages.

That said, it’s pretty obvious that a tight connection between your GitHub CI/CD pipeline and your Dynatrace monitoring environment offers a lot of benefits.

In my last project I implemented a purpose-built Dynatrace GitHub Action that allows you to directly push information, such as events and metrics, into your monitoring environment.

Typical use-cases are informing your DevOps team about broken builds of your software or collecting statistics about your build workflows, such as the number of code commits on your services or the number of failed builds versus successful builds.

You can even use Dynatrace to define dedicated SLOs (Service Level Objectives) for your CI/CD pipeline by using those metrics as Service Level Indicators (SLIs).

See below a typical GitHub build workflow that uses the Dynatrace GitHub Action to push a metric into a monitoring environment and that informs about broken builds as well as successful builds. Mind that I am sending a total count metric as well as both a failed count and a success count, which I will later use as SLI metrics in my Dynatrace CI/CD pipeline SLO.

See my GitHub workflow below:

name: 'build-test'
on: # rebuild any PRs and main branch changes
  pull_request:
  push:
    branches:
      - main
      - 'releases/*'

jobs:
  build: # make sure build/ci work properly
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: |
          npm install
      - run: |
          npm run all

  test: # clean machine without building
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Inform Dynatrace about a successful build
        if: ${{ success() }}
        uses: wolfgangB33r/dynatrace-action@v4
        with:
          url: '${{ secrets.DT_URL }}'
          token: '${{ secrets.DT_TOKEN }}'
          metrics: |
            - metric: "github.build.total"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
            - metric: "github.build.success"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
          events: |
            - title: "Successful Build"
              type: CUSTOM_INFO    
              description: "GitHub project ${{ github.repository }} was successfully built"
              source: GitHub
              entities:
                - MOBILE_APPLICATION-C061BED4799B41C5
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
      - name: Inform Dynatrace about a failed build 
        if: ${{ failure() }}
        uses: wolfgangB33r/dynatrace-action@v4
        with:
          url: '${{ secrets.DT_URL }}'
          token: '${{ secrets.DT_TOKEN }}'
          metrics: |
            - metric: "github.build.total"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
            - metric: "github.build.fails"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
          events: |
            - title: "Failed build"
              type: CUSTOM_INFO    
              description: "GitHub project ${{ github.repository }} build failed!"
              source: GitHub
              entities:
                - MOBILE_APPLICATION-C061BED4799B41C5
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang

The conditional Dynatrace step within the GitHub workflow above is then executed with every commit of your repository, as it is shown below:

Conditional Dynatrace GitHub Action Steps, either on success or on failure

After a successful run of your workflow, you will see both the event and the metric appear in your Dynatrace environment, as shown below:

Dynatrace event sent from your GitHub CI/CD workflow
GitHub CI/CD pipeline metrics

Define a Service-Level-Objective (SLO) for your GitHub CI/CD Pipeline

Now that Dynatrace is informed about each build success and failure, we can easily define an SLO for our CI/CD pipeline to continuously observe the quality of our builds.

See below the selection of the total count as well as the success count metric as the SLI metric for our SLO within Dynatrace:
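The SLI behind this SLO is simply the percentage of successful builds over all builds. A hedged sketch of what such a metric expression can look like (the exact selector syntax may differ in your Dynatrace version):

(100) * (github.build.success:splitBy()) / (github.build.total:splitBy())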

GitHub CI/CD pipeline SLO defined in Dynatrace

Now we see the current SLO state within the list of Dynatrace SLOs, and we can put our SLO state onto any of our Dynatrace dashboards:

Your new GitHub build workflow SLO

Summary

I have come to love the simplicity and efficiency of GitHub Actions over the last weeks. They helped me a lot to fully automate the CI/CD pipelines of my own GitHub projects, to save time during releases and to generally raise the quality of my projects.

The logical next step for me was to tightly integrate the GitHub workflow into my Dynatrace monitoring environment and to define SLOs for measuring the quality of my builds in realtime.

By implementing and publishing a Dynatrace GitHub Action, a tight integration between your GitHub workflows and your Dynatrace monitoring environment is now possible for everybody with a simple click in the GitHub Marketplace.

Automate your Android CI/CD Pipeline with GitHub Actions

When I recently came to play around with the GitHub Actions CI/CD pipeline framework, I could not believe how simple and effective that functionality is!

Whether you just want to automatically check each of your Git commits with lint or fully build your artefacts, GitHub Actions allows you to do that with a simple YAML configuration.

GitHub Actions allows the definition of different jobs that are automatically triggered by events happening within your Git repository (such as commits, pulls, the creation of a tag or release, comments, the creation of issues, and many more). As those job definitions live in the same Git repository, it’s the perfect solution for managing your CI/CD pipeline as code within a self-contained GitHub repository.

Within this post, I will describe how I came to fully automate the CI/CD pipeline of my production Android App (TabShop) by using GitHub Actions.

GitHub action tab within my Android app repository

Kudos to Niraj Prajapati who wrote such a great blog post and who inspired me to fully automate my own Android app’s CI/CD pipeline.

Why – What’s the value for app publishers?

I can’t emphasise the value of a fully automated CI/CD pipeline enough! I spent hours upon hours manually building and testing my Android app, to finally sign it and push it to the Google Play Store. So far, I have released 182 versions over 6 years. The build, test and release process gets more and more complex and error-prone. Freelance app publishers like me invest a significant amount of time into manual CI/CD processes, time that is much better spent building innovations into the app itself.

That said, GitHub Actions allows me to create and run a feature-rich CI/CD release process fully automatically in the cloud, which helps me save time and effort and lets me innovate!

Scope of my Android CI/CD Pipeline

This blog shows step-by-step how to implement the following tasks into your own GitHub Actions CI/CD pipeline:

  1. Build your app using the Gradle Build Tool
  2. Run your unit-tests
  3. Build a release app bundle
  4. Sign the app bundle
  5. Upload and expose the app bundle
  6. Push and release the app bundle in Google Play Console

Step 1: Automate your Android app build

The first step within our Android app’s CI/CD pipeline is to create a GitHub Actions YAML file and add a trigger that defines when the job should run.

Navigate to your GitHub project repository and click on the ‘Actions’ tab where you find a button to create a new ‘workflow’.

GitHub offers a lot of standard build workflows for the most popular technology stacks. In our case we either choose to skip the template selection or we choose the Android CI workflow as shown below:

Choose the Android Gradle CI workflow

The resulting workflow will create an Android build job that already fulfills our first goal: start up an Ubuntu instance, check out your app’s source code and execute the Gradle build file, as shown below:

Simple Android Gradle Build Job
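For reference, a minimal version of such a workflow could look like the following sketch, based on GitHub’s standard Android CI template (adjust the JDK version to your project):

name: Android CI
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Build with Gradle
        run: ./gradlew build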

The workflow above is triggered every time a ‘push’ or a ‘pull_request’ event occurs within your repository.

Step 2: Execute your unit-tests

Good unit test coverage is recommended to safeguard your app against failing or buggy code contributions. In most Android app projects, the unit test code is part of your Git repository, so Gradle is also used to build and execute your tests by adding the following step to your workflow:

Gradle step that runs your unit tests
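A minimal sketch of this step, assuming the standard Gradle test task:

      - name: Run unit tests
        run: ./gradlew test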

Step 3: Build a release app bundle

Next we will trigger the build of a release app bundle (AAB), which we will sign in the following step. App release bundles are the preferred way of shipping apps through the Google Play Store, as they are optimised in size and stripped of unnecessary content.

See below the workflow step that automatically builds our application release bundle:
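A sketch of the bundle build step, assuming the standard Gradle bundleRelease task:

      - name: Build release bundle
        run: ./gradlew bundleRelease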

Step 4: Sign the app bundle

Application bundles are typically signed with the certificate of a trustworthy app publisher, so that users can trust the origin of the installed app and can be sure that no third party injected malicious parts into it.

App marketplaces such as Google Play require apps to be signed with the certificate of the publisher to ensure the integrity of all published apps.

Therefore we will automatically sign our app bundle once it’s built by adding the workflow step below:

Sign an Android app bundle by using a GitHub action
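A hedged sketch of the signing step, assuming the community action r0adkll/sign-android-release and the default Gradle bundle output path:

      - name: Sign app bundle
        uses: r0adkll/sign-android-release@v1
        with:
          releaseDirectory: app/build/outputs/bundle/release
          signingKeyBase64: ${{ secrets.SIGNING_KEY }}
          alias: ${{ secrets.ALIAS }}
          keyStorePassword: ${{ secrets.KEY_STORE_PASSWORD }}
          keyPassword: ${{ secrets.KEY_PASSWORD }}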

The signing step above needs some additional information about your own certificate as well as the key store password and alias, which we provide as safe GitHub secret placeholders as shown below:

  • secrets.SIGNING_KEY
  • secrets.ALIAS
  • secrets.KEY_STORE_PASSWORD
  • secrets.KEY_PASSWORD

Convert your certificate file into a base64-encoded string that can be used as a GitHub repository secret within the placeholder ‘secrets.SIGNING_KEY’. On a Mac or Linux machine, the command for converting your keystore file into a base64-encoded string is already provided by openssl, as shown below:

openssl base64 -in my-release-key.keystore -out my-release-key.keystore.base64

See the resulting list of GitHub secrets within the screenshot below:

GitHub Secrets used by the signing workflow step

You can find the signing GitHub action that we used in our workflow on the GitHub Marketplace.

Step 5: Upload and expose the app bundle

Each workflow run spins up a completely clean Ubuntu instance that is wiped after the run is finished.

If you would like to keep a build artefact for later download you have to define a build step to upload and persist the artefact, as it is shown below:
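A sketch of such an upload step, assuming the official actions/upload-artifact action and the bundle path from the signing step:

      - name: Upload app bundle
        uses: actions/upload-artifact@v2
        with:
          name: signed-app-bundle
          path: app/build/outputs/bundle/release/app-release.aab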

After your workflow has finished successfully, you will find your file within the workflow execution screen:

Download Build Artefact

Step 6: Push and release the app bundle in Google Play Console

Now that we have successfully built and signed our application, we would like to automatically push the app as a new beta release into the Google Play Console.

Again there is a dedicated GitHub Action that helps to achieve this cumbersome task, see below:

Another important prerequisite for a successful Google Play upload is the creation of a ‘Service account’ that holds the necessary IAM role for uploading artefacts into your Google Play account.

To create a new service account you have to navigate to your Google Play Console > Settings > API Access as it is shown in the screenshot below:

Google Play Console Service Account Creation

Create a new Service Account with release access right for your application. In case you are a Google Cloud user as well, you have to create the Service Account user within Google Cloud Console instead and then grant access to the selected app project.

Once you have your service account created, you have to create a JSON key for that service account and put it in a GitHub secret placeholder again. Just copy the JSON string into a GitHub secret field with the name ‘SERVICE_ACCOUNT_JSON’.

Create and download a JSON key for your Google Service Account

Once you have stored your service account key in a GitHub secret, you can create a workflow step to download it during the workflow run and store it in a file (service_account.json), as it is shown below:

Download the key to a local json file
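A hedged sketch of this step, writing the secret to a local file (the secret name SERVICE_ACCOUNT_JSON matches the one created above):

      - name: Write the Google service account key to a file
        run: echo '${{ secrets.SERVICE_ACCOUNT_JSON }}' > service_account.json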

The final step is to use the Upload Action to publish your application bundle to Google Play Console as it is shown below:

Upload and push an Android application bundle to Google Play
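A sketch of the final publishing step, assuming the community action r0adkll/upload-google-play; the package name is a hypothetical placeholder:

      - name: Publish to Google Play (beta track)
        uses: r0adkll/upload-google-play@v1
        with:
          serviceAccountJson: service_account.json
          packageName: com.example.myapp
          releaseFile: app/build/outputs/bundle/release/app-release.aab
          track: beta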

An important note here: you will receive an error message if you did not enable App Signing in your Google Play account. To opt into app signing, simply navigate to Google Play Console > Your App > Setting > App Signing, as shown below. You have to upload your signing key as a private key file (which can be exported by Android Studio).

Summary

It’s amazing how easy and productive it is to use a GitHub Actions workflow to completely automate your Android app release process. It helps you to ensure consistent release quality and saves a lot of time, especially for small and independent app publishers. See the running CI/CD workflow below.

Well done GitHub and Microsoft!

Kick out annoying Ads by using Pi-Hole and your Synology NAS!

First things first: I fell in love with my Synology NAS! After a year of running my DS-218+, I can’t believe how I used to work without it. What is so special about the DS-218+ Synology DiskStation is not that it is an incredibly flexible network storage, BUT much more that it is capable of running Docker containers!

When I first realized that I could seamlessly run my Home Assistant automation system as well as my MQTT broker right from my Synology NAS, I was astonished.

No more additional hardware, no additional power consumption, just run it inside your NAS (which is powered on anyway).

But now I came across another absolutely amazing use-case, which is to block all the annoying advertisements from every website I am reading. By running a Pi-Hole Docker container on my DiskStation, I can route all DNS requests through that local DNS server in order to block all the advertisement domains.

Sounds cool? It definitely is, as it is transparently blocking all ads for all your devices in your local network without any change within your browser.

Best thing is: As your browser is not even aware that all ads are automatically blocked by DNS, all the news sites can’t detect that you are blocking their content requests.

Blocking all the ads within the web pages you are loading at the DNS level even speeds up your local network, as it simply avoids loading all the ad resources and annoying video ads, and it renders web pages much faster than before.

How to set up Pi-Hole with your DiskStation?

See below the necessary steps for installing Pi-Hole on your NAS. I will go into detail for each of the steps in the following sections:

  1. Install Docker package within your DiskStation
  2. Install Pi-Hole docker image
  3. Launch Pi-Hole docker image on your NAS
  4. Configure your router to use your NAS as new DNS server
  5. Alternatively, configure your local devices network to use the NAS as new DNS server
  6. You are ready!

1. Install Docker package

As a first step, if you have not already done so, install the Docker package within your Synology package manager as shown below:

2. Install Pi-hole Docker image

After installing the Docker package, you are ready to download your Pi-Hole Docker image. You do so by navigating to the Docker package, opening it, searching for the Pi-hole container as shown below, and downloading the image:

3. Configure and Launch Pi-Hole Image

Launch the Pi-hole docker image and configure all its ports to ‘Auto’ except the DNS ports, as shown below:
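For reference, the equivalent plain Docker invocation could look roughly like this sketch (the web port and timezone are assumptions; the Synology UI configures the same mappings through its dialogs):

docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 32781:80/tcp \
  -e TZ=Europe/Vienna \
  pihole/pihole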

Once the Pi-hole image is launched, you can check which port was automatically assigned to the HTTP administration interface. In my case it’s port 32781. You can then reach your local Pi-Hole web interface by opening YOUR_NAS_IP:32781/admin in your web browser.

Your Pi-Hole web interface will show statistics about how many ads were already blocked, as shown below:

4. Configure your Router to use Pi-Hole as DNS server

The router configuration depends on your own router model. Check your router manual and search for the configuration of the DNS servers. Typically, you will find a Google DNS server 8.8.8.8 configured there, which you delete and replace with the IP of your own Synology DiskStation.

Once you have replaced the DNS configuration on your router to point to the address of your DiskStation, all devices within your network will route their DNS queries through your DiskStation’s Pi-Hole DNS server. The Pi-Hole server will then only return a correct DNS address for non-advertising domains, so your browsers never load the embedded adverts.

5. Alternatively, configure your device to use Pi-Hole as DNS

Unfortunately, in my case my router does not offer the possibility to configure the DNS address.

An alternative here is to change the configuration on all your local devices, such as laptops, PCs and tablets, to use your own Pi-Hole DNS server, as shown below:

I hope that my short article gave you some ideas on how to get rid of all the annoying ads within the websites you are reading day by day.

Overall, the Pi-Hole DNS server is a great way of kicking out the ads and speeding up your browsing experience.

Again a fine solution running on my beloved Synology drive.

Finally, I want to thank the team around Pi-Hole for building and maintaining such a great solution!! 🚀 🚀

Unpacking and Testing my brand new Original Prusa MINI 3D Printer

When ordering my Original Prusa MINI way back in April this year, amidst the Corona crisis, I did not anticipate the 4 months of impatient waiting time. Prusa Research stopped the shipping of 3D printers during the Corona crisis to produce urgently needed face shields for hospitals, which is a testament to Josef Prusa’s innovative personality.

Now I am assembling the newly arrived Prusa MINI 3D printer, which can be done in a few simple steps within around 30 minutes.

Acquiring an original Prusa model was a deliberate decision, as I am a big fan of local innovation. With Prague only 250 km away from my hometown, I loved to see that Josef Prusa set up his Prusa Research company in the heart of that beautiful city, rather than producing in China or anywhere else abroad.

The assembly instructions already prove this decision right, as they deliver a clear step-by-step guide on how to finish the printer, interrupted by the occasional intake of Haribo sweets (which came with the printer).

See below the finished assembly of the printer:

Finished assembly of my Original Prusa MINI printer

When the printer starts for the first time, it runs a self-test that automatically checks whether the critical parts are all working within the given tolerances. Once the self-test is finished, the printer needs a first calibration of the desired z-distance, which is critical for the overall quality of each print. Using the click wheel, the user has to adjust the offset position of the extruder towards the print bed.

The calibration process is really user-friendly, and the click wheel turns out to be a simple and effective input method, which reminds me of the iPod wheel.

The color display and menu structure are exceptionally well designed, again proof that an original Prusa printer is way ahead of the no-name competition from China.

Another great benefit of a Prusa printer is the lively community and the active development of its firmware, which constantly pushes forward in terms of new features and the fine-tuning of stability and usability.

After I performed my first print on the freshly calibrated 3D printer, I found that its level of detail and surface smoothness is comparable, or even superior, to the 2,000-Euro printer we operate at work.

The next step for me will be to explore the newly added Ethernet connectivity features that came along with the Prusa MINI. A connection to your network means that you can remotely monitor the progress of your print and read out some telemetry data during each print. Josef Prusa states that the networking capability is just a first glance of what is coming in the future, and one can imagine a lot of beneficial networked features growing out of that statement. A networked 3D printer is definitely a must for operating large-scale 3D printing farms, as print jobs can be distributed automatically.

Overall, I am happy that my decision to buy an original Prusa-designed printer turned out to be the best choice!

See below some examples of our first steps with the newly arrived Prusa Mini printer:

Pokemon print with Natural translucent Filament
Print of a small rabbit with Prusa Filament
Translucent, natural Filament

New TabShop Help Page

Over the last weeks I prepared a completely new help and tutorial page for all our TabShop users, which can be found here:

https://tabshop.smartlab.at/help.html

The help page is focused on how to solve typical Point of Sale (POS) related use-cases within TabShop. Examples are how to define your own product stock lists, how to check out and print an invoice, and how to change the appearance of your TabShop Android Point of Sale system.

The help page will grow in terms of content over the next couple of weeks and cover more and more use-cases for mobile cashiers.

TabShop Point of Sale (POS) Celebrating 500K Downloads

TabShop began in 2012, when it was first published in the Google Play Store. Back then, I could not imagine how popular this Android application would become.

Now, 8 years later, TabShop POS is reaching 500K overall downloads, with hundreds of shops using TabShop day by day worldwide.

The app came a long way, adding feature after feature and battling competitors such as Square POS for the top spot in the Android Play Store year after year.

Recently, TabShop returned to its original single-app strategy and removed the PRO version from the Play Store. Instead, an in-app upgrade to PRO mode is offered directly within the application. This allows users a more seamless conversion between the free and PRO versions without switching apps.

The free Android POS app TabShop is still offered without any annoying advertisements, free forever.

To celebrate 500,000 TabShop downloads, we created a new intro video.

Covid-19 (Corona) Visualization for Austria

Dealing with data, statistics and visualization in my daily job, and locked down at home through the governmental precautions, I thought I’d build a Covid-19 (Corona) virus information dashboard. The dashboard is built using the Johns Hopkins University data (“2019 Novel Coronavirus COVID-19 (2019-nCoV) Data Repository by Johns Hopkins CSSE”; you can download and review the raw data in this GitHub repository).

The dashboard is of course focused on Austria and its neighboring countries, as that is where I live. See the screenshot below:

Use a Telegram Bot to talk with your Smart Home Automation

Telegram is a great way to receive home assistant (HASS) smart home automation information directly pushed to your Android or iPhone. By creating your own Telegram bot you get a lot of flexibility on sending home automation messages or even images from your security cameras to your mobile phone.

The first step of attaching a Telegram bot with your own home assistant (HASS) environment is to create your own Telegram bot. Contact the BotFather in Telegram as it is shown below and follow the creation wizard to receive your bot secret.

Then configure your Home assistant system to communicate with your newly created bot, as it is shown below:

# Example configuration.yaml entry for the Telegram bot
telegram_bot:
  - platform: polling
    api_key: !secret telegram
    allowed_chat_ids:
      - 123456789
      - 123456780

As a result you can extend all your Home assistant automation scripts to use your own Telegram bot to send out important notifications and images to your phone, as it is shown below:
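A hedged example of such an automation (the notifier name and the door sensor entity are hypothetical placeholders; the chat_id must match one of the allowed IDs above):

# Example configuration.yaml entries
notify:
  - platform: telegram
    name: telegram
    chat_id: 123456789

automation:
  - alias: "Notify on front door open"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door
        to: "on"
    action:
      - service: notify.telegram
        data:
          message: "The front door was opened!"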

Read more about Telegram integration and how to automate your home with the open source Home assistant platform within my own ebook available at Amazon.

Open Source Home Automation with Esp32 and Home Assistant (Hass.io)

ebook Open Source Home Automation - Introduction to Home assistant and Esp32 based automation

As a dedicated Home Assistant user for years now, I came to write an ebook covering all the important topics around home automation and building your own ESP32-based sensors and actuators. Home Assistant is Open Source, written in Python, and a lively community maintains over 3,000 custom-made components that allow you to control nearly everything in your home.

For everything else there is the cheap and handy ESP32 or ESP8266 microcontroller, which comes with built-in wireless network support and the capability to control any hardware you can think of. Soldering your own ESP8266-based sensors is really fun and allows you to add a lot of flexibility to your home automation. With a very low price tag of $5, the ESP8266 microcontroller is a practical basis for all your custom-made sensors and actuators.

Read more about the details on how to solder your own sensors and how to attach them to your own Home Assistant system in my brand new ebook: ‘Open Source Home Automation‘.