Integrate Dynatrace Software Intelligence into your GitHub CI/CD Pipeline

It’s common knowledge today that seamless monitoring and observability of all your production software stacks is key to successful service delivery. Tightly integrated into your CI/CD pipeline, service and software monitoring offers plenty of insight into what goes wrong during your build, test and release workflows and how to quickly remediate outages.

As a cloud-native SaaS platform, GitHub is the home of most of the world’s popular open source projects. It offers all the important features necessary to support the entire software lifecycle of your project.

GitHub Actions is one of those priceless features, as it lets you choose from more than 6,000 individual CI/CD steps to automatically build, test and release your projects on virtual machines.

Dynatrace, on the other hand, is the leading software observability and intelligence platform according to analysts such as Gartner. A Dynatrace monitoring environment allows you to closely observe the production behaviour of your software in real time and to get notified about anomalous incidents that could lead to outages.

That said, it’s pretty obvious that a tight connection between your GitHub CI/CD pipeline and your Dynatrace monitoring environment offers a lot of benefits.

In my latest project I implemented a purpose-built Dynatrace GitHub Action that allows you to push information, such as events and metrics, directly into your monitoring environment.

Typical use cases are informing your DevOps team about broken builds or collecting statistics about your build workflows, such as the number of code commits on your services or the number of failed versus successful builds.

You can even use Dynatrace to define dedicated Service Level Objectives (SLOs) for your CI/CD pipeline by using those metrics as Service Level Indicators (SLIs).

Below is a typical GitHub build workflow that uses the Dynatrace GitHub Action to push a metric into a monitoring environment and to report broken as well as successful builds. Note that I am sending a total count metric along with both a failed count and a success count, which I will later use as SLI metrics in my Dynatrace CI/CD pipeline SLO.

See my GitHub workflow below:

name: 'build-test'
on: # rebuild any PRs and main branch changes
  pull_request:
  push:
    branches:
      - main
      - 'releases/*'

jobs:
  build: # make sure build/ci work properly
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: |
          npm install
      - run: |
          npm run all

  test: # clean machine without building
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Inform Dynatrace about a successful build
        if: ${{ success() }}
        uses: wolfgangB33r/dynatrace-action@v4
        with:
          url: '${{ secrets.DT_URL }}'
          token: '${{ secrets.DT_TOKEN }}'
          metrics: |
            - metric: "github.build.total"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
            - metric: "github.build.success"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
          events: |
            - title: "Successful Build"
              type: CUSTOM_INFO    
              description: "GitHub project ${{ github.repository }} was successfully built"
              source: GitHub
              entities:
                - MOBILE_APPLICATION-C061BED4799B41C5
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
      - name: Inform Dynatrace about a failed build 
        if: ${{ failure() }}
        uses: wolfgangB33r/dynatrace-action@v4
        with:
          url: '${{ secrets.DT_URL }}'
          token: '${{ secrets.DT_TOKEN }}'
          metrics: |
            - metric: "github.build.total"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
            - metric: "github.build.fails"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
          events: |
            - title: "Failed build"
              type: CUSTOM_INFO    
              description: "GitHub project ${{ github.repository }} build failed!"
              source: GitHub
              entities:
                - MOBILE_APPLICATION-C061BED4799B41C5
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang

The conditional Dynatrace steps within the GitHub workflow above are then executed with every commit to your repository, as shown below:

Conditional Dynatrace GitHub Action Steps, either on success or on failure

After a successful run of your workflow, you will see both the event and the metric appear in your Dynatrace environment, as shown below:

Dynatrace event sent from your GitHub CI/CD workflow
GitHub CI/CD pipeline metrics

Define a Service-Level-Objective (SLO) for your GitHub CI/CD Pipeline

Now that Dynatrace is informed about each build success and failure, we can easily define an SLO for our CI/CD pipeline to continuously observe the quality of our builds.

Below you can see how the total count and the success count metrics are selected as the SLIs for our SLO within Dynatrace:

GitHub CI/CD pipeline SLO defined in Dynatrace
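The SLO essentially relates successful builds to all builds. A minimal sketch of such an SLI metric expression, assuming the two custom metric keys from the workflow above are ingested unchanged (please verify the exact selector syntax against your Dynatrace version), could look like this:

(github.build.success:splitBy() / github.build.total:splitBy()) * 100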

The current SLO state now shows up within the list of Dynatrace SLOs, and you can put it onto any of your Dynatrace dashboards:

Your new GitHub build workflow SLO

Summary

I came to love the simplicity and efficiency of GitHub Actions within the last weeks. They helped me a lot to fully automate the CI/CD pipelines of my own GitHub projects, to save time during releases and to generally raise the quality of my projects.

The logical next step for me was to tightly integrate my GitHub workflows with my Dynatrace monitoring environment and to define SLOs for measuring the quality of my builds in real time.

By implementing and publishing a Dynatrace GitHub Action, a tight integration between your GitHub workflows and Dynatrace is now possible for everybody with a simple click in the GitHub Marketplace.

Automate your Android CI/CD Pipeline with GitHub Actions

When I recently started playing around with the GitHub Actions CI/CD pipeline framework, I could not believe how simple and effective that functionality is!

It does not really matter whether you just want to automatically lint each of your Git commits or fully build your artefacts: GitHub Actions allows you to do that with a simple YAML configuration.

GitHub Actions allows the definition of different jobs that are automatically triggered by events happening within your Git repository (such as commits, pull requests, the creation of a tag or release, comments, the creation of issues, and many more). As those job definitions live in the same Git repository, it’s the perfect solution for managing your CI/CD pipeline as code within a self-contained GitHub repository.

In this post, I will describe how I fully automated the CI/CD pipeline of my production Android app (TabShop) by using GitHub Actions.

GitHub action tab within my Android app repository

Kudos to Niraj Prajapati who wrote such a great blog post and who inspired me to fully automate my own Android app’s CI/CD pipeline.

Why – What’s the value for app publishers?

I can’t emphasise the value of a fully automated CI/CD pipeline enough! I spent hours upon hours manually building and testing my Android app, to finally sign it and push it to the Google Play Store. So far, I have released 182 versions over 6 years, and the build, test and release process gets ever more complex and error-prone. Freelance app publishers like me invest a significant amount of time into manual CI/CD processes, time that would be much better spent building innovations into the app itself.

That said, GitHub Actions allows me to create and run a feature-rich CI/CD release process fully automatically in the cloud, which helps me to save time and effort and to innovate!

Scope of my Android CI/CD Pipeline

This blog shows step-by-step how to implement the following tasks into your own GitHub Actions CI/CD pipeline:

  1. Build your app using the Gradle Build Tool
  2. Run your unit-tests
  3. Build a release app bundle
  4. Sign the app bundle
  5. Upload and expose the app bundle
  6. Push and release the app bundle in Google Play Console

Step 1: Automate your Android app build

The first step within our Android app’s CI/CD pipeline is to create a GitHub Actions YAML file and to add a trigger that defines when the job should run.

Navigate to your GitHub project repository and click on the ‘Actions’ tab where you find a button to create a new ‘workflow’.

GitHub offers a lot of standard build workflows for the most popular technology stacks. In our case we can either skip the template selection or choose the Android CI workflow, as shown below:

Choose the Android Gradle CI workflow

The resulting workflow creates an Android build job that already fulfils our first goal: it starts up an Ubuntu instance, checks out your app’s source code and executes the Gradle build file, as shown below:

Simple Android Gradle Build Job
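In case the screenshot is hard to read, a minimal sketch of what GitHub generates from the Android CI template looks roughly like this (the branch names are assumptions you should adapt):

name: Android CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Grant execute permission for gradlew
        run: chmod +x gradlew
      - name: Build with Gradle
        run: ./gradlew build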

The workflow above runs every time a ‘push’ or a ‘pull_request’ event occurs within your repository.

Step 2: Execute your unit-tests

Good unit test coverage is recommended to safeguard your app against failing or buggy code contributions. In most Android app projects, the unit test code is part of your Git repository, so Gradle is also used to build and execute your tests by adding the following step to your workflow:

Gradle step that runs your unit tests
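A minimal sketch of that step, assuming the standard Gradle test task:

      - name: Run unit tests
        run: ./gradlew test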

Step 3: Build a release app bundle

In this step we trigger the build of a release app bundle (AAB), which we will sign in the following step. App release bundles are the preferred way of shipping apps through the Google Play Store, as they are optimised in size and stripped of unnecessary content.

See below the workflow step that automatically builds our application release bundle:
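The step itself is a single Gradle invocation; a sketch, assuming the default bundleRelease task of the Android Gradle plugin:

      - name: Build release bundle
        run: ./gradlew bundleRelease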

Step 4: Sign the app bundle

Application bundles are typically signed with the certificate of a trustworthy app publisher, so that users can trust the origin of the installed app and be sure that no third party injected malicious parts into it.

App marketplaces such as Google Play require apps to be signed with the certificate of the publisher to ensure the integrity of all published apps.

Therefore we will automatically sign our app bundle once it’s built by adding the workflow step below:

Sign an Android app bundle by using a GitHub action
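In case the screenshot does not render, here is a sketch of such a signing step; I am assuming the community action r0adkll/sign-android-release, which matches the secrets listed below, and the default bundle output path:

      - name: Sign app bundle
        uses: r0adkll/sign-android-release@v1
        with:
          releaseDirectory: app/build/outputs/bundle/release
          signingKeyBase64: ${{ secrets.SIGNING_KEY }}
          alias: ${{ secrets.ALIAS }}
          keyStorePassword: ${{ secrets.KEY_STORE_PASSWORD }}
          keyPassword: ${{ secrets.KEY_PASSWORD }}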

The signing step above needs some additional information: your own certificate as well as the key store password and alias, which we provide as secure GitHub secret placeholders, as shown below:

  • secrets.SIGNING_KEY
  • secrets.ALIAS
  • secrets.KEY_STORE_PASSWORD
  • secrets.KEY_PASSWORD

Convert your certificate file into a base64-encoded string that can be used as a GitHub repository secret within the placeholder ‘secrets.SIGNING_KEY’. If you are using a Mac you are lucky, as the command for converting your secret file into a base64-encoded string is already provided by openssl, as shown below:

openssl base64 -in my-release-key.keystore -out my-release-key.keystore.base64

See the resulting list of GitHub secrets within the screenshot below:

GitHub Secrets used by the signing workflow step

You can find the signing GitHub Action that we used in our workflow on the GitHub Marketplace.

Step 5: Upload and expose the app bundle

Each workflow run spins up a completely clean Ubuntu instance that is wiped after it finishes.

If you would like to keep a build artefact for later download, you have to define a build step that uploads and persists the artefact, as shown below:
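A sketch of such an upload step, using the official actions/upload-artifact action (the artefact name and path are assumptions):

      - name: Upload app bundle
        uses: actions/upload-artifact@v2
        with:
          name: signed-app-bundle
          path: app/build/outputs/bundle/release/app-release.aab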

After your workflow has successfully finished, you will find your file within the workflow execution screen:

Download Build Artefact

Step 6: Push and release the app bundle in Google Play Console

Now that we have successfully built and signed our application, we would like to automatically push the app as a new beta release into the Google Play Console.

Again there is a dedicated GitHub Action that helps to achieve this otherwise cumbersome task; we will use it in the final step below.

Another important prerequisite for a successful Google Play upload is the creation of a ‘Service account’ that holds the necessary IAM role for uploading artefacts into your Google Play account.

To create a new service account, navigate to Google Play Console > Settings > API Access, as shown in the screenshot below:

Google Play Console Service Account Creation

Create a new service account with release access rights for your application. If you are a Google Cloud user as well, you have to create the service account within the Google Cloud Console instead and then grant it access to the selected app project.

Once you have created your service account, create a JSON key for it and put it in a GitHub secret placeholder again. Just copy the JSON string into a GitHub secret field with the name ‘SERVICE_ACCOUNT_JSON’.

Create and download a JSON key for your Google Service Account

Once you have stored your service account key in a GitHub secret, you can create a workflow step that writes it to a local file (service_account.json) during the workflow run, as shown below:

Download the key to a local json file
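In case the screenshot does not render, a sketch of that step simply writes the secret to a local file:

      - name: Create service_account.json
        run: echo '${{ secrets.SERVICE_ACCOUNT_JSON }}' > service_account.json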

The final step is to use the upload action to publish your application bundle to the Google Play Console, as shown below:

Upload and push an Android application bundle to Google Play
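A sketch of that upload step, assuming the community action r0adkll/upload-google-play and a placeholder package name:

      - name: Deploy bundle to Google Play (beta track)
        uses: r0adkll/upload-google-play@v1
        with:
          serviceAccountJson: service_account.json
          packageName: com.example.myapp   # replace with your app's package name
          releaseFile: app/build/outputs/bundle/release/app-release.aab
          track: beta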

An important note: you will receive an error message if you did not enable App Signing in your Google Play account. To opt into app signing, simply navigate to Google Play Console > Your App > Settings > App Signing, as shown below. You have to upload your signing key as a private key file (which can be exported from Android Studio).

Summary

It’s amazing how easy and productive it is to use a GitHub Actions workflow to completely automate your Android app release process. It helps you to ensure consistent release quality and saves a lot of time, especially for small and independent app publishers. See the running CI/CD workflow below.

Well done GitHub and Microsoft!

Kick out annoying Ads by using Pi-Hole and your Synology NAS!

First things first: I fell in love with my Synology NAS! After a year of running my DS-218+, I can’t believe how I ever worked without it. What is so special about the DS-218+ Synology DiskStation is not that it is an incredibly flexible network storage device, BUT much more that it is capable of running Docker containers!

When I first realized that I could seamlessly run my home assistant automation system as well as my MQTT broker right from my Synology NAS, I was astonished.

No more additional hardware, no additional power consumption, just run it inside your NAS (which is powered on anyway).

But now I came across another absolutely amazing use case, which is to block all the annoying advertisements from every website I am reading. By running a Pi-Hole Docker container on my DiskStation, I can route all DNS requests through that local DNS server in order to block all the advertisement domains.

Sounds cool? It definitely is, as it transparently blocks all ads for every device in your local network without any change within your browser.

Best thing is: as your browser is not even aware that all ads are automatically blocked by DNS, the news sites can’t detect that you are blocking their content requests.

Blocking the ads at the DNS level even speeds up your local network, as it simply avoids loading all the ad resources and annoying video ads, so web pages render much faster than before.

How to set up Pi-Hole with your DiskStation?

See below the necessary steps for installing Pi-Hole on your NAS. I will go into detail for each of the steps in the following sections:

  1. Install Docker package within your DiskStation
  2. Install Pi-Hole docker image
  3. Launch Pi-Hole docker image on your NAS
  4. Configure your router to use your NAS as new DNS server
  5. Alternatively, configure your local devices network to use the NAS as new DNS server
  6. You are ready!

1. Install Docker package

As a first step, if you have not already done so, install the Docker package within your Synology package manager, as shown below:

2. Install Pi-hole Docker image

After installing the Docker package, you are ready to download the Pi-Hole Docker image. Navigate to the Docker package, open it, search for the Pi-hole container, as shown below, and download the image:

3. Configure and Launch Pi-Hole Image

Launch the Pi-hole docker image and configure all its ports to ‘Auto’ except the DNS ports, as shown below:
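If you prefer a declarative setup over the Synology UI, a roughly equivalent docker-compose sketch could look like this (the timezone, password and web port are assumptions you should adapt):

version: "3"
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"    # DNS must stay on the standard port
      - "53:53/udp"
      - "32781:80/tcp" # web admin interface
    environment:
      TZ: "Europe/Vienna"
      WEBPASSWORD: "choose-a-secure-password"
    restart: unless-stopped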

Once the Pi-hole image is launched, you can check which port was automatically assigned to the HTTP administration interface. In my case it’s port 32781. You can then reach your local Pi-Hole web interface by typing YOUR_NAS_IP:32781/admin into your web browser.

Your Pi-Hole web interface will show statistics about how many ads were already blocked, as shown below:

4. Configure your Router to use Pi-Hole as DNS server

The router configuration depends on your router model. Check your router manual and search for the configuration of the DNS servers. Typically, you will find a Google DNS server (8.8.8.8) configured there, which you delete and replace with the IP of your own Synology DiskStation.

Once you have replaced the DNS configuration on your router with the address of your DiskStation, all devices within your network will route their DNS queries through your DiskStation’s Pi-Hole DNS server. The Pi-Hole server will then only return a correct DNS address for non-advertising domains, so your browsers never load the embedded adverts.

5. Alternatively, configure your device to use Pi-Hole as DNS

In my case, unfortunately, my router does not offer the possibility to configure the DNS address.

An alternative here is to change the configuration of all your local devices, such as laptops, PCs and tablets, to use your own Pi-Hole DNS server, as shown below:

I hope that my short article gave you some ideas on how to get rid of all the annoying ads within the websites you are reading day by day.

Overall, the Pi-Hole DNS server is a great way of kicking out the ads and speeding up your browsing experience.

Again a fine solution running on my beloved Synology drive.

Finally, I want to thank the team around Pi-Hole for building and maintaining such a great solution!! 🚀 🚀

Unpacking and Testing my brand new Original Prusa MINI 3D Printer

When ordering my Original Prusa MINI way back in April this year, in the midst of the Corona crisis, I did not anticipate the 4 months of impatient waiting time. Prusa Research stopped shipping 3D printers during the Corona crisis to produce urgently needed face shields for hospitals, which speaks for Josef Prusa’s innovative personality.

Now I am assembling the newly arrived Prusa MINI 3D printer, which can be done in a few simple steps within around 30 minutes.

Acquiring an original Prusa model was a deliberate decision, as I am a big fan of local innovation. With Prague only 250 km away from my hometown, I loved to see that Josef Prusa set up his Prusa Research company in the heart of that beautiful city, rather than producing in China or anywhere else abroad.

The assembly instructions already prove this decision right, as they deliver a clear step-by-step guide on how to finish the printer, interrupted by the occasional intake of Haribo sweets (which came with the printer).

See below the finished assembly of the printer:

Finished assembly of my Original Prusa MINI printer
Finished assembly of my Original Prusa MINI printer

When starting the printer for the first time, it runs a self-test that automatically checks whether all critical parts are working within the given tolerances. Once the self-test is finished, the printer needs a first calibration of the desired z-distance, which is critical for the overall quality of each print. Using the click wheel, the user adjusts the offset position of the extruder towards the print bed.

The calibration process is really user-friendly, and the click wheel appears to be a simple and effective input method that reminds me of the iPod wheel.

The colour display and menu structure are exceptionally well designed, again proof that an original Prusa printer is way ahead of its no-name competition.

Another great benefit of a Prusa printer is the lively community and the active development of its firmware, which constantly drives forward in terms of adding features and fine-tuning stability and usability.

After I performed my first print on the freshly calibrated 3D printer, I found that its level of detail and smooth surfaces are comparable, or even superior, to the 2K Euro printer we operate at work.

The next step for me will be to explore the newly added Ethernet connectivity features that came along with the Prusa MINI. A connection to your network means that you can remotely monitor the progress of your print and read out telemetry data during each print. Josef Prusa states that the networking capability is just a first glimpse of what is coming in the future, and we can imagine a lot of beneficial networked features coming out of that statement. A networked 3D printer is definitely a must for operating large-scale 3D printing farms, as print jobs can be distributed automatically.

Overall, I am happy that my decision to buy an original Prusa printer turned out to be the best choice!

See below some examples of our first steps with the newly arrived Prusa Mini printer:

Pokemon print with Natural translucent Filament
Print of a small rabbit with Prusa Filament
Translucent, natural Filament

New TabShop Help Page

Over the last weeks I prepared a completely new help and tutorial page for all our TabShop users, which can be found here:

https://tabshop.smartlab.at/help.html

The help page focuses on how to solve typical Point of Sale (POS) use cases within TabShop. Examples are how to define your own product stock lists, how to check out and print an invoice, and how to change the appearance of your TabShop Android Point of Sale system.

The help page will grow in terms of content over the next couple of weeks and cover more and more use cases for mobile cashiers.

TabShop Point of Sale (POS) Celebrating 500K Downloads

TabShop began in 2012, when it was first published on the Google Play Store. Back then, I could not imagine how popular this Android application would become.

Now, 8 years later, TabShop POS is about to reach 500K overall downloads, with hundreds of shops worldwide using TabShop day by day.

The app has come a long way, adding feature after feature and battling with competitors such as Square POS for the top spot in the Play Store year after year.

Recently, TabShop returned to its original single-app strategy and removed the PRO version from the Play Store. Instead, an in-app upgrade to PRO mode is offered directly within the application. This allows users a more seamless conversion between the free and PRO versions without switching apps.

The free Android POS app TabShop is still offered without any annoying advertisements, free forever.

To celebrate 500,000 TabShop downloads, we created a new intro video.

Covid-19 (Corona) Visualization for Austria

Dealing with data, statistics and visualization in my daily job, and locked down at home through the governmental precautions, I thought I’d build a Covid-19 (Corona) virus information dashboard. The dashboard is built on the “2019 Novel Coronavirus COVID-19 (2019-nCoV) Data Repository by Johns Hopkins CSSE”; you can download and review the raw data in this GitHub repository.

The dashboard is of course focused on Austria and its neighbouring countries, as that’s where I live. See the screenshot below:

Use a Telegram Bot to talk with your Smart Home Automation

Telegram is a great way to receive home assistant (HASS) smart home automation information directly pushed to your Android or iPhone. By creating your own Telegram bot you get a lot of flexibility on sending home automation messages or even images from your security cameras to your mobile phone.

The first step in attaching a Telegram bot to your own Home Assistant (HASS) environment is to create the bot itself. Contact the BotFather in Telegram, as shown below, and follow the creation wizard to receive your bot secret.

Then configure your Home Assistant system to communicate with your newly created bot, as shown below:

# Example configuration.yaml entry for the Telegram Bot
telegram_bot:
  - platform: polling
    api_key: !secret telegram
    allowed_chat_ids:
      - 123456789
      - 123456780

As a result, you can extend all your Home Assistant automation scripts to use your own Telegram bot to send out important notifications and images to your phone, as shown below:
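A minimal sketch of such a notification setup, assuming a hypothetical binary_sensor.front_door entity and the first chat ID from the configuration above:

notify:
  - platform: telegram
    name: telegram_wolfgang
    chat_id: 123456789

automation:
  - alias: "Notify when the front door opens"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door
        to: 'on'
    action:
      - service: notify.telegram_wolfgang
        data:
          message: "The front door was just opened!"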

Read more about the Telegram integration and how to automate your home with the open source Home Assistant platform in my own ebook, available on Amazon.

Open Source Home Automation with Esp32 and Home Assistant (Hass.io)

ebook Open Source Home Automation - Introduction to Home assistant and Esp32 based automation

As a dedicated Home Assistant user for years now, I came to write an ebook covering all the important topics around home automation and tinkering with your own Esp32-based sensors and actuators. Home Assistant is open source, written in Python, and a lively community maintains over 3,000 custom-made components that allow you to control nearly everything in your home.

For everything else there is the cheap and handy Esp32 or ESP8266 microcontroller, which comes with built-in wireless network support and the capability to control any hardware you can think of. Soldering your own ESP8266-based sensors is really fun and adds a lot of flexibility to your home automation. With a very low price tag of around $5, the ESP8266 microcontroller is a practical basis for all your custom-made sensors and actuators.

Read more about the details on how to solder your own sensors and how to attach them to your own Home Assistant system in my brand new ebook: ‘Open Source Home Automation‘.

Build a wireless MQTT temperature and humidity sensor for your Home Assistant

Over the last months, I became more and more addicted to Home Assistant (Hass.io) and low-cost wireless MQTT sensors. I was already familiar with several home and industrial automation systems that all come with specific hardware (and price tags) and build upon completely proprietary software stacks. So, long story short, I was searching for a good community-backed open source home automation system that is easy to set up and runs on my old Raspberry Pi.

As home automation seems to be a broad area of interest, I thought there would be hundreds of open source community projects out there. It was not as easy as I thought, and there are not that many home automation projects around. It seems as if the market is broadly dominated by large vendors that offer integrated solutions.

After some cumbersome failures I was finally able to find a real gem in the home automation area, called Home Assistant, or Hass.io for short. Home Assistant comes as a lightweight installation that perfectly fulfills the following requirements:

  1. It’s lightweight and consumes few resources
  2. Easy to set up
  3. Nice web interface that also works well on my tablet and smartphone (no app required; the responsive web UI is great on your mobile device too). See a live demo here.
  4. Lots of community components available (>1000), such as Alexa, IFTTT, Hue, Sonos, Chromecast, webcams, and many more.
  5. Fully configurable through plain-text YAML files
  6. It comes with an integrated MQTT broker!
  7. Supports automation scripts, such as turning a light on at sunset
  8. Best of all, it’s written in Python and it’s open source

The first step towards building my own MQTT wireless weather station was to set up a Home Assistant instance on my old Linux laptop. If you already have Python 3 running on your system, the setup process is pretty straightforward, just type:

python3 -m pip install homeassistant
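Afterwards you can start the server with a single command (a minimal sketch, assuming the pip install above succeeded; the first start takes a while, as it creates the configuration folder):

hass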

After a successful installation, you just enter the .homeassistant configuration folder and adapt the .yaml configuration files that control what your Home Assistant instance shows and how elements are organized in the web UI.

The most important configuration files are configuration.yaml, which contains the core configuration of sensors and components, and groups.yaml, which groups all your components into visual tabs within the UI. Within my installation I chose to use a default group, one for my living room and one for controlling my pool, as shown in the screenshot below:

As my screenshot already shows, my Home Assistant instance contains some MQTT-based sensors that continuously inform me about temperature and humidity (outside and in the living room). You can put a sensor’s output into any of your configured tabs; the same sensor can also be present in multiple tabs at the same time.

To add a new MQTT sensor, simply add the following sensor section to your core configuration.yaml file:

sensor:
  - platform: mqtt
    name: "Temperature"
    state_topic: "/home/outdoor/sensor1"
    value_template: "{{ value_json.temperature }}"
    unit_of_measurement: '°C'
  - platform: mqtt
    name: "Humidity"
    state_topic: "/home/outdoor/sensor1"
    value_template: "{{ value_json.humidity }}"
    unit_of_measurement: '%'

You can then show this newly added sensor value in any of your configured groups, as shown below:

default_view:
  name: Home
  view: yes
  entities:
    - sensor.airquality
    - sensor.temperature
    - sensor.humidity
    - sensor.yr_symbol
    - sun.sun
    - camera.mjpeg_camera
    - device_tracker.alice
    - device_tracker.bob
    - switch.robby
    - switch.lamp
indoor:
  name: Livingroom
  view: yes
  entities:
    - sensor.temperaturelivingroom
    - sensor.humiditylivingroom
    - media_player.livingroom
pool:
  name: Pool
  view: yes
  entities:
    - sensor.watertemperature
    - switch.poolcover
    - switch.poollight
    - switch.poolpump
    - switch.poolbot

Now it’s time to test whether the sensor shows a value when it receives an MQTT message on the configured topic. For this purpose, Home Assistant offers a simple MQTT test message UI in which you can simulate any incoming MQTT message, as shown below. Just enter your MQTT topic and send a static value:
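For example, matching the sensor configuration above, you could publish the following topic and JSON payload from the test UI:

Topic:   /home/outdoor/sensor1
Payload: {"temperature": "30", "humidity": "70"}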

After a click on the ‘publish’ button, those two values, 30 and 70, will appear in your temperature and humidity sensors. You can do that dry run for all of your MQTT-bound sensors, which is a convenient feature for testing the server-side functionality of your home automation.

The next step is to build a cheap temperature and humidity sensor that sends its measurements over WLAN to your Home Assistant MQTT broker. As the base sensor board I decided to use an ESP8266 (or an equivalent ESP32) microcontroller board, which offers a cheap (~5 USD) platform with an integrated WLAN stack and many digital and analog input pins. See below an image of the chosen Esp32 board:

The ESP8266 board can easily be flashed over a USB cable and runs with a standard Arduino bootloader, so you can use the Arduino IDE to program your tiny ESP8266 board. To measure temperature and humidity, the combined digital DHT22 sensor was used, as shown below:

To connect the DHT22 sensor to your ESP8266 board, simply attach its Vin pin to the 3V pin of the ESP8266 board, the ground pin to any of the ground pins, and the signal pin to any of the ESP8266 digital input pins.

The following Arduino code snippet shows how to initialize the DHT22 sensor and how to read and report the sensor values through an MQTT message:

#include <ESP8266WiFi.h>
#include <EEPROM.h>
#include <DHT.h>
#include <DHT_U.h>
#include <PubSubClient.h>
#include <ArduinoJson.h>

/* WiFi and MQTT broker settings: replace these placeholders with your own
   values (in the full source, they are loaded from an EEPROM-stored config) */
const char* WIFI_SSID = "your-ssid";
const char* WIFI_PASSWORD = "your-password";
const char* MQTT_SERVER = "192.168.0.10"; // IP of your Home Assistant MQTT broker

/* Globals used for business logic only */
#define MQTT_VERSION MQTT_VERSION_3_1_1
// MQTT: client ID and server port
const char* MQTT_CLIENT_ID = "sensor2_dht22_s";
const uint16_t MQTT_SERVER_PORT = 1883;
// MQTT: topic, must match the state_topic configured in Home Assistant
const char* MQTT_SENSOR_TOPIC = "/home/outdoor/sensor1";
// pause between two measurements
const uint16_t SLEEPING_TIME_IN_SECONDS = 60; // 60 seconds
// DHT - D1/GPIO5
#define DHTPIN 5
#define DHTTYPE DHT22

DHT dht(DHTPIN, DHTTYPE);
WiFiClient wifiClient;
PubSubClient client(wifiClient);

/* Business logic */
// function called to publish the temperature and the humidity
void publishData(float p_temperature, float p_humidity, float p_airquality) {
    // create a JSON object (ArduinoJson v5 API)
    StaticJsonBuffer<200> jsonBuffer;
    JsonObject& root = jsonBuffer.createObject();
    // INFO: the data must be converted into a string; a problem occurs when using floats...
    root["temperature"] = (String)p_temperature;
    root["humidity"] = (String)p_humidity;
    root["airquality"] = (String)p_airquality;
    root.prettyPrintTo(Serial);
    Serial.println("");
    /*
    {
    "temperature": "23.20",
    "humidity": "43.70"
    }
    */
    char data[200];
    root.printTo(data, root.measureLength() + 1);
    client.publish(MQTT_SENSOR_TOPIC, data, true);
    yield();
}

void setup() {
    Serial.begin(115200);
    dht.begin();
    Serial.print("INFO: Connecting to WiFi");
    WiFi.mode(WIFI_STA);
    WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
    while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
    }
    Serial.println("");
    Serial.println("INFO: WiFi connected");
    Serial.print("INFO: IP address: ");
    Serial.println(WiFi.localIP());
    // init the MQTT connection
    client.setServer(MQTT_SERVER, MQTT_SERVER_PORT);
}

void loop() {
    // reconnect to WiFi first in case the connection was lost
    if (WiFi.status() != WL_CONNECTED) {
        WiFi.mode(WIFI_STA);
        WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
        delay(5000);
        return;
    }
    // (re)connect to the MQTT broker if necessary
    if (!client.connected()) {
        client.connect(MQTT_CLIENT_ID);
    }

    // Reading temperature or humidity takes about 250 milliseconds!
    // Sensor readings may also be up to 2 seconds 'old' (it's a very slow sensor)
    float h = dht.readHumidity();
    // Read temperature as Celsius (the default)
    float t = dht.readTemperature();
    // no air quality sensor is attached in this minimal example
    float aq = 0.0;

    if (isnan(h) || isnan(t)) {
        Serial.println("ERROR: Failed to read from DHT sensor!");
    }
    else {
        publishData(t, h, aq);
    }
    delay(5000);
}

Download the full source code at GitHub.

After connecting, flashing and running our tiny 15 USD wireless sensor, we continuously receive updates of the actual temperature and humidity measurements, which are shown within your Home Assistant views. A very nice feature of Home Assistant is that it also stores historic measurements, so you can get a chart of past trends with a single click in the UI, as shown below:

Overall, Home Assistant is the perfect open source platform for your own home automation projects, no matter if you run it on your old laptop or on a tiny Raspberry Pi. It offers all the flexibility in terms of attaching any kind of MQTT sensor or message provider, it is a great platform for playing around with your electronics hardware, and it has a cool web UI too!

Read more in my ebook on ‘Open Source Home Automation’.

Open Source Home Automation: Introduction to Home Assistant (Hass.io) and ESP32 based Automation (English Edition) by Wolfgang Beer