Report and chart TensorFlow Keras Metrics into Dynatrace

Artificial intelligence and machine learning models are trained, tested and used with their accuracy in mind, and therefore it's crucial to closely observe and monitor your AI model during the design phase.

TensorFlow offers a convenient way to attach a TensorBoard callback hook to your machine-learning model to receive and visualise the training and test performance of your model.

While TensorBoard is a great tool that I use a lot, it is built for the design and development stage of your AI model, so this approach is less useful for production systems.

In production, you want to attach a stable monitoring platform, such as Dynatrace, to closely watch the drift of your prediction models over time when they are confronted with new or changing input data.

Last week I came across a convenient way to write your own callback listener that automatically receives the metric logs of your model during all stages and forwards this information to Dynatrace.

Find the necessary TensorFlow to Dynatrace callback class on GitHub.
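For illustration, here is a minimal sketch of what such a callback can look like; it assumes the Dynatrace metric ingest v2 endpoint with its plaintext line protocol, and it omits the error handling and the additional training stages that the real class on GitHub covers:

import requests
from tensorflow import keras

class DynatraceKerasCallback(keras.callbacks.Callback):
    # Minimal sketch: forwards the Keras metric logs of each epoch to the
    # Dynatrace metric ingest v2 API (the class on GitHub is more complete).
    def __init__(self, metricprefix, modelname, url, apitoken):
        super().__init__()
        self.metricprefix = metricprefix
        self.modelname = modelname
        self.url = url
        self.apitoken = apitoken

    def on_epoch_end(self, epoch, logs=None):
        # Convert each logged metric (loss, accuracy, ...) into one line of
        # the Dynatrace metric ingest line protocol: <key>,<dimensions> <value>
        lines = ['%s.%s,model=%s %f' % (self.metricprefix, name, self.modelname, value)
                 for name, value in (logs or {}).items()]
        requests.post(self.url, data='\n'.join(lines), headers={
            'Authorization': 'Api-Token %s' % self.apitoken,
            'Content-Type': 'text/plain'})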

It’s extremely simple to register such a Dynatrace callback listener, as it is shown below:

dt_callback = DynatraceKerasCallback(metricprefix='tensorflow', modelname='model',
    url='https://your.live.dynatrace.com/api/v2/metrics/ingest', apitoken='yoursecret')

model.fit(x=train_texts, y=target, epochs=100, callbacks=[dt_callback])

Then start training your model.

As the Dynatrace callback was registered before the training stage, the TensorFlow metrics are now shown in Dynatrace.

Integrate Dynatrace Software Intelligence into your GitHub CI/CD Pipeline

It’s common knowledge today that seamless monitoring and observability of all your production software stacks is key for successful service delivery. Along with a tight integration into your CI/CD pipeline, service and software monitoring offers a lot of insight into what is going wrong during your build, test and release workflows and how to quickly remediate outages.

As a Cloud Native SaaS platform, GitHub represents the home of most of the popular Open Source projects worldwide. It offers all the important features that are necessary to support the entire software lifecycle of your project.

GitHub Actions is one of those priceless features: it allows you to choose from more than 6,000 individual CI/CD steps to automatically build, test and release your projects on virtual machines.

Dynatrace, on the other hand, represents the leading software observability and intelligence platform according to analysts such as Gartner. A Dynatrace monitoring environment allows you to closely observe the production behaviour of your software in real time and to get notified about abnormal incidents that could lead to outages.

That said, it's pretty obvious that a tight connection between your GitHub CI/CD pipeline and your Dynatrace monitoring environment offers a lot of benefits.

In my last project I implemented a purpose-built Dynatrace GitHub Action that allows you to directly push information, such as events and metrics, into your monitoring environment.

Typical use cases are informing your DevOps team about broken builds or collecting statistics about your build workflows, such as the number of code commits on your services or the number of failed versus successful builds.

You can even use Dynatrace to define dedicated Service Level Objectives (SLOs) for your CI/CD pipeline by using those metrics as Service Level Indicators (SLIs).

See below a typical GitHub build workflow that uses the Dynatrace GitHub Action to push a metric into a monitoring environment and informs about broken as well as successful builds. Mind that I am sending a total count metric along with both a failed count and a success count, which I will use later as SLI metrics in my Dynatrace CI/CD pipeline SLO.

See my GitHub workflow below:

name: 'build-test'
on: # rebuild any PRs and main branch changes
  pull_request:
  push:
    branches:
      - main
      - 'releases/*'

jobs:
  build: # make sure build/ci work properly
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: |
          npm install
      - run: |
          npm run all

  test: # clean machine without building
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Inform Dynatrace about a successful build
        if: ${{ success() }}
        uses: wolfgangB33r/dynatrace-action@v4
        with:
          url: '${{ secrets.DT_URL }}'
          token: '${{ secrets.DT_TOKEN }}'
          metrics: |
            - metric: "github.build.total"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
            - metric: "github.build.success"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
          events: |
            - title: "Successful Build"
              type: CUSTOM_INFO    
              description: "GitHub project ${{ github.repository }} was successfully built"
              source: GitHub
              entities:
                - MOBILE_APPLICATION-C061BED4799B41C5
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
      - name: Inform Dynatrace about a failed build 
        if: ${{ failure() }}
        uses: wolfgangB33r/dynatrace-action@v4
        with:
          url: '${{ secrets.DT_URL }}'
          token: '${{ secrets.DT_TOKEN }}'
          metrics: |
            - metric: "github.build.total"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
            - metric: "github.build.fails"
              value: "1.0"
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang
          events: |
            - title: "Failed build"
              type: CUSTOM_INFO    
              description: "GitHub project ${{ github.repository }} build failed!"
              source: GitHub
              entities:
                - MOBILE_APPLICATION-C061BED4799B41C5
              dimensions:
                project: "${{ github.repository }}"
                branch: "${{ github.ref }}"
                event: "${{ github.event_name }}"
                owner: wolfgang

The conditional Dynatrace step within the GitHub workflow above is then executed with every commit to your repository, as shown below:

Conditional Dynatrace GitHub Action Steps, either on success or on failure

After a successful run of your workflow, you will see both the event and the metric appear in your Dynatrace environment, as shown below:

Dynatrace event sent from your GitHub CI/CD workflow
GitHub CI/CD pipeline metrics

Define a Service Level Objective (SLO) for your GitHub CI/CD Pipeline

Now that Dynatrace is informed about each build success and failure, we can easily define an SLO for our CI/CD pipeline to continuously observe the quality of our builds.

See below the selection of the total count and the success count metrics as the SLIs for our SLO within Dynatrace:

GitHub CI/CD pipeline SLO defined in Dynatrace

Now we see the current SLO state within the list of Dynatrace SLOs, and we can put our SLO state onto any of our Dynatrace dashboards:

Your new GitHub build workflow SLO

Summary

I came to love the simplicity and efficiency of GitHub Actions over the last weeks. They helped me a lot to fully automate the CI/CD pipelines of my own GitHub projects, to save time during releases and to generally raise the quality of my projects.

The logical next step for me was to tightly integrate the GitHub workflow into my Dynatrace monitoring environment and to define SLOs for measuring the quality of my builds in real time.

By implementing and publishing a Dynatrace GitHub Action, a tight integration between your GitHub workflows and Dynatrace is now possible for everybody with a simple click in the GitHub Marketplace.

Automate your Android CI/CD Pipeline with GitHub Actions

When I came to play around with the GitHub Actions CI/CD pipeline framework recently, I could not believe how simple and effective that functionality is!

It does not really matter if you just want to automatically check each of your Git commits by using lint or to fully build your artefacts: GitHub Actions allows you to do that with a simple YAML configuration.

GitHub Actions allows the definition of different jobs that are automatically triggered by events happening within your Git repository (such as commits, pulls, the creation of a tag or release, comments, the creation of issues, and many more). As those job definitions live in the same Git repository, it's the perfect solution for managing your CI/CD pipeline as code within a self-contained GitHub repository.

Within this post, I will describe how I came to fully automate the CI/CD pipeline of my production Android App (TabShop) by using GitHub Actions.

GitHub action tab within my Android app repository

Kudos to Niraj Prajapati who wrote such a great blog post and who inspired me to fully automate my own Android app’s CI/CD pipeline.

Why – What's the value for app publishers?

I can't emphasise the value of a fully automated CI/CD pipeline enough! I spent hours over hours manually building and testing my Android app, to finally sign it and push it to the Google Play Store. So far, I have released 182 versions over 6 years. The build, test and release process gets more and more complex and error-prone. Freelance app publishers like me invest a significant amount of time into manual CI/CD processes that would be much better spent building innovations into the app itself.

That said, GitHub Actions allows me to create and run a feature-rich CI/CD release process fully automatically in the cloud, which helps me to save time and effort and to innovate!

Scope of my Android CI/CD Pipeline

This blog shows step-by-step how to implement the following tasks into your own GitHub Actions CI/CD pipeline:

  1. Build your app using the Gradle Build Tool
  2. Run your unit-tests
  3. Build a release app bundle
  4. Sign the app bundle
  5. Upload and expose the app bundle
  6. Push and release the app bundle in Google Play Console

Step 1: Automate your Android app build

The first step within our Android app’s CI/CD pipeline is to create a GitHub Action YAML file and to add a trigger that defines when the job should be run.

Navigate to your GitHub project repository and click on the ‘Actions’ tab where you find a button to create a new ‘workflow’.

GitHub offers a lot of standard build workflows for the most popular technology stacks. In our case we either choose to skip the template selection or we choose the Android CI workflow as shown below:

Choose the Android Gradle CI workflow

The resulting workflow will create an Android build job that already fulfills our first goal: start up an Ubuntu instance, check out your app's source code and execute the Gradle build file, as shown below:

Simple Android Gradle Build Job
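The generated workflow closely follows the stock Android CI template; here is a sketch of it (details such as the JDK setup step may differ slightly in your generated file):

name: Android CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Build with Gradle
        run: ./gradlew build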

The workflow above is triggered every time a 'push' or a 'pull_request' event occurs within your repository.

Step 2: Execute your unit-tests

Good unit test coverage is recommended to safeguard your app against failing or buggy code contributions. In most Android app projects the unit test code is part of your Git repository, so Gradle is also used to build and execute your tests by adding the following step to your workflow:

Gradle step that runs your unit tests
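A sketch of this step, assuming the standard Gradle test task:

      - name: Run unit tests
        run: ./gradlew test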

Step 3: Build a release app bundle

Within the next step we will trigger the build of a release app bundle (AAB) that we will sign afterwards. App release bundles are the preferred way of shipping apps through the Google Play Store, as they are optimised in size and stripped of unnecessary content.

See below the workflow step that automatically builds our application release bundle:
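A sketch of this step, assuming the standard Gradle bundleRelease task:

      - name: Build release app bundle
        run: ./gradlew bundleRelease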

Step 4: Sign the app bundle

Application bundles are typically signed with the certificate of a trustworthy app publisher, so that users can trust the origin of the installed app and be sure that no third party injected malicious parts into your app.

App marketplaces such as Google Play require apps to be signed with the certificate of the publisher to ensure the integrity of all published apps.

Therefore we will automatically sign our app bundle once it's built by adding the workflow step below:

Sign an Android app bundle by using a GitHub action
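A sketch of the signing step; judging from the secret names listed below, the action in use is most likely r0adkll/sign-android-release, and the release directory shown here is the default Gradle bundle output path:

      - name: Sign app bundle
        uses: r0adkll/sign-android-release@v1
        with:
          releaseDirectory: app/build/outputs/bundle/release
          signingKeyBase64: ${{ secrets.SIGNING_KEY }}
          alias: ${{ secrets.ALIAS }}
          keyStorePassword: ${{ secrets.KEY_STORE_PASSWORD }}
          keyPassword: ${{ secrets.KEY_PASSWORD }}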

The signing step above needs some additional information about your certificate as well as the key store password and alias, which we provide as safe GitHub secret placeholders, as listed below:

  • secrets.SIGNING_KEY
  • secrets.ALIAS
  • secrets.KEY_STORE_PASSWORD
  • secrets.KEY_PASSWORD

Convert your certificate file into a base64-encoded string that can be used as a GitHub repository secret within the placeholder 'secrets.SIGNING_KEY'. In case you are using a Mac, you are lucky, as openssl already provides the command for converting your secret file into a base64-encoded string, as shown below:

openssl base64 -in my-release-key.keystore -out my-release-key.keystore.base64

See the resulting list of GitHub secrets within the screenshot below:

GitHub Secrets used by the signing workflow step

You can find the signing GitHub action that we used in our workflow on the GitHub Marketplace.

Step 5: Upload and expose the app bundle

Each workflow run spins up a completely clean Ubuntu instance that is wiped after it has finished.

If you would like to keep a build artefact for later download, you have to define a build step to upload and persist the artefact, as shown below:
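A sketch of such an upload step, using the official actions/upload-artifact action (the artefact name and bundle path are assumptions):

      - name: Upload app bundle
        uses: actions/upload-artifact@v2
        with:
          name: app-release
          path: app/build/outputs/bundle/release/app-release.aab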

After your workflow is successfully finished you will find your file within the workflow execution screen:

Download Build Artefact

Step 6: Push and release the app bundle in Google Play Console

Now that we have successfully built and signed our application, we would like to automatically push the app as a new beta release into the Google Play Console.

Again there is a dedicated GitHub Action that helps to achieve this cumbersome task; we will use it in the final step below.

Another important prerequisite for a successful Google Play upload is the creation of a ‘Service account’ that holds the necessary IAM role for uploading artefacts into your Google Play account.

To create a new service account you have to navigate to your Google Play Console > Settings > API Access as it is shown in the screenshot below:

Google Play Console Service Account Creation

Create a new service account with release access rights for your application. In case you are a Google Cloud user as well, you have to create the service account user within the Google Cloud Console instead and then grant it access to the selected app project.

Once you have created your service account, you have to create a JSON key for that service account and put it into a GitHub secret placeholder again. Just copy the JSON string into a GitHub secret field with the name 'SERVICE_ACCOUNT_JSON'.

Create and download a JSON key for your Google Service Account

Once you have stored your service account key in a GitHub secret, you can create a workflow step that restores it during the workflow run and stores it in a file (service_account.json), as shown below:

Download the key to a local json file
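A sketch of this step, which simply echoes the secret into a local file for the following upload step:

      - name: Create service_account.json
        run: echo '${{ secrets.SERVICE_ACCOUNT_JSON }}' > service_account.json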

The final step is to use the Upload Action to publish your application bundle to Google Play Console as it is shown below:

Upload and push an Android application bundle to Google Play
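A sketch of the publishing step; the widely used action for this task is r0adkll/upload-google-play, and the package name here is a placeholder:

      - name: Publish to Google Play
        uses: r0adkll/upload-google-play@v1
        with:
          serviceAccountJson: service_account.json
          packageName: com.example.yourapp
          releaseFile: app/build/outputs/bundle/release/app-release.aab
          track: beta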

An important note here: you will receive an error message if you did not enable App Signing in your Google Play account. To opt into app signing, simply navigate to Google Play Console > Your App > Setting > App Signing, as shown below. You have to upload your signing key as a private key file (which can be exported from Android Studio).

Summary

It's amazing how easy and productive it is to use a GitHub Actions workflow to completely automate your Android app release process. It helps you to ensure consistent release quality and saves a lot of time, especially for small and independent app publishers. See the running CI/CD workflow below.

Well done GitHub and Microsoft!

Open Source Home Automation with Esp32 and Home Assistant (Hass.io)

Open Source Home Automation – Introduction to Home Assistant and ESP32 based automation (ebook)

As a dedicated Home Assistant user for years now, I came to write an ebook covering all the important topics around home automation and tinkering with your own ESP32 based sensors and actuators. Home Assistant is open source, written in Python, and a lively community maintains over 3,000 custom-made components that allow you to control nearly everything in your home. For everything else there is the cheap and handy ESP32 or ESP8266 microcontroller, which comes with built-in wireless network support and the capability to control any hardware you can think of. Soldering your own ESP8266 based sensors is really fun and adds a lot of flexibility to your home automation. With a very low price tag of around $5, the ESP8266 microcontroller is a practical basis for all your custom-made sensors and actuators. Read more about the details of how to solder your own sensors and how to attach them to your own Home Assistant system in my brand-new eBook: ‘Open Source Home Automation‘.

Build a wireless MQTT temperature and humidity sensor for your Home Assistant

Over the last months, I became more and more addicted to Home Assistant (Hass.io) and low-cost wireless MQTT sensors. I was already familiar with several home and industrial automation systems, which all come with certain hardware (and a certain price) and build upon a completely proprietary software stack. So, long story short, I was searching for a good community-backed open source home automation system that is easy to set up and runs on my old Raspberry Pi.

As home automation seems to be a broad area of interest, I thought there should be hundreds of open source community projects out there. It was not as easy as I thought, and there are not so many home automation projects out there. It seems as if the market is broadly dominated by large vendors that offer integrated solutions.

After some cumbersome failures I was finally able to find a real gem in the home automation area, which is called Home Assistant, or Hass.io for short. Home Assistant comes as a lightweight installation that perfectly fulfills the following requirements:

  1. It's lightweight and consumes few resources
  2. Easy to set up
  3. Nice web interface that also works well on my tablet and smartphone (no app required; the responsive web UI is great on your mobile device too). See a live demo here.
  4. Lots of community components available (>1000), such as Alexa, IFTTT, Hue, Sonos, Chromecast, webcams, and many more.
  5. Fully configurable through plaintext YAML files
  6. It comes with an integrated MQTT broker!
  7. Supports automation scripts, such as turning the light on at sunset
  8. Best of all, it's written in Python and it's open source

The first step towards building my own MQTT wireless weather station was to set up a Home Assistant instance on my old Linux laptop. If you already have Python 3 running on your system, the setup process is pretty straightforward; just type:

python3 -m pip install homeassistant

After a successful installation you just enter the .homeassistant configuration folder and adapt the .yaml configuration files that control what your Home Assistant instance shows and how elements are organized in the web UI.

The most important configuration files are configuration.yaml, which contains the core configuration of sensors and components, and groups.yaml, which groups all your components into visual tabs within the UI. Within my installation I chose to use a default group, one for my living room and one for controlling my pool, as shown in the screenshot below:

As the screenshot already shows, my Home Assistant instance contains some MQTT based sensors that continuously inform me about the temperature and humidity (outside and in the living room). You can put the sensor output into any of your configured tabs. The same sensor info can also be present in multiple tabs at the same time.

To add a new MQTT sensor, simply add the following sensor section into your core configuration.yaml file:

sensor:
  - platform: mqtt
    name: "Temperature"
    state_topic: "/home/outdoor/sensor1"
    value_template: "{{ value_json.temperature }}"
    unit_of_measurement: '°C'
  - platform: mqtt
    name: "Humidity"
    state_topic: "/home/outdoor/sensor1"
    value_template: "{{ value_json.humidity }}"
    unit_of_measurement: '%'

You can then show this newly added sensor value in any of your configured groups, as shown below:

default_view:
  name: Home
  view: yes
  entities:
    - sensor.airquality
    - sensor.temperature
    - sensor.humidity
    - sensor.yr_symbol
    - sun.sun
    - camera.mjpeg_camera
    - device_tracker.alice
    - device_tracker.bob
    - switch.robby
    - switch.lamp
indoor:
  name: Livingroom
  view: yes
  entities:
    - sensor.temperaturelivingroom
    - sensor.humiditylivingroom
    - media_player.livingroom
pool:
  name: Pool
  view: yes
  entities:
    - sensor.watertemperature
    - switch.poolcover
    - switch.poollight
    - switch.poolpump
    - switch.poolbot

Now it's time to test whether the sensor shows a value when it receives an MQTT message on the configured MQTT topic. For this purpose, Home Assistant offers a simple MQTT test message UI in which you can simulate any incoming MQTT message, as shown below. Just enter your MQTT topic and send a static value:

After a click on the ‘publish’ button, the two values 30 and 70 will appear in your sensors for temperature and humidity. You can do that trial run for all of your MQTT-bound sensors, which is a convenient feature for testing the server-side functionality of your home automation.
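Alternatively, you can publish the same test payload from any shell by using the Mosquitto client tools, assuming they are installed and that your broker is reachable on localhost:

mosquitto_pub -h localhost -t "/home/outdoor/sensor1" -m '{"temperature": 30, "humidity": 70}'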

The next step is to build a cheap temperature and humidity sensor that sends its measurements over WLAN to your Home Assistant MQTT broker. As the base sensor board I decided to use an ESP8266 or an equivalent ESP32 microcontroller board, which offers a cheap (~5 USD) platform with an integrated WLAN stack and many digital and analog input pins. See below an image of the chosen ESP32 board:

The ESP8266 board can easily be flashed over a USB cable, and it runs with a standard Arduino bootloader. You can use the Arduino IDE to program your tiny ESP8266 board. To measure the temperature and humidity, the combined digital DHT22 sensor is used, as shown below:

To connect the DHT22 sensor to your ESP8266 board, simply attach the Vin pin to the 3V pin of the ESP8266 board, the ground pin to any of the ground pins, and the signal pin to any of the ESP8266 digital input pins.

The following Arduino code snippet shows how to initialize the DHT22 sensor and how to read and report the sensor values through an MQTT message:

#include <ESP8266WiFi.h>
#include <EEPROM.h>
#include <DHT.h>
#include <DHT_U.h>
#include <PubSubClient.h>
#include <ArduinoJson.h>

/* Globals used for business logic only */
#define MQTT_VERSION MQTT_VERSION_3_1_1
// MQTT: ID, server IP, port, username and password
const PROGMEM char* MQTT_CLIENT_ID = "sensor2_dht22_s";
const PROGMEM uint16_t MQTT_SERVER_PORT = 1883;
// MQTT: topic
const PROGMEM char* MQTT_SENSOR_TOPIC = "/home/house/sensor1";
// sleeping time
const PROGMEM uint16_t SLEEPING_TIME_IN_SECONDS = 60; // 60 seconds
// WiFi and MQTT broker configuration (this struct was not part of the
// original snippet; the values are placeholders - adapt them to your setup)
struct Config {
    const char* ssid;
    const char* pwd;
    const char* mqtt;
};
Config cconfig = { "your-ssid", "your-wifi-password", "192.168.1.10" };
// DHT - D1/GPIO5
#define DHTPIN 5

#define DHTTYPE DHT22

DHT dht(DHTPIN, DHTTYPE);
WiFiClient wifiClient;
PubSubClient client(wifiClient);

/* Business logic */
// function called to publish the temperature and the humidity
void publishData(float p_temperature, float p_humidity, float p_airquality) {
    // create a JSON object (note: this uses the ArduinoJson 5.x API)
    StaticJsonBuffer<200> jsonBuffer;
    JsonObject& root = jsonBuffer.createObject();
    // INFO: the data must be converted into a string; a problem occurs when using floats...
    root["temperature"] = (String)p_temperature;
    root["humidity"] = (String)p_humidity;
    root["airquality"] = (String)p_airquality;
    root.prettyPrintTo(Serial);
    Serial.println("");
    /*
    {
    "temperature": "23.20" ,
    "humidity": "43.70"
    }
   */
    char data[200];
    root.printTo(data, root.measureLength() + 1);
    client.publish(MQTT_SENSOR_TOPIC, data, true);
    yield();
}

void setup() {
    Serial.begin(115200);
    dht.begin();
    Serial.print("INFO: Connecting to ");
    Serial.println(cconfig.ssid);
    WiFi.mode(WIFI_STA);
    WiFi.begin(cconfig.ssid, cconfig.pwd);
    while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
    }
    Serial.println("");
    Serial.println("INFO: WiFi connected");
    Serial.println("INFO: IP address: ");
    Serial.println(WiFi.localIP());
    // init the MQTT connection
    client.setServer(cconfig.mqtt, MQTT_SERVER_PORT);
}

void loop() {
    // Reconnect WiFi first in case the connection was lost
    if (WiFi.status() != WL_CONNECTED) {
        WiFi.mode(WIFI_STA);
        WiFi.begin(cconfig.ssid, cconfig.pwd);
        delay(500);
        return; // try again on the next loop iteration
    }
    // (Re)connect the MQTT client if necessary
    if (!client.connected()) {
        client.connect(MQTT_CLIENT_ID);
    }

    // Reading temperature or humidity takes about 250 milliseconds!
    // Sensor readings may also be up to 2 seconds 'old' (it's a very slow sensor)
    float h = dht.readHumidity();
    // Read temperature as Celsius (the default)
    float t = dht.readTemperature();

    if (isnan(h) || isnan(t)) {
        Serial.println("ERROR: Failed to read from DHT sensor!");
    }
    else {
        // no air quality sensor is attached in this sketch, so report 0.0
        publishData(t, h, 0.0);
    }
    client.loop();
    delay(5000);
}

Download the full source code at GitHub.

After connecting, flashing and running our tiny 15 USD wireless sensor, we continuously receive updates of the current temperature and humidity measurements. Those measurements are shown within your Home Assistant views. A very nice feature of Home Assistant is that it also stores historic measurements and that you can get a chart of past trends with a single click in the UI, as shown below:

Overall, Home Assistant is the perfect open source platform for your own home automation projects, no matter if you run it on your old laptop or on a tiny Raspberry Pi. It offers all the flexibility you need to attach any kind of MQTT sensor or message provider, it is a great platform for playing around with your electronics hardware, and it has a cool web UI too!

Read more in my ebook on ‘Open Source Home Automation’.

Open Source Home Automation: Introduction to Home Assistant (Hass.io) and ESP32 based Automation, by Wolfgang Beer

Teach your Kids to code: Build your own OttoDIY robot

Coding is the lingua franca for all citizens of a modern technological society. By learning a programming language your kids acquire very important skills, such as abstracting a problem, defining and structuring a solution, and using a sequence of simple steps to fulfill complex tasks. Beside all the educational benefits of learning a programming language, it is a lot of fun to see and experience your own programs performing their autonomous tasks.

Another important skill within today's technological society is to understand and control robotic hardware, or electronics in general.

Nothing is more exciting for your kids than when something moves, makes a sound or blinks a lot of lights. Believe me when I say that kids are native robot and automation enthusiasts!

That said, I was really excited when I read about a vivid community of electronics and programming experts who shared the same idea and built the open educational robotics platform OttoDIY. OttoDIY offers all the necessary resources, such as electronics, servos and sensors, along with 3D printing models of the robot's body parts, to quickly jump into the world of electronics and robotic motion.

The OttoDIY community shares all the information that is necessary to quickly print your own Otto robot and assemble the electronics.

Fortunately, the company I work for (kudos to Dynatrace) strongly supports innovation and coding for kids. Therefore, I had the chance to print our own Otto robot within the Dynatrace lab, and I was astonished how easy it is to reproduce the body parts offered on Thingiverse. See some impressions of the printing process below:

OttoDIY print on the Ultimaker

Otto's brain arrived some weeks later, and we immediately started to assemble the complete OttoDIY robot. With the assembly instructions given by Camilo Parra Palacio it was pretty easy to set the complete bot up and get it running within an hour.

One important hint here is to first check whether the shipped servos exactly fit into the dedicated sockets within your 3D print. Otherwise, you have to disassemble the complete bot again and rasp out some more space.

After we assembled the complete OttoDIY bot, we downloaded the mBlock coding environment that was specifically built for kids. mBlock is a combination of Scratch and Arduino that allows kids to play around with physical computing and program their first hardware and bots by simply using a structured visual block programming language, as shown below:

After some practice we were finally able to teach our Otto robot some quite cool dance moves, see below:


Kaggle: Join the global machine learning and AI community

About half a year ago I stumbled over Kaggle.com, a vibrant community portal of artificial intelligence and machine learning experts. Kaggle not only encourages people around the world to share thoughts and example data sets for popular machine learning tasks, it also hosts great AI challenges.

Since I joined the Kaggle community 6 months ago, I have been fascinated by the individual challenges that are published. Those challenges range from predicting Mercari product prices to detecting icebergs in radar data to speech recognition tasks.

Many companies, such as Google, Mercari or Zillow, host challenges in which more than a thousand teams compete to predict the best results. Often it is unbelievable how those teams solve these complex machine learning tasks.

Besides providing the challenges and the data sets necessary to spark the interest of global leaders within the machine learning and AI community, Kaggle also offers a tremendously powerful kernel execution environment. This execution environment consists of preconfigured Docker containers that were specifically designed for training models. To design and execute a machine learning kernel, you simply edit the code online (Python, R, Notebook) and execute it within the Kaggle infrastructure.

As Kaggle's Docker containers are completely preconfigured, you save a lot of time downloading and preparing your environment.

Kaggle really pushes the AI community forward by offering a flexible and open platform for executing kernels and quickly getting hands-on with interesting data sets. The community platform also does a pretty good job of bringing the global community together and stimulates a broader and more practical discussion outside the theoretical scientific community.

Besides, if you need a quick-start tutorial on how to train your first neural network, grab my eBook at Amazon:

Android Paint and Draw for your Kids

Over the last years I kept asking my now 5-year-old daughter how she would design a simple painting app and what this painting app should look like. We discussed the background color, how to change and choose colors, the absolute requirement to add the color pink, and how to change the brush thickness. The resulting design of the paint and drawing app gets by completely without the need to read a single piece of written text or a menu. We came up with a very simple and intuitive way for children to touch-draw images and to store these images as .png pictures. You can find your free Android painting app for kids and children in the Google Play Store.

Software Structure Analysis and Metric Calculation with Neo4J and Cypher

During the last weeks, the Software Analytics and Evolution research team at the Software Competence Center Hagenberg (which is the group I am currently working in) built a software tool for parsing large-scale legacy software systems written in C and C++, but also FORTRAN, Structured Text (IEC 61131 machinery and robot programs) or Matlab source code, with the goal of analysing their structure by using a Neo4J graph database and Cypher queries. As you can see in this demo video, the tool is able to visualize important aspects and metrics as well as the software architecture and structure of the analyzed software system. The tool is meant to support companies in developing and maintaining their large software systems and code bases.

Lean Tablet Cash Register for Entrepreneurs and Small Businesses

Many startups, entrepreneurs and small businesses spend a large amount of their spare budget on operating a cash register system. Most of these systems are built upon old stationary touchscreen hardware that is on the one hand quite expensive and on the other hand quite inflexible. These old point-of-sale systems do not represent the lean and flexible spirit of today's entrepreneurs and startups. Nowadays, small businesses move fast, offer high mobility and react flexibly to new opportunities.
By offering a complete cash register and stock management system in your pocket, TabShop climbed to the top position in the Google Play marketplace. TabShop is the leading point-of-sale app on Android tablets and smartphones. It allows users to manage their stock and directly check out customers' invoices. With TabShop, entrepreneurs always take their stock information and cashier system with them, so small businesses are always ready to take up great selling opportunities.
