Your path to the right machine data: building the data acquisition

Do you lack suitable machine and process data? No problem! Our hardware and software engineers work with you to develop a clear understanding of the infrastructure your business model requires. Which variables need to be measured, and within which value range? What measurement accuracy is required? Which hardware does your application need? Together, we define the data acquisition and communication and select suitable hardware and software.

Content:

  • Introduction to the basics of cloud, hybrid and edge solutions
  • Specification of the necessary infrastructure components with respect to functionality, architecture and pricing
  • Identification of relevant measuring points and planning of sensor equipment
  • Accurate and manufacturer-independent specification of requirements
  • Determination of target location and selection of machines for initial data collection


Approach:

  • Up to three workshops, depending on the use case and preparation effort
  • Each workshop chaired by one consultant and one software or hardware expert
  • Participants from your side: at least one process expert and one project manager
  • Partial or complete preparation of the specifications

Realise the potential of your machine data

Monetisation of your machine data

Content:

  • Understanding the data acquisition
  • Pre-processing of the data
  • Data potential analysis


Approach:

  • Providing the data, for instance as CSV, JSON, HDF5 or a similar format
  • Reviewing the recorded data, assessing the data quality and deriving suggestions for additional data points
  • Pre-processing the data, including normalisation and compensation of missing values
  • Visualising individual variables as time series and histograms
  • Assessing how the data correlates with your objectives
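To make the pre-processing step concrete, here is a minimal sketch of normalisation and missing-value compensation with pandas. The column names and values are purely illustrative, not taken from a real project:

```python
import numpy as np
import pandas as pd

# Hypothetical sensor log with gaps and differing value ranges.
df = pd.DataFrame({
    "temperature": [21.0, 21.4, np.nan, 22.1, 21.8],
    "pressure":    [1.01, np.nan, 1.03, 1.02, 1.04],
})

# Compensate missing values by interpolating between neighbouring samples.
df = df.interpolate(limit_direction="both")

# Min-max normalisation to [0, 1] per signal, so scales become comparable.
normalised = (df - df.min()) / (df.max() - df.min())
print(normalised.round(2))
```

In practice, the choice of imputation strategy (interpolation, forward fill, model-based) depends on the sampling rate and the physical meaning of each signal.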

Request data analysis now

Do you have lots of data but don't know how to create added value from it? Our experienced data science team supports you in ensuring the necessary data availability and data quality: How many different signals need to be processed, and what are the dependencies between them? Are there trends or jumps in your data's values? Are there outliers, implausible values or missing data points? In close cooperation, we reduce the effort for data acquisition and processing. We review the existing machine data and evaluate its significance for the intended business model.

Development and implementation of a suitable analysis

Based on the previous steps, we develop your individual analysis procedure by applying suitable statistical methods or machine learning models for your machines. We integrate them into your processes either locally or via a cloud solution.

Content:

  • Development and validation of a baseline data analysis model using statistical methods or machine learning, considering the available data
  • Configuration of the data pipeline
  • Setting up an environment for training the models
  • Connecting and testing the functionality
  • Providing a notification service or visualisation
  • Presentation of the proof of concept
  • Effort estimation for the deployment into the live system
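A baseline model chained to its pre-processing steps can be sketched, for instance, with a scikit-learn pipeline. The synthetic data, the "fault" label and all parameters below are assumptions for illustration only:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled machine data: two noisy process signals.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical "fault" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling and model chained into one reproducible pipeline object,
# which can later be serialised and deployed as a single unit.
baseline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])
baseline.fit(X_train, y_train)
score = baseline.score(X_test, y_test)
print(f"validation accuracy: {score:.2f}")
```

Packaging the whole chain as one object keeps training and live inference consistent, which simplifies the later deployment into the live system.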


Approach:

  • We tailor the approach to your individual needs
  • Together, we look for the simplest solution that fully meets your specific quality requirements

Insight into our way of working

In practice, implementing a data-based business model for your machines does not always mean dealing with complex algorithms or artificial intelligence. In the spirit of a quote attributed to Albert Einstein, we prefer solutions that are as simple as possible, but not simpler. Below, we give you a brief insight into our way of working.

As simple as possible, as complex as needed

We offer you an analysis service that fulfils your task. This does not necessarily have to be a complex machine learning model: many tasks can be solved very reliably and efficiently using conventional statistics, without the need for very large amounts of data.

For instance, the SARIMA method is often well suited to forecasting seasonal, cyclic time series. Methods such as Support Vector Machines or Random Forests are suitable, for example, for classifying certain data.
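To illustrate the classification side of this, here is a minimal sketch using scikit-learn's RandomForestClassifier on synthetic data; the two "operating states" and all parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic example: distinguish two operating states from two sensor channels.
rng = np.random.default_rng(0)
normal  = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
anomaly = rng.normal(loc=3.0, scale=1.0, size=(100, 2))
X = np.vstack([normal, anomaly])
y = np.array([0] * 100 + [1] * 100)

# 200 labelled samples are enough here -- no "big data" required.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

The point of the example is the modest data requirement: a conventional model with a few hundred samples can already separate well-defined operating states reliably.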

Selective application of Artificial Intelligence

Depending on your task, we develop complex machine learning algorithms for you, for example based on neural networks.

Recurrent neural networks such as Long Short-Term Memory (LSTM) or Gated Recurrent Units (GRU) are ideal for analysing large time series data sets. Designed as autoencoders, these techniques can be used, for instance, to predict complex time series over the long term.
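As a rough structural sketch of such an architecture, the following PyTorch snippet shows an LSTM autoencoder that compresses a multivariate time series into a latent vector and reconstructs it. Layer sizes and dimensions are illustrative assumptions, not a production design:

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Encode a sequence into a latent vector, then decode it back."""

    def __init__(self, n_features: int, hidden_size: int):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Final hidden state of the encoder acts as the latent summary.
        _, (h, _) = self.encoder(x)
        # Repeat the latent vector once per time step as decoder input.
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.output(dec)

model = LSTMAutoencoder(n_features=3, hidden_size=16)
x = torch.randn(8, 50, 3)  # batch of 8 sequences, 50 steps, 3 signals
recon = model(x)
print(recon.shape)  # same shape as the input
```

Trained on normal operation data, the reconstruction error of such a model can serve as an anomaly score, and the decoder can be driven further to extrapolate the series.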

Smart Data instead of Big Data

Combined with your process expertise, it is often possible to build suitable simulation models or use generative neural networks. In this way, a small dataset can be enlarged systematically and then used to train complex machine learning algorithms. Together with your experts, we ensure that the artificially generated data represents reality.
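As a deliberately simple stand-in for such simulation or generative models, the idea of systematically enlarging a small dataset can be sketched with plain noise-and-scale augmentation; the noise level and scale range below are illustrative assumptions:

```python
import numpy as np

def augment(series: np.ndarray, n_copies: int,
            rng: np.random.Generator) -> np.ndarray:
    """Enlarge one recorded time series by jittering and amplitude scaling.

    A much simpler technique than a generative network, but it shows the
    principle: create plausible variations of scarce measured data.
    """
    copies = []
    for _ in range(n_copies):
        noise = rng.normal(scale=0.01, size=series.shape)  # small jitter
        scale = rng.uniform(0.95, 1.05)                    # amplitude variation
        copies.append(series * scale + noise)
    return np.stack(copies)

rng = np.random.default_rng(7)
measured = np.sin(np.linspace(0, 2 * np.pi, 100))  # one recorded cycle
augmented = augment(measured, n_copies=50, rng=rng)
print(augmented.shape)  # 50 synthetic variants of the measured cycle
```

Whether augmented data is realistic is exactly the question that process experts must answer; simulation and generative models extend this idea to variations that simple perturbations cannot produce.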

Easy integration into field applications

To enable you to make use of our algorithms and models, we rely on common methods for software deployment. Using Docker containers, we can deploy our applications at your site, either locally or as a cloud service.
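As a hypothetical sketch, packaging a Python analysis service into such a container might look like this; the file names and entry point are assumptions for illustration:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# The same image runs unchanged on an edge device or as a cloud service.
CMD ["python", "analysis_service.py"]
```

Because the image bundles the model together with its exact dependencies, the behaviour validated in the proof of concept carries over unchanged to the deployment target.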

A high level of data security and trustworthiness is essential, especially for cloud solutions. With Microsoft Azure, we use a well-established platform that meets these requirements. The Azure cloud offers excellent possibilities for a lean development and maintenance process for both the source code and the application. Edge applications run directly at the machine, while applications with heavy performance or storage requirements are executed in the cloud. Microsoft Azure also provides options for linking both worlds.