Data Driven processing

When building Flows for industrial integration use cases, we often come across protocols like Modbus, OPC UA and S7. For those protocols we provide modules that let you use ‘resource files’ to define which data you want to select or subscribe to. The output of those modules is typically a value, or a set of values, with some additional protocol-specific properties. Once the information is available, you design your flow so that it covers your use case. Usually you will get several different tag types, such as temperature, velocity or pressure, from that one source, which raises the question: how can you introduce tag-specific logic into your flow?

For this purpose, you can now add ‘metadata’ to each tag in your resource file which will become part of the flow message. Once the metadata is part of the message, you can apply specific processing based on it.
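As a rough illustration (the property names below are examples, not the exact Crosser message schema), a value read from a PLC could arrive in the flow like this, with the custom metadata copied onto the message, and tag-specific logic can then branch on those properties:

    # Illustrative flow message from a protocol reader module, after the
    # custom metadata from the resource file has been copied onto it.
    # Property names are examples only.
    message = {
        "name": "MotorTemperature",          # tag name from the resource file
        "value": 73.4,                       # value read from the source
        "timestamp": "2024-01-15T09:12:31Z",
        "site": "Stockholm",                 # custom metadata
        "unit": "C",                         # custom metadata
    }

    # Tag-specific logic based on the metadata, e.g. unit conversion:
    if message.get("unit") == "C":
        message["value"] = message["value"] * 9 / 5 + 32
        message["unit"] = "F"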

Concept

  • Add metadata to your data tags in the resource file.

  • Metadata will be added to flow messages.

  • Define processing logic in your flow that uses the metadata.

  • Simplify your flows.

  • Improve scalability.

  • Fully controllable via API.


Example flow

Resource files

Once the concept is understood, you need to think about what metadata is useful in your specific use case. We show two use case examples at the end of this article to give you some ideas.


Example S7 resource file

The example above shows the information required to define a tag for the S7 Reader module. Any additional properties you add to the tag definition are treated as custom metadata and are copied to the output messages for this tag. You can add basic data types (numbers, strings and booleans), but also objects and arrays.

Let's say we want to add ‘site’ and ‘production_line’ because we want to run analytics for specific production lines at specific sites. In addition, we add the unit because some values have to be converted, e.g. °C to °F. Besides that, we want to route different data tags to different topics on the local MQTT broker to make them available to other flows or external applications.


Example S7 resource file with custom metadata
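In plain text, a tag definition along those lines could look roughly like the sketch below. It is written as a Python structure purely for illustration; the actual resource file format is defined by the S7 Reader module, and the address and data type shown here are made up.

    # Illustrative tag definition with custom metadata (not the exact
    # resource file syntax expected by the S7 Reader module).
    tag = {
        # Properties needed to address the tag on the PLC
        "Name": "MotorTemperature",
        "DataType": "Real",
        "Address": "DB10.DBD4",
        # Custom metadata, copied to every output message for this tag
        "site": "Stockholm",
        "production_line": "Line-2",
        "unit": "C",
        "topic": "stockholm/line-2/motor/temperature",
    }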

Resource files are often created with external tools outside of Crosser and are then uploaded to the Crosser Control Center via the API or through the UI. Once the resource file is available, you can map it into the module configuration (either directly or via a parameter overwrite).
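If the upload is automated, it could follow a pattern like the one below. The URL, token and payload format are placeholders only, not the documented Crosser Control Center API; check the API documentation for the actual endpoints.

    # Hypothetical upload of a generated resource file over HTTP.
    # Endpoint, token and payload format are placeholders only.
    import requests

    CONTROL_CENTER_URL = "https://control-center.example.com/api/resources"  # placeholder
    API_TOKEN = "REPLACE_ME"  # placeholder credential

    with open("s7_tags_with_metadata.csv", "rb") as resource_file:
        response = requests.post(
            CONTROL_CENTER_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"file": resource_file},
        )
    response.raise_for_status()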

How to use metadata for tag specific processing

Some modules expect settings either to be configured or to be available as a specific property on the incoming flow message. As an example, the MQTT Pub Client module first checks the configured topic setting. If that setting is not specified, it expects the topic information to be part of the flow message, on the property ‘topic’. In other modules you can use the template syntax, e.g. {topic}, to pick up the value from the incoming message.



The module will use the tag-specific topic from the resource file
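To show the same pattern outside of the module configuration, the sketch below publishes messages with the paho-mqtt Python client, assuming the tag-specific topic arrives on the message property ‘topic’ as described above; the broker address and message contents are examples.

    # Minimal sketch: publish each message to the topic that was attached
    # as custom metadata in the resource file.
    import json
    import paho.mqtt.client as mqtt

    # paho-mqtt 2.x client; adjust the constructor for 1.x installs
    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.connect("localhost", 1883)  # local broker
    client.loop_start()                # background network loop

    def publish(message: dict) -> None:
        # 'topic' comes from the tag's custom metadata; the rest is payload
        topic = message.pop("topic")
        client.publish(topic, json.dumps(message))

    publish({"name": "MotorTemperature", "value": 73.4,
             "topic": "site-a/line-2/motor/temperature"})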

Use case examples

Use case #1 - Energy meter data acquisition

Introduction
  • Different production sites across the globe.

  • Main meters and transformers connected via Modbus TCP; sub-meters connected to the main meters via Modbus RTU.

  • Energy meters separated into different levels.

  • Requirements:

    • Create topic hierarchy within the flow.

    • Label every data tag for advanced analytics in the cloud.

    • Introduce tag-specific logic due to transformer ratios (see the sketch after this list).

    • Unify flows so they can be re-used across all sites.
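As an example of the tag-specific logic mentioned above, the sketch below scales meter readings by a transformer ratio carried as custom metadata; the property names and values are hypothetical.

    # Hypothetical tag-specific scaling: meters behind a transformer carry
    # a 'ct_ratio' metadata property, so one generic flow can scale the
    # readings without per-site configuration.
    def apply_ct_ratio(message: dict) -> dict:
        ratio = message.get("ct_ratio", 1)  # metadata from the resource file
        message["value"] = message["value"] * ratio
        return message

    reading = {"name": "ActivePower", "value": 1.25, "ct_ratio": 40,
               "topic": "site-a/level-1/main-meter-3/active_power"}
    print(apply_ct_ratio(reading)["value"])  # -> 50.0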

Resource file and flow

Use case #2 - Data routing and granular analysis

Introduction
  • Crosser Node runs on distributed assets in end-customer environments.

  • S7 PLC controls different physical assets (scanners, sensors, …).

  • Requirements:

    • Separate data acquisition from processing.

    • One processing flow per physical asset.

    • End-customer-specific data routing to the cloud.

    • End-customer-specific processing in the cloud.



Message change throughout the flow
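To illustrate how metadata can drive the routing between the acquisition flow and the per-asset processing flows, the sketch below derives an internal topic from hypothetical ‘customer’ and ‘asset’ metadata properties:

    # Hypothetical routing step: the acquisition flow forwards each message
    # to an internal topic derived from its metadata, so that one processing
    # flow per physical asset can subscribe to "its" data.
    def route(message: dict) -> str:
        # 'customer' and 'asset' are custom metadata from the resource file;
        # the topic layout is just an example.
        return f"{message['customer']}/{message['asset']}/{message['name']}"

    msg = {"name": "ScanRate", "value": 120,
           "asset": "scanner-01", "customer": "acme"}
    print(route(msg))  # -> "acme/scanner-01/ScanRate"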

