Installing Crosser Control Center On-premise

This guide describes how to install Crosser Control Center on your own infrastructure, either on premise or in your own cloud account.

Introduction

The Crosser Control Center consists of several microservices, each deployed as its own container inside a Kubernetes cluster. These services also depend on database and storage services that reside outside the Kubernetes cluster. Some of these can run inside the cluster for test setups, but need to run outside the cluster for high-availability production setups (see the 'External' column in the services table below).



The following table provides a short summary of each service:

Name | Required | External | Description
Ingress controller | Yes | No
Services manager | Yes | No
K8s API | Yes | No
Edgedirector | Yes | No | Back-end service for Crosser Control Center
IoT | Yes | No | Front-end service for 'old' UI pages (will be deprecated)
IPA | Yes | No | Front-end service for 'new' UI pages
Analyzer | Yes | No | Dependency manager for packages used by modules
Module registry | Yes | No | Repository for all modules used in Flows and by Nodes. Acts as a cache for the external nuget.org repository, where external dependencies are fetched.
Admin | Yes | No | Administration service for organization management. Only used by Crosser and partners, not accessible to end users.
Node manager | No | No | Sandbox manager
External requester | No | No | Makes external requests (over HTTP) when using the test tool in the Universal Connector wizard
Redis | Yes | For HA | Redis cache used by the Edgedirector service
Analyzer cache | Yes | No | Redis cache used by the Analyzer service
InfluxDB | Yes | For HA | Time-series database used to store status information and metrics from Nodes
Database | Yes | For HA | Database that holds all configuration data, including accounts, users, nodes, flows, credentials...
Blob storage | Yes | Yes | Storage for resources and other large objects

Installation

Requirements

You need access to a Kubernetes cluster and a working Helm installation. With these in place, Crosser Control Center is installed into the cluster using the Crosser Helm chart.

Initialize a Helm chart configuration file

Helm charts contain templates that can be rendered to the Kubernetes resources to be installed. A user of a Helm chart can override the chart’s default values to influence how the templates render.

In this step we will initialize a chart configuration file that you will use to adjust your installation of Crosser Control Center. We will name it config.yaml and refer to it by that name going forward.

Introduction to YAML

If you haven’t worked with YAML before, investing a few minutes in learning it will likely be worth your time. The example below illustrates the basics used throughout this guide.
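
As a quick, generic illustration (the keys below are not chart settings), YAML expresses structure through indentation, uses # for comments, and accepts both quoted and unquoted strings:

  # a comment
  service:
    replicas: 3            # an integer value
    image:
      tag: "2022.09.01"    # a string value (quotes are optional)
  hosts:                   # a list with two items
    - cloud.example.com
    - registry.example.com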

As of version 1.0.0, you don’t need any configuration to get started, so you can simply create a config.yaml file with some helpful comments.

In case you are working from a terminal and are unsure how to create this file, you can try vi config.yaml or use our simple config (see attachments).

  # This file can update the Crosser Cloud Helm chart's default configuration values.
  #
  # For reference see the configuration reference and default values, but make
  # sure to refer to the Helm chart version of interest to you!
  #
  # Introduction to YAML:     https://www.youtube.com/watch?v=cdLNKUoMc6c
  # Chart config reference:   https://urltodocs/setup/configuration-reference/
  #

Install Crosser Control Center

  1. Make Helm aware of the Crosser Control Center Helm chart repository so you can install the Crosser Control Center chart from it without having to reference its full URL.

    helm repo add --username <username> --password <password> crosser-cloud https://registry.crosser.io/chartrepo/cloud
    helm repo update
    

    where:

    • <username> and <password> refer to the chart credentials you receive from Crosser when you have an On-premise contract.

    This should show output like:

    Hang tight while we grab the latest from your chart repositories...
    ...Skip local chart repository
    ...Successfully got an update from the "stable" chart repository
    ...Successfully got an update from the "crosser-cloud" chart repository
    Update Complete. ⎈ Happy Helming!⎈
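
    To verify that the repository was added and to list the chart versions that are available to you, you can optionally run:

    helm search repo crosser-cloud/crosser-cloud --versions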
    
  2. Now install the chart configured by your config.yaml by running this command from the directory that contains your config.yaml:

    helm upgrade --cleanup-on-fail \
      --install <helm-release-name> crosser-cloud/crosser-cloud \
      --namespace <k8s-namespace> \
      --create-namespace \
      --version=<chart-version> \
      --values config.yaml \
      --timeout 3600s
    

    where:

    • <helm-release-name> refers to a Helm release name, an identifier used to differentiate chart installations. You need it when you are changing or deleting the configuration of this chart installation. If your Kubernetes cluster will contain multiple Crosser Control Centers make sure to differentiate them. You can list your Helm releases with helm list.
    • <k8s-namespace> refers to a Kubernetes namespace, an identifier used to group Kubernetes resources, in this case all Kubernetes resources associated with the Crosser Cloud chart. You’ll need the namespace identifier for performing any commands with kubectl.
    • This step may take a moment, during which time there will be no output to your terminal. Crosser Control Center is being installed in the background.
    • If you get a release named <helm-release-name> already exists error, delete the release by running helm delete <helm-release-name> and then reinstall by repeating this step. If the error persists, also run kubectl delete namespace <k8s-namespace> and try again.
    • In general, if something goes wrong with the install step, delete the Helm release by running helm delete <helm-release-name> before re-running the install command.
    • If large Docker images have to be pulled you may get an Error: timed out waiting for the condition error; in that case, add or increase the --timeout=<number-of-minutes>m parameter on the helm command.
    • The --version parameter corresponds to the version of the Helm chart, not the version of Crosser Control Center. Each version of the Crosser Control Center Helm chart is paired with a specific version of Crosser Control Center. E.g., 0.11.1 of the Helm chart runs Crosser Control Center 1.3.0. For a list of which Crosser Control Center version is installed in each version of the Crosser Control Center Helm Chart, see the release notes.
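
    As an illustration only, with a release name of production, the namespace crosser-cloud and chart version 1.0.0 (all placeholder values; substitute your own), the command would look like:

    # illustrative values only - substitute your own release name, namespace and chart version
    helm upgrade --cleanup-on-fail \
      --install production crosser-cloud/crosser-cloud \
      --namespace crosser-cloud \
      --create-namespace \
      --version=1.0.0 \
      --values config.yaml \
      --timeout 3600s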
  3. While Step 2 is running, you can see the pods being created by entering in a different terminal:

    kubectl get pod --namespace <k8s-namespace>
    

    To remain sane we recommend that you enable autocompletion for kubectl (follow the kubectl installation instructions for your platform to find the shell autocompletion instructions)

    and set a default value for the --namespace flag:

    kubectl config set-context $(kubectl config current-context) --namespace <k8s-namespace>
    
  4. Wait for the edgedirector pod to enter the Running state and the job pods to reach the Completed state.

    NAME                                                              READY   STATUS    RESTARTS        AGE
    analyzer-fb896ff8f-ppwbc                                          1/1     Running   0               40s
    edbridge-7f45cfc4dc-f67cq                                         1/1     Running   0               40s
    edgedirector-d8d685d9c-w8n6s                                      1/1     Running   0               39s
    iiot-54497456f9-6wsjj                                             1/1     Running   0               20s
    .........................
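
    If you prefer a command that blocks until the pod is ready instead of polling, kubectl wait can do that. The label selector below is an assumption; check the actual labels with kubectl get pod --show-labels and adjust it:

    kubectl wait --namespace <k8s-namespace> \
      --for=condition=Ready pod \
      --selector app=edgedirector \
      --timeout=600s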
    
  5. When Step 2 is done, the output should look something like this:

    Release "<helm-release-name>" does not exist. Installing it now.
    NAME: <helm-release-name>
    LAST DEPLOYED: Wed Dec  10 23:49:21 2022
    NAMESPACE: <k8s-namespace>
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    The Crosser Cloud has been installed.
    
    Cloud:
        http://crossercloud.domain.tld
    
    Moduleregistry:
        http://crossercloud-module.domain.tld
    
  6. Now you can use Crosser Control Center: enter the Cloud URL from the output above into a browser. Crosser Control Center is running with the default admin user that you configured in your config.yaml.

    Congratulations! You now have a basic Crosser Control Center running.

    If you, for example, want to run Crosser Control Center in High Availability, you can read how in the Customization Guide section below.
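
    If you want to check from the terminal that the ingress answers before opening a browser, a plain HTTP request against the Cloud host from the NOTES output is usually enough (replace the hostname with the ingress host you configured):

    curl -I http://crossercloud.domain.tld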

Upgrading Crosser Control Center

This section covers best-practices in upgrading your Crosser Control Center deployment via updates to the Helm Chart.

Upgrading from one version of the Helm Chart to the next should be as seamless as possible, and generally shouldn’t require major changes to your deployment. Check the release notes for each release to find out if there are any breaking changes in the newest version.

For additional help, feel free to reach out to us.

Before upgrading

These steps are critical before performing an upgrade.

  1. Always back up your database! (A backup sketch follows this list.)

  2. Review the release notes for incompatible changes and upgrade instructions.

  3. Update your configuration accordingly.

  4. If you are planning an upgrade of a critical major installation, we recommend you test the upgrade out on a staging cluster first before applying it to production.
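
For step 1, a minimal backup sketch for the built-in PostgreSQL database is shown below. The pod name placeholder and the postgres superuser are assumptions; adjust them to your deployment, and use your own tooling if you run an external database.

# find the database pod name with: kubectl get pod --namespace <k8s-namespace>
# the superuser name "postgres" is an assumption; "edgedirector" is the default database name
kubectl exec --namespace <k8s-namespace> <database-pod-name> \
  -- pg_dump -U postgres edgedirector > edgedirector-backup.sql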

Upgrading

After modifying your config.yaml file according to the release notes, you will need to run the upgrade commands. To find your Helm release name and the currently installed chart version, run:

helm list --namespace <k8s-namespace>

Make sure to test the upgrade on a staging environment before doing the upgrade on a production system!

To run the upgrade:
helm upgrade --cleanup-on-fail \
   <helm-release-name> crosser-cloud/crosser-cloud \
   --version=<chart-version> \
   --values config.yaml \
   --namespace <k8s-namespace> \
   --timeout 3600s

For example, to upgrade to version 1.0.1 with a helm release name of staging in the k8s namespace of crosser-cloud:

helm upgrade --cleanup-on-fail \
  staging crosser-cloud/crosser-cloud \
  --version=1.0.1 \
  --values config.yaml \
  --namespace crosser-cloud \
  --timeout 3600s
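
Once the upgrade has finished, you can confirm that a new revision of the release was deployed with a standard Helm command:

helm history <helm-release-name> --namespace <k8s-namespace>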

Troubleshooting

If the upgrade is failing on a test system, you can try deleting the helm chart using:

helm delete <helm-release-name> --namespace <k8s-namespace>

helm list --namespace <k8s-namespace> may be used to find the release name.

Customization Guide

Customizing your Deployment

The Helm chart used to install your Crosser Control Center deployment has a lot of options for you to tweak. For a semi-complete reference list of the options, see the Configuration Reference section below.

Applying configuration changes

The general method to modify your Kubernetes deployment is to:
  1. Make a change to your config.yaml.

  2. Run a helm upgrade:

    helm upgrade --cleanup-on-fail \
      <helm-release-name> crosser-cloud/crosser-cloud \
      --namespace <k8s-namespace> \
      --version=<chart-version> \
      --values config.yaml
    

    Note that helm list will display <helm-release-name> if you have forgotten it.

  3. Verify that all pods entered the Running state after the upgrade completed.

    kubectl get pod --namespace <k8s-namespace>
    

For information about the many things you can customize with changes to your Helm chart through values provided to its templates through config.yaml, see the customization-guide.
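
If you are unsure which configuration values are currently applied to your release, Helm can print them (add --all to also include the chart defaults):

helm get values <helm-release-name> --namespace <k8s-namespace>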

Deploy with High Availability

This page contains instructions on how to deploy Crosser Control Center with High Availability. For a list of all the configurable Helm chart options, see the Configuration Reference section below. Edit your config.yaml or use our sample HA config (see attachments).

Requirements

  • Access to blobstorage/shared filesystem
    • S3 Blobstorage
    • Azure Blobstorage
    • Multiwrite filesystem (nfs)
  • External PostgreSQL with HA
  • External Redis with HA

External PostgreSQL with HA

If you don’t have a PostgreSQL installation with HA available, we can provide you with links on how to install one.

External Redis with HA

If you don’t have a Redis cluster with HA available, we can provide you with links on how to install one.

Configuration

Configure container replicas

Add this to your config.yaml file

iiot:
  replicas: 3
ipa:
  replicas: 3
analyzer:
  replicas: 3

Configure storage for Edgedirector

The edgedirector container needs storage that is accessible from multiple containers. The supported types are S3 blob storage, Azure blob storage and PVC ReadWriteMany. Below you find how to configure them in your config.yaml.

S3 Blobstorage

Add this to your config.yaml file and fill in your settings

edgedirector:
  replicas: 3
  config:
    blobStorage:
      type: s3
      s3:
        accessKey: "<access-key>"
        accessKeyId: "<access-key-id>"
        region: "<region>"
        bucket: "<bucket>"
        serverURL: "<server-url>"

Azure Blobstorage

Add this to your config.yaml file and fill in your settings

edgedirector:
  replicas: 3
  config:
    blobStorage:
      type: azure
      azure:
        serverURL: "<server-url>"
        sasQueryString: "<your-sas-querystring>"
        edgenodeContainer: "<storage-container-for-windows-node>"
        resourceContainer: "<storage-container-for-resources>"

PVC ReadWriteMany

Add this to your config.yaml file and fill in your settings

edgedirector:
  replicas: 3
  config:
    blobStorage:
      type: local
      local:
        path: "resources"
        persistence:
          enabled: true
          persistentVolumeClaim:
            existingClaim: ""
            storageClass: ""
            subPath: ""
            accessMode: ReadWriteMany
            size: 10Gi
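
If you would rather create the ReadWriteMany claim yourself and reference it via existingClaim, a minimal sketch of such a claim is shown below. The claim name and the nfs-client storage class are assumptions; use any storage class in your cluster that supports ReadWriteMany:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edgedirector-resources    # hypothetical name, referenced from existingClaim above
  namespace: <k8s-namespace>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client    # assumption: any RWX-capable storage class works
  resources:
    requests:
      storage: 10Gi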

Deploy a High Availability Setup

Now you can deploy Crosser Control Center

helm upgrade --cleanup-on-fail \
    --install <helm-release-name> crosser-cloud/crosser-cloud \
    --namespace <k8s-namespace> \
    --create-namespace \
    --version=<chart-version> \
    --values config.yaml \
    --timeout 3600s

Customizing External Services

This page contains instructions for common ways to use external services instead of the internal ones for Crosser Control Center. For a list of all the configurable Helm chart options, see the Configuration Reference section below.

External PostgreSQL

If you don’t have PostgreSQL available, we can provide you with links on how to install it.

Add this to your config.yaml file and fill in your settings:

# External Database server (PostgreSQL)
#
database:
  config:
    # Enable connection pooling
    poolingEnabled: true
    # The maximum number of connections in the idle connection pool.
    maxPoolSize: 30
  external: # +doc-gen:break
    host: ""
    port: "5432"
    username: ""
    password: ""
    database:
      cloud: "edgedirector"
      moduleregistry: "baget"
    # "disable" - No SSL
    # "require" - Always SSL (skip verification)
    # "verify" - Always SSL (verify that the certificate presented by the
    # server was signed by a trusted CA)
    sslMode: "disable"
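
Before pointing Crosser Control Center at the external database, it can be useful to verify connectivity from inside the cluster. One way is a throwaway pod running psql (the postgres:15 image is only an example; the database name matches the default above):

kubectl run psql-check --rm -it --restart=Never \
  --namespace <k8s-namespace> --image=postgres:15 -- \
  psql "host=<host> port=5432 user=<username> dbname=edgedirector sslmode=disable"

A successful connection ends at a psql prompt, confirming host, port and credentials.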

External Redis

If you don’t have Redis available, we can provide you with links on how to install it.

Add this to your config.yaml file and fill in your settings:

# External Redis Configuration
#
redis:
  external:
    addr: ""
    password: ""
    sslMode: true
    abortConnect: false
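
A similar connectivity check works for Redis (the redis:7 image is only an example; add --tls if your server requires TLS, matching sslMode above):

kubectl run redis-check --rm -it --restart=Never \
  --namespace <k8s-namespace> --image=redis:7 -- \
  redis-cli -h <redis-host> -p 6379 -a "<password>" ping

A reply of PONG confirms that the address and password are correct.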

External InfluxDB

If you don’t have InfluxDB available, we can provide you with links on how to install it.

Add this to your config.yaml file and fill in your settings:

# External Influx Configuration
#
influx:
  external:
    addr: "http://localdomain.tld:8086"
    username: "edgedirector"
    password: ""
    database: "edgedirector"
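
Assuming InfluxDB 1.x (which matches the username/password/database style of configuration above), the server exposes a /ping endpoint that can be used as a quick reachability check; an HTTP 204 response means the database answers:

curl -i http://localdomain.tld:8086/ping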

Configuration Reference

The Crosser Control Center Helm chart is configurable by values in your config.yaml. In this way, you can customize your installation.

Below is a description of many but not all of the configurable values for the Helm chart. To see all configurable options, inspect their default values defined here.
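
To dump the complete set of default values for the exact chart version you plan to install, you can ask Helm directly:

helm show values crosser-cloud/crosser-cloud --version <chart-version> > default-values.yaml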

For more guided information about some specific things you can do with modifications to the helm chart, see the Customization Guide.

Configuration parameters

The following table lists the configurable parameters of the crosser-cloud chart and their default values.

Parameter | Description | Default | Reference
nameOverride | Provide a name in place of crosser-cloud for app: labels | ""
namespaceOverride | Override the deployment namespace | ""
fullnameOverride | Provide a name to substitute for the full names of resources | ""
global.imagePullSecrets | Secrets to be used when pulling images | {"email":"name@domain.tld","enabled":true,"password":"password","registry":"registry.crosser.io","username":"username"} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
global.disconnected | Disconnected installation | true | Ref: https://docs.cloud.crosser.io/opc/disconnected
edgedirector.config.organization | Default organization and user/pass | {"email":"admin@example.com","orgName":"Examle","password":"admin12345"}
edgedirector.config.email | Email settings for alerts and notifications | {"password":"","port":587,"senderMail":"","senderName":"Crosser Cloud","server":"","useSSL":true,"username":""}
edgedirector.config.slack | Slack integration to see exceptions (serilog sink) | {"webhookURL":""}
edgedirector.config.blobStorage.type | Set the type as "s3", "azure" or "local" and fill in the information in the corresponding section | local
edgedirector.config.blobStorage.s3 | S3 compatible blobstorage (AWS S3, Minio…) | {"accessKey":"","accessKeyId":"","bucket":"","region":"us-east-1","serverURL":""}
edgedirector.config.blobStorage.azure | Azure Blobstorage | {"edgenodeContainer":"","resourceContainer":"","sasQueryString":"","serverURL":""}
edgedirector.config.blobStorage.local | Local storage | {"path":"resources","persistence":{"enabled":true,"persistentVolumeClaim":{"accessMode":"ReadWriteOnce","existingClaim":"","size":"10Gi","storageClass":"","subPath":""}}}
edgedirector.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/edgedirector
edgedirector.image.tag | Image tag to use | 2022.09.01
edgedirector.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
edgedirector.serviceAccountName | Service account for EdgeDirector to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
edgedirector.replicas | Size is the expected size of the edgedirector cluster. The controller will eventually make the size of the running cluster equal to the expected size. You will need to use storage that can be accessed by multiple containers | 1
edgedirector.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
edgedirector.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
edgedirector.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
edgedirector.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
edgedirector.affinity | Assign custom affinity rules to the EdgeDirector instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
edgedirector.annotations | Annotations for EdgeDirector | {}
database.config | Database configuration options | {"maxPoolSize":30,"poolingEnabled":true}
database.initalize.run | Populate DB with default values and modules | true
database.initalize.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/database-tool
database.initalize.image.tag | Image tag to use | 2022.09.01
database.initalize.image.pullPolicy | Policy for kubernetes to use when pulling images | Always | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
database.type | If an external database is used, set "type" to "external" and fill in the connection information in the "external" section | internal
database.internal.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/pgsql-db
database.internal.image.tag | Image tag to use | 2022.09.01
database.internal.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
database.internal.password | The initial superuser username/password for internal database | "changeit"
database.internal.serviceAccountName | Service account for Database to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
database.internal.replicas | Size is the expected size of the database cluster. The controller will eventually make the size of the running cluster equal to the expected size. | 1
database.internal.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
database.internal.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
database.internal.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
database.internal.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
database.internal.affinity | Assign custom affinity rules to the Database instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
database.internal.annotations | Annotations for Database | {}
database.internal.persistence | Persistence storage for Database | {"enabled":true,"persistentVolumeClaim":{"accessMode":"ReadWriteOnce","existingClaim":"","size":"10Gi","storageClass":"","subPath":""}}
database.external | External Database server (PostgreSQL) | {"database":{"cloud":"edgedirector","moduleregistry":"baget"},"host":"","password":"","port":"5432","sslMode":"disable","username":""}
redis.type | If an external Redis is used, set "type" to "external" and fill in the connection information in the "external" section | internal
redis.internal.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/redis
redis.internal.image.tag | Image tag to use | 2022.09.01
redis.internal.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
redis.internal.serviceAccountName | Service account for Redis to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
redis.internal.replicas | Size is the expected size of the Redis cluster. The controller will eventually make the size of the running cluster equal to the expected size. | 1
redis.internal.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
redis.internal.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
redis.internal.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
redis.internal.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
redis.internal.affinity | Assign custom affinity rules to the Redis instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
redis.internal.annotations | Annotations for Redis | {}
redis.internal.persistence | Persistence storage for Redis | {"enabled":true,"persistentVolumeClaim":{"accessMode":"ReadWriteOnce","existingClaim":"","size":"1Gi","storageClass":"","subPath":""}}
redis.external | External Redis Configuration | {"abortConnect":false,"addr":"","password":"","sslMode":true}
influx.type | If an external InfluxDB is used, set "type" to "external" and fill in the connection information in the "external" section | internal
influx.internal.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/influxdb
influx.internal.image.tag | Image tag to use | 2022.09.01
influx.internal.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
influx.internal.password | Password for internal InfluxDB | "changeit"
influx.internal.serviceAccountName | Service account for Influx to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
influx.internal.replicas | Size is the expected size of the Influx cluster. The controller will eventually make the size of the running cluster equal to the expected size. | 1
influx.internal.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
influx.internal.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
influx.internal.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
influx.internal.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
influx.internal.affinity | Assign custom affinity rules to the Influx instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
influx.internal.annotations | Annotations for Influx | {}
influx.internal.persistence | Persistence storage for Influx | {"enabled":true,"persistentVolumeClaim":{"accessMode":"ReadWriteOnce","existingClaim":"","size":"5Gi","storageClass":"","subPath":""}}
influx.external | External Influx Configuration | {"addr":"http://localdomain.tld:8086","database":"edgedirector","password":"","username":"edgedirector"}
moduleregistry.initalize.run | Add modules to registry | true
moduleregistry.initalize.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/module-tool
moduleregistry.initalize.image.tag | Image tag to use | 2022.09.01
moduleregistry.initalize.image.pullPolicy | Policy for kubernetes to use when pulling images | Always | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
moduleregistry.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/baget
moduleregistry.image.tag | Image tag to use | 2022.09.01
moduleregistry.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
moduleregistry.serviceAccountName | Service account for Module Registry to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
moduleregistry.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
moduleregistry.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
moduleregistry.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
moduleregistry.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
moduleregistry.affinity | Assign custom affinity rules to the Module Registry instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
moduleregistry.annotations | Annotations for Module Registry | {}
moduleregistry.persistence | Persistence storage for Module Registry | {"enabled":true,"persistentVolumeClaim":{"accessMode":"ReadWriteOnce","existingClaim":"","size":"200Gi","storageClass":"","subPath":""}}
externalrequester.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/edexternalrequester
externalrequester.image.tag | Image tag to use | 2022.09.01
externalrequester.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
externalrequester.serviceAccountName | Service account for Externalrequester to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
externalrequester.replicas | Size is the expected size of the externalrequester cluster. The controller will eventually make the size of the running cluster equal to the expected size. | 1
externalrequester.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
externalrequester.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
externalrequester.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
externalrequester.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
externalrequester.affinity | Assign custom affinity rules to the Externalrequester instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
externalrequester.annotations | Annotations for Externalrequester | {}
ipa.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/ipa
ipa.image.tag | Image tag to use | 2022.09.01
ipa.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
ipa.serviceAccountName | Service account for IPA to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
ipa.replicas | Size is the expected size of the IPA cluster. The controller will eventually make the size of the running cluster equal to the expected size. | 1
ipa.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
ipa.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
ipa.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
ipa.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
ipa.affinity | Assign custom affinity rules to the IPA instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
ipa.annotations | Annotations for IPA | {}
iiot.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/edgedirectorui
iiot.image.tag | Image tag to use | 2022.09.01
iiot.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
iiot.serviceAccountName | Service account for IIoT to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
iiot.replicas | Size is the expected size of the IIoT cluster. The controller will eventually make the size of the running cluster equal to the expected size. | 1
iiot.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
iiot.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
iiot.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
iiot.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
iiot.affinity | Assign custom affinity rules to the IIoT instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
iiot.annotations | Annotations for IIoT | {}
admin.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/edgedirectoradminui
admin.image.tag | Image tag to use | 2022.09.01
admin.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
admin.serviceAccountName | Service account for Admin to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
admin.replicas | Size is the expected size of the Admin cluster. The controller will eventually make the size of the running cluster equal to the expected size. | 1
admin.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
admin.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
admin.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
admin.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
admin.affinity | Assign custom affinity rules to the Admin instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
admin.annotations | Annotations for Admin | {}
analyzer.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/packageanalyzer
analyzer.image.tag | Image tag to use | 2022.09.01
analyzer.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
analyzer.serviceAccountName | Service account for Analyzer to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
analyzer.replicas | Size is the expected size of the Analyzer cluster. The controller will eventually make the size of the running cluster equal to the expected size. | 1
analyzer.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
analyzer.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
analyzer.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
analyzer.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
analyzer.affinity | Assign custom affinity rules to the Analyzer instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
analyzer.annotations | Annotations for Analyzer | {}
analyzerCache.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/redis
analyzerCache.image.tag | Image tag to use | 2022.09.01
analyzerCache.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
analyzerCache.serviceAccountName | Service account for Analyzer Cache to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default if left empty)
analyzerCache.resources | Define resources requests and limits for single Pods. | {} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
analyzerCache.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
analyzerCache.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
analyzerCache.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
analyzerCache.affinity | Assign custom affinity rules to the Analyzer Cache instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
analyzerCache.annotations | Annotations for Analyzer Cache | {}
nodemanager.enabled | Enable installation of Node Manager | false
nodemanager.config | Node Manager configuration options | {"auth":{"password":"changeit","username":"manager"},"edgenode":{"image":"/proxy/crosser/edgenode"},"namespace":{"hosted":"hosted-nodes","sandbox":"sandboxes"},"nodeSelector":{"hosted":{},"sandbox":{}}}
nodemanager.image.repository | Docker repository to pull the image from | registry.crosser.io/cloud/crossernodes
nodemanager.image.tag | Image tag to use | 2022.09.01
nodemanager.image.pullPolicy | Policy for kubernetes to use when pulling images | IfNotPresent | Ref: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
nodemanager.serviceAccountName | Service account for Node Manager to use. | "" | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ (set the service account to be used, default "nodemanager" if left empty)
nodemanager.replicas | Size is the expected size of the Node Manager cluster. The controller will eventually make the size of the running cluster equal to the expected size. | 1
nodemanager.resources | Define resources requests and limits for single Pods. | {"limits":{"cpu":"300m","memory":"256Mi"},"requests":{"cpu":"100m","memory":"64Mi"}} | Ref: https://kubernetes.io/docs/user-guide/compute-resources/
nodemanager.securityContext | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 1000. *v1.PodSecurityContext false | {} | Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
nodemanager.nodeSelector | Define which Nodes the Pods are scheduled on. | {} | Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodemanager.tolerations | If specified, the pod's tolerations. | [] | Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
nodemanager.affinity | Assign custom affinity rules to the Node Manager instance | {} | Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodemanager.annotations | Annotations for Node Manager | {}
ingress.cloud.enabled | If true, Cloud Ingress will be created | true
ingress.cloud.host | | crossercloud.domain.tld
ingress.cloud.ingressClass | | nginx
ingress.cloud.annotations | | {}
ingress.cloud.tls.enabled | | false
ingress.cloud.tls.certSource | The source of the tls certificate. Set it as "auto", "secret" or "none" and fill in the information in the corresponding section. 1) auto: generate the tls certificate automatically 2) secret: read the tls certificate from the specified secret. The tls certificate can be generated manually or by cert manager 3) none: configure no tls certificate for the ingress. If the default tls certificate is configured in the ingress controller, choose this option | auto
ingress.cloud.tls.auto.commonName | The common name used to generate the certificate; it's necessary when the type isn't "ingress" | ""
ingress.cloud.tls.secret.secretName | The name of the secret which contains keys named "tls.crt" (the certificate) and "tls.key" (the private key) | ""
ingress.module.enabled | If true, Module Registry Ingress will be created | true
ingress.module.host | | crossercloud-module.domain.tld
ingress.module.ingressClass | | nginx
ingress.module.annotations | | {}
ingress.module.tls.enabled | | false
ingress.module.tls.certSource | The source of the tls certificate. Set it as "auto", "secret" or "none" and fill in the information in the corresponding section. 1) auto: generate the tls certificate automatically 2) secret: read the tls certificate from the specified secret. The tls certificate can be generated manually or by cert manager 3) none: configure no tls certificate for the ingress. If the default tls certificate is configured in the ingress controller, choose this option | auto
ingress.module.tls.auto.commonName | The common name used to generate the certificate; it's necessary when the type isn't "ingress" | ""
ingress.module.tls.secret.secretName | The name of the secret which contains keys named "tls.crt" (the certificate) and "tls.key" (the private key) | ""

Sample config.yaml file

### Crosser Cloud config that runs all services internally and fully disconnected but without High-Availability
### You can enable more replicas on the following applications.
# edgedirector:
#   replicas: 3
# iiot:
#   replicas: 3
# ipa:
#   replicas: 3
# analyzer:
#   replicas: 3
# admin:
#   replicas: 3
## Used for non-production environments.
global:
  imagePullSecrets:
    enabled: true
    registry: registry.crosser.io
    username: <your-username>
    password: <your-password>
    email: <your-email>
  disconnected: true
edgedirector:
  config:
    organization:
      email: "admin@example.com"
      orgName: "Crosser"
      password: "changeit"
    email:
      server: ""
      port: 587
      useSSL: true
      username: ""
      password: ""
      senderMail: ""
      senderName: "Crosser Cloud"
    blobStorage:
      type: local
      local:
        persistence:
          enabled: true
          persistentVolumeClaim:
            existingClaim: ""
            storageClass: ""
            subPath: ""
            accessMode: ReadWriteOnce
            size: 10Gi
database:
  type: internal
  internal:
    password: "changeit"
    persistence:
      enabled: true
      persistentVolumeClaim:
        existingClaim: ""
        storageClass: ""
        subPath: ""
        accessMode: ReadWriteOnce
        size: 10Gi
redis:
  type: internal
  internal:
    persistence:
      enabled: true
      persistentVolumeClaim:
        existingClaim: ""
        storageClass: ""
        subPath: ""
        accessMode: ReadWriteOnce
        size: 1Gi
influx:
  type: internal
  internal:
    password: "changeit"
    persistence:
      enabled: true
      persistentVolumeClaim:
        existingClaim: ""
        storageClass: ""
        subPath: ""
        accessMode: ReadWriteOnce
        size: 5Gi
moduleregistry:
  persistence:
    enabled: true
    persistentVolumeClaim:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 200Gi
ingress:
  cloud:
    enabled: true
    host: crosser-cloud.domain.tld
    tls:
      enabled: false
  module:
    enabled: true
    host: crosser-module-registry.domain.tld
    tls:
      enabled: false



    • Related Articles

    • Installing Nodes with Kubernetes
    • Introducing Crosser Control Center and a new Flow Studio
    • Crosser Terminology
    • Crosser Node 2.5.2
    • Monitoring the Crosser Node