
Can it fit a line of code?
If yes, we can make it think.

Platform Overview


Stream Analyze Platform: empowering the edge with AI

Stream Analyze offers a lightweight streaming analytics and AI platform designed for edge devices. As an end-to-end platform, we enable efficient development, training, deployment, and orchestration of analytics, computational, and AI models. 


We work with dynamic industries such as manufacturing, transport, and energy & utilities - wherever sensors, devices, and machines live. Ready to gain actionable insights and make data-driven decisions right at the edge?

Deploy & evolve analytical AI to every edge across the enterprise

Stream Analyze is deployed directly to sensors, devices or assets of all sizes.
AI models and queries are sent to assets, and data is streamed back in real time for live analysis.
Development & deployment are managed within the SA Studio software environment.

Key Capabilities

To address a wide array of business cases, Stream Analyze has designed its platform around a set of critical capabilities. Each is an independent feature, yet all are integrated, allowing for focused applications and expansion over time.

On-the-fly Model Queries
Turn your physical edge into interactive, queryable computational databases.

Zero-Trust Stream Integration
Leverage our Zero-Trust framework with encryption, time-stamping, audit trails, and more.

The Run-Anywhere Edge Engine
A compact edge engine that runs on virtually any hardware with high performance.

Dynamic Model Operations
Manage multiple concurrent model deployments at industrial scale.

Data Harmonization
Solve the complex information-integration challenge across the edge, the cloud, and on-prem.

Federated Architecture
Orchestrate massive fleets of edge devices to act as a unified, powerful ecosystem.


Deploy what's next. More models, more scale, more impact - in minutes, not days.

Model operations at industrial scale are critical. Stream Analyze provides a proven framework for model governance, lifecycle management, and automated deployment of AI and decision models.

What can I use Stream Analyze Platform for?

Decision Intelligence from the edge

The Stream Analyze Platform enables intelligence on all kinds of products, such as consumer products, vehicles, industrial machines, heavy-duty equipment, robots and production line devices. This facilitates new levels of insights and automation, such as predictive maintenance, operation mode insights, and real-time usage-based cost models.

The Stream Analyze Engine is the core component of the Platform. Distinguished by its ultra-lightweight resource requirements, it runs on a wide range of hardware, such as industrial PCs, single-board computers (SBCs), microcontrollers (MCUs), and telematics control units (TCUs). It can coexist with existing software on the device, either as a separate process or in a container, or even run bare-metal on devices without an OS.

We include a library of over 1,000 predefined functions for mathematical and statistical computations, data-stream filtering and transformation, signal processing, model and data management, and much more. The function library is continuously extended to meet new customer needs, and new user functions are easy to define and deploy on the fly.
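To illustrate the kind of stream-filtering function a user might define and deploy, here is a minimal Python sketch of a sliding-window smoother over a sensor stream. The function name and the readings are hypothetical, and this is not the SA Engine's actual API or function library, just an illustration of the pattern.

```python
from collections import deque

def moving_average(stream, window=5):
    """Sliding-window mean over a numeric stream, yielding one value per sample."""
    buf = deque(maxlen=window)  # bounded buffer: old samples fall out automatically
    for sample in stream:
        buf.append(sample)
        yield sum(buf) / len(buf)

# Hypothetical vibration readings streamed from a sensor.
readings = [0.1, 0.2, 0.9, 0.8, 0.2, 0.1]
smoothed = list(moving_average(readings, window=3))
```

Because the function is a generator, it processes one sample at a time in constant memory, which is the property that matters on resource-constrained edge hardware.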

 

Easy integration with enterprise systems

Built-in support for third-party communication protocols, together with the ability to install the SA Engine on a wide range of devices, makes it easy to integrate with existing enterprise solutions.

 

AI deployment to large fleets of devices

Inside the Stream Analyze Engine there is a lightweight main-memory database manager where both models and user data are stored. Since this runs as a software engine directly on devices, AI models and analytics code can be updated interactively on the fly by updating the local databases.

Deploying an AI model does not require a firmware update and can even be done while the system is running. This facilitates easy deployment of new AI models to large fleets of devices without time-consuming and costly stops in operation or manual interactions with the hardware.
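The hot-swap deployment idea above can be sketched in Python: a model registry holds models in memory and replaces them atomically while inference keeps running. `ModelRegistry` and its methods are hypothetical illustrations of the pattern, not the SA Engine's internal database manager.

```python
import threading

class ModelRegistry:
    """Minimal in-memory model store: deployments swap models atomically
    while inference keeps running (no firmware update, no restart)."""

    def __init__(self):
        self._models = {}
        self._lock = threading.Lock()

    def deploy(self, name, fn):
        with self._lock:
            self._models[name] = fn  # hot-swap: takes effect on the next call

    def infer(self, name, x):
        with self._lock:
            return self._models[name](x)

registry = ModelRegistry()
registry.deploy("anomaly", lambda x: x > 10)    # version 1
v1 = registry.infer("anomaly", 12)              # True under version 1
registry.deploy("anomaly", lambda x: x > 100)   # version 2, deployed live
v2 = registry.infer("anomaly", 12)              # False under version 2
```

The lock keeps deployment and inference consistent under concurrency; in a fleet setting, the same `deploy` call would simply be fanned out to every device's local store.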

Learn more

Get under the hood with our platform for more in-depth knowledge.