Panopticon Streams is the stream processing engine that enables users to program sophisticated business logic and data functions using a fully visual interface in Panopticon Visual Analytics.
Built on Apache Kafka
Panopticon Streams is built on Apache Kafka and Kafka Streams, but it requires no coding — not even in KSQL — to build sophisticated stream processing applications.
With Panopticon Streams, business users who understand business problems create data flows. They see a visual representation of their logic on the screen. Within minutes of receiving the software, they can be up and running – designing and deploying their own real-time processes.
Because Panopticon Streams runs on Kafka, it delivers all of Kafka's benefits without exposing its complexity.
Similarly, firms don’t need to deploy a proprietary, legacy event processing platform. They can leverage their existing investment in Kafka and get started immediately.
Build complex data flows in a web browser
Panopticon Streams allows users to build stream processing applications that:
- Subscribe to streaming data inputs, including Kafka streams and others
- Retrieve historical and reference data from external sources
- Join data streams and tables
- Aggregate streams within defined time windows
- Conflate streams
- Create calculated performance metrics
- Filter streams
- Branch streams
- Union and merge streams
- Pulse output
- Create alerts based on performance metrics against defined thresholds
- Output to Kafka or email, or write to databases such as kdb+, InfluxDB, or any SQL database
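To give a feel for what one of these building blocks does, here is a conceptual sketch of the "aggregate streams within defined time windows" step: grouping timestamped events into fixed (tumbling) windows and averaging each window. This is an illustration only, written in plain Python with hypothetical tick data; in the product itself, the equivalent Kafka Streams topology is assembled visually, without writing code.

```python
from collections import defaultdict

def tumbling_window_average(events, window_seconds):
    """Group (timestamp, value) events into fixed-size windows and
    average each window -- a sketch of tumbling-window aggregation.

    events: iterable of (epoch_seconds, numeric_value) pairs.
    Returns {window_start: average_value}, ordered by window start.
    """
    windows = defaultdict(list)
    for ts, value in events:
        # Align each event to the start of its window.
        window_start = ts // window_seconds * window_seconds
        windows[window_start].append(value)
    return {start: sum(vals) / len(vals)
            for start, vals in sorted(windows.items())}

# Hypothetical tick stream: (epoch seconds, price)
ticks = [(0, 10.0), (3, 12.0), (7, 8.0), (11, 9.0)]
print(tumbling_window_average(ticks, 5))
# Windows: [0,5) -> 11.0, [5,10) -> 8.0, [10,15) -> 9.0
```

The same windowed-grouping idea underlies the joins, conflation, and threshold alerts listed above: each operator consumes a stream, applies its transformation, and emits a new stream for the next node in the visual flow.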
Build event processing applications in minutes using only a web browser – and without writing a line of code.