Panopticon Streams

Panopticon’s stream processing engine enables users to program sophisticated Kafka-enabled business logic and data handling functions using a fully visual interface.

Built on Apache Kafka

Panopticon stream processing is built on Apache Kafka and Kafka Streams. No coding is required, not even in KSQL, to build sophisticated event processing applications.

Business users who understand business problems create data flows and see a visual representation of their logic on screen. Within minutes of installing the software, they can be designing and deploying their own real-time processes.

The underlying platform is Kafka, so users get all the benefits of Kafka without its complexity.

Similarly, firms don’t need to deploy a proprietary, legacy event processing platform. They can leverage existing investments in Kafka and get started immediately.

Build complex data flows in a web browser

Panopticon allows users to build stream processing applications that:

  • Subscribe to streaming data inputs, including Kafka topics and other streaming sources
  • Retrieve data from historical and reference data sources
  • Join data streams and tables
  • Aggregate streams within defined time windows
  • Conflate streams
  • Create calculated performance metrics
  • Filter streams
  • Branch streams
  • Union and merge streams
  • Pulse output
  • Create alerts based on performance metrics against defined thresholds
  • Output to Kafka, send email alerts, or write to databases such as kdb+, InfluxDB, or any SQL database
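The operations above correspond to standard stream-processing primitives. As an illustration of two of them, aggregation within defined time windows and threshold-based alerting, here is a minimal pure-Python sketch (independent of Kafka and of Panopticon's visual designer; all function names and the sample data are hypothetical):

```python
from collections import defaultdict

def tumbling_window_avg(events, window_ms):
    """Aggregate (timestamp_ms, key, value) events into per-key averages
    over fixed, non-overlapping (tumbling) time windows."""
    buckets = defaultdict(list)  # (window_start, key) -> list of values
    for ts, key, value in events:
        window_start = ts - (ts % window_ms)  # align to window boundary
        buckets[(window_start, key)].append(value)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

def alerts(window_averages, threshold):
    """Emit an alert for every windowed metric that breaches the threshold."""
    return [(window, key, avg)
            for (window, key), avg in window_averages.items()
            if avg > threshold]

# Example: prices keyed by symbol, aggregated into 1-second tumbling windows
events = [
    (1000, "ABC", 10.0),
    (1400, "ABC", 14.0),
    (1900, "XYZ", 99.0),
    (2100, "ABC", 30.0),
]
avgs = tumbling_window_avg(events, window_ms=1000)
# {(1000, 'ABC'): 12.0, (1000, 'XYZ'): 99.0, (2000, 'ABC'): 30.0}
print(alerts(avgs, threshold=50.0))  # [(1000, 'XYZ', 99.0)]
```

In the product, a flow like this is assembled visually rather than coded; the sketch only shows the kind of logic a windowed-aggregation node followed by an alert node performs.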

Build event processing applications in minutes using only a web browser – and without writing a line of code.