AlgoX2 isn't a library you integrate — it's the foundation you build on. You write the modules that matter: matchers, risk engines, surveillance, analytics. AlgoX2 handles everything underneath — ordering, replication, storage, fan-out, and recovery.

Every component in your system is a stream consumer and producer. Components don't talk to each other directly — they communicate through the ordered stream. That means they can be developed, deployed, tested, and recovered independently.

The pattern: read, compute, write

Each module reads from streams, does work, and writes to streams. AlgoX2 guarantees the ordering, durability, and delivery. A matching engine reads orders and writes fills. A clearing module reads fills and writes settlement instructions. A surveillance system reads both. A market data publisher reads fills and publishes to external consumers. All from the same source of truth.
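The read-compute-write pattern can be sketched as a tiny matcher module. This is a minimal illustration, not an AlgoX2 API: the message fields (`id`, `qty`, `px`) and the fill-everything matching logic are made up for the example; only the stdin-in, stdout-out shape comes from the document.

```python
import sys
import json

def on_order(order):
    # Toy matching logic: fill the whole order at its limit price.
    # The field names (id, qty, px) are illustrative, not an AlgoX2 schema.
    return {"order_id": order["id"], "filled_qty": order["qty"], "px": order["px"]}

def main(stdin=sys.stdin, stdout=sys.stdout):
    # The ordered order stream arrives on stdin, one JSON message per line;
    # everything written to stdout is routed onward as the fills stream.
    for line in stdin:
        stdout.write(json.dumps(on_order(json.loads(line))) + "\n")

if __name__ == "__main__":
    main()
```

A clearing module, a surveillance module, or a market data publisher would have exactly the same shape; only `on_order` changes.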

Recovery is replay. If your matcher crashes, it restarts and replays the order stream from the last checkpoint. It arrives at the same state deterministically — because the ordering is guaranteed by the sequencer. No reconciliation logic. No divergence between systems. This is the same principle behind database write-ahead logs, filesystem journals, and blockchain ledgers — applied to real-time streaming at hardware speed.
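Why replay needs no reconciliation logic: if the state transition is a deterministic function of the ordered events, replaying the same events from the same checkpoint always lands on the same state. A minimal sketch, with an invented position-tracking state and `seq`/`sym`/`qty` event fields used purely for illustration:

```python
def apply(state, event):
    # Deterministic transition: same events in the same order => same state.
    state[event["sym"]] = state.get(event["sym"], 0) + event["qty"]
    return state

def recover(events, checkpoint_state, checkpoint_seq):
    # Replay everything after the last checkpoint. No reconciliation is
    # needed because the sequencer fixed the event order before the crash.
    state = dict(checkpoint_state)
    for ev in events:
        if ev["seq"] > checkpoint_seq:
            state = apply(state, ev)
    return state
```

Replaying from sequence 0 with empty state, or from any checkpoint, converges on the identical result; that equivalence is the whole recovery story.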

EXEC: run your code inside the cluster

Write a program in any language — Python, Rust, C++, Go, Scala — and the cluster executes it at scale. Messages arrive via the shared memory bus in microseconds. Your program reads from stdin, writes to stdout. AlgoX2 handles scheduling, fan-out, failover, and output routing.
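The full EXEC contract, then, is just "read stdin, write stdout". As a sketch, here is a market data decoder of the kind the table below mentions; the pipe-delimited wire format (`SYM|PX|QTY`) is a made-up example, not a real feed protocol:

```python
import sys
import json

def decode(raw):
    # Parse one pipe-delimited tick into JSON.
    # The SYM|PX|QTY wire format is hypothetical, for illustration only.
    sym, px, qty = raw.strip().split("|")
    return {"sym": sym, "px": float(px), "qty": int(qty)}

def main():
    for line in sys.stdin:                                   # delivered by the cluster
        sys.stdout.write(json.dumps(decode(line)) + "\n")    # routed onward
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

The same binary that runs as 48 external filter processes elsewhere becomes one cluster-managed program here.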

No external process fleets. No data shipping. No serialization overhead.

| What you're building | Without EXEC | With EXEC |
| --- | --- | --- |
| Market data decoder | 48 external filter processes, separate ops | One decoder binary, cluster-managed |
| Strategy backtesting | External compute fleet, data shipping | Strategy runs next to data, multicast-fed |
| Format conversion | Connector + transformer + connector | Inline converter, zero network hops |
| Custom surveillance | Separate application, separate deployment | Runs inside the stream path |
| Smart subscriptions | Fat topics + downstream filtering | Server-side filtering, clients receive only what they need |
| Risk engine | Standalone service, network latency, separate scaling | Co-located with data, shared-memory delivery |
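The smart-subscriptions row is worth making concrete: instead of shipping a fat topic and filtering downstream, the predicate runs server-side inside an EXEC process. A minimal sketch; the subscription set and the `sym` field are invented for the example:

```python
import sys
import json

SUBSCRIBED = {"AAPL", "MSFT"}   # hypothetical per-client subscription set

def wanted(msg):
    # Server-side predicate: messages the client never asked for are
    # dropped here, before anything crosses the wire.
    return msg.get("sym") in SUBSCRIBED

def main():
    for line in sys.stdin:
        if wanted(json.loads(line)):
            sys.stdout.write(line)

if __name__ == "__main__":
    main()
```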

How it works:

  1. Deploy your binary or script to the cluster
  2. AlgoX2 launches it as a managed process with X2 streams as stdin/stdout
  3. Messages arrive via shared memory — microsecond delivery
  4. Output routes to any stream, consumer, or storage target
  5. Consumer groups supported — scale horizontally without managing process fleets
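One way to picture step 5: members of a consumer group can divide the key space by stable hashing, so each EXEC instance owns a disjoint slice of the stream and instances scale horizontally without coordinating. This is an illustration of the general partitioning idea, not a description of AlgoX2's actual assignment mechanism:

```python
import zlib

def owner(key, group_size):
    # Stable hash partitioning: a key always maps to the same slot
    # for a given group size, so ownership is deterministic.
    return zlib.crc32(key.encode()) % group_size

def mine(key, my_slot, group_size):
    # Each instance processes only the keys that hash to its slot.
    return owner(key, group_size) == my_slot
```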

EXEC uses spare CPU and memory capacity on existing nodes. No additional infrastructure. Any language that reads stdin and writes stdout works. Most teams start with external consumers via Kafka or NATS and move performance-critical components to EXEC as they scale.

Get started

AlgoX2 runs on-prem, in your cloud, or hybrid. A single-node dev instance is one command: x2sup -temp. Connect your existing Kafka producers and consumers — zero code changes — and start building.