AlgoX2 isn't a library you integrate — it's the foundation you build on. You write the modules that matter: matchers, risk engines, surveillance, analytics. AlgoX2 handles everything underneath — ordering, replication, storage, fan-out, and recovery.
Every component in your system is a stream consumer and producer. Components don't talk to each other directly — they communicate through the ordered stream. That means they can be developed, deployed, tested, and recovered independently.
The pattern: read, compute, write
Each module reads from streams, does work, and writes to streams. AlgoX2 guarantees the ordering, durability, and delivery. A matching engine reads orders and writes fills. A clearing module reads fills and writes settlement instructions. A surveillance system reads both. A market data publisher reads fills and publishes to external consumers. All from the same source of truth.
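The read-compute-write loop is just a plain process over its input and output streams. A minimal sketch of a matcher-shaped module in Python — the JSON event fields (`order_id`, `qty`, `price`) and the `fill` output shape are illustrative assumptions, not AlgoX2's actual wire format:

```python
import json
import sys

def process(order: dict) -> dict:
    """Toy 'matcher': turn an order event into a fill event.
    The field names here are hypothetical."""
    return {
        "type": "fill",
        "order_id": order["order_id"],
        "qty": order["qty"],
        "price": order["price"],
    }

def main(inp=sys.stdin, out=sys.stdout):
    # Read one JSON event per line from the input stream,
    # write one JSON event per line to the output stream.
    # Ordering, durability, and delivery are the platform's job.
    for line in inp:
        line = line.strip()
        if not line:
            continue
        order = json.loads(line)
        out.write(json.dumps(process(order)) + "\n")
```

Run under EXEC, `main()` would see the order stream on stdin and its fills would route onward as another stream; the clearing and surveillance modules described above would be the same loop with a different `process`.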
Recovery is replay. If your matcher crashes, it restarts and replays the order stream from the last checkpoint. It arrives at the same state deterministically — because the ordering is guaranteed by the sequencer. No reconciliation logic. No divergence between systems. This is the same principle behind database write-ahead logs, filesystem journals, and blockchain ledgers — applied to real-time streaming at hardware speed.
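Replay-based recovery rests on one property: folding the same ordered events into the same initial state always yields the same final state. A minimal sketch — the event shape and the positions-by-symbol state are invented for illustration:

```python
def apply(state: dict, event: dict) -> dict:
    """Pure state transition: open position per symbol.
    Determinism requires no wall clocks, randomness, or I/O here."""
    state[event["symbol"]] = state.get(event["symbol"], 0) + event["qty"]
    return state

def replay(events) -> dict:
    """Rebuild state from scratch by folding the ordered stream."""
    state = {}
    for event in events:
        state = apply(state, event)
    return state

events = [
    {"symbol": "AAPL", "qty": 100},
    {"symbol": "AAPL", "qty": -40},
    {"symbol": "MSFT", "qty": 25},
]

# Two independent replays of the same ordered stream converge on
# identical state -- this is why no reconciliation logic is needed.
assert replay(events) == replay(list(events)) == {"AAPL": 60, "MSFT": 25}
```

Checkpointing is then an optimization, not a correctness requirement: a restarted module could replay from the beginning and land in the same place, just more slowly.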
EXEC: run your code inside the cluster
Write a program in any language — Python, Rust, C++, Go, Scala — and the cluster executes it at scale. Messages arrive via the shared memory bus in microseconds. Your program reads from stdin and writes to stdout; AlgoX2 handles scheduling, fan-out, failover, and output routing.
No external process fleets. No data shipping. No serialization overhead.
| What you're building | Without EXEC | With EXEC |
|---|---|---|
| Market data decoder | 48 external filter processes, separate ops | One decoder binary, cluster-managed |
| Strategy backtesting | External compute fleet, data shipping | Strategy runs next to data, multicast-fed |
| Format conversion | Connector + transformer + connector | Inline converter, zero network hops |
| Custom surveillance | Separate application, separate deployment | Runs inside the stream path |
| Smart subscriptions | Fat topics + downstream filtering | Server-side filtering, clients receive only what they need |
| Risk engine | Standalone service, network latency, separate scaling | Co-located with data, shared-memory delivery |
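The "inline converter" row above is just another stdin/stdout program. A sketch of a CSV-to-JSON converter in that shape — the column names and types are assumptions for illustration:

```python
import csv
import json

def convert(inp, out):
    # Parse headerless CSV records from the input stream and emit
    # one JSON object per line on the output stream. Under EXEC this
    # would run inline in the stream path: no connectors, no hops.
    reader = csv.DictReader(inp, fieldnames=["symbol", "price", "qty"])
    for row in reader:
        row["price"] = float(row["price"])
        row["qty"] = int(row["qty"])
        out.write(json.dumps(row) + "\n")
```

The same skeleton covers the decoder and smart-subscription rows: parse, transform or filter, emit, and let the platform handle where the output goes.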
How it works:
- Deploy your binary or script to the cluster
- AlgoX2 launches it as a managed process with X2 streams as stdin/stdout
- Messages arrive via shared memory — microsecond delivery
- Output routes to any stream, consumer, or storage target
- Consumer groups supported — scale horizontally without managing process fleets
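The consumer-group point can be illustrated with the usual key-hash partitioning scheme — a generic sketch, not AlgoX2's actual assignment algorithm: each message key maps deterministically to one member of the group, so adding members redistributes load while all messages for a key stay in order on one worker.

```python
import hashlib

def assign(key: str, group_size: int) -> int:
    """Map a message key to a group member deterministically.
    A stable hash means every node computes the same answer,
    so no coordination is needed in application code."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % group_size

# All messages for one key land on the same member,
# preserving per-key ordering across the group.
keys = ["AAPL", "MSFT", "AAPL", "GOOG"]
owners = [assign(k, group_size=4) for k in keys]
assert owners[0] == owners[2]
```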
EXEC uses spare CPU and memory capacity on existing nodes. No additional infrastructure. Any language that reads stdin and writes stdout works. Most teams start with external consumers via Kafka or NATS and move performance-critical components to EXEC as they scale.
Get started
AlgoX2 runs on-prem, in your cloud, or hybrid. A single-node dev instance is one command: x2sup -temp. Connect your existing Kafka producers and consumers — zero code changes — and start building.