The Specialist Ensemble
Three products, one philosophy: specialization beats scale. We built the tools we wished existed when we were fighting with C++ codebases and praying to the linker gods.
FAISS Extended
Vector search that doesn't make you wait
We took Facebook's FAISS and gave it superpowers. Sorted inverted lists for early termination. TBB parallelism that actually scales. Pluggable storage backends so you can run on S3, Azure, or your trusty NVMe drives. And because it's a drop-in replacement, your existing FAISS code carries over unchanged; see the sketch after the feature list.
Key Features
- 3-5x faster search than stock FAISS, thanks to sorted inverted lists with early termination
- Intel TBB integration (goodbye OpenMP limitations)
- S3, Azure, GCS, and local storage backends
- Transparent compression (LZ4, ZSTD)
- Thread-safe concurrent operations
- Drop-in replacement for standard FAISS
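A taste of what "drop-in" means in practice: the sketch below is plain, stock FAISS C++ (an IVF index over a flat L2 quantizer), and the assumption is that it compiles and runs against FAISS Extended unchanged, with sorted lists and alternative storage backends layering in underneath. The parameter values are illustrative, not tuned.

```cpp
// Minimal sketch using the stock FAISS C++ API. "Drop-in" means this exact
// code is assumed to build and run against FAISS Extended unchanged.
#include <faiss/IndexFlat.h>
#include <faiss/IndexIVFFlat.h>

#include <random>
#include <vector>

int main() {
    const int d = 128;       // vector dimensionality
    const int nlist = 1024;  // number of inverted lists (clusters)
    const int nb = 100000;   // database vectors
    const int nq = 10;       // query vectors
    const int k = 5;         // neighbours per query

    // Random data stands in for real embeddings.
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> dist(0.f, 1.f);
    std::vector<float> xb(static_cast<size_t>(nb) * d), xq(static_cast<size_t>(nq) * d);
    for (auto& v : xb) v = dist(rng);
    for (auto& v : xq) v = dist(rng);

    // Standard IVF index: a flat L2 quantizer plus inverted lists.
    faiss::IndexFlatL2 quantizer(d);
    faiss::IndexIVFFlat index(&quantizer, d, nlist);
    index.train(nb, xb.data());
    index.add(nb, xb.data());

    // Search a handful of lists; early termination on sorted lists is where
    // the claimed speedup over stock FAISS would come from.
    index.nprobe = 16;
    std::vector<float> distances(static_cast<size_t>(nq) * k);
    std::vector<faiss::idx_t> labels(static_cast<size_t>(nq) * k);  // idx_t is int64_t
    index.search(nq, xq.data(), k, distances.data(), labels.data());
    return 0;
}
```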
MLGraph
Turbopuffer vibes, but you own the hardware
A distributed vector database inspired by the best, built for the real world. Tiered storage from memory through NVMe to SSD. Mirror group replication for reliability. Distributed centroids that actually make sense. The tiering idea is sketched after the feature list.
Key Features
- Distributed centroid architecture
- Mirror groups with tunable consistency
- Tiered storage: memory → NVMe → SSD
- Enterprise auth with JWT and MFA
- gRPC and REST APIs
- Automatic failover and recovery
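MLGraph's own API isn't shown on this page, so this is only a toy sketch of the tiering idea from the list above: reads fall through from the hot tier to the colder ones, and a hit gets promoted back up so the next read stays fast. Every type and function name below is hypothetical, and in the real system the warm and cold tiers would sit on NVMe and SSD rather than in-memory maps.

```cpp
// Illustrative only: a toy read path for memory -> NVMe -> SSD tiering.
// None of these types exist in MLGraph; they are hypothetical stand-ins.
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

using VectorId = std::uint64_t;
using Embedding = std::vector<float>;

// One storage tier; an in-memory map stands in for the actual medium.
struct Tier {
    std::string name;  // "memory", "nvme", "ssd"
    std::unordered_map<VectorId, Embedding> data;

    std::optional<Embedding> get(VectorId id) const {
        auto it = data.find(id);
        if (it == data.end()) return std::nullopt;
        return it->second;
    }
    void put(VectorId id, Embedding e) { data[id] = std::move(e); }
};

// Read-through lookup: try each tier fastest-first, then promote a hit into
// the faster tiers so subsequent reads stay hot.
std::optional<Embedding> tiered_get(std::array<Tier, 3>& tiers, VectorId id) {
    for (std::size_t i = 0; i < tiers.size(); ++i) {
        if (auto hit = tiers[i].get(id)) {
            for (std::size_t j = 0; j < i; ++j) {
                tiers[j].put(id, *hit);  // promotion
            }
            return hit;
        }
    }
    return std::nullopt;  // miss in every tier
}

int main() {
    std::array<Tier, 3> tiers{Tier{"memory"}, Tier{"nvme"}, Tier{"ssd"}};
    tiers[2].put(7, Embedding{0.1f, 0.2f, 0.3f});  // vector lives only on the cold tier
    auto first = tiered_get(tiers, 7);   // served from "ssd", promoted upward
    auto second = tiered_get(tiers, 7);  // now served from "memory"
    return first && second ? 0 : 1;
}
```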
SLM Ensemble
Specialists beat generalists. Always.
A coordinated orchestra of 7B-13B models, each an expert in its domain. Custom C++ tokenizer (sketched after the feature list). 100B token training corpus. Integration with real debuggers, because guessing what code does is for amateurs.
Key Features
- Specialized C++ tokenizer (std::vector is ONE token)
- 100B token corpus with developer comments
- gdb/rr debugger integration
- Multi-agent orchestration
- < 300ms inference latency
- Runs on consumer GPUs (RTX 4090/5090)
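Why one token for std::vector matters: a generic byte-pair tokenizer shreds qualified names and operators into fragments, which wastes context and blurs meaning. The toy sketch below shows the underlying idea as a greedy longest-match over a small C++ symbol vocabulary; the actual tokenizer isn't shown on this page, so every name here is made up for illustration.

```cpp
// Illustrative only: a greedy longest-match pass over a tiny C++ symbol
// vocabulary, showing why "std::vector is ONE token" matters. Everything
// here is hypothetical; a real tokenizer would also cover identifiers,
// keywords, literals, and so on.
#include <iostream>
#include <string>
#include <vector>

// Toy vocabulary of multi-character C++ units the tokenizer must not split.
static const std::vector<std::string> kSymbolVocab = {
    "std::vector", "std::string", "std::unique_ptr", "::", "->", "<", ">",
};

// Greedy longest-match: at each position, take the longest vocabulary entry
// that matches; otherwise fall back to a single character.
std::vector<std::string> tokenize(const std::string& src) {
    std::vector<std::string> tokens;
    size_t pos = 0;
    while (pos < src.size()) {
        size_t best_len = 0;
        for (const auto& sym : kSymbolVocab) {
            if (sym.size() > best_len && src.compare(pos, sym.size(), sym) == 0) {
                best_len = sym.size();
            }
        }
        if (best_len == 0) best_len = 1;  // unknown byte: emit as-is
        tokens.push_back(src.substr(pos, best_len));
        pos += best_len;
    }
    return tokens;
}

int main() {
    // "std::vector" survives as a single token instead of being split into
    // fragments like "std", "::", "vector".
    for (const auto& t : tokenize("std::vector<int> v;")) {
        std::cout << "[" << t << "] ";
    }
    std::cout << "\n";
}
```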
The Whole Is Greater Than the Sum of Its Parts
Our products are designed to work together. Use FAISS Extended as the storage layer for MLGraph. Feed your SLM Ensemble with context from your vector database. It's a symphony of specialization.
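Concretely, "feed your SLM Ensemble with context from your vector database" is the classic retrieve-then-prompt loop. The sketch below shows that glue with hypothetical stand-ins for both clients; neither MLGraph's nor the ensemble's real interface appears on this page, so treat every name as an assumption.

```cpp
// Illustrative only: the retrieve-then-prompt glue implied above.
// Both clients are hypothetical stubs, not real product APIs.
#include <string>
#include <vector>

struct Passage {
    std::string text;
    float score;
};

// Hypothetical stand-in for a vector search client (e.g. MLGraph backed by
// a FAISS Extended index). The stub returns a canned passage.
struct VectorStore {
    std::vector<Passage> search(const std::string& /*query*/, int /*k*/) {
        return {{"// relevant code snippet retrieved from the index", 0.92f}};
    }
};

// Hypothetical stand-in for one specialist model in the ensemble.
struct CodeSpecialist {
    std::string complete(const std::string& prompt) {
        return "(completion for a " + std::to_string(prompt.size()) + "-char prompt)";
    }
};

// Retrieve top-k context, then hand it to the specialist inside the prompt.
std::string answer_with_context(VectorStore& store, CodeSpecialist& model,
                                const std::string& question) {
    std::string prompt = "Context:\n";
    for (const auto& p : store.search(question, /*k=*/5)) {
        prompt += p.text + "\n---\n";
    }
    prompt += "Question: " + question + "\nAnswer:";
    return model.complete(prompt);
}

int main() {
    VectorStore store;
    CodeSpecialist model;
    std::string out = answer_with_context(store, model, "Why does this segfault?");
    return out.empty() ? 1 : 0;
}
```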