High-Level Architecture
Meridian is a containerized stack that runs entirely within client infrastructure. External API calls to transcription, embedding, and reasoning providers are stateless — data is sent out for processing and results are returned, but nothing persists outside the client's environment.
System Diagram
Component Reference
Six containers (C1–C6) run within client infrastructure; three external services handle transcription, reasoning, and object storage via stateless API calls.
Client Infrastructure
C1. Primary application server — dashboard, search, API routes
C2. Async processing — transcription, embedding, agent jobs
C3. Semantic knowledge store for meaning-based retrieval
C4. Job queues, caching, pub/sub, session management
C5. Structured data — users, metadata, configs, access control
C6. Playwright-based browser bots for meeting capture (scalable ×N)
External Services
Transcription: Deepgram Nova-2 or AssemblyAI
Reasoning: Claude (Anthropic) or GPT-4o (OpenAI)
Object storage (S3-compatible): Cloudflare R2 / AWS S3 / GCS
Key Architecture Decisions
All application containers, databases, and the vector store run on the client's own servers. No meeting data is stored with external providers.
The entire stack ships as a Docker Compose configuration — suitable for a single server or a small Kubernetes cluster.
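The shape of that configuration can be sketched as follows. This is an illustrative Compose fragment only — the service names, images, and ports are placeholders, not Meridian's actual configuration:

```yaml
# Illustrative sketch of the six-container stack (C1–C6).
# All names and images below are placeholders.
services:
  web:            # C1 — dashboard, search, API routes
    image: meridian/web:latest
    ports: ["8080:8080"]
    depends_on: [queue, db]
  worker:         # C2 — async transcription/embedding/agent jobs
    image: meridian/worker:latest
    depends_on: [queue, db]
  vector-store:   # C3 — semantic knowledge store
    image: meridian/vector-store:latest
  queue:          # C4 — job queues, caching, pub/sub, sessions
    image: meridian/queue:latest
  db:             # C5 — users, metadata, configs, access control
    image: meridian/db:latest
  meeting-bot:    # C6 — one concurrent meeting per container
    image: meridian/bot:latest
    deploy:
      replicas: 2
```

With a layout like this, bot capacity can be raised without touching the rest of the stack, e.g. `docker compose up -d --scale meeting-bot=5`.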
Transcription, embedding, and reasoning calls are stateless — audio or text is sent for processing and results are returned. No data is persisted at the provider.
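A minimal sketch of what such a stateless call looks like from the client side — audio goes out in the request body, the transcript comes back in the response, and the provider retains nothing. The endpoint URL and header names here are hypothetical, not any specific provider's API; consult the chosen provider's documentation for the real shape:

```python
import urllib.request

# Hypothetical endpoint — a stand-in for Deepgram, AssemblyAI, etc.
TRANSCRIBE_URL = "https://api.example-stt.com/v1/transcribe"

def build_transcription_request(audio: bytes, api_key: str) -> urllib.request.Request:
    """Build a one-shot transcription request.

    The call is stateless: the provider processes the audio and returns
    the transcript; no meeting data is persisted on the provider's side.
    """
    return urllib.request.Request(
        TRANSCRIBE_URL,
        data=audio,  # raw audio in the request body, nothing stored remotely
        headers={
            "Authorization": f"Token {api_key}",
            "Content-Type": "audio/wav",
        },
        method="POST",
    )

req = build_transcription_request(b"\x00\x01", api_key="...")
```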
Each Meeting Bot container (C6) handles one concurrent meeting. Scaling means spinning up additional containers — no shared state between bot instances.
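The one-meeting-per-bot model can be sketched with plain Python objects standing in for containers (class and field names are illustrative): each instance owns exactly one meeting and all of its own state, so scaling is simply creating more instances.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class MeetingBot:
    """Stands in for one C6 container: owns exactly one meeting and
    keeps all of its state locally — nothing is shared between bots."""
    meeting_url: str
    bot_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    transcript_chunks: list = field(default_factory=list)  # per-bot state

def spawn_bots(meeting_urls: list[str]) -> list[MeetingBot]:
    # Scaling model: one fresh instance per concurrent meeting.
    return [MeetingBot(url) for url in meeting_urls]

bots = spawn_bots(["https://meet.example/alpha", "https://meet.example/beta"])
```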