Section 3

High-Level Architecture

Meridian is a containerized stack that runs entirely within client infrastructure. External API calls to transcription, embedding, and reasoning providers are stateless — no persistent data leaves the client's environment.

System Diagram

[Diagram not reproduced: six client-side containers (C1–C6) making stateless API calls to three external services]

Component Reference

Six containers run within client infrastructure (C1–C6). Three external services handle AI processing via stateless API calls.

Client Infrastructure

C1  Web Node: Primary application server — dashboard, search, API routes
C2  Job Runner: Async processing — transcription, embedding, agent jobs
C3  Vector DB (Qdrant): Semantic knowledge store for meaning-based retrieval
C4  Redis: Job queues, caching, pub/sub, session management
C5  PostgreSQL: Structured data — users, metadata, configs, access control
C6  Meeting Bots: Playwright-based browser bots for meeting capture (scalable ×N)

External Services

Transcription API (external): Deepgram Nova-2 or AssemblyAI
LLM Provider (external): Claude (Anthropic) or GPT-4o (OpenAI)
Blob Storage (external): S3-compatible (Cloudflare R2 / AWS S3 / GCS)

Key Architecture Decisions

Everything runs on client infrastructure

All application containers, databases, and the vector store run on the client's own servers. No meeting data is stored with external providers.

Docker Compose deployment

The entire stack ships as a Docker Compose configuration — suitable for a single server or a small Kubernetes cluster.
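A minimal Compose skeleton for the six containers might look like the following. Service names, image tags, ports, and volume names here are illustrative assumptions, not the shipped configuration:

```yaml
# Illustrative docker-compose.yml sketch -- names, images, and ports
# are placeholders standing in for the actual Meridian configuration.
services:
  web:            # C1 -- dashboard, search, API routes
    image: meridian/web:latest
    ports: ["443:8443"]
    depends_on: [postgres, redis, qdrant]
  worker:         # C2 -- transcription, embedding, agent jobs
    image: meridian/worker:latest
    depends_on: [postgres, redis, qdrant]
  qdrant:         # C3 -- semantic knowledge store
    image: qdrant/qdrant:latest
    volumes: [qdrant-data:/qdrant/storage]
  redis:          # C4 -- queues, cache, pub/sub, sessions
    image: redis:7
  postgres:       # C5 -- users, metadata, configs, access control
    image: postgres:16
    volumes: [pg-data:/var/lib/postgresql/data]
  meeting-bot:    # C6 -- one concurrent meeting per container
    image: meridian/meeting-bot:latest
volumes:
  qdrant-data:
  pg-data:
```

Because all persistent services declare local volumes, state stays on the host; only the stateless API traffic described below leaves the environment.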

External APIs used only for processing

Transcription, embedding, and reasoning calls are stateless — audio or text is sent for processing and results are returned. No data is persisted at the provider.
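The stateless shape of these calls can be sketched in Python. The endpoint and `Token` authorization header follow Deepgram's public API shape, but the helper function, model parameter, and key name are assumptions for illustration only:

```python
import urllib.request

# Hypothetical helper illustrating a stateless transcription call.
# The URL follows Deepgram's public API shape; the model parameter
# is an assumption for illustration.
DEEPGRAM_URL = "https://api.deepgram.com/v1/listen?model=nova-2"

def build_transcription_request(audio: bytes, api_key: str) -> urllib.request.Request:
    """Package raw audio into a single one-shot HTTP request.

    Audio goes out in the request body, a transcript comes back in the
    response, and nothing persists at the provider: there is no session
    or upload ID to clean up afterwards.
    """
    return urllib.request.Request(
        DEEPGRAM_URL,
        data=audio,                                # request body: raw audio bytes
        headers={
            "Authorization": f"Token {api_key}",   # per-request auth, no stored session
            "Content-Type": "audio/wav",
        },
        method="POST",
    )

req = build_transcription_request(b"\x00" * 16, "DG_KEY")
print(req.full_url)        # https://api.deepgram.com/v1/listen?model=nova-2
print(req.get_method())    # POST
```

The same request/response pattern applies to the embedding and reasoning calls: each is a self-contained exchange, so revoking the API key or the network route is sufficient to sever the provider entirely.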

Meeting bots scale horizontally

Each Meeting Bot container (C6) handles one concurrent meeting. Scaling means spinning up additional containers — no shared state between bot instances.
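With Docker Compose v2, this scale-out can be expressed declaratively. The service name, image, and replica count below are assumptions for illustration:

```yaml
# Illustrative fragment -- service and image names are placeholders.
services:
  meeting-bot:
    image: meridian/meeting-bot:latest
    deploy:
      replicas: 4   # four concurrent meetings; replicas share no state
```

Equivalently, replicas can be adjusted at runtime with `docker compose up -d --scale meeting-bot=4`; since the bots hold no shared state, replicas can be added or removed without coordination.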