Meridian Meeting Intelligence Platform
A self-hosted meeting intelligence platform that captures, transcribes, embeds, and reasons over meeting content — surfacing insights at three distinct intelligence layers aligned to organizational hierarchy.
Reader Guidance
There are four performance and scaling profiles in this system. One of them (PostgreSQL, Redis, and the Web Node) is negligible: use default settings and don't think about it further. That leaves three components a sysadmin or SRE actually needs to understand when deploying and operating the system:
- Async processing for transcription, embedding, and agent jobs: bursty, mostly I/O-bound.
- The only storage component that scales with corpus size, and the one thing you may eventually need to tune.
- Stateless, horizontally scalable: one container per concurrent meeting.
If you only want to deploy and operate this system, skip to Section 4.7 (Performance Profiles) and Deployment. The rest of this document exists so that if you technically wanted to go build this yourself, you probably could.
System Overview
Meridian is a self-hosted meeting intelligence platform built for GH Systems. The platform is delivered as a containerized stack (Docker Compose) — all application containers, databases, and the vector store run within the client's infrastructure.
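A Compose file for this stack might be laid out as follows. This is a sketch only: the service names, image names, and the choice of Qdrant as the vector store are illustrative assumptions, not the actual Meridian manifest.

```yaml
# Hypothetical service layout mirroring the four performance profiles
# from Reader Guidance. Names and images are assumptions.
services:
  web:
    image: meridian/web:latest        # stateless web node, default settings
    ports: ["3000:3000"]
    depends_on: [postgres, redis]
  worker:
    image: meridian/worker:latest     # async transcription/embedding/agent jobs
    depends_on: [redis, postgres]
  vector-store:
    image: qdrant/qdrant:latest       # example; any self-hosted vector DB
    volumes: ["vector-data:/qdrant/storage"]
  postgres:
    image: postgres:16
  redis:
    image: redis:7

volumes:
  vector-data:                        # the only volume that grows with corpus size
```

Everything runs inside the client's network boundary; no application data leaves their infrastructure.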
Pipeline: Capture & Transcribe → Embed & Index → Reason & Surface. All stages run inside the client infrastructure.
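A minimal sketch of the hand-off between the first two stages, under assumed names (TranscriptSegment, chunkTranscript are illustrative, not the real codebase): transcription produces speaker-attributed segments, which are grouped into character-bounded chunks before embedding.

```typescript
// Illustrative types for the pipeline stages; all names are assumptions.
interface TranscriptSegment {
  speaker: string;
  startMs: number;
  text: string;
}

// The three intelligence layers aligned to organizational hierarchy
// (layer names assumed for illustration).
type InsightLayer = "individual" | "team" | "organization";

// Stage 2 pre-processing: group segments into character-bounded chunks
// so each chunk fits the embedding model's input budget.
function chunkTranscript(
  segments: TranscriptSegment[],
  maxChars = 200,
): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const seg of segments) {
    const line = `${seg.speaker}: ${seg.text}`;
    if (current && current.length + line.length + 1 > maxChars) {
      chunks.push(current); // current chunk is full; start a new one
      current = line;
    } else {
      current = current ? `${current}\n${line}` : line;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

The chunking boundary is where the I/O-bound worker profile shows up: each chunk becomes one embedding call, so chunk size trades request count against per-request payload.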
Technology Philosophy
TypeScript End-to-End
IS4's default stack is TypeScript end-to-end, frontend and backend. This gives us a single language across the entire codebase, shared type definitions between client and server, and a unified toolchain for linting, testing, and deployment.
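The shared-types benefit looks like this in practice. A hedged sketch with assumed names (MeetingSummary, getSummaryHandler): one interface definition is imported by both the server handler and the client, so the contract cannot drift silently.

```typescript
// shared/types.ts — one definition consumed by both client and server
// (names are illustrative, not the actual Meridian module layout).
export interface MeetingSummary {
  meetingId: string;
  title: string;
  actionItems: string[];
}

// Server side: the handler's return type is the shared interface.
function getSummaryHandler(): MeetingSummary {
  return {
    meetingId: "m-1",
    title: "Weekly sync",
    actionItems: ["ship v2"],
  };
}

// Client side: the same interface types the response; renaming a field
// in shared/types.ts is a compile error on both sides, not a runtime bug.
const summary: MeetingSummary = getSummaryHandler();
```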
- Default for its ecosystem and deployment breadth; adaptable to any TypeScript-compatible framework (Vue, Svelte, Angular, Solid) if GH Systems has an existing preference.
- Preferred for SSR, API routes, and a unified build pipeline; adaptable to Fastify, Express, Remix, or Nuxt if preferred.
Two exceptions to TypeScript:
- Python, when a component requires deep integration with ML tooling where Python has an overwhelming library advantage (PyTorch, HuggingFace, etc.). For Meridian, this applies only if self-hosted transcription or embedding models require custom fine-tuning. Calling external APIs does not trigger this exception; TypeScript SDKs are first-class for all major providers.
- Go, if profiling reveals a bottleneck where Node.js throughput or latency is insufficient: we surgically replace that component. This is a reactive decision based on measured constraints, not a preemptive one.