Helix: Conversational AI Platform (WhatsApp + Web)
Helix is a production-grade conversational AI assistant delivered through WhatsApp and backed by a React web portal. It handles finance, travel, reminders, collaborative boards, and connected apps, powered by a multi-agent LangGraph orchestration layer with three-layer persistent memory and more than fifty integrations.
6
AI Agents (domain subgraphs)
50+
OAuth Integrations
<100ms
Memory retrieval (target path)
4
LLM Providers (hot-swappable)

Client
Northline Labs
Industry
Conversational AI / Consumer Productivity / Messaging Platforms
Timeline
Production deployment
Team Size
2
Year
2026
Status
Live
01 / THE CHALLENGE
The Challenge
Users needed a personal AI assistant accessible through WhatsApp, one that could remember past conversations, manage expenses, set reminders, book travel, and connect to Google apps without learning any new interface.
Consumer chat apps are fragmented: switching between banking apps, calendars, and travel sites breaks flow. A single conversational surface had to feel instant, trustworthy on memory, and extensible enough to grow new capabilities without rebuilding the core stack each time a new SaaS needed wiring in.
02 / OUR APPROACH
Our Approach
Six domain-specific LangGraph agent subgraphs
Finance, travel, reminders, boards, connected apps, and a general-purpose fallback each run as a dedicated subgraph, so routing stays predictable, prompts stay scoped, and a failure in one domain does not cascade into the others.
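The per-domain dispatch described above can be sketched as a plain routing table. This is an illustrative stand-in, not the production LangGraph code: the handler names and domains are hypothetical, and each subgraph is modeled as a simple function so the isolation property is easy to see.

```python
from typing import Callable, Dict

# Each domain subgraph is modeled here as a plain handler function.
def finance_agent(msg: str) -> str:
    return f"[finance] {msg}"

def travel_agent(msg: str) -> str:
    return f"[travel] {msg}"

def general_agent(msg: str) -> str:
    return f"[general] {msg}"

# The routing table maps a classified domain to its dedicated subgraph.
SUBGRAPHS: Dict[str, Callable[[str], str]] = {
    "finance": finance_agent,
    "travel": travel_agent,
}

def route(domain: str, msg: str) -> str:
    # Unknown or misclassified domains fall through to the general agent,
    # so a bad route in one domain never breaks an unrelated one.
    handler = SUBGRAPHS.get(domain, general_agent)
    return handler(msg)
```

Because prompts and tools live inside each handler, adding a new domain means registering one more entry rather than editing a shared monolithic prompt.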
Pluggable OAuth connector abstraction (50+ integrations)
Shipped a connector abstraction so new SaaS integrations register through adapters without touching orchestration code, enabling the 50+ integration catalog to grow while the LangGraph core stays stable.
Three-layer cognitive memory under 100ms
Combined Qdrant vector recall, PostgreSQL entity graphs, and a Redis snapshot cache so recent context, durable facts, and hot working sets each land on the right store, targeting sub-100ms retrieval on the critical path.
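The read path through the three layers follows a fastest-first fallthrough, which is what keeps the critical path under the latency target. The sketch below models each store as an in-memory dict purely for illustration; the actual system uses Redis, PostgreSQL, and Qdrant as named above.

```python
class MemoryStack:
    """Sketch of the layered read path: hot cache -> entity facts -> vector recall."""

    def __init__(self):
        self.snapshot = {}   # stands in for the Redis working-set cache
        self.entities = {}   # stands in for PostgreSQL entity-graph facts
        self.vectors = {}    # stands in for Qdrant semantic recall

    def recall(self, key):
        # Fastest, narrowest layer first; each miss falls through to a
        # slower but broader store. A cache hit never touches the others.
        for layer in (self.snapshot, self.entities, self.vectors):
            if key in layer:
                return layer[key]
        return None

    def promote(self, key, value):
        # Hot working-set items are written back to the snapshot cache
        # so repeat lookups stay on the sub-100ms path.
        self.snapshot[key] = value
```

The design choice here is that each store holds what it is best at (recency, durability, similarity), so no single database has to serve all three access patterns.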
Multi-provider LLM factory and Langfuse observability
Built a hot-swappable factory across Gemini, Groq, Claude, and OpenRouter for A/B routing and failover, with end-to-end traces in Langfuse so regressions are caught before they hit WhatsApp users.
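A provider factory that reads only configuration is what makes swaps and failover deploy-free. The following is a minimal sketch under that assumption; the provider functions are stubs, and the "flaky" provider exists only to demonstrate the failover branch.

```python
from typing import Callable, Dict

PROVIDERS: Dict[str, Callable[[str], str]] = {}

def provider(name: str):
    # Each backend registers under a config key.
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        PROVIDERS[name] = fn
        return fn
    return wrap

@provider("gemini")
def gemini_complete(prompt: str) -> str:
    return f"gemini:{prompt}"

@provider("groq")
def groq_complete(prompt: str) -> str:
    return f"groq:{prompt}"

@provider("flaky")
def flaky_complete(prompt: str) -> str:
    raise RuntimeError("provider down")

def complete(prompt: str, config: dict) -> str:
    # Routing reads only config, so an A/B swap or a failover target
    # is a config change, not a redeploy.
    try:
        return PROVIDERS[config["primary"]](prompt)
    except Exception:
        return PROVIDERS[config["fallback"]](prompt)
```

With tracing (such as Langfuse spans) wrapped around each call, regressions from a routing experiment surface in traces before users notice them.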
03 / ARCHITECTURE
Technical Architecture
04 / RESULTS
Results & Impact
- Full AWS deployment with Prometheus and Grafana for service-level observability
- Sub-100ms memory retrieval across the three-layer cognitive architecture on tuned workloads
- Production A/B testing of LLM providers via configuration-only swaps; routing experiments require no redeploy
- Flagship build demonstrating end-to-end conversational AI: WhatsApp channel, FastAPI services, LangGraph agents, durable memory, and a React analytics and boards portal
- Multi-language voice path: Sarvam AI TTS for Indian languages and Google Cloud TTS for international locales
05 / PRODUCT
Screenshots & Product

06 / USE CASES
Use Cases
Personal productivity
One WhatsApp thread for expenses, reminders, and travel instead of juggling five siloed apps
Finance hygiene
Natural-language logging and categorisation with a React portal for analytics when spreadsheets are overkill
Travel and calendar coordination
Agent subgraphs that reason over availability and confirmations with Google-connected context
Small teams
Collaborative boards plus shared memory patterns for lightweight coordination without adopting a full enterprise suite