AI Engineering & Research Studio

Disciplined orchestration
beats bigger models

We make AI boring: dependable, inspectable, and production-worthy. Murai Labs is an embedded research studio for teams building AI that needs to actually work.

Most AI teams chase parameter counts. We chase reliability. Our work sits at the intersection of rigorous research and production engineering — we build systems where every decision is inspectable, every output is traceable, and every failure mode is anticipated. Disciplined orchestration isn't a constraint. It's the architecture.
We work as an embedded AI research team — from architecture design through validated deployment. Three ways to work with us.
01

Model Fusion & Unification

Your team has fine-tuned multiple specialist models that sit in silos. We fuse them into a single, routable endpoint — validated across domains, with deterministic dispatch and measurable per-domain performance. No retraining required.

KALAVAI Protocol
02

Embedded AI Research

A senior research engineering partner embedded in your team. Architecture reviews, multi-agent system design, training strategy, evaluation rigor. We bring the discipline of published research to your production timeline.

Retainer
03

On-Device & Edge AI

Inference on Apple Neural Engine, local-first architectures, cloud-fallback routing. We've published the first open system for programming Apple's ANE directly — and we bring that depth to your edge deployment challenges.

Orion Framework
+21.76%

Cross-Lingual Fusion Gain

Tamil, Yoruba, Welsh, and Code specialists fused into a single model with validated held-out improvement across all domains.

+10.17%

Private Domain Fusion

Medical, legal, and patent specialists unified with 3-seed validation. Divergence-to-gain conversion rate of 0.55×, consistent and reproducible.

8.5×

ANE Compilation Speedup

Delta compilation for Apple Neural Engine inference — first open system bypassing CoreML for direct ANE access.

99.8%

Router Determinism

MoE dispatch routing across all KALAVAI configurations. Each specialist matched on its domain with near-perfect routing accuracy.

KALAVAI — Cooperative LLM Fusion Protocol

Divergence-proportional gain law for distributed specialist fusion via frozen-layer MoE routing
NeurIPS 2026

Orion — Programming Apple's Neural Engine

First open end-to-end system for direct ANE inference and training. arXiv 2603.06728
Published

The Coherence Trap

How multi-agent orchestration systems collapse into self-reinforcing consensus. Four metrics, three theorems.
NeurIPS 2026

ĀTAVI — Multi-Agent Research Protocol

Host-agnostic refinement protocol with adaptive agent count and anti-coherence-trap mechanisms
Shipping Soon

Vithai — Living Reflection Engine

Intergenerational memory preservation for diaspora families via voice-first bilingual capture
In Development
Ramchand Kumaresan

Founder & CEO, Murai Labs

AI research engineer with a program management background (PgMP) and deep experience shipping production software at Procore Technologies, where he worked on construction SaaS at scale — field operations, data platforms, and marketplace integrations.

Now building at the intersection of rigorous ML research and production engineering. Author of Orion (arXiv 2603.06728), the first open system for programming Apple's Neural Engine directly. Leading KALAVAI, a cooperative LLM fusion protocol targeting NeurIPS 2026. Research interests span multi-agent orchestration, on-device inference, and the governance of AI systems — grounded in classical frameworks from Tamil Sangam literature and the Arthashastra.

Based in Texas. Ships code, writes papers, builds systems that work.

PgMP · arXiv Published · NeurIPS 2026 · Ex-Procore · Apple ANE · Multi-Agent Systems

Let's make your AI
production-worthy

Whether you need specialists fused, agents orchestrated, or a research partner embedded in your team — start with a conversation.

Work With Us
[email protected]