Reachy Mini Customer Greeting Robot
April 26, 2026
Round 1
Summary: The Advocate made a strong case for the cloud LLM based on ecosystem integration and operational simplicity. The Skeptic countered with compelling arguments around latency, reliability, and cost predictability. The Realist shifted the frame entirely, arguing that the decision should be driven by operational constraints.
Key Tension: The cloud LLM offers simplicity but introduces network dependency, while the local LLM offers reliability but introduces hardware complexity.
Round 2
Summary: The Advocate landed strong attacks on the local LLM's hardware costs and operational burden. The Skeptic countered with compelling reliability and privacy arguments. The Realist delivered the most nuanced attacks, pointing out that cloud dependency isn't eliminated by going local.
Key Tension: The hardware-cost argument vs the reliability argument — both valid but pointing in opposite directions.
Round 3
Summary: All three delivered strong rebuttals. The Advocate landed a logical knockout on privacy. The Skeptic made the cost amortization argument and flipped the guardrails argument. The Realist dominated with the failover critique.
Key Tension: The guardrails argument has been turned against both sides. The Realist's failover point remains unaddressed.
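The Realist's failover point can be made concrete with a sketch: if the cloud call fails or times out, the robot should degrade to a scripted line rather than stand silent. Everything below is illustrative; `call_cloud_llm` is a hypothetical stand-in for whatever API client the deployment actually uses.

```python
import random

# Scripted lines the robot can always deliver, network or no network.
FALLBACK_GREETINGS = [
    "Welcome in! Let me know if I can help.",
    "Hi there! Feel free to look around.",
]

def greet(customer_context, call_cloud_llm):
    """Try the cloud LLM first; degrade to a canned greeting on any failure."""
    try:
        return call_cloud_llm(customer_context, timeout_s=2.0)
    except Exception:
        # Failover path: no network dependency, responds immediately.
        return random.choice(FALLBACK_GREETINGS)
```

The design choice the critique demands is that the failure branch is local and instant, so a store-floor outage degrades the experience instead of breaking it.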
Round 4
Summary: A remarkable convergence occurred — all three made substantial, honest concessions. The Skeptic's pivot is most significant: "cloud-first for single-store pilot, migrate to local for multi-store rollout" reframes the debate.
Key Tension: The Skeptic conceded cloud is pragmatic for the pilot. The Advocate's question about the single-store context cuts to the heart.
Round 5
Summary: The cross-examination questions were razor-sharp. The Skeptic exposed the multi-cloud-dependency problem. The Advocate countered with a migration-cost challenge. The Realist demanded explicit cost thresholds, including 2 a.m. support scenarios.
Key Tension: Both sides agree that cost scales with volume but disagree on break-even thresholds and on who bears the operational risk.
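The break-even dispute reduces to simple arithmetic: amortize the local hardware, add monthly upkeep, and divide by the per-request cloud price to get the monthly volume at which the two options cost the same. Every figure below is a placeholder chosen for illustration; none come from the debate.

```python
def breakeven_requests_per_month(hardware_cost, amortization_months,
                                 monthly_ops_cost, cloud_cost_per_request):
    """Monthly request volume at which local and cloud spend are equal."""
    local_monthly = hardware_cost / amortization_months + monthly_ops_cost
    return local_monthly / cloud_cost_per_request

# Hypothetical numbers: $1,500 edge box amortized over 24 months,
# $50/month upkeep, $0.002 per cloud request.
print(breakeven_requests_per_month(1500, 24, 50, 0.002))  # → 56250.0
```

Under these assumed numbers, local only wins above roughly 56k greetings a month, which is why the two sides can agree on the formula yet still disagree on the threshold.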
Conclusion: After five rounds of structured debate, the three debaters converged on a nuanced position: both cloud and local LLMs have valid use cases depending on deployment scale, team capacity, and reliability requirements.
Critical Insight: The current architecture already has three cloud dependencies (Whisper for speech-to-text, GPT-4o-mini for the LLM, and a TTS service), so going "local for the LLM" doesn't eliminate cloud risk — it just adds a fourth layer to operate. True local independence requires replacing all three.
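A sketch of the dependency count makes this concrete. The stage names below are generic stand-ins; the mapping to Whisper, GPT-4o-mini, and a TTS service follows the architecture described above.

```python
# Each stage is a network call in the current greeting pipeline.
CLOUD_STAGES = {
    "stt": "Whisper (speech -> text)",
    "llm": "GPT-4o-mini (text -> reply)",
    "tts": "TTS service (reply -> audio)",
}

def cloud_hops_remaining(localized_stages):
    """Cloud dependencies left after moving some stages on-device."""
    return [stage for stage in CLOUD_STAGES if stage not in localized_stages]

# Going "local for the LLM" removes only one of three hops:
print(cloud_hops_remaining({"llm"}))                # → ['stt', 'tts']
# True local independence means replacing all three:
print(cloud_hops_remaining({"stt", "llm", "tts"}))  # → []
```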