The AI Concierge as a Quiet Moat
When 120 Essays Became a Knowledge Base

—— The moment the chat icon appeared in the bottom right of our PWA, we realized something. The 120 essays we had been writing were not just content. They were a knowledge base. And the structure they created was a moat — built without locking anyone in.

A Chatbot in 30 Lines

It started simply. We wondered if we could run an AI chatbot inside TokiQR's PWA. The Anthropic API could be called from a browser, but CORS restrictions stood in the way. So we wrote a roughly 30-line proxy on a Cloudflare Worker.
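A minimal sketch of what such a proxy could look like. This is not TokiQR's actual Worker; the allowed origin is a placeholder, and the only real identifiers are Anthropic's public endpoint and headers. The point it illustrates is the essay's claim: the Worker checks the Origin, relays the request, and stores nothing.

```javascript
// Illustrative sketch of a CORS proxy Worker (not the real TokiQR code).
// ALLOWED_ORIGIN is an assumption; substitute the PWA's actual origin.
const ALLOWED_ORIGIN = "https://tokiqr.example.com";

// Pure helper so the one rule the proxy enforces is easy to see and test.
function isAllowedOrigin(origin) {
  return origin === ALLOWED_ORIGIN;
}

const worker = {
  async fetch(request) {
    const origin = request.headers.get("Origin") || "";
    if (!isAllowedOrigin(origin)) {
      return new Response("Forbidden", { status: 403 });
    }
    // Answer the CORS preflight the browser sends before the real call.
    if (request.method === "OPTIONS") {
      return new Response(null, {
        headers: {
          "Access-Control-Allow-Origin": origin,
          "Access-Control-Allow-Methods": "POST, OPTIONS",
          "Access-Control-Allow-Headers":
            "Content-Type, x-api-key, anthropic-version",
        },
      });
    }
    // Relay the body and the user's own key to Anthropic. Nothing is
    // logged or stored; the Worker is a pass-through.
    const upstream = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-api-key": request.headers.get("x-api-key") || "",
        "anthropic-version": "2023-06-01",
      },
      body: request.body,
    });
    const response = new Response(upstream.body, upstream);
    response.headers.set("Access-Control-Allow-Origin", origin);
    return response;
  },
};
// In a deployed Worker this object would be `export default worker;`.
```

Keeping the origin check as a standalone function makes the proxy's entire security surface a one-line comparison.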

Here's how it works. On the settings page, users enter their own Anthropic API key. The key is stored only in the browser's localStorage — it never touches our servers. A chat icon appears in the bottom right corner, and tapping it opens the assistant.
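The client side of that flow can be sketched in a few lines. The storage key name and function names below are hypothetical, not TokiQR's actual identifiers; the `storage` parameter stands in for `window.localStorage` so the dependency on the browser is explicit.

```javascript
// Illustrative sketch of client-side key handling (names are assumptions).
const STORAGE_KEY = "anthropic_api_key";

// `storage` is any object with getItem/setItem, e.g. window.localStorage.
function saveApiKey(storage, apiKey) {
  storage.setItem(STORAGE_KEY, apiKey.trim());
}

function loadApiKey(storage) {
  return storage.getItem(STORAGE_KEY) || null;
}

// In the PWA this would be called with window.localStorage, so the key
// never leaves the browser:
//   saveApiKey(localStorage, inputField.value);
```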

The proxy does nothing but enforce an Origin restriction. It holds no data. The serverless principle remains intact. The entire implementation took a single day.

The Moment Essays Became a Knowledge Base

The chatbot's system prompt contains basic information about TokiStorage and a list of key page URLs. The assistant answers questions and suggests links to relevant pages.
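A prompt like that might be assembled from a simple page index. The summary text, page titles, and URLs below are placeholders; the sketch only shows the shape of the idea: a service description plus a machine-readable list of links the assistant can suggest.

```javascript
// Illustrative page index (titles and URLs are assumptions).
const PAGES = [
  { title: "How to print QR codes", url: "https://tokiqr.example.com/essays/print-qr" },
  { title: "Tips for voice recording", url: "https://tokiqr.example.com/essays/voice-tips" },
];

// Build a system prompt from a service summary and the page list.
function buildSystemPrompt(serviceSummary, pages) {
  const pageList = pages.map((p) => `- ${p.title}: ${p.url}`).join("\n");
  return [
    serviceSummary,
    "When a question matches one of these pages, answer briefly and link to it:",
    pageList,
  ].join("\n\n");
}
```

Because the prompt is generated from the index, adding a page to the list is all it takes to teach the assistant about a new essay.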

That's when it clicked. TokiStorage has over 120 essays: how to use QR codes, tips for voice recording, the design philosophy behind storage methods, a guide to the monitor program. Each was written as a standalone article, but when placed behind an AI, they function as a structured knowledge base without any additional work.

What was written as content now doubles as an AI knowledge base. Every new essay makes the assistant smarter.

Ask "How do I print QR codes?" and the assistant responds with key points from the relevant guide, complete with links. Ask "What points can I use in a proposal to a venue?" and it summarizes insights from brochures and essays.

This Is Not an AI Wrapper

The market is flooded with "AI + X" services. Legal AI wrappers, accounting AI wrappers, customer support AI. But most are just a prompt layer on top of a general-purpose LLM. Anyone can write the same prompt and build the same thing.

TokiStorage's AI assistant is different. Behind it sits our own collection of 120+ essays — a proprietary knowledge base. This knowledge is hard to replicate, because each essay captures specific insights accumulated through actual service design.

AI is just the delivery mechanism. The value lies in the knowledge being delivered.

No matter how much Anthropic improves their general chatbot, it won't contain TokiStorage's domain knowledge. Even if a competitor adopts the same LLM, they can't build a 120-essay knowledge base overnight. The structure makes imitation inherently difficult.

A Moat Without Lock-In

What makes this interesting is the complete absence of lock-in. The API key belongs to the user. The proxy source code is public. The PWA runs offline. Data lives on the client side.

Yet for a venue staff member who starts using this assistant, it's a "cutting-edge AI tool." Questions get instant answers. Proposal ideas emerge on demand. Natural pathways to essays deepen service understanding.

There is no lock-in. And yet, there is. Not locking in becomes the strongest lock-in of all.

The strongest moat is made not of walls, but of knowledge.

A Door Opened by 30 Lines of Code

Thirty lines of Cloudflare Worker. One settings page. One chat UI file. Technically, it's astonishingly small. But behind it lies a knowledge base of 120 essays — an accumulation of time. This asymmetry is the moat itself.

Technology gets democratized. Anyone can use an LLM. That's exactly why differentiation comes not from the technology, but from the proprietary knowledge placed on top of it. By continuing to write essays, TokiStorage had been digging that moat without even realizing it.

The moment we tapped the chat icon, we blurted out: "This is incredible." The AI was responding naturally, drawing on essays written just days ago. That feeling of surprise — that's what tells you the moat runs deep.