1. How This Differs from Conventional Cloud Storage
Conventional cloud services require more storage as users grow. Store a million photos and you need terabytes. Store a million voice recordings and the numbers become staggering. Capacity scales linearly with users, and so does cost. This is the fundamental structure of cloud storage—and its inherent burden: the more a service grows, the heavier its infrastructure costs become.
TokiQR is different. User voices and images are not stored on a server. They are encoded into the URL within each QR code and exist physically on paper. All the server holds is the "player"—the decoder that receives that URL and renders the content.
| | TokiQR | Cloud Photo Service |
|---|---|---|
| Where data is stored | Inside the QR URL on paper | Central server |
| Server capacity at 1 million codes | 3 MB (decoder only) | 5 TB+ |
| Server capacity at 100 million codes | 3 MB (unchanged) | 500 TB+ |
| Monthly storage cost | ≈ $0 | $100+/TB |
Whether one million or one hundred million codes are issued, the server side stays at 3 MB. Data is distributed across physical paper, so the cloud scaling problem simply does not arise.
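The encode side of this arrangement can be sketched in a few lines. This is an illustrative round trip only: it uses Python's stdlib `zlib` as a stand-in for Brotli, and `DECODER_URL` is a hypothetical placeholder, not TokiQR's actual endpoint.

```python
import base64
import zlib

# Hypothetical decoder page URL (illustrative placeholder).
DECODER_URL = "https://tokistorage.github.io/player/"

def encode_payload(data: bytes) -> str:
    """Pack raw bytes into a URL fragment: compress, then base64url-encode.

    The fragment (after '#') never reaches the server, so the decoder
    page can stay a static file while the data travels inside the URL.
    """
    compressed = zlib.compress(data, 9)  # stdlib stand-in for Brotli
    token = base64.urlsafe_b64encode(compressed).rstrip(b"=").decode("ascii")
    return f"{DECODER_URL}#{token}"

def decode_payload(url: str) -> bytes:
    """Reverse of encode_payload: split off the fragment and decompress."""
    token = url.split("#", 1)[1]
    padded = token + "=" * (-len(token) % 4)  # restore base64 padding
    return zlib.decompress(base64.urlsafe_b64decode(padded))
```

The key property is that `decode_payload` needs no storage lookup: everything it reads arrives inside the URL itself.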
The difference goes beyond capacity. The operational model is fundamentally different.
| | Conventional (CRUD) | TokiStorage (Write-Once) |
|---|---|---|
| Operations | Create, Read, Update, Delete | Write once, never modify |
| Update frequency | High (users modify at will) | Zero (immutable once finalized) |
| History tracking | Required (track changes) | Unnecessary (no changes exist) |
| Locking | Required (prevent concurrent writes) | Unnecessary (single write only) |
| Consistency | Complex (transactions) | Trivial (immutable data) |
| Backup | Snapshots + incremental diffs | Simple copy |
Conventional storage requires vast machinery to “manage data that changes”—databases, transaction logs, locking, incremental backups. All of it stems from one assumption: data can change. TokiStorage's preserved records, once finalized, never change. No updates means no history needed. No locking needed. Consistency is automatically guaranteed by immutability itself. This is the principle that structurally reduces storage complexity to zero.
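The operational simplicity of Write-Once can be made concrete with a toy store. This is a minimal in-memory sketch, not TokiStorage's implementation; the point is that the class simply has no update or delete path, so locking, history tracking, and transactions have nothing to act on.

```python
import hashlib

class WriteOnceStore:
    """Minimal sketch of the Write-Once model: records are addressed by
    their content hash and can never be overwritten (illustrative only)."""

    def __init__(self):
        self._records = {}

    def write(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        # Identical content maps to the same key, so a repeat write is a
        # no-op rather than an update; nothing is ever modified in place.
        self._records.setdefault(key, data)
        return key

    def read(self, key: str) -> bytes:
        return self._records[key]
```

Because the key is derived from the content, the store is trivially consistent: any key either resolves to exactly the bytes that produced it, or to nothing.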
2. What Makes Up Those 3 MB
What exactly is inside those 3 megabytes?
| Component | Size | Purpose |
|---|---|---|
| Codec2 WASM | 1.9 MB | Voice decoder (open-source low-bitrate speech codec) |
| Brotli WASM | 1.0 MB | Data compression / decompression |
| JavaScript / CSS / HTML | 0.1 MB | UI and playback logic |
That is the entirety of TokiQR. No database. No user table. No authentication server. Everything is served as static files. A free static hosting service like GitHub Pages is all it takes.
By not holding data, you eliminate the need to protect data.
The server does one thing: it reads.
3. The One Linear Growth Factor
Does this mean nothing in TokiStorage's infrastructure ever grows? There is one exception: newsletter PDFs.
An important distinction first. TokiStorage's design principle is three-layer distributed storage: physical (QR paper), national (National Diet Library), and private (GitHub). Newsletter PDFs are not the customer's voice data—they are archival publications that serve the national layer. The GitHub manifest serves as the private layer backup. The customer's voice remains encoded on QR paper (the physical layer) in their hands. Even if every PDF and GitHub repository were lost, every TokiQR would still play from paper alone. What resides on servers is not voice data, but distributed storage options that ensure redundancy.
TokiStorage publishes a biannual newsletter in PDF format. HTML essays are text-based, weighing roughly 20 KB each, but a PDF is a binary file—roughly 630 KB per issue. Each issue's appendix also includes customer TokiQR codes: when an order ships, the storage pipeline automatically generates a PDF from the TokiQR playback page and commits it to the archive repository. With one print PDF per customer, binary volume grows in proportion to the number of featured customers. And because PDFs are binary, Git's delta compression does not work on them.
| Timeframe | Issues | PDF Total | Estimated with Git History |
|---|---|---|---|
| 1 year | 2 | ∼1.3 MB | ∼2.5 MB |
| 20 years (1 volume) | 40 | ∼25 MB | ∼50 MB |
| 100 years (5 volumes) | 200 | ∼125 MB | ∼250 MB |
| 1,000 years (50 volumes) | 2,000 | ∼1.25 GB | ∼2.5 GB |
GitHub recommends keeping repositories under 1 GB, and strongly recommends staying under 5 GB. Piling everything into a single repository will eventually hit these limits. But there is a design-level mitigation.
4. The PDF Principle
From this analysis, a simple operational rule emerges:
Generate PDFs only when finalized. Never repeatedly update the same PDF file.
HTML is text—Git diffs work beautifully. Rewrite an HTML file a hundred times, and the history grows only by the changed lines. PDFs are binary—even a single-character correction copies the entire file into Git history. Update a 300 KB brochure PDF ten times, and 3 MB accumulates in history alone.
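The arithmetic behind this rule can be sketched as a back-of-the-envelope estimate. The 2% average text diff is an assumed figure, and the model ignores Git's own compression of stored blobs:

```python
def history_growth(file_size_kb: float, updates: int, is_binary: bool,
                   diff_fraction: float = 0.02) -> float:
    """Rough Git history cost in KB of committing a file `updates` times.

    Binary files (PDF) store a near-full copy per revision; text files
    (HTML) store roughly only the changed lines. `diff_fraction` is an
    assumed average diff size for text, not a measured figure.
    """
    if is_binary:
        return file_size_kb * updates  # near-full copy each revision
    return file_size_kb + file_size_kb * diff_fraction * (updates - 1)

# The essay's example: a 300 KB brochure PDF updated ten times
# accumulates ~3 MB of history, while a 20 KB HTML file updated
# ten times stays around two dozen KB.
pdf_cost = history_growth(300, 10, is_binary=True)
html_cost = history_growth(20, 10, is_binary=False)
```

The two-orders-of-magnitude gap is what makes "generate PDFs only when finalized" a rule worth enforcing.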
The principle: treat HTML as source code, editable at will. Treat PDFs as build artifacts, generated only in their final form. This mirrors the software engineering rule of not committing build artifacts to version control.
- HTML (source) — Update freely. Text diffs keep history growth minimal
- PDF (artifact) — Generate only the final version. No intermediate PDFs
- Newsletter PDF — Generate once at publication. Never update afterward
Following this principle keeps Git history growth to a minimum. Only the net PDF volume grows—630 KB per issue, a predictable and manageable curve.
5. Shikinen Sengū: Repository Renewal Every 20 Years
Japan's Ise Grand Shrine practices Shikinen Sengū—rebuilding its halls every 20 years. The old structure is not destroyed; a new one is built alongside it, and the role transfers. We apply this philosophy to repository management.
TokiStorage's newsletter uses a numbering system where 20 years equals one volume. Two issues per year, 40 issues per volume. One volume's PDF total is roughly 25 MB. By organizing PDFs into volume-based subdirectories, each completed volume can be cleanly separated into its own repository.
| Volume | Period | Issues | Estimated PDF Size | Status |
|---|---|---|---|---|
| Vol.1 | 2026–2045 | 40 | ∼25 MB | Active |
| Vol.2 | 2046–2065 | 40 | ∼25 MB | — |
| Vol.3 | 2066–2085 | 40 | ∼25 MB | — |
| … | … | … | … | … |
| Vol.50 | 3006–3025 | 40 | ∼25 MB | — |
Once a volume is complete, its PDFs are never updated again. They become a read-only archive. But “frozen” does not mean “inaccessible.” Each volume's repository keeps GitHub Pages active, and PDF downloads remain permanently available.
The key is URL design. From the very first issue, PDF download links on the main site point to each volume's archive URL (e.g., tokistorage.github.io/newsletter-vol1/2026-02.pdf). No PDFs are stored in the main repository. This means separating a volume requires zero link rewrites. The risk of something breaking during separation is zero.
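The URL scheme implied by that example can be expressed as a small pure function. The repository naming pattern here is inferred from the essay's single example (newsletter-vol1/2026-02.pdf), not from a published spec:

```python
def volume_for_year(year: int, first_year: int = 2026, span: int = 20) -> int:
    """Map a publication year to its 20-year volume number (Vol.1 starts 2026)."""
    return (year - first_year) // span + 1

def archive_pdf_url(year: int, month: int) -> str:
    """Build the permanent archive URL for an issue's PDF.

    Mirrors the essay's example URL; because the volume number is a pure
    function of the year, links written in 2026 already point at the
    repository the PDF will live in forever.
    """
    vol = volume_for_year(year)
    return f"https://tokistorage.github.io/newsletter-vol{vol}/{year}-{month:02d}.pdf"
```

Since the mapping is deterministic, splitting off a completed volume never requires rewriting a link: the address was correct from the first issue.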
Under this structure, the main repository contains only HTML source code. Even after 1,000 years, it remains lightweight. The 50 archived volumes each sit quietly as independent 25 MB repositories, serving PDFs through GitHub Pages indefinitely.
The Automated Pipeline
This Shikinen Sengū system does not rely on manual operations. When the storage pipeline detects a shipped order, it automatically generates a TokiQR PDF from the playback page and commits it directly to the current volume's archive repository. The main repository receives only a JSON manifest with archive URLs—no binaries ever touch it.
Newsletter advancement is automated as well. Once the current issue's publication month has passed, the pipeline auto-generates the next issue's HTML draft, manifest, and index page card. When the 20-year volume boundary is crossed, the pipeline provisions the new archive repository entirely via the GitHub API—creating the repo, committing the index page and auto-merge workflow, and enabling GitHub Pages, all in a single automated sequence.
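As a sketch of what that single automated sequence might look like, the following returns the GitHub REST calls as data instead of sending them. The endpoint paths follow the public GitHub REST API, but the repository name, initial file, and Pages configuration are assumptions for illustration:

```python
def provisioning_plan(volume: int, owner: str = "tokistorage"):
    """Sketch of the GitHub REST calls to provision a new volume's archive
    repository. Returns the (method, endpoint, payload) sequence rather
    than sending it, so the shape of the automation is visible."""
    repo = f"newsletter-vol{volume}"
    return [
        # 1. Create the repository for the new volume.
        ("POST", "/user/repos", {"name": repo, "has_issues": False}),
        # 2. Commit an initial index page via the contents API
        #    (content would be base64-encoded HTML in a real call).
        ("PUT", f"/repos/{owner}/{repo}/contents/index.html",
         {"message": f"Initialize Vol.{volume} archive", "content": "<base64>"}),
        # 3. Enable GitHub Pages on the default branch.
        ("POST", f"/repos/{owner}/{repo}/pages",
         {"source": {"branch": "main", "path": "/"}}),
    ]
```

Keeping the plan as plain data also makes the 20-year boundary crossing testable without touching the network, which matters for a pipeline meant to run unattended.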
Human intervention is zero. The pipeline maintains the Shikinen Sengū structure autonomously. To sustain operations across 1,000 years, the system itself must be self-running.
Just as Shikinen Sengū has sustained Ise Grand Shrine for 1,300 years
by rebuilding every 20 years, splitting repositories every 20 years
prevents 1,000 years of storage from ever breaking down.
6. Migration and URL Permanence
There is no guarantee that GitHub will exist in 1,000 years. So what does TokiQR actually depend on?
| Risk | Severity | Mitigation |
|---|---|---|
| Hosting provider disappears | Medium | Re-host the 3 MB decoder anywhere |
| URL changes | Medium | github.io is free. Platform survival = URL survival |
| Codec2 format deprecated | Low | The WASM binary contains the decoder itself; spec is open |
| HTML/JS incompatibility | Low | Browser backward compatibility is exceptionally strong |
| Physical QR paper degradation | Low | UV laminate is renewable via Shikinen Sengū. Quartz glass edition: 1,000+ year physical durability |
If the URL embedded in QR paper becomes invalid, the data exists on paper but cannot be read. URL permanence is therefore the central issue.
The Custom Domain Trap
The conventional wisdom is to acquire a custom domain and control migration via DNS redirects. But from a permanence standpoint, custom domains introduce their own risks.
- Ongoing cost — Domains require annual renewal fees. A lapse in payment means the URL dies
- Registrar dependency — If the domain registrar goes out of business, a transfer process is required
- Succession fragility — For individuals and small organizations, the operator's absence can directly halt renewal
Custom domains offer the flexibility of controlling where traffic is directed, but they carry a perpetual liability: recurring fees. At the 1,000-year or 10,000-year scale, this liability may itself become the single point of failure.
The github.io Choice
TokiQR uses tokistorage.github.io as-is. This is a deliberate design decision.
- Zero maintenance cost — GitHub Pages is free. There is no payment lapse risk
- Platform survival = URL survival — As long as GitHub exists, the URL remains valid
- Riding a giant’s infrastructure — GitHub (Microsoft) operates the world's largest code hosting platform. Its expected lifespan far exceeds any individual domain registration
Of course, there is no guarantee GitHub will exist forever. But when that day comes, the migration effort is simply re-hosting 3 MB of decoder files. This is fundamentally different from migrating a 100 TB database. And there is no need to re-stamp QR paper with new URLs. Throughout internet history, URLs of major platforms have not vanished—they have been redirected.
A zero-cost URL cannot expire from a lapsed payment.
The most permanent URL is one that costs nothing to maintain.
Software and Hardware: Two Wheels of Sustainability
So far, this essay has detailed the software strategy: the 3 MB decoder, the Write-Once model, and Shikinen Sengū repository renewal. But completing a 1,000-year design requires hardware-side permanence as well.
TokiStorage offers two physical media: UV laminate and quartz glass. UV laminate withstands outdoor conditions for several years and indoor conditions for decades. When it degrades, a new laminate is produced from the same data. Quartz glass withstands temperatures above 1,000℃. Joint research by Hitachi and Kyoto University demonstrated data preservation in quartz glass for over 300 million years. Microsoft’s Project Silica is also developing quartz glass data storage. The permanence of quartz glass as a physical medium is scientifically established.
These two media mirror the structure of Ise Grand Shrine. Quartz glass is the goshintai (sacred object)—the unchanging essence that endures a thousand years. UV laminate is the shaden (shrine hall)—periodically rebuilt, an act of technological succession. The software side shares the same structure. The WASM decoder is the goshintai—the immutable core. Repositories are the shaden—rebuilt every 20 years. The Shikinen Sengū philosophy runs through both hardware and software.
Additionally, QR codes include error correction as defined by ISO/IEC 18004, capable of recovering up to 30% of data even when part of the surface is stained or damaged. Laminate physical protection, quartz glass durability, and QR error correction—a triple defense that structurally mitigates physical degradation risk.
Software permanence alone is meaningless if the paper decays.
Hardware permanence alone falls silent without a reader.
Only when both wheels turn does a 1,000-year design hold.
7. Format Durability
Equally important to physical storage capacity is whether data formats will remain readable in the future.
Here are the formats TokiQR depends on:
- HTML — Over 30 years of backward compatibility since 1993. One of the longest-lived digital formats
- JavaScript — Continuously evolving since 1995, yet old code still runs
- WebAssembly (WASM) — The binary specification is frozen as a published standard; modules can be executed to spec regardless of browser changes
- Codec2 — Open-source voice codec with published specification, decoder embedded in WASM
- PNG — Established in 1996. A lossless image format that is effectively a permanent standard
- QR Code — Invented by Denso Wave in 1994. Standardized internationally as ISO/IEC 18004
Every one of these is based on an international standard or open specification. Not a single proprietary format. Even if TokiStorage as an organization ceases to exist, the data can be recovered by reading the public specifications. This is a decisive difference from cloud services that lock users into proprietary ecosystems.
8. Will Your Phone Photos Last 1,000 Years?
For comparison, consider the likelihood of smartphone photos surviving 1,000 years.
iCloud's free tier is 5 GB. Stop paying for a larger plan and the data is eventually deleted; Google Photos works the same way. Service continuity depends on corporate survival, and the statistical lifespan of a corporation is measured in decades.
Device upgrades, account migrations, service shutdowns. Digital data carries an illusion of permanence—"it's saved somewhere in the cloud"—but in reality faces numerous points of discontinuity. And the more data you have, the greater the cost and effort of migration, increasing the probability of giving up at some transition point.
Data encoded on a QR paper, by contrast, persists as long as the paper physically exists. No migration needed. No backup needed. No cloud subscription renewals, no password management. Place it in a drawer, and decades later it remains readable.
And migrating the decoder? Just copy 3 MB of files to a new server. No matter how many times the hosting provider changes over 1,000 years, each migration takes minutes.
9. A Design Without Precedent
Several projects around the world aim at long-term preservation. But compared to TokiStorage's architecture, each one stands on fundamentally different ground.
| Project | Approach | Scaling | Maintenance Cost |
|---|---|---|---|
| Internet Archive (founded 1996; a nonprofit that collects and publishes humanity’s digital heritage—web pages, books, video, and music) | Centralized servers | O(n) — scales with data | Millions of USD/year |
| GitHub Arctic Code Vault (launched 2020; GitHub’s program to preserve open-source code on film in an Arctic mine on Svalbard) | Physical storage in Arctic mine | Batch (periodic snapshots) | No access (storage only) |
| LOCKSS (started 2000 at Stanford; a network that distributes copies of academic publications across multiple library servers) | Distributed server nodes | O(n) — replicated per node | Institutional budgets |
| IPFS (released 2015; a peer-to-peer protocol where computers worldwide share data without a central server) | P2P distributed storage | O(n) — depends on pinning nodes | Pinning service fees |
| Rosetta Project (led by the Long Now Foundation, of 10,000-Year Clock fame; micro-etches 1,500+ languages onto a nickel disk) | Micro-etched nickel disk | Fixed by physical capacity | Low (physical custody) |
| Memory of Mankind (started 2012 by an Austrian artist; burns contemporary records onto ceramic tablets and stores them in a Hallstatt salt mine) | Ceramic tablets in salt mine | Fixed by physical capacity | Low (single site) |
| TokiStorage (data distributed inside QR URLs on paper; server holds only a 3 MB decoder; three-layer storage: physical, national, private) | Data on paper, server is decoder only | O(1) — independent of volume | ≈ $0 |
The shared weakness across existing projects boils down to two things: “cost grows with data” or “preservation works, but access requires special means.”
The Internet Archive is a monumental effort to preserve humanity's digital heritage, but it is fundamentally a centralized model that stores data on servers. Capacity scales with usage, and annual operating costs reach millions of dollars. If funding stops, the archive stops.
GitHub's Arctic Code Vault stores open-source code in an abandoned mine on Svalbard in the Arctic. Physical durability is excellent, but access requires traveling to the site. Everyday “reading” is not part of the design.
LOCKSS (Lots of Copies Keep Stuff Safe) is a distributed preservation network for academic libraries. The distributed philosophy is similar, but each node holds a full copy of the data, so capacity scales linearly. Dependence on participating institutions' operating budgets introduces uncertainty at the 1,000-year scale.
IPFS is revolutionary as peer-to-peer distributed storage, but data disappears when nodes go offline. Persistence requires pinning services, which means cost. This is incompatible with “zero-cost permanence.”
The Long Now Foundation's Rosetta Project micro-etches text onto a nickel disk. Physical durability is high, but there is no connection to a digital decoder. Reading the etched information requires a microscope—fundamentally different from the experience of “scan with a smartphone and it plays.”
Austria's Memory of Mankind engraves information onto ceramic tablets and stores them in a salt mine. A beautiful concept, but it is a single-site centralized approach without the redundancy of distributed storage.
The Anomaly of O(1)
What makes TokiStorage fundamentally different is that server capacity is O(1)—constant. Issue one code or one hundred million; the server stays at 3 MB. Data is distributed across paper, so nothing accumulates centrally.
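The O(1) versus O(n) contrast reduces to two trivial functions. The 5 MB average recording size is an illustrative figure chosen to match the essay's "1 million codes → 5 TB+" row, not a measured value:

```python
MB = 1024 * 1024

def cloud_server_bytes(codes: int, avg_recording_mb: float = 5.0) -> int:
    """O(n): a conventional service stores every recording centrally.
    5 MB per recording is an assumed illustrative average."""
    return int(codes * avg_recording_mb * MB)

def tokiqr_server_bytes(codes: int) -> int:
    """O(1): the server holds only the ~3 MB decoder, regardless of
    how many codes have been issued; the data lives on paper."""
    return 3 * MB
```

At one million codes the first function returns about 5 TB; the second returns 3 MB, and returns the same 3 MB at one hundred million.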
This is less a “storage strategy” and more a “strategy of holding no storage.” While every existing project asks “how do we preserve data long-term?,” TokiStorage asks nothing at all—it eliminates the question by never holding the data in the first place. It belongs to a different category entirely.
The Closest Precedent: Ise Grand Shrine
If no precedent exists in the digital world, look to the analog one. The closest design philosophy is Ise Grand Shrine's Shikinen Sengū itself—a system that has sustained continuity for 1,300 years through “designing the mechanism, not the monument.”
The goshintai (sacred essence) never changes. The shaden (shrine hall) is rebuilt every 20 years. Skill and spirit are transmitted to the next generation; the system runs itself. TokiStorage's design—an immutable decoder, repositories split every 20 years, a fully automated pipeline—is this 1,300-year-old design principle translated into digital infrastructure.
Existing long-term preservation projects are designed to protect data.
TokiStorage is designed to never hold data at all.
When there is nothing to protect, no protection mechanism is needed.
10. Beyond 1,000 Years
We have used 1,000 years as our benchmark throughout this essay. But honestly, this architecture has no upper time limit.
Consider 10,000 years. That would mean 500 volumes and 20,000 issues. Each volume is an independent 25 MB repository with no dependencies on any other. The main repository stays at 3 MB. Each volume knows nothing about the others—it simply serves its own PDFs through GitHub Pages. Whether there are 5 volumes or 500, the structure is identical.
Why is there no ceiling? Because nothing accumulates. Conventional cloud services gather data centrally over time, inevitably hitting capacity limits. TokiQR distributes data onto paper and keeps only the reader on the server. The reader is a fixed 3 MB. Newsletter PDFs are isolated by volume, and completed volumes are frozen. Since each volume is independent, no bottleneck emerges regardless of how many volumes exist.
And the pipeline that maintains this structure is fully automated. When the 20-year volume boundary is crossed, everything from archive repository creation to GitHub Pages enablement completes without human intervention. The operator in 100 years benefits from the same mechanism as the operator in 10,000 years.
Sustainability is not about setting a longer time horizon.
It is about making time disappear from the design entirely.
11. Why “Strategy”
When engineers hear “storage strategy,” they typically think of questions like these: Which cloud provider? How to optimize costs? How to ensure redundancy? What replication topology? In other words, optimization under the assumption that you hold storage.
This essay overturns that assumption.
Don’t fight the scaling problem—if data never centralizes, there is no scaling ceiling. Don’t fight maintenance costs—a 3 MB decoder on free hosting eliminates payment lapse risk. Don’t fight data migration—without a 100 TB database, migration takes minutes. Don’t fight physical degradation—quartz glass endures a thousand years, and laminates are renewed through Shikinen Sengū.
These problems are not being solved. The structure that produces them is being eliminated.
Sun Tzu wrote: “To win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the acme of skill.” The supreme victory is not winning on the battlefield, but ensuring no battlefield exists. Tactics ask how to do—how to fight each battle. Strategy asks what not to do—which battles to never enter.
TokiStorage’s design is a “strategy” in precisely this sense. Not how to manage storage, but the decision to hold no storage at all. That single decision structurally neutralizes every risk—capacity, cost, migration, physical decay.
And consider: a reader who sees the title “Storage Strategy” likely expects a discussion of cloud architecture. As they read on, something feels different. Then the essay lands on “the most sustainable storage strategy is to hold no storage at all.” The meaning of the title shifts between the first sentence and the last. That shift itself embodies the essence of this design.
The highest form of strategy is to eliminate the battlefield itself.
The highest form of storage strategy is to hold no storage at all.
12. The Ecosystem’s Silence
This raises a structurally interesting question. If this design philosophy is sound, why has no one said it before?
The answer is simple: they couldn’t.
Storage vendors—AWS S3, Azure Blob Storage, Google Cloud Storage. Their business model is usage-based billing. The moment they say “the best strategy is to hold no storage,” they negate their own reason for existing. An O(1) architecture is, to them, a design where revenue is zero.
Database vendors—CRUD, transactions, replication, sharding. All technologies born from the assumption that data changes. When a Write-Once model eliminates that assumption, the entire product category becomes unnecessary.
Backup and disaster recovery industry—disaster recovery plans, incremental backups, RPO/RTO. All mechanisms needed because data is centralized. When data is distributed across physical paper, the question “how many terabytes do we restore in how many hours?” simply does not exist.
Cloud consultants—they earn their living advising on provider selection, multi-cloud strategy, and cost optimization. “Don’t store it in the first place” eliminates the very premise of their advice.
In other words, this concept is correct for those who realize it, but those who realize it have no incentive to broadcast it. The entire storage ecosystem revolves around the premise of “holding data.” To question that premise is to question one’s own business model.
TokiStorage can articulate this because it does not sell storage. TokiStorage is a “proof-of-existence service”—its core mission is delivering records engraved on quartz glass and QR paper to a thousand years hence. It stands not on the side that sells storage, but on the side free from storage’s constraints. That position alone makes it possible to voice this design philosophy.
A sound design philosophy fails to spread not because it is technically difficult,
but because those who could articulate it would lose by doing so.
13. What Cannot Be Designed
Having read this far, you may wonder: if the structure is this coherent, surely it was designed from the start?
The answer is no. This structure was not designed. It was entailed by constraints.
The starting point was a very specific constraint: “put a voice inside a QR code.” We tried the Opus codec, discovered that the DTX cliff between 2 and 4 kbps rendered it useless, and arrived at Codec2. Because all data had to fit inside the QR code, no data remained on the server. Because no data remained, capacity was O(1). Because capacity was O(1), Shikinen Sengū became feasible. Because Shikinen Sengū was feasible, a thousand-year strategy could be articulated.
Every step was a consequence built from below, not a design imposed from above. Attempting to reverse-engineer this structure would fail unless you start from the same point: “put a voice inside a QR code.” The ecosystem’s silence described in the previous section works the same way—from inside the industry, the answer “don’t hold storage” is structurally invisible. Only someone entirely outside, who arrived at this structure by happenstance, could write it.
And finally, the title of this essay is itself proof of completion.
The business is called “TokiStorage”—the word “Storage” is in the name. The storage strategy written by that business concludes that the best approach is to hold no storage at all. A business named after storage wrote a strategy of holding none. This recursive structure is not something that can be designed.
This essay is not a plan. It was discovered by looking back, after everything was already complete, and asking “why does this work?” Strategy documents are normally written before implementation. This essay could only be written after. The design philosophy surfaced only once everything was already running.
Some things cannot be designed.
They emerge from constraints, and only after completion
do you realize: that is what they were all along.
The very fact that this essay could be written means the design is complete.
The most sustainable storage strategy is to hold no storage at all.
Data lives on paper. The server carries only a 3 MB reader.
This design has no upper time limit.
The endeavor to preserve a voice began with the death of our beloved dog, Pearl.
We placed Pearl’s throat bone in a clamshell,
alongside a four-leaf clover and flower petals our daughter had gathered,
and sealed it in resin—a grave you can carry with you.
Could a voice, too, be given lasting form?
That question was where everything began.
The completion of this strategy is dedicated to the memory of Pearl.