Uptime & External Dependencies
— Why Millennium Preservation Cannot Rely on External Services

A 99.9% SLA means 8.7 hours of downtime per year.
At millennium scale, every external dependency will eventually sever.
This design philosophy extends to every layer — even a contact form.

Key thesis: External dependencies are convenient in the short term but become inevitable failure points over time. Designing for a millennium requires isolating every failure mode and eliminating single points of failure.

This essay explains TokiStorage's design philosophy. It is not intended as criticism of any specific service.

1. The Illusion of 99.9%

Cloud service sales materials are filled with numbers like "99.9% SLA" and "99.99% uptime." They look rock-solid. But convert them to time, and the picture changes.

SLA      | Annual Downtime | 10-Year Total | 100-Year Total
---------|-----------------|---------------|-------------------
99%      | 3.65 days       | 36.5 days     | 365 days (1 year)
99.9%    | 8.7 hours       | 87 hours      | 36.5 days
99.99%   | 52.6 min        | 8.7 hours     | 3.65 days
99.999%  | 5.26 min        | 52.6 min      | 8.7 hours

Many consider 99.9% to be "near perfect." But over 100 years, that translates to 36.5 days — more than a full month of total downtime. And this assumes the service still exists in 100 years.
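The conversions in the table are plain arithmetic. A minimal sketch (function and constant names are illustrative):

```javascript
// Convert an SLA availability fraction into expected downtime over a horizon.
// Uses 8,760 hours per year (365 days); leap days are ignored, as in the table.
const HOURS_PER_YEAR = 365 * 24;

function downtimeHours(sla, years) {
  return (1 - sla) * HOURS_PER_YEAR * years;
}

console.log(downtimeHours(0.999, 1).toFixed(2));        // ~8.76 hours per year
console.log((downtimeHours(0.999, 100) / 24).toFixed(1)); // ~36.5 days per century
```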

An SLA is a promise of uptime, not a promise of existence. Whether the service will still be operating in 100 years is outside the scope of any SLA.

2. The Chain of External Dependencies

Modern web services carry a staggering number of external dependencies. A typical stack leans on a domain registrar, a DNS provider, a CDN, a cloud host, an authentication service, a payment processor, an email delivery service, and an analytics platform, and that is before counting third-party libraries and APIs.

If any single one of these goes down, some part of the user experience breaks. And SaaS companies shutting down within a decade is hardly unusual. Even Google has killed countless products — Google+, Google Reader, Hangouts, and the list goes on.

Dependency Chains Fail Probabilistically

Suppose five external services each have a 99.9% uptime SLA. The probability that all five are simultaneously operational:

0.999 × 0.999 × 0.999 × 0.999 × 0.999 = 0.999⁵ ≈ 0.995 = 99.5%

— Annual downtime expands to roughly 43.7 hours (about 1.8 days)

Each external dependency multiplies another factor into your availability, so system-wide uptime degrades exponentially with the length of the chain. This is not theory; it is arithmetic.
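The compounding generalizes to any chain length. A short sketch, using the five-service example from this section:

```javascript
// Availability of a serial dependency chain: every service must be up
// simultaneously, so individual availabilities multiply.
function chainAvailability(availabilities) {
  return availabilities.reduce((acc, a) => acc * a, 1);
}

const fiveServices = Array(5).fill(0.999);
const combined = chainAvailability(fiveServices);   // 0.999^5 ≈ 0.99501
const downtime = (1 - combined) * 365 * 24;         // ≈ 43.7 hours per year

console.log(combined.toFixed(5), downtime.toFixed(1));
```

Adding a sixth 99.9% service pushes annual downtime past 52 hours; the chain only ever gets worse.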

3. TokiStorage's Design Principles

At millennium scale, every external service can cease to exist. From this premise, the design principles become clear.

Principle 1: Minimize External Dependencies

TokiStorage's three-layer distributed storage is designed so each layer survives independently.

The physical layer is not a service at all. The national layer is a legal institution that cannot be shut down by a corporate business decision. Only the private layer is a service — and if GitHub disappears, the other two layers remain.

Principle 2: Isolate Failure Modes

The failure modes of the three layers are mutually independent:

  • Physical layer: physical destruction, such as an earthquake shattering the glass
  • National layer: a legal change, such as amendment of the deposit system
  • Private layer: a corporate decision, such as GitHub shutting down

These events do not correlate. An earthquake that breaks the glass does not change the law. A company going bankrupt does not close the library. Because the failure modes are independent, total loss requires all three to occur at once, and that joint probability approaches zero.
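Independence is what makes multiplication work in your favor here. A sketch with illustrative annual loss probabilities; these numbers are placeholders, not TokiStorage estimates:

```javascript
// Hypothetical annual probabilities of each layer being destroyed.
// Placeholder values for illustration only.
const layerLossProb = {
  physical: 0.001, // e.g. catastrophic physical destruction
  national: 0.001, // e.g. repeal of the legal deposit system
  private:  0.01,  // e.g. corporate shutdown
};

// With independent failure modes, total loss requires every layer to fail
// in the same year, so the probabilities multiply.
const totalLoss = Object.values(layerLossProb).reduce((p, q) => p * q, 1);
console.log(totalLoss); // ≈ 1e-8: about one chance in a hundred million per year
```

Contrast this with correlated failures: if one event could destroy all three layers at once, the chain would be only as strong as that single event's probability.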

Principle 3: Design Fallbacks

This principle extends beyond core infrastructure. It applies to every touchpoint — even a contact form.

TokiStorage previously used LINE Official Account and Calendly for customer inquiries. Both shared the same structural problem: each routed the entire communication channel through a single proprietary service, leaving no way to reach TokiStorage if that service changed its terms, restricted access, or shut down.

The current system uses a static JavaScript form backed by Google Apps Script. The form UI itself is a static file served from GitHub Pages. If GAS goes down, the form still renders, and a direct email address is shown as fallback.

Normal flow: Form submission → GAS → Email notification + Spreadsheet logging
Failure flow: GAS outage → Error screen displays email address directly → Contact via copy-paste or mailto
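The two flows can be sketched as one function. The endpoint URL, email address, and helper names below are hypothetical, not TokiStorage's actual code:

```javascript
// Hypothetical Apps Script endpoint; a real deployment would use its own URL.
const GAS_ENDPOINT = "https://script.google.com/macros/s/EXAMPLE_ID/exec";
const FALLBACK_EMAIL = "contact@example.com"; // placeholder address

// postFn abstracts the network call (fetch, in a browser); showFallback
// renders the direct email address when the backend is unreachable.
async function submitInquiry(payload, postFn, showFallback) {
  try {
    const res = await postFn(GAS_ENDPOINT, payload);
    if (!res.ok) throw new Error(`GAS returned ${res.status}`);
    return "sent";               // normal flow: GAS relays to email + Spreadsheet
  } catch (err) {
    showFallback(FALLBACK_EMAIL); // failure flow: channel degrades, never breaks
    return "fallback";
  }
}
```

In the browser, `postFn` would be something like `(url, body) => fetch(url, { method: "POST", body: JSON.stringify(body) })`; the point is that the catch branch still leaves the visitor with a working contact path.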

Even when the external dependency (GAS) fails, the communication channel never fully breaks. This is fallback design.

4. How to Read an SLA

When evaluating an SLA, never look at the number alone. Check the following:

  1. Scope — What is actually guaranteed? "Server uptime" and "API response" are different things
  2. Compensation — What happens on SLA breach? Usually service credits (billing refunds), not data loss compensation
  3. Existence Assumption — Every SLA implicitly assumes "as long as the service continues to operate." When the service shuts down, the SLA vanishes
  4. Dependency Chain — What external services does the provider itself depend on? An upstream outage can take your service down without ever counting against the SLA you signed

5. Google Workspace — A Pragmatic Choice

TokiStorage chose Google Apps Script as the contact form backend for pragmatic reasons: it runs without a dedicated server to maintain, it costs nothing at this scale, and it integrates directly with the email notifications and Spreadsheet logging described above.

But the same principle applies here. Rather than fully depending on Google, a fallback (direct email display) is prepared for when Google goes down. External dependency cannot be reduced to absolute zero, but an alternative path must always be designed.

6. The Quiet Standard of Millennium Design

Ultimately, the design philosophy of millennium preservation begins with one premise: "Everything will eventually break."

Servers will eventually stop. Companies will eventually disappear. Laws can be amended. That is precisely why the system is designed so that when one layer breaks, others survive. When one service goes down, communication is never severed.

"Everything fails, all the time."

— Werner Vogels, CTO of Amazon Web Services

The CTO of AWS himself acknowledges it. Everything fails, all the time. Accepting this fact and designing systems that continue to function despite failure — that is millennium-scale infrastructure design, and the philosophy that runs through every layer of TokiStorage.

Uptime is an indicator of trust, but not a guarantee of survival. When designing for a millennium, the question is not "how rarely does it fail?" but "when it fails, what remains?"

References

  • Google Workspace Service Level Agreement — https://workspace.google.com/terms/sla.html
  • US-CERT: Data Backup Options — 3-2-1 Backup Rule (2012)
  • Werner Vogels, "Everything Fails All the Time" — AWS re:Invent Keynote
  • Killed by Google — https://killedbygoogle.com/ — A catalog of discontinued Google services
  • National Diet Library Act (Act No. 5 of 1948) — Legal Deposit System
  • GitHub — https://github.com/