Your Data Never Leaves Your Device (Unless You Want It To)

Most AI platforms work like this: you type something, it goes to a server, gets processed, and comes back. Your data lives on someone else's infrastructure, governed by someone else's privacy policy.

Life Savor works differently.

What Stays Local

  • Your conversations — stored in an encrypted database on your device
  • Your credentials — API keys, tokens, and passwords in a local encrypted vault
  • Your PII — detected and tokenized before it reaches any model or skill
  • Your model inference — when using local models, computation happens entirely on your hardware
  • Your skill data — skills run in a sandbox on your device

What Goes to the Cloud (Only When You Choose)

  • Cloud model requests — if you use GPT-4o, Claude, or other API models, your (PII-sanitized) message goes to that provider
  • Platform sync — agent status, installed components, and configuration sync with the platform for multi-device support
  • Marketplace installs — downloading components requires an internet connection

Even when data does leave your device, the PII interceptor has already replaced sensitive information with vault tags. The cloud model sees <<PII:EMAIL:...>> — not your actual email address.
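The tokenization idea can be sketched in a few lines. This is a minimal, hypothetical illustration assuming the `<<PII:TYPE:token>>` tag format shown above; the real interceptor detects many PII types, while this sketch handles only email addresses and uses an in-memory dict as a stand-in for the encrypted vault.

```python
import re
import secrets

# Matches most email addresses; illustrative only, not a complete PII detector.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

class PiiVault:
    """In-memory stand-in for the local encrypted vault (hypothetical)."""
    def __init__(self):
        self._store = {}  # vault tag -> original value

    def tokenize(self, text: str) -> str:
        """Replace each email with a vault tag before text leaves the device."""
        def _swap(match):
            tag = f"<<PII:EMAIL:{secrets.token_hex(4)}>>"
            self._store[tag] = match.group(0)
            return tag
        return EMAIL_RE.sub(_swap, text)

    def detokenize(self, text: str) -> str:
        """Restore original values in a model response, locally."""
        for tag, value in self._store.items():
            text = text.replace(tag, value)
        return text

vault = PiiVault()
outbound = vault.tokenize("Contact me at alice@example.com")
# The cloud model sees only the tag, never the address.
inbound = vault.detokenize(outbound)
```

De-tokenization happens only on-device, after the response comes back, so the round trip is transparent to you but opaque to the provider.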

How We Enforce This

This isn't a promise in a terms-of-service document. It's enforced by architecture:

  1. The interceptor is mandatory. Every message passes through PII scanning before reaching any model or skill. Components cannot bypass it.

  2. Skills are sandboxed. They run with a cleared environment — no access to your filesystem, environment variables, or network unless explicitly declared and approved.

  3. Secrets never leave the vault. API keys and credentials are resolved at runtime by the agent. Skills receive the service response, not the key itself.

  4. Local models are truly local. PyTorch and ONNX Runtime execute on your CPU/GPU. No telemetry, no phone-home, no cloud dependency.

  5. Everything is auditable. Every PII detection, every de-tokenization, every skill invocation is logged locally. You can inspect exactly what happened.
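The "cleared environment" idea from point 2 can be sketched with a child process that inherits nothing from its parent. This is an illustrative sketch, not the actual sandbox: real isolation also restricts filesystem and network access, which a bare subprocess does not.

```python
import os
import subprocess
import sys

def run_skill(code: str) -> str:
    """Run untrusted skill code in a child process with an empty environment."""
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: Python isolated mode
        env={},              # empty environment: nothing leaks via os.environ
        capture_output=True,
        text=True,
        timeout=5,
    )
    return result.stdout

# Even if the parent process holds a secret, the skill cannot see it.
os.environ["API_KEY"] = "sk-parent-secret"  # hypothetical secret in the parent
out = run_skill("import os; print(os.environ.get('API_KEY'))")
```

Because the environment is cleared rather than filtered, new secrets added later are protected by default; anything a skill needs must be explicitly declared and passed in.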
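Point 3's secret handling can also be sketched: the skill names a service and hands the request to the agent, the agent resolves the credential and makes the call, and only the response crosses back. All names here (the service, the vault contents, the stubbed call) are illustrative assumptions.

```python
# Stand-in for the local encrypted vault (hypothetical contents).
VAULT = {"weather-api": "sk-local-secret"}

def call_service(service: str, request: dict) -> dict:
    """Agent-side: resolve the credential and perform the call."""
    key = VAULT[service]  # resolved here, inside the agent, never exported
    # ... authenticated HTTP call would happen here; stubbed for the sketch ...
    return {"status": 200, "data": f"response for {request['q']}"}

def skill_entrypoint(agent_call):
    """Skill-side: receives a callable, never the key itself."""
    return agent_call("weather-api", {"q": "Berlin"})

result = skill_entrypoint(call_service)
```

The design choice matters: because the key never enters the skill's address space, even a malicious skill has nothing to exfiltrate beyond the responses it was already authorized to receive.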
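The audit trail in point 5 can be as simple as an append-only JSON Lines file on disk. A minimal sketch, assuming a flat event schema (the field names here are illustrative, not the actual log format):

```python
import json
import os
import tempfile
import time

def audit(log_path: str, event: str, detail: dict) -> None:
    """Append one event as a JSON line to the local audit log."""
    entry = {"ts": time.time(), "event": event, **detail}
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_path = os.path.join(tempfile.mkdtemp(), "audit.jsonl")
audit(log_path, "pii_detected", {"type": "EMAIL"})
audit(log_path, "skill_invoked", {"skill": "weather"})

# Inspecting the log is just reading a local file, one JSON object per line.
with open(log_path) as f:
    events = [json.loads(line) for line in f]
```

An append-only, line-oriented format keeps the log trivially greppable and makes tampering easier to spot than an opaque binary store would.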

You're in Control

Want to disable PII scanning for a trusted skill? You can. Want to allow a specific model to see your email address? Configure it. Want to run completely offline with no cloud models? Install local models and disconnect.

The defaults are private. The overrides are yours.