What Happens When You Talk to Your Agent
You type a message. A few seconds later, your agent responds. But what actually happens in between? Here's the full lifecycle of a request through the Life Savor agent.
The Journey of a Message
You type "Summarize my last 5 emails"
↓
1. PII Interceptor scans your message
2. Content Safety checks the sanitized text
3. Inference Bridge routes to your configured model
4. Model generates a plan: "I need to call the email skill"
5. Agent invokes the email skill (sandboxed)
6. Skill returns your 5 most recent emails
7. PII Interceptor scans the skill output
8. Model receives sanitized emails, generates summary
9. Content Safety checks the response
10. PII Interceptor verifies no new PII leaked
11. Response delivered to you
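The flow above can be sketched as a small pipeline. Every function name here is a hypothetical stand-in for the numbered steps, not the actual Life Savor API, and the PII scanner is stubbed to a single hard-coded address:

```python
# Hypothetical sketch of the request lifecycle. Each stub mirrors one of
# the numbered steps above; none of these names come from the real system.

def intercept_pii(text: str) -> str:
    """Steps 1/7/10: replace PII with vault tags (stubbed to one address)."""
    return text.replace("alice@example.com", "<<PII:EMAIL:1>>")

def check_safety(text: str) -> str:
    """Steps 2/9: content safety check (stubbed as a pass-through)."""
    return text

def run_skill(request: str) -> str:
    """Steps 5-6: sandboxed skill invocation (stubbed)."""
    return "email from alice@example.com"

def run_model(text: str) -> str:
    """Steps 3-4/8: model routing and generation (stubbed)."""
    return f"Summary of: {text}"

def handle(message: str) -> str:
    sanitized = check_safety(intercept_pii(message))    # steps 1-2
    skill_output = intercept_pii(run_skill(sanitized))  # steps 5-7
    response = run_model(skill_output)                  # step 8
    return intercept_pii(check_safety(response))        # steps 9-10
```

The point of the shape is that `intercept_pii` wraps every boundary: inbound message, skill output, and final response.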
Step by Step
1. Interception (Inbound)
Your message hits the interceptor pipeline before anything else sees it. The regex scanner checks for structured PII (emails, phone numbers, credit cards). The NER model checks for unstructured PII (names, addresses). Anything sensitive gets replaced with vault tags.
The model never sees your actual email address — it sees <<PII:EMAIL:...>> and knows "there's an email here" without knowing what it is.
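A minimal version of the regex stage might look like the following. The tag format and the plain-dict "vault" are simplified assumptions; the real interceptor also runs an NER model and encrypts the vault:

```python
import re

# Simplified email pattern for illustration; production scanners use
# stricter patterns plus checksummed detectors for cards and phones.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def tokenize_pii(text: str, vault: dict) -> str:
    """Replace each email with a vault tag; the raw value never leaves the vault."""
    def repl(match: re.Match) -> str:
        tag = f"<<PII:EMAIL:{len(vault) + 1}>>"
        vault[tag] = match.group(0)  # original value stays local
        return tag
    return EMAIL_RE.sub(repl, text)
```

Calling `tokenize_pii("Mail bob@example.org about dinner", {})` yields `"Mail <<PII:EMAIL:1>> about dinner"`, with the real address recoverable only through the vault.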
2. Model Routing
The Inference Bridge checks which of your installed models are available and healthy. If you have a local LLaMA loaded (hot state), it routes there. If you're using GPT-4o via API, the request goes through the gateway. Routing is automatic, based on your configuration and model availability.
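The routing preference can be sketched as a simple selection over a model table. The table schema and the "hot"/"healthy" state strings are illustrative assumptions:

```python
# Hypothetical model registry; "hot" means the local model is loaded in memory.
MODELS = [
    {"name": "llama-local", "kind": "local", "state": "hot"},
    {"name": "gpt-4o",      "kind": "api",   "state": "healthy"},
]

def pick_route(models: list) -> str:
    """Prefer a hot local model; otherwise fall back to a healthy API gateway."""
    for m in models:
        if m["kind"] == "local" and m["state"] == "hot":
            return m["name"]
    for m in models:
        if m["kind"] == "api" and m["state"] == "healthy":
            return m["name"]
    raise RuntimeError("no healthy model available")
```

With both entries present, `pick_route(MODELS)` returns the local model; remove it and the API gateway wins.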
3. Tool Use
The model decides it needs to call a skill. It generates a tool call request — "invoke the email skill with operation list_recent and parameter count: 5."
The agent validates this against the skill's permission graph. Does this model have permission to invoke this skill? Is the skill healthy? Is it within rate limits?
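Those three checks (permission, health, rate limit) might be combined like this. The permission table, limits, and function names are assumptions for illustration, not the actual permission-graph API:

```python
import time

# Hypothetical permission graph: which models may invoke which skills.
PERMISSIONS = {"gpt-4o": {"email", "calendar"}}
RATE_LIMIT = 5          # invocations allowed per window
WINDOW_SECONDS = 60.0
_calls: dict = {}       # skill name -> recent invocation timestamps

def validate_tool_call(model: str, skill: str, healthy: set) -> None:
    """Raise unless this model may invoke this skill right now."""
    if skill not in PERMISSIONS.get(model, set()):
        raise PermissionError(f"{model} may not invoke {skill}")
    if skill not in healthy:
        raise RuntimeError(f"skill {skill} is unhealthy")
    now = time.monotonic()
    recent = [t for t in _calls.get(skill, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        raise RuntimeError(f"rate limit exceeded for {skill}")
    _calls[skill] = recent + [now]
```

Failing closed (raising) on any check is the important property: an unvalidated call never reaches the skill.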
4. Skill Execution
The agent spawns the email skill as a sandboxed child process and sends the request as JSON over stdin. The skill fetches your emails (using credentials from the encrypted vault), formats the response, and writes it to stdout.
The skill's output goes through the interceptor too — any PII in the email content gets tokenized before the model sees it.
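The JSON-over-stdio contract can be demonstrated with a toy host and skill. This sketch shows only the process boundary and message shape; it omits the actual sandboxing (namespace isolation, resource limits) a real skill runner would apply:

```python
import json
import subprocess
import sys

# Toy skill: reads one JSON request on stdin, writes one JSON response
# on stdout. Stands in for the real (sandboxed) email skill binary.
SKILL_SOURCE = """
import json, sys
req = json.load(sys.stdin)
if req["operation"] == "list_recent":
    out = {"emails": [f"email {i}" for i in range(req["count"])]}
else:
    out = {"error": "unknown operation"}
json.dump(out, sys.stdout)
"""

def invoke_skill(request: dict) -> dict:
    """Host side: spawn the skill, send the request, parse the reply."""
    proc = subprocess.run(
        [sys.executable, "-c", SKILL_SOURCE],
        input=json.dumps(request),
        capture_output=True, text=True, timeout=10,
    )
    return json.loads(proc.stdout)
```

Keeping the skill in its own process means a crash or hang is contained: the host sees a timeout or bad exit code, never a corrupted agent.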
5. Response Generation
The model receives the sanitized email summaries and generates your response. This output goes through content safety (checking for harmful content) and then PII verification (ensuring the model didn't hallucinate or reconstruct personal information).
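The final PII verification is a fail-closed gate on the outbound text. A minimal sketch, assuming a single email pattern stands in for the full detector set:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def verify_no_pii(response: str) -> str:
    """Outbound gate: refuse to deliver a response containing raw PII,
    whether the model hallucinated it or reconstructed it from context."""
    if EMAIL_RE.search(response):
        raise ValueError("response contains unsanitized PII")
    return response
```

A response like "Here are your 5 summaries." passes; one that leaks a raw address is blocked before delivery.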
6. Delivery
The clean, safe response reaches you. The whole round trip takes a few seconds, but your personal information never leaves your device unprotected, and every step is logged to your audit trail.
Why This Order Matters
The interception order is fixed and enforced at the pipeline level. Components cannot bypass it, reorder it, or disable it. This means:
- No skill can see your raw PII unless explicitly allowlisted
- No model receives unscanned input
- No response leaves without safety verification
- Every step is traceable via correlation IDs
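One way to picture the enforcement: the stage order is sealed inside the pipeline module, and a single correlation ID is threaded through every stage. The names below are illustrative, not the real pipeline internals:

```python
import uuid

# Stubbed stages; the real ones are the interceptor and safety checks.
def _scan(text: str) -> str:
    return text

def _safety(text: str) -> str:
    return text

# Order is fixed here, at module level. Callers get process(), not the
# stages, so they cannot reorder, skip, or disable any of them.
_STAGES = (_scan, _safety)

def process(text: str, log: list) -> str:
    cid = str(uuid.uuid4())  # one correlation ID ties all steps together
    for stage in _STAGES:
        text = stage(text)
        log.append((cid, stage.__name__))
    return text
```

Every log entry for one request shares the same correlation ID, which is what makes the audit trail traceable end to end.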
This isn't a feature you enable — it's how the system works by default.