The Life Savor Developer SDK: What You Can Build

The Life Savor platform is built on a component architecture. Everything — from the AI models to the tools your agent uses — is a pluggable component that you can build, publish, and share. Here's what the SDK gives you.

Four Component Types

  • Skill — a tool your agent can invoke (fetch data, transform text, call APIs). Language: any (Rust, Node.js, Python).
  • Assistant — an orchestration layer that combines skills with prompts and guardrails. Language: Rust or Python.
  • Model — an AI model provider (local inference or API gateway). Language: Rust.
  • System — a privileged platform service (voice, TTS, storage, caching). Language: Rust.

Most developers will start with skills. They're the simplest to build, work in any language, and cover the most common use cases.

The CLI

Everything starts with lsai-cli:

# Install (macOS)
curl -fsSL https://download.lifesavor.ai/lsai-cli/latest/homebrew/install.sh | bash

# First-time setup
lsai-cli setup

# Create a component
lsai-cli components create --name my-skill --type skill --language node

# Set metadata
lsai-cli components update <id> \
  --description "Does something useful" \
  --category Productivity \
  --tags automation,utility

# Connect your GitHub repo
lsai-cli components connect --component <id> --repo-url https://github.com/you/my-skill

# Submit for review
lsai-cli components submit <id>

# Build for all platforms
lsai-cli builds submit --component <id> --all-platforms

# Publish (after QA approval)
lsai-cli components publish --component <id> --version 1.0.0 --notes CHANGELOG.md

The full lifecycle: create → metadata → connect repo → submit → build → QA → publish.

Skills: Any Language, One Protocol

Skills communicate via JSON over stdin/stdout. The agent spawns your skill as a child process, sends a request, and reads the response. You can write skills in whatever language you're comfortable with.
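
Here is a minimal skill, sketched in Rust (any supported language works the same way). The "input" and "output" field names are assumptions for illustration, not the platform's confirmed request schema:

// A minimal echo-style skill: one JSON request in, one JSON response out.
// Requires the serde_json crate. Field names are illustrative only.
use std::io::{self, Read, Write};

fn main() -> io::Result<()> {
    // The agent writes a single JSON request to our stdin.
    let mut input = String::new();
    io::stdin().read_to_string(&mut input)?;
    let request: serde_json::Value =
        serde_json::from_str(&input).expect("request must be valid JSON");

    // Do the skill's work; here, just uppercase the (assumed) input text.
    let text = request["input"]["text"].as_str().unwrap_or("");
    let response = serde_json::json!({
        "status": "ok",
        "output": { "text": text.to_uppercase() }
    });

    // Write the JSON response to stdout for the agent to read.
    writeln!(io::stdout(), "{}", response)?;
    Ok(())
}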

The manifest (skill.json) declares your skill's identity, entrypoint, configuration schema, and setup steps. The platform renders the configuration UI automatically — you never build a settings page.

{
  "skill_id": "my-skill",
  "name": "My Skill",
  "version": "1.0.0",
  "description": "A useful skill for Life Savor agents",
  "execution_tier": 1,
  "entrypoint": {
    "type": "python",
    "command": "python",
    "args": ["skill.py"]
  }
}

Skills run sandboxed: cleared environment, restricted filesystem, bounded output, enforced timeouts.

Assistants: Declarative Orchestration

Assistants combine multiple skills into a coherent workflow. You define:

  • System prompt — with {{variable}}-style template substitution
  • Tool bindings — map logical names to concrete skill IDs
  • Guardrails — safety rules evaluated before/after LLM calls (block, warn, or log)
  • Handoff rules — when to transfer to another assistant or a human
  • Context strategy — how to manage the conversation window (sliding, summarize, truncate)

For example, in Rust:

let definition = AssistantDefinitionBuilder::new()
    .id("support-assistant")
    .display_name("Support Assistant")
    .system_prompt_template("You help users with {{domain}} questions.")
    .variable("domain", "account management")
    .build()?;
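
Tool bindings and guardrails presumably attach through the same builder. The method names below are hypothetical stand-ins meant to show the shape of a fuller definition, not the SDK's confirmed API:

// Hypothetical builder methods; check the assistant API reference
// for the real names and types.
let definition = AssistantDefinitionBuilder::new()
    .id("support-assistant")
    .display_name("Support Assistant")
    .system_prompt_template("You help users with {{domain}} questions.")
    .variable("domain", "account management")
    // Map the logical tool name "lookup_account" to a concrete skill ID.
    .tool_binding("lookup_account", "account-lookup-skill")
    // Block any response that would leak configured secrets.
    .guardrail("no_secret_leakage", GuardrailAction::Block)
    .build()?;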

Assistants run in two modes: Interactive (conversational, with user confirmation steps) or Programmatic (execute to completion without interaction).

Models: Three Provider Patterns

If you're building a model component, you implement the LlmProvider trait in Rust. Three patterns determine where inference runs:

  • API Gateway — route through the Life Savor API (centralized billing, no user API keys needed)
  • Local — run on the user's hardware via PyTorch or ONNX Runtime
  • BYOK — user supplies their own vendor API key, calls the API directly

The agent handles model state management (hot/warm/cold), hardware detection, health monitoring, and routing — your component just implements inference.
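
As a rough sketch, a provider boils down to a single inference entry point. The types and method below are assumed shapes for illustration; the real LlmProvider trait in the Rust SDK defines its own signatures:

// Assumed shapes for illustration; the SDK's actual LlmProvider trait
// and request/response types will differ.
pub struct CompletionRequest {
    pub prompt: String,
    pub max_tokens: u32,
}

pub struct CompletionResponse {
    pub text: String,
}

pub trait LlmProvider {
    /// One inference call. The agent owns routing, health checks, and
    /// hot/warm/cold state; the provider only runs inference.
    fn complete(&self, request: &CompletionRequest) -> Result<CompletionResponse, String>;
}

// A toy provider. A real one would call a vendor API (BYOK or gateway)
// or run a local model, honoring request.max_tokens.
struct EchoProvider;

impl LlmProvider for EchoProvider {
    fn complete(&self, request: &CompletionRequest) -> Result<CompletionResponse, String> {
        Ok(CompletionResponse { text: format!("echo: {}", request.prompt) })
    }
}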

Voice and text-to-speech aren't model components — they're system components that provide an input/output channel to the agent. Think of voice as a way to get to a model, not a model itself.

The JavaScript SDK

For skills that need configuration, the JS SDK (skill-config-sdk.js) provides helpers for defining schemas and handling validation:

const { createConfigSchema, createSetupStep, handleValidation } = require('./skill-config-sdk');

const schema = createConfigSchema({
  properties: {
    api_key: {
      type: 'string',
      title: 'API Key',
      description: 'Your service API key',
      'x-secret': true
    }
  },
  required: ['api_key']
});

Fields marked x-secret are stored in the agent's encrypted vault and masked in the UI. The platform handles all the crypto — you just declare which fields are sensitive.

The Marketplace

Once your component passes QA review, it's published to the marketplace. Users can discover and install it from the web, mobile app, or CLI:

lsai-cli components install my-skill

Components are code-signed, checksum-verified, and sandboxed at runtime. The platform handles distribution, updates, and compatibility checking.

Getting Started

  1. Install the CLI: curl -fsSL https://download.lifesavor.ai/lsai-cli/latest/homebrew/install.sh | bash
  2. Create a developer account at developer.lifesavor.ai
  3. Run lsai-cli setup with your API key
  4. Create your first component: lsai-cli components create --name my-skill --type skill --language node

The developer documentation has full API references, examples, and guides for each component type.

We're excited to see what you build.