How the Marketplace Review Process Works

Every component goes through QA review before it's published. Here's what we check, common rejection reasons, and how to pass on the first try.

The Review Flow

Submit → Pending QA → In Review → Approved → Ready to Publish
                         ↓
                      Rejected → Back to Draft (fix and resubmit)

After you submit, your component enters the QA queue. A reviewer evaluates it against our checklist. If it passes, you can publish. If not, you get specific feedback on what to fix.

What We Check

Metadata Completeness

  • Description is clear and accurate (not just "a skill")
  • Category matches what the component actually does
  • Tags are relevant and helpful for discovery
  • License is specified
  • Compatibility platforms are listed

Manifest Validity

  • All required fields present and correctly typed
  • Version follows semver
  • Entrypoint command exists and is valid
  • Config schema (if present) is well-formed
  • Setup steps reference valid fields
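A manifest that satisfies these checks might look like the following. This is a sketch for illustration only — the exact field names (name, version, entrypoint, config_schema, and the rest) are assumptions based on the checklist above, not the definitive schema:

```json
{
  "name": "weather-lookup",
  "version": "1.2.0",
  "description": "Fetches current weather and 5-day forecasts from OpenWeatherMap for any city worldwide.",
  "category": "data-retrieval",
  "tags": ["weather", "forecast", "openweathermap"],
  "license": "MIT",
  "entrypoint": "python3 main.py",
  "config_schema": {
    "api_key": { "type": "string", "x-secret": true }
  }
}
```

Note that the version follows semver and the entrypoint is a concrete, runnable command — both are explicit checklist items.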

Security

  • No hardcoded credentials or API keys in source
  • Secrets use x-secret flag properly
  • No unnecessary filesystem access declarations
  • No suspicious network calls
  • Dependencies are from trusted sources

Functionality

  • The component actually does what the description says
  • Error handling is reasonable (doesn't crash on bad input)
  • Timeouts are appropriate for the execution tier
  • Output is well-structured JSON

Documentation

  • README explains what the component does
  • Setup instructions are clear
  • If a usage_guide is included, it gives genuinely useful guidance
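The error-handling pattern from the rejection reasons below looks like this in practice — a minimal Python sketch that wraps the skill's work in a try/except so bad input yields a structured error instead of a traceback (the run() name and input shape are assumptions for illustration):

```python
import json

def run(raw_input: str) -> str:
    """Skill entrypoint: always returns a JSON string, never raises."""
    try:
        payload = json.loads(raw_input)      # may raise on malformed input
        city = payload["city"]               # may raise KeyError
        # ... do the actual work here ...
        return json.dumps({"status": "ok", "data": {"city": city}})
    except (json.JSONDecodeError, KeyError) as exc:
        # Never let the exception escape: return the structured error instead.
        return json.dumps({"status": "failed", "error": f"invalid input: {exc}"})
```

With this shape, invalid input like `run("not json")` returns a `{"status": "failed", ...}` response rather than crashing the component.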

Common Rejection Reasons

"Description too vague" — "A useful skill" doesn't tell users anything. Be specific: "Fetches current weather and 5-day forecasts from OpenWeatherMap for any city worldwide."

"Missing error handling" — If your skill crashes on invalid input instead of returning a structured error response, it'll be rejected. Always return {"status": "failed", "error": "..."} instead of crashing.

"Hardcoded credentials" — Any API key, token, or secret in your source code is an automatic rejection. Use config_schema with x-secret.

"Execution tier mismatch" — If your skill makes external API calls but uses tier 1 (most restricted), it may timeout. Match your tier to your actual resource needs.

"No validation on required config" — If your skill requires an API key but doesn't validate it during setup, users will get cryptic errors later. Add a validation_command.

How to Pass on the First Try

  1. Run lsai-cli skill config validate — catches manifest issues before submission
  2. Test in the sandbox — if it works in the sandbox, it'll work in production
  3. Write a real description — 2-3 sentences explaining what it does and who it's for
  4. Handle errors gracefully — never crash, always return structured error responses
  5. Use x-secret for credentials — no exceptions
  6. Include a README — even a short one helps
  7. Test with bad input — what happens when the API key is wrong? When the network is down? When input is empty?
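Step 7 can be scripted before you submit. A sketch, assuming your skill exposes a run() that returns the structured JSON envelope described above (the function name and input shape are assumptions):

```python
import json

def run(raw_input: str) -> str:
    """Stand-in for your skill's entrypoint (assumed interface)."""
    try:
        payload = json.loads(raw_input)
        if not payload.get("city"):
            return json.dumps({"status": "failed", "error": "city is required"})
        return json.dumps({"status": "ok", "data": {"city": payload["city"]}})
    except json.JSONDecodeError as exc:
        return json.dumps({"status": "failed", "error": f"invalid input: {exc}"})

# Pre-submission smoke test: every bad input should yield a structured
# error response, never an unhandled exception.
for bad in ["", "not json", "{}", '{"city": ""}']:
    result = json.loads(run(bad))
    assert result["status"] == "failed", f"unguarded input: {bad!r}"
```

If any of these cases raises instead of returning `{"status": "failed", ...}`, fix that before submitting — it is one of the most common rejection reasons listed above.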

After Rejection

Rejections include specific feedback. Fix the issues, then resubmit:

lsai-cli components submit <component-id>

You don't need to create a new component — just fix and resubmit the same one. Your draft is preserved.

Timeline

Reviews typically complete within 1-2 business days. Complex components (models, system components) may take longer. You'll get a notification when the review is complete.