# Skill Guide
OmniLLM ships with a first-party agent skill in the repository's `skill/` directory. The skill teaches agents the crate's real boundaries:
- runtime generation through `Gateway`, `ProviderEndpoint`, and `EndpointProtocol`
- provider primitive runtime calls through `PrimitiveRequest`, `PrimitiveProviderEndpoint`, and `Gateway::primitive_*`
- protocol parsing, emission, and transcoding through `parse_*`, `emit_*`, and `transcode_*`
- typed multi-endpoint conversion through `ApiRequest`, `ApiResponse`, and `WireFormat`
- replay fixture sanitization through `ReplayFixture` and `sanitize_*`
If you only need the Rust crate, go back to the Usage Guide. This page is specifically about installing the OmniLLM Skill into coding agents.
## Install With Vercel Labs Skills
These instructions use the Vercel Labs skills installer.
The skill is declared as `omnillm`. When you install with `--skill omnillm`, the installer creates the correct target directory name automatically.
Agent runtimes only require:
- `SKILL.md`
- `references/`
- `assets/`
The installer may also add `README.md` next to the skill files and a project-level `skills-lock.json`.
The commands below install directly from GitHub, so you do not need to clone the repository first. They use `--copy` so the installed skill stays self-contained in the target agent directory.
### Claude Code
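A representative invocation is sketched below; the `add` subcommand, the per-agent `-a` flag, and the `your-org/omnillm` repository path are assumptions (only `--skill`, `--copy`, `-g`, and `ls -a` appear in the installer usage shown on this page), so confirm the exact syntax with `npx skills --help`.

```bash
# Sketch: install the omnillm skill for Claude Code directly from GitHub.
# "your-org/omnillm" is a placeholder; `add` and `-a` are assumed syntax.
npx skills add https://github.com/your-org/omnillm --skill omnillm --copy -a claude-code
```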
Add `-g` for a user-level install.
### Codex
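The same sketched invocation, targeting Codex (the placeholder repository path and assumed `add`/`-a` syntax from the Claude Code example apply here too):

```bash
# Placeholder repository path; assumed `add` subcommand and `-a` flag.
npx skills add https://github.com/your-org/omnillm --skill omnillm --copy -a codex
```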
Add `-g` for a user-level install.
### OpenCode
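And again for OpenCode, under the same placeholder and syntax caveats:

```bash
# Placeholder repository path; assumed `add` subcommand and `-a` flag.
npx skills add https://github.com/your-org/omnillm --skill omnillm --copy -a opencode
```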
Add `-g` for a user-level install.
## Verify Installation
Use the installer to confirm that the skill is present for the agent you care about:
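For example, to list the skills installed for Codex (this is the `npx skills ls -a <agent>` form referenced at the end of this page):

```bash
# List installed skills for the Codex agent runtime.
npx skills ls -a codex
```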
Replace `codex` with `claude-code` or `opencode` as needed.
Then start a new session in your chosen agent and ask for something OmniLLM-specific, for example:
- scaffold a `GatewayBuilder` flow with `ProviderEndpoint` and `KeyConfig`
- configure an `EndpointProtocol::*_compat` runtime endpoint for an OpenAI-compatible wrapper that requires `messages[].content[]`
- debug an OpenAI Chat compat stream where `delta.role` and the first `delta.content` arrive in the same SSE frame
- pass through wrapper-specific OpenAI top-level fields such as `enable_thinking` with `LlmRequest.vendor_extensions`
- explain when canonical `Gateway` APIs, provider primitive APIs, or `transcode_*` are correct
- route a provider-native `PrimitiveRequest` through `primitive_call`, `primitive_stream`, or `primitive_realtime`
- debug `NoAvailableKey`, `BudgetExceeded`, or `Protocol(...)`
- emit an `ApiRequest` into a provider wire format
If the skill does not appear immediately, restart the session and rerun `npx skills ls -a <agent>`.