Paste an OpenAI Chat Completions JSON response and get back a Valibot v.object schema covering choices, usage, message, tool_calls, and the rest. Validate LLM responses at runtime without paying Zod's bundle tax, which makes it a natural fit for OpenAI proxies running on edge runtimes.
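For orientation, here is a minimal sketch of the kind of schema this produces for a non-streaming response. Field names follow OpenAI's documented Chat Completions shape, but the tool derives the exact schema (including which fields are optional) from the JSON you paste, so your output will differ; the sketch also reaches for v.literal, v.nullable, and v.unknown where they add fidelity.

```ts
import * as v from "valibot";

// Sketch of a generated schema for a non-streaming Chat Completions
// response. Optionality here is illustrative; the tool infers it from
// the pasted JSON.
const ToolCallSchema = v.object({
  id: v.string(),
  type: v.literal("function"),
  function: v.object({
    name: v.string(),
    arguments: v.string(), // arrives as a JSON-encoded string, not an object
  }),
});

const MessageSchema = v.object({
  role: v.string(),
  content: v.nullable(v.string()), // null when the model only calls tools
  refusal: v.optional(v.nullable(v.string())), // not sent by every provider
  tool_calls: v.optional(v.array(ToolCallSchema)),
});

const ChatCompletionSchema = v.object({
  id: v.string(),
  object: v.literal("chat.completion"),
  created: v.number(),
  model: v.string(),
  choices: v.array(
    v.object({
      index: v.number(),
      message: MessageSchema,
      logprobs: v.optional(v.nullable(v.unknown())),
      finish_reason: v.string(),
    }),
  ),
  usage: v.optional(
    v.object({
      prompt_tokens: v.number(),
      completion_tokens: v.number(),
      total_tokens: v.number(),
    }),
  ),
});
```

Note that v.object ignores unknown keys by default, so provider-specific extras pass through untouched; swap in v.strictObject at any level where you want additions to fail validation as well.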
LLM responses are some of the least trustworthy JSON your backend handles: schemas drift across model versions, refusal fields appear only sometimes, tool_calls and logprobs arrive populated or null, and OpenAI-compatible providers (Groq, Together, OpenRouter, Ollama) bolt on fields of their own. A Valibot schema turns the response into a parse-or-fail boundary: when a provider reshapes or drops a field you rely on, you get a clear validation error at the edge of your code instead of a runtime crash twelve function calls deep.
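As a sketch of that boundary, assuming the ChatCompletionSchema from above: v.safeParse returns the list of issues instead of throwing, so the proxy can report exactly which path drifted before rejecting the response.

```ts
import * as v from "valibot";

// Hypothetical boundary function: everything downstream receives a
// validated, typed completion or nothing at all.
function parseCompletion(raw: unknown): v.InferOutput<typeof ChatCompletionSchema> {
  const result = v.safeParse(ChatCompletionSchema, raw);
  if (!result.success) {
    // v.flatten groups issues by dot path, e.g. "choices.0.message.content"
    const drift = v.flatten(result.issues);
    throw new Error(`Completion failed validation: ${JSON.stringify(drift.nested)}`);
  }
  return result.output;
}
```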
Pick Valibot over Zod here when you're deploying an LLM proxy on Cloudflare Workers, Vercel Edge, or Next.js middleware: per-primitive tree-shaking trims several KB from your bundle, and with it your cold-start time, which matters when the endpoint fans out to a model provider on every request. The generated schema uses only standard Valibot primitives (v.string, v.number, v.array, v.union, v.optional). Drop it into your LLM client wrapper, run v.parse(schema, response), and the rest of your code consumes a strongly typed Choice.
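Under those assumptions, a wrapper might look like the following Cloudflare Workers-style handler; ChatCompletionSchema is the sketch from above, and the OPENAI_API_KEY binding is illustrative.

```ts
import * as v from "valibot";

type ChatCompletion = v.InferOutput<typeof ChatCompletionSchema>;
type Choice = ChatCompletion["choices"][number];

export default {
  async fetch(request: Request, env: { OPENAI_API_KEY: string }): Promise<Response> {
    const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "content-type": "application/json",
        authorization: `Bearer ${env.OPENAI_API_KEY}`,
      },
      body: await request.text(), // forward the caller's request body as-is
    });

    // Parse-or-fail: past this line the response is a typed ChatCompletion.
    const completion: ChatCompletion = v.parse(ChatCompletionSchema, await upstream.json());
    const choice: Choice = completion.choices[0];

    return Response.json({
      content: choice.message.content,
      finish_reason: choice.finish_reason,
    });
  },
};
```

v.parse throws a ValiError carrying the full issue list, so a single try/catch around the handler body is enough to turn schema drift into a 502 instead of an unhandled crash.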