Other libraries
We'll briefly mention a few other libraries worth checking out; we may expand each of them into its own section in the future.
Guardrails
Guardrails is an open-source library originally created to enforce both structural and content constraints on LLM outputs. It's often mentioned in the context of making LLM output safe (e.g., no profanity, no PII) or verifying correctness, but a big part of it is specifying output-format requirements.
Guardrails is built for schemas like this:
- The output must be valid JSON with fields X, Y, Z.
- Field Y (which is a summary) should not contain any profane words or be longer than 100 characters.
- Field Z (which is a date) must be a valid date in the format YYYY-MM-DD.
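As a rough sketch, the three constraints above can be captured in a Pydantic model. (Guardrails can wrap models like this with its own validators; the field names and word list below are placeholders, and plain Pydantic is shown here purely for illustration.)

```python
from datetime import date
from pydantic import BaseModel, field_validator

PROFANE_WORDS = {"darn"}  # placeholder word list for illustration


class Report(BaseModel):
    x: str
    y: str   # summary: must be clean and at most 100 characters
    z: date  # Pydantic parses ISO-format YYYY-MM-DD strings into dates

    @field_validator("y")
    @classmethod
    def check_summary(cls, v: str) -> str:
        if len(v) > 100:
            raise ValueError("summary longer than 100 characters")
        if any(word in v.lower() for word in PROFANE_WORDS):
            raise ValueError("summary contains a profane word")
        return v


# Valid JSON passes; a bad date or an over-long summary raises a ValidationError.
report = Report.model_validate_json(
    '{"x": "id-1", "y": "All good.", "z": "2024-05-01"}'
)
```

In the Guardrails workflow, a validation failure isn't just an exception: the library can feed the error back to the model and ask it to re-generate a corrected output.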
TypeChat
Microsoft’s TypeChat is a set of libraries for TypeScript, Python, and C# that elicit structured outputs by coupling the model with native type definitions (TypeScript interfaces, dataclasses, and so on).
You define an interface (say, in TypeScript) for the output, and TypeChat uses OpenAI function calling or few-shot examples to get the model to comply, then validates the response against your type definitions. It’s similar in spirit to Instructor and Pydantic AI, but spans multiple languages and leans heavily on the model’s ability to interpret types.
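The underlying pattern can be sketched in a few lines. This is not TypeChat's actual API, just an illustration of the loop it automates: describe the output as a type, embed that description in the prompt, and validate the model's JSON reply against the type (the sentiment schema here is a hypothetical example):

```python
# Sketch of the TypeChat pattern: type definition -> prompt -> JSON validation.
import json
from dataclasses import dataclass, fields

@dataclass
class SentimentResponse:
    sentiment: str  # "negative" | "neutral" | "positive"

SCHEMA = """
interface SentimentResponse {
    sentiment: "negative" | "neutral" | "positive";
}
"""

def build_prompt(user_input: str) -> str:
    # The type definition itself is embedded in the prompt as the contract.
    return (
        "Translate the user request into JSON matching this TypeScript type:\n"
        f"{SCHEMA}\nUser: {user_input}\nJSON:"
    )

def validate(raw: str) -> SentimentResponse:
    data = json.loads(raw)
    allowed = {f.name for f in fields(SentimentResponse)}
    if set(data) != allowed:
        raise ValueError(f"unexpected fields: {set(data) ^ allowed}")
    if data["sentiment"] not in {"negative", "neutral", "positive"}:
        raise ValueError("sentiment out of range")
    return SentimentResponse(**data)

# A well-formed model reply validates into a typed object; on failure,
# TypeChat would send the validation error back to the model for repair.
resp = validate('{"sentiment": "positive"}')
```

The design point is that the type definition does double duty: it is both the instruction shown to the model and the validator applied to its reply.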
AICI
AICI (Artificial Intelligence Controller Interface) is an experimental project from Microsoft in which prompt "controllers" are WebAssembly programs that run alongside generation and cooperatively enforce constraints on the output. It never became mainstream (it's complex to set up), but it shows the lengths being explored for structured prompting.
Marvin
Marvin offers structured outputs as part of a broader framework for building agentic AI workflows.