Name | llm-schema-lite |
Version | 0.4.0 |
home_page | None |
Summary | LLM-ify your JSON schemas |
upload_time | 2025-10-19 08:04:26 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.10 |
license | MIT License
Copyright (c) 2025 Rohit Garud
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. |
keywords | dspy, json-schema, llm, openai, pydantic, schema, token-optimization |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# llm-schema-lite
[PyPI](https://pypi.org/project/llm-schema-lite/)
[CI](https://github.com/rohitgarud/llm-schema-lite/actions)
[Coverage](https://codecov.io/gh/rohitgarud/llm-schema-lite)
[License: MIT](https://opensource.org/licenses/MIT)
[Ruff](https://github.com/astral-sh/ruff)
Transform verbose Pydantic JSON schemas into LLM-friendly formats. Reduce token usage by **60-85%** while preserving essential type information. Includes robust JSON/YAML parsing with automatic error recovery.
## 🚀 Quick Start
### Basic Usage
```python
from pydantic import BaseModel
from llm_schema_lite import simplify_schema, loads

# Define your Pydantic model
class User(BaseModel):
    name: str
    age: int
    email: str

# Transform to LLM-friendly format
schema = simplify_schema(User)
print(schema.to_string())
# Output: { name: string, age: int, email: string }

# Parse JSON/YAML with robust error handling
json_data = loads('{"name": "John", "age": 30}', mode="json")
yaml_data = loads('name: Jane\nage: 25', mode="yaml")
```
### Multiple Output Formats
```python
# JSONish format (BAML-like) - Default
schema = simplify_schema(User)
print(schema.to_string())
# { name: string, age: int, email: string }
# TypeScript format
schema_ts = simplify_schema(User, format_type="typescript")
print(schema_ts.to_string())
# interface User { name: string; age: int; email: string; }
# YAML format
schema_yaml = simplify_schema(User, format_type="yaml")
print(schema_yaml.to_string())
# name: string
# age: int
# email: string
```
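The JSONish style shown above is simple enough to reason about by hand. As a rough illustration only (this is not the library's implementation, which also handles nesting, lists, optionals, and metadata), a toy renderer for a flat field-to-type mapping might look like:

```python
# Illustrative sketch: render a flat {field: type} mapping in the JSONish
# style shown above. The real simplify_schema does far more than this.
def to_jsonish(fields: dict) -> str:
    body = ", ".join(f"{name}: {type_}" for name, type_ in fields.items())
    return "{ " + body + " }"

print(to_jsonish({"name": "string", "age": "int", "email": "string"}))
# { name: string, age: int, email: string }
```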
### Advanced Features
```python
from pydantic import BaseModel, Field

class Product(BaseModel):
    name: str = Field(..., description="Product name", min_length=1)
    price: float = Field(..., ge=0, description="Price must be positive")
    tags: list[str] = Field(default_factory=list)

# Include metadata (descriptions, constraints)
schema_with_meta = simplify_schema(Product, include_metadata=True)
print(schema_with_meta.to_string())
# {
#   name: string //Product name, minLength: 1,
#   price: float //Price must be positive, min: 0,
#   tags: string[]
# }

# Exclude metadata for minimal output
schema_minimal = simplify_schema(Product, include_metadata=False)
print(schema_minimal.to_string())
# {
#   name: string,
#   price: float,
#   tags: string[]
# }
```
### Nested Models
```python
class Address(BaseModel):
    street: str
    city: str
    zipcode: str

class Customer(BaseModel):
    name: str
    email: str
    address: Address

schema = simplify_schema(Customer)
print(schema.to_string())
# { name: string, email: string, address: { street: string, city: string, zipcode: string } }
```
### Different Output Methods
```python
schema = simplify_schema(User)
# String output
print(schema.to_string())
# JSON output
print(schema.to_json(indent=2))
# Dictionary output
print(schema.to_dict())
# YAML output (if format_type="yaml")
print(schema.to_yaml())
```
## 📊 Token Reduction
Compare the token usage:
```python
import json
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int
    email: str

# Original Pydantic schema (verbose)
original_schema = User.model_json_schema()
print("Original size (chars):", len(json.dumps(original_schema)))

# Simplified schema (LLM-friendly)
simplified = simplify_schema(User)
print("Simplified size (chars):", len(simplified.to_string()))

# Character counts are only a rough proxy for tokens,
# but the typical reduction is 60-85% fewer tokens!
```
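To see where the savings come from without installing anything, compare the serialized size of a typical Pydantic-style JSON Schema against the JSONish form using only the standard library. The schema dict below is hand-written to mirror what `User.model_json_schema()` typically emits; the exact keys may differ between Pydantic versions.

```python
import json

# Hand-written stand-in for a Pydantic-generated JSON Schema (illustrative).
verbose = {
    "properties": {
        "name": {"title": "Name", "type": "string"},
        "age": {"title": "Age", "type": "integer"},
        "email": {"title": "Email", "type": "string"},
    },
    "required": ["name", "age", "email"],
    "title": "User",
    "type": "object",
}
simplified = "{ name: string, age: int, email: string }"

print("verbose chars:", len(json.dumps(verbose)))
print("simplified chars:", len(simplified))
# The simplified form is a small fraction of the verbose schema; actual token
# savings depend on the tokenizer your model uses.
```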
## 🎯 Use Cases
- **LLM Function Calling**: Reduce schema tokens in function definitions
- **DSPy Integration**: Native adapter for structured outputs with multiple modes
- **LangChain**: Streamline Pydantic model schemas
- **Raw LLM APIs**: Minimize prompt overhead with concise schemas
- **Robust Parsing**: Parse malformed JSON/YAML from LLM responses with automatic repair
- **Data Extraction**: Extract structured data from mixed text content and markdown
## 🔌 DSPy Integration
**NEW!** Native DSPy adapter with support for JSON, JSONish, and YAML output modes:
```python
import dspy
from pydantic import BaseModel
from llm_schema_lite.dspy_integration import StructuredOutputAdapter, OutputMode

class Answer(BaseModel):
    answer: str
    confidence: float

# Create adapter with JSONish mode (60-85% fewer tokens)
adapter = StructuredOutputAdapter(output_mode=OutputMode.JSONISH)

# Configure DSPy
lm = dspy.LM(model="openai/gpt-4")
dspy.configure(lm=lm, adapter=adapter)

# Use with any DSPy module
class QA(dspy.Signature):
    question: str = dspy.InputField()
    answer: Answer = dspy.OutputField()

predictor = dspy.Predict(QA)
result = predictor(question="What is Python?")
```
**Features:**
- 🎯 **Multiple Output Modes**: JSON, JSONish (BAML-style), and YAML
- 📉 **60-85% Token Reduction**: With JSONish mode
- 🔄 **Input Schema Simplification**: Automatically simplifies Pydantic input fields
- 🛡️ **Robust Parsing**: Handles malformed outputs with automatic recovery
- ✅ **Full Compatibility**: Works with Predict, ChainOfThought, and all DSPy modules
See [DSPy Integration Guide](src/llm_schema_lite/dspy_integration/README.md) for detailed documentation.
## 🔧 Robust Parsing with `loads`
**NEW!** The `loads` function provides unified, robust parsing for JSON and YAML content with automatic error recovery and markdown extraction.
### Basic Usage
```python
from llm_schema_lite import loads
# Parse JSON
data = loads('{"name": "John", "age": 30}', mode="json")
print(data) # {'name': 'John', 'age': 30}
# Parse YAML
data = loads('name: Jane\nage: 25', mode="yaml")
print(data) # {'name': 'Jane', 'age': 25}
```
### Markdown Extraction
Automatically extracts content from markdown code blocks:
```python
# JSON from markdown
markdown_json = '''```json
{"name": "Alice", "age": 28}
```'''
data = loads(markdown_json, mode="json")
print(data) # {'name': 'Alice', 'age': 28}
# YAML from markdown
markdown_yaml = '''```yaml
name: Bob
age: 32
```'''
data = loads(markdown_yaml, mode="yaml")
print(data) # {'name': 'Bob', 'age': 32}
```
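The library's own extractor isn't shown here, but the core idea of pulling the body out of a fenced code block can be sketched with a single regular expression over the standard library:

```python
import re

# Sketch of markdown extraction (not the library's implementation): capture
# the body of the first fenced code block, with or without a language tag.
def extract_fenced(text: str):
    match = re.search(r"```[a-zA-Z]*\n(.*?)```", text, re.DOTALL)
    return match.group(1).strip() if match else None

# Build the sample without literal backtick fences inside this block.
fence = "`" * 3
md = f'{fence}json\n{{"name": "Alice", "age": 28}}\n{fence}'
print(extract_fenced(md))  # {"name": "Alice", "age": 28}
```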
### JSON Object Extraction
Extracts JSON objects from embedded text when markdown extraction is disabled:
```python
# Extract JSON from mixed content
text = 'Here is the result: {"name": "Charlie", "age": 35} and some other text'
data = loads(text, mode="json", extract_from_markdown=False)
print(data) # {'name': 'Charlie', 'age': 35}
# Multiple JSON objects - extracts the first one
multiple = 'First: {"a": 1} Second: {"b": 2}'
data = loads(multiple, mode="json", extract_from_markdown=False)
print(data) # {'a': 1}
```
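The library may use a different strategy internally, but "first JSON object in mixed text" can be implemented in the standard library by scanning for each `{` and attempting `json.JSONDecoder.raw_decode`, which parses one value and ignores any trailing text:

```python
import json

# Sketch of first-object extraction from mixed text (illustrative only).
def first_json_object(text: str):
    decoder = json.JSONDecoder()
    for i, ch in enumerate(text):
        if ch == "{":
            try:
                obj, _end = decoder.raw_decode(text, i)
                return obj
            except json.JSONDecodeError:
                continue  # not a valid object starting here; keep scanning
    return None

print(first_json_object('First: {"a": 1} Second: {"b": 2}'))  # {'a': 1}
```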
### Error Recovery and Repair
Handles malformed JSON with automatic repair:
```python
from llm_schema_lite import ConversionError

# Malformed JSON with trailing comma
malformed = '{"name": "David", "age": 40,}'
data = loads(malformed, mode="json")
print(data)  # {'name': 'David', 'age': 40}

# Missing quotes
missing_quotes = '{name: "Eve", age: 22}'
data = loads(missing_quotes, mode="json")
print(data)  # {'name': 'Eve', 'age': 22}

# Disable repair to get errors
try:
    loads(malformed, mode="json", repair=False)
except ConversionError as e:
    print(f"Parse error: {e}")
```
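Under the hood the library delegates repair to `json_repair`, which covers far more cases than can be shown here. As a toy illustration of the idea, the two failures above (trailing commas and unquoted keys) can be patched with two regular expressions:

```python
import json
import re

# Toy repair sketch (NOT the library's implementation): fix trailing commas
# and quote bare object keys, then hand the result to the strict parser.
def naive_repair(text: str) -> str:
    text = re.sub(r",\s*([}\]])", r"\1", text)                 # drop trailing commas
    text = re.sub(r"([{,]\s*)(\w+)(\s*:)", r'\1"\2"\3', text)  # quote bare keys
    return text

print(json.loads(naive_repair('{"name": "David", "age": 40,}')))  # {'name': 'David', 'age': 40}
print(json.loads(naive_repair('{name: "Eve", age: 22}')))         # {'name': 'Eve', 'age': 22}
```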
### YAML Fallback
YAML parsing automatically falls back to JSON when YAML parsing fails:
```python
# YAML that looks like JSON
yaml_like_json = '{"name": "Frank", "age": 45}'
data = loads(yaml_like_json, mode="yaml")
print(data) # {'name': 'Frank', 'age': 45}
```
### Advanced Features
```python
# Complex nested structures
complex_json = '''```json
{
  "user": {
    "name": "Grace",
    "details": {
      "age": 30,
      "city": "NYC"
    }
  }
}
```'''
data = loads(complex_json, mode="json")
print(data['user']['details']['city'])  # NYC

# Arrays and special values
array_json = '{"items": ["apple", "banana"], "active": true, "data": null}'
data = loads(array_json, mode="json")
print(data)  # {'items': ['apple', 'banana'], 'active': True, 'data': None}

# YAML with comments
yaml_with_comments = '''# User information
name: Henry  # Full name
age: 35
# Contact details
email: henry@example.com'''
data = loads(yaml_with_comments, mode="yaml")
print(data)  # {'name': 'Henry', 'age': 35, 'email': 'henry@example.com'}
```
### Error Handling
```python
from llm_schema_lite import loads, ConversionError
try:
    # This will raise ConversionError
    data = loads('This is not JSON at all', mode="json", repair=False)
except ConversionError as e:
    print(f"Failed to parse: {e}")

# With repair enabled (default), it will attempt to fix the content
try:
    data = loads('This is not JSON at all', mode="json")
    # This might still fail, but will try json_repair first
except ConversionError as e:
    print(f"Even repair failed: {e}")
```
### Use Cases
- **LLM Output Parsing**: Robustly parse JSON/YAML from LLM responses
- **API Response Handling**: Handle malformed or embedded JSON/YAML
- **Data Extraction**: Extract structured data from mixed text content
- **Error Recovery**: Automatically repair common JSON/YAML issues
- **Markdown Processing**: Extract code blocks from documentation or responses
## Installation
### Basic Installation
```bash
pip install llm-schema-lite
```
### With DSPy Support
```bash
pip install "llm-schema-lite[dspy]"
```
### Using uv
```bash
# Basic
uv pip install llm-schema-lite
# With DSPy
uv pip install "llm-schema-lite[dspy]"
```
## Development
This project uses `uv` for package management and includes pre-commit hooks for code quality.
### Setup Development Environment
1. Install uv if you haven't already:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
2. Quick setup with Make:
```bash
make setup
```
Or manually:
```bash
# Create virtual environment
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install package with dev dependencies
uv pip install -e ".[dev]"
# Install pre-commit hooks
uv pip install pre-commit
pre-commit install
pre-commit install --hook-type commit-msg
```
### Available Make Commands
Run `make help` to see all available commands:
- `make install` - Install package
- `make install-dev` - Install with dev dependencies
- `make test` - Run tests
- `make test-cov` - Run tests with coverage
- `make test-parallel` - Run tests in parallel (faster)
- `make test-fast` - Run tests excluding slow ones
- `make lint` - Run all linters
- `make format` - Format code
- `make build` - Build package
- `make changelog` - Generate changelog
- `make clean` - Clean build artifacts
### Running Tests
```bash
make test
# or
pytest
```
### Code Quality
The project uses several tools to maintain code quality:
- **Ruff**: Fast Python linter and formatter (replaces flake8, isort, and more)
- **MyPy**: Static type checker for type safety
- **Bandit**: Security vulnerability scanner
- **Pre-commit**: Git hooks for automated checks
- **Pytest**: Testing framework with coverage reporting
```bash
# Format code
make format
# Run linters
make lint
# Run pre-commit on all files
make pre-commit-run
# Run tests in parallel (faster for large test suites)
make test-parallel
```
### Changelog Management
This project uses [git-changelog](https://github.com/pawamoy/git-changelog) with conventional commits:
```bash
# Generate changelog
make changelog
```
Commit message format:
- `feat:` - New features
- `fix:` - Bug fixes
- `docs:` - Documentation changes
- `refactor:` - Code refactoring
- `test:` - Test changes
- `chore:` - Maintenance tasks
- `perf:` - Performance improvements
## License
See the [LICENSE](LICENSE) file for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Raw data
{
"_id": null,
"home_page": null,
"name": "llm-schema-lite",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": "Rohit Garud <rohit.garuda1992@gmail.com>",
"keywords": "dspy, json-schema, llm, openai, pydantic, schema, token-optimization",
"author": null,
"author_email": "Rohit Garud <rohit.garuda1992@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/e0/e3/7903046e346e845373503333280276a9b94132de9f4eeba602d0cbc162be/llm_schema_lite-0.4.0.tar.gz",
"platform": null,
"bugtrack_url": null,
"summary": "LLM-ify your JSON schemas",
"version": "0.4.0",
"project_urls": {
"Changelog": "https://github.com/rohitgarud/llm-schema-lite/blob/main/CHANGELOG.md",
"Documentation": "https://github.com/rohitgarud/llm-schema-lite#readme",
"Homepage": "https://github.com/rohitgarud/llm-schema-lite",
"Issues": "https://github.com/rohitgarud/llm-schema-lite/issues",
"Repository": "https://github.com/rohitgarud/llm-schema-lite"
},
"split_keywords": [
"dspy",
" json-schema",
" llm",
" openai",
" pydantic",
" schema",
" token-optimization"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "177be49256be3f147058721999ce511be6cfbd0f6c62be0b1ff243996dee7354",
"md5": "4b750242a7035c6198431081bfe476d7",
"sha256": "1f04a212bb9cf95e005dd6a684a11742f747fc4cc4cb94292eefb7c5e59b0450"
},
"downloads": -1,
"filename": "llm_schema_lite-0.4.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "4b750242a7035c6198431081bfe476d7",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 32105,
"upload_time": "2025-10-19T08:04:24",
"upload_time_iso_8601": "2025-10-19T08:04:24.890309Z",
"url": "https://files.pythonhosted.org/packages/17/7b/e49256be3f147058721999ce511be6cfbd0f6c62be0b1ff243996dee7354/llm_schema_lite-0.4.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "e0e37903046e346e845373503333280276a9b94132de9f4eeba602d0cbc162be",
"md5": "e5834f9af1faa988660704700712bb13",
"sha256": "f9637737d8dd81e8c847aa7257b39c9656a32221ceac2b041590d744781e2253"
},
"downloads": -1,
"filename": "llm_schema_lite-0.4.0.tar.gz",
"has_sig": false,
"md5_digest": "e5834f9af1faa988660704700712bb13",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 257432,
"upload_time": "2025-10-19T08:04:26",
"upload_time_iso_8601": "2025-10-19T08:04:26.472605Z",
"url": "https://files.pythonhosted.org/packages/e0/e3/7903046e346e845373503333280276a9b94132de9f4eeba602d0cbc162be/llm_schema_lite-0.4.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-19 08:04:26",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "rohitgarud",
"github_project": "llm-schema-lite",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "llm-schema-lite"
}