# 🚀 datason
**A comprehensive Python package for intelligent serialization that handles complex data types with ease**
🎯 **Perfect Drop-in Replacement for Python's JSON Module** with enhanced features for complex data types and ML workflows. Zero migration effort - your existing JSON code works immediately with smart datetime parsing, type preservation, and advanced serialization capabilities.
> **🔄 Works exactly like `json` module**: Use `import datason.json as json` for perfect compatibility, or `import datason` for enhanced features like automatic datetime parsing and ML type support.
## ✨ Features
### 🎯 **Drop-in JSON Replacement**
- 🔄 **Perfect Compatibility**: Works exactly like Python's `json` module - zero code changes needed
- 🚀 **Enhanced by Default**: Main API provides smart datetime parsing and type detection automatically
- ⚡ **Dual API Strategy**: Choose stdlib compatibility (`datason.json`) or enhanced features (`datason`)
- 🛠️ **Zero Migration**: Existing `json.loads/dumps` code works immediately with optional enhancements
### 🧠 **Intelligent Processing**
- 🧠 **Smart Type Detection**: Automatically handles pandas DataFrames, NumPy arrays, datetime objects, and more
- 🔄 **Bidirectional**: Serialize to JSON and deserialize back to original objects with perfect fidelity
- 🕒 **Datetime Intelligence**: Automatic ISO 8601 string parsing across Python 3.8-3.11+
- 🛡️ **Type Safety**: Preserves data types and structure integrity with **guaranteed round-trip** serialization
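For comparison, this is the manual step that the datetime intelligence replaces. A stdlib-only sketch: after `json.loads`, an ISO 8601 timestamp is still a plain string and has to be parsed by hand for every field you care about:

```python
import json
from datetime import datetime, timezone

# stdlib json leaves ISO 8601 timestamps as strings...
payload = json.loads('{"timestamp": "2024-01-01T00:00:00+00:00", "value": 42}')
assert isinstance(payload["timestamp"], str)

# ...so each one must be converted manually after loading
payload["timestamp"] = datetime.fromisoformat(payload["timestamp"])
assert payload["timestamp"] == datetime(2024, 1, 1, tzinfo=timezone.utc)
```

(Note that `datetime.fromisoformat` before Python 3.11 does not accept the `Z` suffix, which is one of the edge cases automatic parsing smooths over.)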
### 🚀 **ML/AI Optimized**
- 🚀 **ML Framework Support**: Production-ready support for 10+ ML frameworks with unified architecture
- ⚡ **High Performance**: Sub-millisecond serialization optimized for ML workloads
- 🎯 **Simple & Direct API**: Intention-revealing functions (`dump_api`, `dump_ml`, `dump_secure`, `dump_fast`) with automatic optimization
- 📈 **Progressive Loading**: Choose your success rate - `load_basic` (60-70%), `load_smart` (80-90%), `load_perfect` (100%)
- 🏗️ **Production Ready**: Enterprise-grade ML serving with monitoring, A/B testing, and security
### 🔧 **Developer Experience**
- 🔌 **Extensible**: Easy to add custom serializers for your own types
- 📦 **Zero Dependencies**: Core functionality works without additional packages
- 📝 **Integrity Verification**: Hash, sign, and verify objects for compliance workflows
- 📂 **File Operations**: Save and load JSON/JSONL files with compression support
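The integrity-verification idea can be illustrated with a stdlib-only sketch (datason's own hashing API may differ; this just shows the underlying pattern of hashing a canonical JSON form of an object):

```python
import hashlib
import json

def object_hash(obj) -> str:
    """Hash a JSON-serializable object via its canonical JSON form."""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {"b": 2, "a": 1}
digest = object_hash(record)

# Key order does not matter once the form is canonicalized:
assert digest == object_hash({"a": 1, "b": 2})
# Any change to the data changes the digest:
assert digest != object_hash({"a": 1, "b": 3})
```

Canonicalization (sorted keys, fixed separators) is what makes the digest stable across processes, which is the property compliance workflows rely on.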
## 🤖 ML Framework Support
datason provides **production-ready integration** for major ML frameworks with consistent serialization:
### **Core ML Libraries**
- 🐼 **Pandas** - DataFrames with schema preservation
- 🔢 **NumPy** - Arrays with dtype and shape preservation
- 🔥 **PyTorch** - Tensors with exact dtype/shape reconstruction
- 🧠 **TensorFlow/Keras** - Models with architecture and weights
- 🌲 **Scikit-learn** - Fitted models with parameters
### **Advanced ML Frameworks**
- 🚀 **CatBoost** - Models with fitted state and parameter extraction
- 📊 **Optuna** - Studies with trial history and hyperparameter tracking
- 📈 **Plotly** - Interactive figures with data, layout, and configuration
- ⚡ **Polars** - High-performance DataFrames with schema preservation
- 🎯 **XGBoost** - Gradient boosting models (via scikit-learn interface)
### **ML Serving Platforms**
- 🍱 **BentoML** - Production services with A/B testing and monitoring
- ☀️ **Ray Serve** - Scalable deployment with autoscaling
- 🔬 **MLflow** - Model registry integration with experiment tracking
- 🎨 **Streamlit** - Interactive dashboards with real-time data
- 🎭 **Gradio** - ML demos with consistent data handling
- ⚡ **FastAPI** - Custom APIs with validation and rate limiting
- ☸️ **Seldon Core/KServe** - Kubernetes-native model serving
> **Universal Pattern**: All frameworks use the same `get_api_config()` for consistent UUID and datetime handling across your entire ML pipeline.
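The UUID/datetime consistency that a shared config enforces boils down to one rule: render both as strings on the way out, everywhere. A stdlib-only sketch of that rule (not datason's actual `get_api_config()` internals):

```python
import json
import uuid
from datetime import datetime, timezone

def json_safe(obj):
    """Render UUIDs and datetimes as strings so every framework sees the same JSON."""
    if isinstance(obj, uuid.UUID):
        return str(obj)
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"not JSON serializable: {type(obj)!r}")

payload = {
    "request_id": uuid.UUID("12345678-1234-5678-1234-567812345678"),
    "timestamp": datetime(2024, 1, 1, tzinfo=timezone.utc),
}
encoded = json.dumps(payload, default=json_safe)
assert json.loads(encoded)["request_id"] == "12345678-1234-5678-1234-567812345678"
```

Applying one such rule in every service is what keeps a pipeline's JSON stable regardless of which serving framework produced it.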
## 🐍 Python Version Support
datason officially supports **Python 3.8+** and is actively tested on:
- ✅ **Python 3.8** - Minimum supported version (core functionality)
- ✅ **Python 3.9** - Full compatibility
- ✅ **Python 3.10** - Full compatibility
- ✅ **Python 3.11** - Full compatibility (primary development version)
- ✅ **Python 3.12** - Full compatibility
- ✅ **Python 3.13** - Latest stable version (core features only; many ML libraries still releasing wheels)
### Compatibility Testing
We maintain compatibility through:
- **Automated CI testing** on all supported Python versions with strategic coverage:
- **Python 3.8**: Core functionality validation (minimal dependencies)
- **Python 3.9**: Data science focus (pandas integration)
- **Python 3.10**: ML focus (scikit-learn, scipy)
- **Python 3.11**: Full test suite (primary development version)
- **Python 3.12**: Full test suite
- **Python 3.13**: Core serialization tests only (latest stable)
- **Core functionality tests** ensuring basic serialization works on Python 3.8+
- **Dependency compatibility checks** for optional ML/data science libraries
- **Runtime version validation** with helpful error messages
> **Note**: While core functionality works on Python 3.8, some optional dependencies (like latest ML frameworks) may require newer Python versions. The package will still work - you'll just have fewer optional features available.
>
> **Python 3.13 Caution**: Many machine learning libraries have not yet released official 3.13 builds. Datason runs on Python 3.13, but only with core serialization features until those libraries catch up.
### Python 3.8 Limitations
Python 3.8 users should be aware:
- ✅ **Core serialization** - Full support
- ✅ **Basic types** - datetime, UUID, decimal, etc.
- ✅ **Pandas/NumPy** - Basic DataFrame and array serialization
- ⚠️ **Advanced ML libraries** - Some may require Python 3.9+
- ⚠️ **Latest features** - Some newer configuration options may have limited support
We recommend Python 3.9+ for the best experience with all features.
## 🔄 Drop-in JSON Replacement
**Replace Python's `json` module with zero code changes and get enhanced features automatically!**
### Perfect Compatibility Mode
```python
# Your existing code works unchanged
import datason.json as json
# Exact same API as Python's json module
data = json.loads('{"timestamp": "2024-01-01T00:00:00Z", "value": 42}')
# Returns: {'timestamp': '2024-01-01T00:00:00Z', 'value': 42}
json_string = json.dumps({"key": "value"}, indent=2)
# Works exactly like json.dumps() with all parameters
```
### Enhanced Mode (Automatic Improvements)
```python
# Just use the main datason module for enhanced features
import datason
from datetime import datetime
# Smart datetime parsing automatically enabled
data = datason.loads('{"timestamp": "2024-01-01T00:00:00Z", "value": 42}')
# Returns: {'timestamp': datetime.datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc), 'value': 42}
# Enhanced serialization with dict output for chaining
result = datason.dumps({"timestamp": datetime.now(), "data": [1, 2, 3]})
# Returns: dict (not string) with smart type handling
```
### Migration Strategy
```python
# Phase 1: Drop-in replacement (zero risk)
import datason.json as json # Perfect compatibility
# Phase 2: Enhanced features when ready
import datason # Smart datetime parsing, ML support, etc.
# Phase 3: Advanced features as needed
datason.dump_ml(ml_model) # ML-optimized serialization
datason.dump_secure(data) # Automatic PII redaction
datason.load_perfect(data, template) # 100% accurate reconstruction
```
> **Zero Risk Migration**: Start with `datason.json` for perfect compatibility, then gradually adopt enhanced features when you need them.
## 🏃‍♂️ Quick Start
### Installation
```bash
pip install datason
```
### Production ML Serving - Simple & Direct
```python
import datason as ds
import uuid
from datetime import datetime
# ML prediction data with UUIDs and complex types
prediction_data = {
    "request_id": uuid.uuid4(),
    "timestamp": datetime.now(),
    "features": {"feature1": 1.0, "feature2": 2.0},
    "model_version": "1.0.0"
}
# Simple, direct API with automatic optimizations
api_response = ds.dump_api(prediction_data) # Perfect for web APIs
# ✅ UUIDs become strings automatically - no more Pydantic errors!
# ML-optimized serialization
import torch
model_data = {"model": torch.nn.Linear(10, 1), "weights": torch.randn(10, 1)}
ml_serialized = ds.dump_ml(model_data) # Automatic ML optimization
# Security-focused with automatic PII redaction
user_data = {"name": "Alice", "email": "alice@example.com", "ssn": "123-45-6789"}
secure_data = ds.dump_secure(user_data) # Automatic PII redaction
# Works across ALL ML frameworks with same simple pattern
import bentoml
from bentoml.io import JSON
# (assumes an existing BentoML service `svc` and a fitted `model`)
@svc.api(input=JSON(), output=JSON())
def predict(input_data: dict) -> dict:
    features = ds.load_smart(input_data)  # 80-90% success rate
    prediction = model.predict(features)
    return ds.dump_api({"prediction": prediction})  # Clean API response
```
### Simple & Direct API
```python
import datason as ds
from datetime import datetime
import pandas as pd
import numpy as np
# Complex nested data structure
data = {
    "timestamp": datetime.now(),
    "dataframe": pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}),
    "array": np.array([1, 2, 3, 4, 5]),
    "nested": {
        "values": [1, 2, {"inner": datetime.now()}]
    }
}
# Simple API with automatic optimization
api_data = ds.dump_api(data) # Web APIs (UUIDs as strings, clean JSON)
ml_data = ds.dump_ml(data) # ML optimized (framework detection)
secure_data = ds.dump_secure(data) # Security focused (PII redaction)
fast_data = ds.dump_fast(data) # Performance optimized
# Progressive loading - choose your success rate
basic_result = ds.load_basic(api_data) # 60-70% success, fastest
smart_result = ds.load_smart(api_data) # 80-90% success, balanced
perfect_result = ds.load_perfect(api_data, template=data) # 100% with template
# Traditional API still available
serialized = ds.serialize(data)
restored = ds.deserialize(serialized)
```
### Advanced Options - Composable & Flexible
```python
import datason as ds
# Use the main dump() function with options for complex scenarios
large_sensitive_ml_data = {
    "model": trained_model,
    "user_data": {"email": "user@example.com", "preferences": {...}},
    "large_dataset": huge_numpy_array
}
# Combine multiple optimizations
result = ds.dump(
    large_sensitive_ml_data,
    secure=True,   # Enable PII redaction
    ml_mode=True,  # Optimize for ML objects
    chunked=True   # Memory-efficient processing
)
# Or use specialized functions for simple cases
api_data = ds.dump_api(response_data) # Web API optimized
ml_data = ds.dump_ml(model_data) # ML optimized
secure_data = ds.dump_secure(sensitive_data) # Security focused
fast_data = ds.dump_fast(performance_data) # Speed optimized
# Progressive loading with clear success rates
basic_result = ds.load_basic(json_data) # 60-70% success, fastest
smart_result = ds.load_smart(json_data) # 80-90% success, balanced
perfect_result = ds.load_perfect(json_data, template) # 100% with template
# API discovery and help
help_info = ds.help_api() # Get guidance on function selection
```
## 🏗️ Production Architecture
datason provides a **complete ML serving architecture** with visual documentation:
- **🎯 Universal Integration Pattern**: Single configuration works across all frameworks
- **📊 Comprehensive Monitoring**: Prometheus metrics, health checks, and observability
- **🔒 Enterprise Security**: Input validation, rate limiting, and PII redaction
- **⚡ Performance Optimized**: Sub-millisecond serialization with caching support
- **🔄 A/B Testing**: Framework for testing multiple model versions
- **📈 Production Examples**: Ready-to-deploy BentoML, Ray Serve, and FastAPI services
### Quick Architecture Overview
```mermaid
graph LR
    A[Client Apps] --> B[API Gateway]
    B --> C[ML Services<br/>BentoML/Ray/FastAPI]
    C --> D[Models<br/>CatBoost/Keras/etc]
    C --> E[Cache<br/>Redis]
    C --> F[DB<br/>PostgreSQL]
    style C fill:#e1f5fe,stroke:#01579b,stroke-width:3px
    style D fill:#f3e5f5,stroke:#4a148c,stroke-width:2px
```
> **See Full Documentation**: Complete architecture diagrams and production patterns in `docs/features/model-serving/`
## 📚 Documentation
### **Core Documentation**
For full documentation, examples, and API reference, visit: https://datason.readthedocs.io
### **ML Serving Guides**
- 🏗️ **[Architecture Overview](docs/features/model-serving/architecture-overview.md)** - Complete system architecture with Mermaid diagrams
- 🚀 **[Model Serving Integration](docs/features/model-serving/index.md)** - Production-ready examples for all major frameworks
- 🎯 **[Production Patterns](docs/features/model-serving/production-patterns.md)** - Advanced deployment strategies and best practices
### **Production Examples**
- 🍱 **[Advanced BentoML Integration](examples/advanced_bentoml_integration.py)** - Enterprise service with A/B testing and monitoring
- 📊 **[Production ML Serving Guide](examples/production_ml_serving_guide.py)** - Complete implementation with security and observability
> **Quick Start**: Run `python examples/production_ml_serving_guide.py` to see all features in action!
## 🤝 Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.