# TornadoPy
A Python library for tornado chart generation and analysis. TornadoPy provides tools for processing Excel-based tornado data and generating professional tornado charts for uncertainty analysis.
## Features
- **TornadoProcessor**: Process Excel files containing tornado analysis data
  - Parse multi-sheet Excel files with complex headers
  - Extract and compute statistics (p90p10, mean, median, minmax, percentiles)
  - Filter data by properties and dynamic fields
  - Named filter presets for reusable filter combinations
  - Base and reference case extraction with caching
  - Default multiplier support for consistent unit conversion
  - Case selection with weighted criteria
  - Batch processing for multiple parameters
  - Optimized for performance with native numpy operations
  - Comprehensive docstrings and organized code structure
- **tornado_plot**: Generate professional tornado charts
  - Customizable colors, fonts, and styling
  - Support for p90/p10 ranges with automatic label placement
  - Reference case lines
  - Custom parameter ordering
  - Export to various image formats
- **distribution_plot**: Generate distribution histograms with cumulative curves
  - Automatic bin sizing with round-number bin edges
  - Cumulative distribution curve showing the percentage of cases above each value
  - P90/P50/P10 percentile markers and subtitle
  - Optional reference case line
  - Multiple color schemes available
  - Export to various image formats
## Installation
Install from PyPI:
```bash
pip install tornadopy
```
## Quick Start
### Processing Tornado Data
```python
from tornadopy import TornadoProcessor
# Load Excel file with tornado data
# Optional: Set default multiplier and base case sheet
processor = TornadoProcessor(
"tornado_data.xlsb",
multiplier=1e-6, # Default multiplier for all operations
base_case="Reference_Case" # Sheet containing base/reference cases
)
# Get available parameters
parameters = processor.parameters()
print(f"Parameters: {parameters}")
# Get properties for a parameter
properties = processor.properties(parameter="Parameter1")
print(f"Properties: {properties}")
# Compute statistics
result = processor.compute(
stats="p90p10",
parameter="Parameter1",
filters={"property": "npv"},
multiplier=1e-6 # Convert to millions (or use default if set)
)
print(f"P90/P10: {result['p90p10']}")
```
### Generating Tornado Charts
```python
from tornadopy import TornadoProcessor, tornado_plot
# Get tornado data
processor = TornadoProcessor("tornado_data.xlsb")
tornado_data = processor.get_tornado_data(
parameters="all",
filters={"property": "npv"},
multiplier=1e-6
)
# Convert to sections format for plotting
sections = []
for param, data in tornado_data.items():
    sections.append({
        "parameter": param,
        "minmax": [data["p10"], data["p90"]],
        "p90p10": [data["p10"], data["p90"]]
    })
# Generate tornado chart
fig, ax, saved = tornado_plot(
sections=sections,
title="NPV Tornado Chart",
subtitle="Base case = 100.0 MM USD",
base=100.0,
unit="MM USD",
outfile="tornado_chart.png"
)
```
### Generating Distribution Charts
```python
from tornadopy import TornadoProcessor, distribution_plot
# Get distribution data
processor = TornadoProcessor("tornado_data.xlsb")
distribution = processor.distribution(
parameter="Parameter1",
filters={"property": "npv"},
multiplier=1e-6
)
# Generate distribution chart
fig, ax, saved = distribution_plot(
distribution,
title="NPV Distribution",
unit="MM USD",
color="blue",
reference_case=100.0,
outfile="npv_distribution.png"
)
```
### Advanced Usage
#### Multi-Zone Analysis with Batch Processing
Process multiple parameters at once with zone filtering and custom options:
```python
from tornadopy import TornadoProcessor, tornado_plot
processor = TornadoProcessor("reservoir_data.xlsb")
# Compute statistics for all parameters with zone filtering
results = processor.compute_batch(
stats=["minmax", "p90p10"],
parameters="all",
filters={
"zones": ["Zone A - Reservoir", "Zone B - Reservoir"],
"property": "STOIIP"
},
multiplier=1e-3, # Convert to thousands
options={
"p90p10_threshold": 150, # Minimum cases required
"skip": ["sources"] # Skip source tracking for cleaner output
}
)
# Convert results to tornado plot format
sections = []
for result in results:
    if "p90p10" in result and "errors" not in result:
        p10, p90 = result["p90p10"]
        sections.append({
            "parameter": result["parameter"],
            "minmax": result.get("minmax", [p10, p90]),
            "p90p10": [p10, p90]
        })
# Generate tornado chart
fig, ax, saved = tornado_plot(
sections,
title="STOIIP Tornado - Multi-Zone Analysis",
base=14.5, # Base case value
reference_case=14.2, # Reference case line
unit="MM m³",
outfile="stoiip_tornado.svg"
)
```
#### Distribution Plot with Custom Gridlines
Create distribution charts with percentile markers and custom grid settings:
```python
from tornadopy import TornadoProcessor, distribution_plot
processor = TornadoProcessor("reservoir_data.xlsb")
# Get distribution data for specific zones
distribution = processor.distribution(
parameter="Uncertainty_Analysis",
filters={
"zones": ["Zone A - Reservoir", "Zone B - Reservoir"],
"property": "STOIIP"
},
multiplier=1e-3 # Convert to thousands
)
# Generate distribution chart with custom settings
fig, ax, saved = distribution_plot(
data=distribution,
title="STOIIP Distribution - Uncertainty Analysis",
unit="MM m³",
color="blue",
reference_case=14.5,
target_bins=20,
settings={
"show_percentile_markers": True, # Show P90/P50/P10 markers
"marker_size": 8,
"show_minor_grid": True,
# Custom gridline intervals
"x_major_interval": 5, # Major x-gridlines every 5 units
"x_minor_interval": 1, # Minor x-gridlines every 1 unit
"y_major_interval": 50, # Major y-gridlines every 50 frequency
"y_minor_interval": 10, # Minor y-gridlines every 10 frequency
},
outfile="stoiip_distribution.svg"
)
```
#### Working with Multiple Properties
Analyze multiple properties simultaneously:
```python
# Compute statistics for multiple properties
result = processor.compute(
stats=["p90p10", "mean", "median"],
parameter="Reservoir_Model",
filters={
"zones": ["Main_Reservoir"],
"property": ["STOIIP", "GIIP"] # Multiple properties
},
multiplier=1e-6 # Convert to millions
)
# Access results by property
stoiip_p90, stoiip_p10 = result["p90p10"][0] # First property (STOIIP)
giip_p90, giip_p10 = result["p90p10"][1] # Second property (GIIP)
print(f"STOIIP P90/P10: {stoiip_p90:.2f} / {stoiip_p10:.2f} MM m³")
print(f"GIIP P90/P10: {giip_p90:.2f} / {giip_p10:.2f} bcm")
```
#### Case Selection with Weighted Criteria
Find specific cases that match target percentiles:
```python
# Find closest cases to p90/p10 with custom weights
result = processor.compute(
stats="p90p10",
parameter="Reservoir_Model",
filters={
"zones": ["Main_Reservoir"],
"property": "STOIIP"
},
multiplier=1e-6,
case_selection=True, # Enable case selection
selection_criteria={
"weights": {"STOIIP": 0.6, "GIIP": 0.4} # Weighted criteria
}
)
# Access closest cases
for case in result["closest_cases"]:
    print(f"Case {case['case']}: index={case['idx']}, STOIIP={case['STOIIP']:.2f}")
    print(f"  Properties: {case['properties']}")
```
#### Skipping Specific Parameters
Exclude certain parameters from batch processing:
```python
# Process all parameters except specific ones
results = processor.compute_batch(
stats="p90p10",
parameters="all",
filters={"property": "STOIIP"},
multiplier=1e-3,
options={
"skip_parameters": ["Reference_Case", "Full_Uncertainty"], # Skip these
"skip": ["sources", "errors"] # Skip these fields in output
}
)
```
#### Custom Tornado Chart Styling
Full control over chart appearance:
```python
# Custom styling for professional reports
settings = {
"figsize": (12, 8),
"dpi": 200,
"pos_dark": "#1E88E5", # Blue for positive
"neg_dark": "#D32F2F", # Red for negative
"show_values": ["min", "max", "p10", "p90"],
"show_percentage_diff": True,
}
fig, ax, saved = tornado_plot(
sections=sections,
title="Reservoir Volume Sensitivity Analysis",
subtitle="Base Case: 100 MM m³",
base=100.0,
reference_case=95.0,
unit="MM m³",
preferred_order=["Porosity", "NTG", "Area"], # Custom parameter order
settings=settings,
outfile="sensitivity_analysis.png"
)
```
## Common Workflows
### Complete Reservoir Uncertainty Analysis
End-to-end workflow for reservoir analysis with tornado and distribution charts:
```python
from tornadopy import TornadoProcessor, tornado_plot, distribution_plot
import matplotlib.pyplot as plt
# Load data
processor = TornadoProcessor("reservoir_uncertainty.xlsb")
# Define common filters
zones = ["Main Reservoir - SST1", "Main Reservoir - SST2"]
multiplier = 1e-3 # Convert to thousands
# 1. Generate STOIIP Tornado Chart
stoiip_results = processor.compute_batch(
stats=["minmax", "p90p10"],
parameters="all",
filters={
"zones": zones,
"property": "STOIIP"
},
multiplier=multiplier,
options={
"p90p10_threshold": 150,
"skip_parameters": ["Reference_Case", "Full_Uncertainty"]
}
)
# Convert to tornado format
sections = []
for result in stoiip_results:
    if "p90p10" in result and "errors" not in result:
        p10, p90 = result["p90p10"]
        min_val, max_val = result.get("minmax", [p10, p90])
        sections.append({
            "parameter": result["parameter"],
            "minmax": [min_val, max_val],
            "p90p10": [p10, p90]
        })
# Create tornado chart
fig1, ax1, saved1 = tornado_plot(
sections,
title="STOIIP Sensitivity Analysis",
base=14.5,
reference_case=14.2,
unit="MM m³",
outfile="stoiip_tornado.svg"
)
# 2. Generate Distribution Chart
distribution = processor.distribution(
parameter="Full_Uncertainty",
filters={
"zones": zones,
"property": "STOIIP"
},
multiplier=multiplier
)
fig2, ax2, saved2 = distribution_plot(
data=distribution,
title="STOIIP Distribution - Full Uncertainty",
unit="MM m³",
color="blue",
reference_case=14.5,
settings={
"show_percentile_markers": True,
"x_major_interval": 5,
"x_minor_interval": 1,
},
outfile="stoiip_distribution.svg"
)
# Show both charts
plt.show()
print(f"Charts saved: {saved1}, {saved2}")
```
### Comparing Multiple Scenarios
Compare different reservoir scenarios by generating one distribution chart per scenario:
```python
from tornadopy import TornadoProcessor, distribution_plot
import matplotlib.pyplot as plt
processor = TornadoProcessor("scenarios.xlsb")
# Define scenarios
scenarios = [
{"name": "Base Case", "param": "Base_Case", "color": "blue"},
{"name": "Optimistic", "param": "Optimistic", "color": "green"},
{"name": "Pessimistic", "param": "Pessimistic", "color": "red"},
]
# Generate and save one chart per scenario
# (distribution_plot draws each chart on its own figure;
#  the output filenames below are illustrative)
for scenario in scenarios:
    dist = processor.distribution(
        parameter=scenario["param"],
        filters={"property": "NPV"},
        multiplier=1e-6
    )

    distribution_plot(
        data=dist,
        title=f"{scenario['name']} Scenario",
        unit="MM USD",
        color=scenario["color"],
        target_bins=15,
        outfile=f"npv_{scenario['param'].lower()}.png"
    )

plt.show()
```
## Tips and Best Practices
### Filter Management (NEW)
Store and reuse filter presets for consistent analysis:
```python
from tornadopy import TornadoProcessor
processor = TornadoProcessor("reservoir_data.xlsb")
# Store commonly used filter combinations
processor.set_filter('main_zones', {
'zones': ['Main Reservoir - SST1', 'Main Reservoir - SST2'],
'property': 'STOIIP'
})
processor.set_filter('north_area', {
'zones': ['North Zone A', 'North Zone B'],
})
# List all stored filters
print(f"Available filters: {processor.list_filters()}")
# Use stored filters by name (can be string or dict)
result = processor.compute(
stats="p90p10",
parameter="Uncertainty_Analysis",
filters="main_zones", # Reference filter by name
multiplier=1e-3
)
# Retrieve filter for inspection
main_zones_filter = processor.get_filter('main_zones')
print(f"Filter contents: {main_zones_filter}")
# Can still use dict filters as before
result = processor.compute(
stats="mean",
parameter="Porosity",
filters={'zones': ['Zone A'], 'property': 'STOIIP'},
multiplier=1e-3
)
```
### Base and Reference Case Extraction (NEW)
Extract base and reference case values when initializing with a base case sheet:
```python
from tornadopy import TornadoProcessor
# Initialize with base case parameter
processor = TornadoProcessor(
"reservoir_data.xlsb",
multiplier=1e-3,
base_case="Reference_Case" # Sheet containing base (idx 0) and reference (idx 1)
)
# Access cached base case values (extracted at initialization)
base_values = processor.base_case_values
print(f"Base case STOIIP: {base_values.get('STOIIP', 'N/A')}")
# Access cached reference case values
ref_values = processor.reference_case_values
print(f"Reference case STOIIP: {ref_values.get('STOIIP', 'N/A')}")
# Extract with custom filters and multiplier at runtime
base_case_custom = processor.base_case(
parameter="Reference_Case",
filters={'zones': ['Main Reservoir']},
multiplier=1e-6 # Different multiplier than default
)
ref_case_custom = processor.ref_case(
parameter="Reference_Case",
filters={'zones': ['Main Reservoir']},
multiplier=1e-6
)
# Use in tornado plot
from tornadopy import tornado_plot
base_stoiip = base_values.get('STOIIP', 14.5)
ref_stoiip = ref_values.get('STOIIP', 14.2)
fig, ax, saved = tornado_plot(
sections=tornado_sections,
title="STOIIP Tornado Analysis",
base=base_stoiip,
reference_case=ref_stoiip,
unit="MM m³",
outfile="tornado.png"
)
```
### Working with Filters
**Zone Filtering:**
```python
# Single zone
filters = {"zones": "Main Reservoir", "property": "STOIIP"}
# Multiple zones (will sum values across zones)
filters = {"zones": ["Zone A", "Zone B"], "property": "STOIIP"}
```
**Property Filtering:**
```python
# Single property
filters = {"property": "STOIIP"}
# Multiple properties (returns separate results for each)
filters = {"property": ["STOIIP", "GIIP"]}
```
### Using Multipliers
Convert units easily with the multiplier parameter:
```python
# Express values in thousands of the input unit (e.g. mcm → MM m³)
multiplier = 1e-3

# Express values in millions of the input unit (e.g. m³ → MM m³)
multiplier = 1e-6

# Express values in billions of the input unit (e.g. m³ → bcm)
multiplier = 1e-9
```
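The multiplier set at initialization is used as the default for every call; passing `multiplier` to an individual method overrides it for that call only. A short sketch of this behaviour (the file, parameter, and property names are placeholders):
```python
from tornadopy import TornadoProcessor

# Default multiplier applied to all operations unless overridden
processor = TornadoProcessor("reservoir_data.xlsb", multiplier=1e-3)

# Uses the default multiplier (1e-3)
result_thousands = processor.compute(
    stats="mean",
    parameter="Parameter1",
    filters={"property": "STOIIP"}
)

# Overrides the default for this call only
result_millions = processor.compute(
    stats="mean",
    parameter="Parameter1",
    filters={"property": "STOIIP"},
    multiplier=1e-6
)
```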
### Skipping Parameters
Exclude specific parameters from batch processing:
```python
options = {
"skip_parameters": ["Reference_Case", "Full_Uncertainty"], # Skip these parameters
"skip": ["sources", "errors"] # Skip these fields in results
}
```
### Handling Errors
```python
results = processor.compute_batch(
    stats="p90p10",
    parameters="all",
    filters={"property": "STOIIP"}
    # Note: do not add "errors" to options["skip"] if you want to inspect them below
)

# Check for errors in results
for result in results:
    if "errors" in result:
        print(f"Parameter {result['parameter']} had errors: {result['errors']}")
    elif "p90p10" in result:
        print(f"Parameter {result['parameter']}: P90/P10 = {result['p90p10']}")
```
### Performance Tips
1. **Use batch processing** for multiple parameters:
```python
# Good: Single call for all parameters
results = processor.compute_batch(stats="p90p10", parameters="all", ...)
# Avoid: Multiple calls
for param in parameters:
    result = processor.compute(stats="p90p10", parameter=param, ...)
```
2. **Skip unnecessary data**:
```python
options = {
"skip": ["sources", "errors"], # Reduces memory usage
}
```
3. **Set appropriate thresholds**:
```python
options = {
"p90p10_threshold": 150, # Require minimum cases for reliable statistics
}
```
## Excel File Format
TornadoPy expects Excel files with the following structure:
```
[Info rows - optional metadata]
Header Row 1 | Dynamic Field 1 | Dynamic Field 1 | ...
Header Row 2 | Value A | Value B | ...
Case | Property 1 | Property 2 | ...
1 | 123.45 | 67.89 | ...
2 | 234.56 | 78.90 | ...
...
```
- Multiple header rows are supported and will be combined
- The "Case" row marks the start of data
- Dynamic fields in column A define metadata columns
- Property names are extracted from the last header row
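To check that a workbook follows this layout, the information-access methods can be used to inspect what was parsed. A minimal sketch, assuming a placeholder file with a sheet named `Parameter1` and a dynamic field named `zones`:
```python
from tornadopy import TornadoProcessor

processor = TornadoProcessor("tornado_data.xlsb")

# Each sheet is exposed as a parameter
print(processor.parameters())

# Property names extracted from the last header row (the "Case" row)
print(processor.properties(parameter="Parameter1"))

# Unique values of a dynamic field defined in the header block
print(processor.unique("zones", parameter="Parameter1"))
```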
## API Reference
### TornadoProcessor
#### Initialization
```python
TornadoProcessor(
filepath: str,
multiplier: float = 1.0,
base_case: str = None
)
```
**Parameters:**
- `filepath`: Path to Excel file (.xlsx, .xlsb, etc.)
- `multiplier`: Default multiplier to apply to all operations (default: 1.0)
- `base_case`: Name of sheet containing base/reference case data (optional)
#### Core Methods
**Information Access:**
- `parameters()`: Get list of available parameters (sheet names)
- `properties(parameter=None)`: Get available properties for a parameter
- `unique(field, parameter=None)`: Get unique values for a dynamic field
- `info(parameter=None)`: Get metadata for a parameter
- `case(index, parameter=None)`: Get data for a specific case
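For example, sheet-level metadata and a single case can be pulled directly (the filename, parameter name, and case index below are placeholders; the exact shape of the returned data depends on the workbook):
```python
from tornadopy import TornadoProcessor

processor = TornadoProcessor("tornado_data.xlsb")

meta = processor.info(parameter="Parameter1")       # metadata for one sheet
case_0 = processor.case(0, parameter="Parameter1")  # data for a single case
print(meta)
print(case_0)
```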
**Statistics:**
- `compute(stats, parameter=None, filters=None, multiplier=None, options=None, case_selection=False, selection_criteria=None)`: Compute statistics
- `compute_batch(stats, parameters, filters=None, multiplier=None, options=None, case_selection=False, selection_criteria=None)`: Batch compute for multiple parameters
- `distribution(parameter=None, filters=None, multiplier=None, options=None)`: Get distribution data
- `get_tornado_data(parameters, filters=None, multiplier=None, options=None)`: Get tornado chart formatted data
**Filter Management (NEW):**
- `set_filter(name, filters)`: Store a named filter preset
- `get_filter(name)`: Retrieve a stored filter preset
- `list_filters()`: List all stored filter names
**Base/Reference Case (NEW):**
- `base_case(parameter=None, filters=None, multiplier=None)`: Extract base case values (index 0)
- `ref_case(parameter=None, filters=None, multiplier=None)`: Extract reference case values (index 1)
- `base_case_values`: Property containing cached base case values (dict)
- `reference_case_values`: Property containing cached reference case values (dict)
#### Legacy Methods (Deprecated but still supported)
For backwards compatibility, the following methods still work but are deprecated:
- `get_parameters()` → use `parameters()`
- `get_properties()` → use `properties()`
- `get_unique()` → use `unique()`
- `get_distribution()` → use `distribution()`
- `get_info()` → use `info()`
- `get_case()` → use `case()`
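Both spellings currently return the same data; new code should use the shorter names. A trivial sketch (the filename is a placeholder):
```python
from tornadopy import TornadoProcessor

processor = TornadoProcessor("tornado_data.xlsb")

params = processor.get_parameters()  # still works, but deprecated
params = processor.parameters()      # preferred
```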
### tornado_plot
#### Parameters
- `sections`: List of section dictionaries with parameter data
- `title`: Chart title
- `subtitle`: Chart subtitle
- `outfile`: Output file path
- `base`: Base case value
- `reference_case`: Reference case line value
- `unit`: Unit label
- `preferred_order`: List of parameter names for custom ordering
- `settings`: Dictionary of visual settings
#### Returns
- `fig`: Matplotlib figure object
- `ax`: Matplotlib axes object
- `saved`: Path to saved file (if outfile specified)
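Because the figure and axes are returned, charts can be fine-tuned with ordinary matplotlib calls after plotting. A small sketch with hand-written example values (the section values, axis limits, and filename are purely illustrative):
```python
from tornadopy import tornado_plot

# Hand-written sections purely for illustration
sections = [
    {"parameter": "Porosity", "minmax": [80.0, 125.0], "p90p10": [90.0, 115.0]},
    {"parameter": "NTG", "minmax": [88.0, 112.0], "p90p10": [95.0, 108.0]},
]

fig, ax, saved = tornado_plot(
    sections=sections,
    title="NPV Tornado",
    base=100.0,
    unit="MM USD"
)

# The returned handles are standard matplotlib objects
ax.set_xlim(50, 150)
fig.savefig("npv_tornado_wide.png", dpi=300, bbox_inches="tight")
```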
### distribution_plot
#### Parameters
- `data`: Array-like data (numpy array, list, or the output of `distribution()`)
- `title`: Chart title (default "Distribution")
- `unit`: Unit label for x-axis and subtitle
- `outfile`: Output file path (if specified, saves the figure)
- `target_bins`: Target number of bins for histogram (default 20)
- `color`: Color scheme - "red", "blue", "green", "orange", "purple", "fuchsia", "yellow"
- `reference_case`: Optional reference case value to plot as vertical line
- `settings`: Dictionary of visual settings to override defaults
#### Settings Options
Common settings for customizing distribution plots:
```python
settings = {
# Layout
"figsize": (10, 6),
"dpi": 160,
# Percentile markers
"show_percentile_markers": True, # Show P90/P50/P10 on cumulative curve
"marker_size": 8,
# Grid customization
"show_minor_grid": True,
"x_major_interval": 5, # Major x-gridlines every 5 units
"x_minor_interval": 1, # Minor x-gridlines every 1 unit
"y_major_interval": 50, # Major y-gridlines every 50 frequency
"y_minor_interval": 10, # Minor y-gridlines every 10 frequency
# Font sizes
"title_fontsize": 15,
"subtitle_fontsize": 11,
"label_fontsize": 10,
}
```
#### Returns
- `fig`: Matplotlib figure object
- `ax`: Matplotlib axes object (primary)
- `saved`: Path to saved file (if outfile specified)
## Requirements
- Python >= 3.9
- numpy >= 1.20.0
- polars >= 0.18.0
- fastexcel >= 0.9.0
- matplotlib >= 3.5.0
## License
MIT License - see LICENSE file for details
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Support
For issues and questions, please open an issue on GitHub.