| Field | Value |
| --- | --- |
| Name | upstream-sdk |
| Version | 1.0.2 |
| Summary | Python SDK for Upstream environmental sensor data platform and CKAN integration |
| Upload time | 2025-08-02 20:05:19 |
| Home page | None |
| Author | None |
| Maintainer | None |
| Requires Python | >=3.9 |
| License | MIT |
| Keywords | environmental, sensors, data, api, ckan, upstream |
| Requirements | None recorded |
# Upstream Python SDK
A Python SDK for seamless integration with the Upstream environmental sensor data platform and CKAN data portal.
> **Note**: This SDK is built on top of the [`upstream-python-api-client`](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client) - an OpenAPI-generated Python client. You can extend the SDK by using the underlying API client directly for advanced use cases or accessing endpoints not yet covered by the high-level SDK interface.
## Overview
The Upstream Python SDK provides a standardized, production-ready toolkit for environmental researchers and organizations to:
- **Authenticate** with Upstream API and CKAN data portals
- **Manage** environmental monitoring campaigns and stations
- **Upload** sensor data efficiently (with automatic chunking for large datasets)
- **Publish** datasets automatically to CKAN for discoverability
- **Automate** data pipelines for continuous sensor networks
## Key Features
### 🔐 **Unified Authentication**
- Seamless integration with Upstream API and Tapis/CKAN
- Automatic token management and refresh
- Secure credential handling
### 📊 **Complete Data Workflow**
```python
from upstream.client import UpstreamClient
from upstream_api_client.models import CampaignsIn, StationCreate
from datetime import datetime, timedelta

# Initialize client with CKAN integration
client = UpstreamClient(
    username="researcher",
    password="password",
    base_url="https://upstream-dso.tacc.utexas.edu",
    ckan_url="https://ckan.tacc.utexas.edu",
    ckan_organization="your-org"
)

# Create campaign
campaign_data = CampaignsIn(
    name="Environmental Monitoring 2024",
    description="Environmental monitoring campaign with multi-sensor stations",
    contact_name="Dr. Jane Smith",
    contact_email="jane.smith@university.edu",
    allocation="TACC",
    start_date=datetime.now(),
    end_date=datetime.now() + timedelta(days=365)
)
campaign = client.create_campaign(campaign_data)

# Create monitoring station
station_data = StationCreate(
    name="Downtown Air Quality Monitor",
    description="Multi-sensor environmental monitoring station",
    contact_name="Dr. Jane Smith",
    contact_email="jane.smith@university.edu",
    start_date=datetime.now()
)
station = client.create_station(campaign.id, station_data)

# Upload sensor data
result = client.upload_csv_data(
    campaign_id=campaign.id,
    station_id=station.id,
    sensors_file="sensors.csv",
    measurements_file="measurements.csv"
)

print(f"Uploaded {result['response']['Total sensors processed']} sensors")
print(f"Added {result['response']['Total measurements added to database']} measurements")

# Publish to CKAN with rich metadata
publication = client.publish_to_ckan(
    campaign_id=campaign.id,
    station_id=station.id
)
print(f"Data published at: {publication['ckan_url']}")
```
### 🚀 **Production-Ready Features**
- **Type-safe interfaces** with Pydantic models and comprehensive validation
- **Rich statistics** - automatic calculation of sensor measurement statistics
- **Comprehensive error handling** with specific exception types (`APIError`, `ValidationError`)
- **CKAN integration** with custom metadata support and automatic resource management
- **Modular architecture** with dedicated managers for campaigns, stations, and sensors
- **Extensive logging** and debugging capabilities
- **Authentication management** with automatic token handling
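The automatic chunking mentioned above (exposed as `sensors.upload_csv_files(..., chunk_size=1000)` in the API reference) can be pictured as splitting a measurements CSV into header-prefixed slices before upload. The sketch below uses a hypothetical `iter_csv_chunks` helper; the SDK's internal implementation may differ in detail:

```python
import csv
import io

def iter_csv_chunks(path, chunk_size=1000):
    """Yield CSV strings of (header + up to chunk_size data rows).

    Illustrative only: not the SDK's actual chunking code.
    """
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(header)
        count = 0
        for row in reader:
            writer.writerow(row)
            count += 1
            if count == chunk_size:
                yield buf.getvalue()
                buf = io.StringIO()
                writer = csv.writer(buf)
                writer.writerow(header)
                count = 0
        if count:
            yield buf.getvalue()
```

Each chunk is a complete, self-describing CSV document, so a failed chunk can be retried independently without re-uploading the whole file.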
### 🔄 **CKAN Integration & Publishing**
Seamless data publishing to CKAN portals:
```python
# Publish with custom metadata
publication_result = client.publish_to_ckan(
    campaign_id=campaign_id,
    station_id=station_id,

    # Custom dataset metadata
    dataset_metadata={
        "project_name": "Air Quality Study",
        "funding_agency": "EPA",
        "grant_number": "EPA-2024-001"
    },

    # Custom resource metadata
    resource_metadata={
        "calibration_date": "2024-01-15",
        "quality_control": "Automated + Manual Review",
        "uncertainty_bounds": "±2% of reading"
    },

    # Custom tags for discoverability
    custom_tags=["air-quality", "epa-funded", "quality-controlled"]
)

print(f"Dataset published: {publication_result['ckan_url']}")
```
## Installation
```bash
pip install upstream-sdk
```
For development:
```bash
pip install upstream-sdk[dev]
```
## Demo Notebooks
The SDK includes comprehensive demo notebooks that showcase all features:
### 📓 **UpstreamSDK_Core_Demo.ipynb**
Interactive demonstration of core functionality:
- Authentication and client setup
- Campaign creation and management
- Station setup with sensor configuration
- CSV data upload with comprehensive validation
- Sensor statistics and analytics
- Error handling and best practices
### 📓 **UpstreamSDK_CKAN_Demo.ipynb**
Complete CKAN integration workflow:
- CKAN portal setup and authentication
- Data export and preparation for publishing
- Dataset creation with rich metadata
- Custom metadata support (dataset, resource, and tags)
- Resource management and updates
- Dataset discovery and search capabilities
Both notebooks include detailed explanations, practical examples, and production-ready code patterns.
## Quick Start
### 1. Basic Setup
```python
from upstream.client import UpstreamClient

# Initialize with credentials and CKAN integration
client = UpstreamClient(
    username="your_username",
    password="your_password",
    base_url="https://upstream-dso.tacc.utexas.edu",
    ckan_url="https://ckan.tacc.utexas.edu",
    ckan_organization="your-org"
)

# Test authentication
if client.authenticate():
    print("✅ Connected successfully!")
```
### 2. Create Campaign
```python
from upstream.campaigns import CampaignManager
from upstream_api_client.models import CampaignsIn
from datetime import datetime, timedelta

# Initialize campaign manager
campaign_manager = CampaignManager(client.auth_manager)

campaign_data = CampaignsIn(
    name="Environmental Monitoring 2024",
    description="Multi-sensor environmental monitoring network",
    contact_name="Dr. Jane Smith",
    contact_email="jane.smith@university.edu",
    allocation="TACC",
    start_date=datetime.now(),
    end_date=datetime.now() + timedelta(days=365)
)
campaign = campaign_manager.create(campaign_data)
print(f"Campaign created with ID: {campaign.id}")
```
### 3. Register Monitoring Station
```python
from upstream.stations import StationManager
from upstream_api_client.models import StationCreate
from datetime import datetime

# Initialize station manager
station_manager = StationManager(client.auth_manager)

station_data = StationCreate(
    name="Downtown Air Quality Monitor",
    description="Multi-sensor air quality monitoring station",
    contact_name="Dr. Jane Smith",
    contact_email="jane.smith@university.edu",
    start_date=datetime.now()
)
station = station_manager.create(
    campaign_id=campaign.id,
    station_create=station_data
)
print(f"Station created with ID: {station.id}")
```
### 4. Upload Sensor Data
```python
# Upload from CSV files
result = client.upload_csv_data(
    campaign_id=campaign.id,
    station_id=station.id,
    sensors_file="path/to/sensors.csv",
    measurements_file="path/to/measurements.csv"
)

# Access detailed results
response = result['response']
print(f"Sensors processed: {response['Total sensors processed']}")
print(f"Measurements added: {response['Total measurements added to database']}")
print(f"Processing time: {response['Data Processing time']}")
```
## Data Format Requirements
### Sensors CSV Format
```csv
alias,variablename,units,postprocess,postprocessscript
temp_01,Air Temperature,°C,false,
humidity_01,Relative Humidity,%,false,
PM25_01,PM2.5 Concentration,μg/m³,true,pm25_calibration
wind_speed,Wind Speed,m/s,false,
co2_01,CO2 Concentration,ppm,false,
```
### Measurements CSV Format
```csv
collectiontime,Lat_deg,Lon_deg,temp_01,humidity_01,PM25_01,wind_speed,co2_01
2024-01-15T10:00:00,30.2672,-97.7431,22.5,68.2,15.2,3.2,420
2024-01-15T10:05:00,30.2672,-97.7431,22.7,67.8,14.8,3.5,425
2024-01-15T10:10:00,30.2672,-97.7431,22.9,67.5,16.1,3.1,418
```
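Files matching the formats above can be generated with the standard library's `csv` module. A minimal sketch, with column names taken from the examples above and purely illustrative values:

```python
import csv
from datetime import datetime, timedelta

# Sensors file: one row per sensor alias (columns as in the format above)
with open("sensors.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["alias", "variablename", "units", "postprocess", "postprocessscript"],
    )
    writer.writeheader()
    writer.writerow({
        "alias": "temp_01",
        "variablename": "Air Temperature",
        "units": "°C",
        "postprocess": "false",
        "postprocessscript": "",
    })

# Measurements file: timestamp, coordinates, then one column per sensor alias
start = datetime(2024, 1, 15, 10, 0, 0)
with open("measurements.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["collectiontime", "Lat_deg", "Lon_deg", "temp_01"])
    for i, value in enumerate([22.5, 22.7, 22.9]):
        t = start + timedelta(minutes=5 * i)
        writer.writerow([t.isoformat(), 30.2672, -97.7431, value])
```

Note that the measurement column headers (`temp_01` here) must match the `alias` values in the sensors file, and `collectiontime` uses ISO 8601 timestamps as shown above.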
## Advanced Usage
### Sensor Analytics and Statistics
```python
# Get sensor statistics after upload
sensors = client.sensors.list(campaign_id=campaign_id, station_id=station_id)

for sensor in sensors.items:
    stats = sensor.statistics
    print(f"Sensor: {sensor.alias} ({sensor.variablename})")
    print(f"  Measurements: {stats.count}")
    print(f"  Range: {stats.min_value:.2f} - {stats.max_value:.2f} {sensor.units}")
    print(f"  Average: {stats.avg_value:.2f} {sensor.units}")
    print(f"  Std Dev: {stats.stddev_value:.3f}")
    print(f"  Last value: {stats.last_measurement_value:.2f}")
    print(f"  Updated: {stats.stats_last_updated}")
```
#### Force Update Sensor Statistics
The SDK provides methods to manually trigger statistics recalculation for sensors when needed (e.g., after data corrections or updates):
```python
from upstream.sensors import SensorManager

# Initialize sensor manager
sensor_manager = SensorManager(client.auth_manager)

# Force update statistics for all sensors in a station
update_result = sensor_manager.force_update_statistics(
    campaign_id=campaign_id,
    station_id=station_id
)
print(f"Statistics update completed for all sensors in station {station_id}")

# Force update statistics for a specific sensor
single_update_result = sensor_manager.force_update_single_sensor_statistics(
    campaign_id=campaign_id,
    station_id=station_id,
    sensor_id=sensor_id
)
print(f"Statistics update completed for sensor {sensor_id}")

# Verify updated statistics
updated_sensors = client.sensors.list(campaign_id=campaign_id, station_id=station_id)
for sensor in updated_sensors.items:
    stats = sensor.statistics
    print(f"Updated stats for {sensor.alias}: {stats.stats_last_updated}")
```
**When to use statistics updates:**
- After correcting measurement data
- When statistics appear outdated or inconsistent
- During data quality assurance processes
- After bulk data imports or migrations
### Measurement Data Management
```python
from upstream.measurements import MeasurementManager
from upstream_api_client.models import MeasurementIn
from datetime import datetime

# Initialize measurement manager
measurement_manager = MeasurementManager(client.auth_manager)

# List measurements for a specific sensor
measurements = measurement_manager.list(
    campaign_id=campaign_id,
    station_id=station_id,
    sensor_id=sensor_id,
    start_date=datetime(2024, 1, 1),
    end_date=datetime(2024, 12, 31),
    limit=100
)

print(f"Found {len(measurements.items)} measurements")

# Get measurements with confidence intervals for visualization
aggregated_data = measurement_manager.get_with_confidence_intervals(
    campaign_id=campaign_id,
    station_id=station_id,
    sensor_id=sensor_id,
    interval="hour",
    interval_value=1,
    start_date=datetime(2024, 1, 1),
    end_date=datetime(2024, 1, 2)
)

for measurement in aggregated_data:
    print(f"Time: {measurement.time_bucket}")
    print(f"  Average: {measurement.avg_value}")
    print(f"  Min/Max: {measurement.min_value} - {measurement.max_value}")
    print(f"  Confidence Interval: {measurement.confidence_interval_lower} - {measurement.confidence_interval_upper}")
```
### Error Handling and Validation
```python
from upstream.exceptions import APIError, ValidationError
from upstream.campaigns import CampaignManager
from upstream.stations import StationManager

try:
    # Initialize managers
    campaign_manager = CampaignManager(client.auth_manager)
    station_manager = StationManager(client.auth_manager)

    # Create campaign with validation
    campaign = campaign_manager.create(campaign_data)
    station = station_manager.create(
        campaign_id=str(campaign.id),
        station_create=station_data
    )

except ValidationError as e:
    print(f"Data validation failed: {e}")
except APIError as e:
    print(f"API error: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
```
### Comprehensive Data Upload
```python
# Upload with detailed response handling
result = client.upload_csv_data(
    campaign_id=campaign.id,
    station_id=station.id,
    sensors_file="path/to/sensors.csv",
    measurements_file="path/to/measurements.csv"
)

# Access detailed upload information
response = result['response']
print(f"Sensors processed: {response['Total sensors processed']}")
print(f"Measurements added: {response['Total measurements added to database']}")
print(f"Processing time: {response['Data Processing time']}")
print(f"Files stored: {response['uploaded_file_sensors stored in memory']}")
```
### Automated Data Pipeline
```python
# Complete automated workflow
def automated_monitoring_pipeline():
    try:
        # List existing campaigns and stations
        campaigns = client.list_campaigns(limit=5)
        if campaigns.items:
            campaign = campaigns.items[0]
            stations = client.list_stations(campaign_id=str(campaign.id))
            if stations.items:
                station = stations.items[0]

                # Upload new sensor data
                result = client.upload_csv_data(
                    campaign_id=campaign.id,
                    station_id=station.id,
                    sensors_file="latest_sensors.csv",
                    measurements_file="latest_measurements.csv"
                )

                # Publish to CKAN automatically
                publication = client.publish_to_ckan(
                    campaign_id=campaign.id,
                    station_id=station.id,
                    custom_tags=["automated", "real-time"]
                )
                print(f"Pipeline completed: {publication['ckan_url']}")
    except Exception as e:
        print(f"Pipeline error: {e}")
        # Implement alerting/retry logic
```
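The pipeline stub above leaves alerting and retry logic as an exercise. A common pattern is a retry wrapper with exponential backoff; `with_retries` below is a hypothetical helper, not part of the SDK, and production code would typically catch specific exceptions such as `APIError` rather than bare `Exception`:

```python
import time

def with_retries(func, attempts=3, base_delay=1.0, backoff=2.0):
    """Call func(), retrying on exception with exponential backoff.

    Generic sketch: swap Exception for the SDK's specific exception
    types and add alerting as needed.
    """
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception as exc:
            if attempt == attempts:
                raise
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
            delay *= backoff
```

For example, `with_retries(automated_monitoring_pipeline, attempts=3)` would rerun the whole pipeline on transient failures before giving up.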
### Extending the SDK with the Underlying API Client
The Upstream SDK provides high-level convenience methods, but you can access the full OpenAPI-generated client for advanced use cases.
**📖 Complete API Documentation:** [Documentation for API Endpoints](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/tree/main?tab=readme-ov-file#documentation-for-api-endpoints)
The SDK uses Pydantic models from the [`upstream-python-api-client`](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client) for type-safe data handling and validation.
**📖 Complete Model Documentation:** [Documentation for Models](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/tree/main?tab=readme-ov-file#documentation-for-models)
```python
from upstream.client import UpstreamClient
from upstream_api_client.api.campaigns_api import CampaignsApi
from upstream_api_client.api.measurements_api import MeasurementsApi

# Initialize the SDK client
client = UpstreamClient(
    username="user",
    password="pass",
    base_url="https://upstream-dso.tacc.utexas.edu"
)
client.authenticate()

# Access the underlying API client for advanced operations
api_client = client.auth_manager.api_client

# Use the generated API classes directly
campaigns_api = CampaignsApi(api_client)
measurements_api = MeasurementsApi(api_client)

# Example: Use advanced filtering not yet available in the SDK
response = measurements_api.get_measurements_api_v1_campaigns_campaign_id_stations_station_id_sensors_sensor_id_measurements_get(
    campaign_id=campaign_id,
    station_id=station_id,
    sensor_id=sensor_id,
    min_measurement_value=20.0,
    max_measurement_value=30.0,
    start_date="2024-01-01T00:00:00",
    end_date="2024-12-31T23:59:59",
    limit=1000,
    page=1
)

print(f"Advanced filtered measurements: {len(response.items)}")
```
This approach allows you to:
- Access all available API endpoints
- Use advanced filtering and pagination options
- Handle complex data transformations
- Implement custom error handling
- Access response metadata and headers
## Use Cases
### 🌪️ **Disaster Response Networks**
- Hurricane monitoring stations with automated data upload
- Emergency response sensor deployment
- Real-time environmental hazard tracking
### 🌬️ **Environmental Research**
- Long-term air quality monitoring
- Climate change research networks
- Urban environmental health studies
### 🌊 **Water Monitoring**
- Stream gauge networks
- Water quality assessment programs
- Flood monitoring and prediction
### 🏭 **Industrial Monitoring**
- Emissions monitoring compliance
- Environmental impact assessment
- Regulatory reporting automation
## API Reference
### UpstreamClient Methods
#### Campaign Management
- **`create_campaign(campaign_in: CampaignsIn)`** - Create a new monitoring campaign
- **`get_campaign(campaign_id: str)`** - Get campaign by ID
- **`list_campaigns(**kwargs)`** - List all campaigns
#### Station Management
- **`create_station(campaign_id: str, station_create: StationCreate)`** - Create a new monitoring station
- **`get_station(station_id: str, campaign_id: str)`** - Get station by ID
- **`list_stations(campaign_id: str, **kwargs)`** - List stations for a campaign
#### Data Upload
- **`upload_csv_data(campaign_id: str, station_id: str, sensors_file: str, measurements_file: str)`** - Upload CSV files with comprehensive response
- **`publish_to_ckan(campaign_id: str, station_id: str, dataset_metadata: dict = None, resource_metadata: dict = None, custom_tags: list = None, **kwargs)`** - Publish to CKAN with custom metadata
#### Sensor Management
- **`sensors.get(sensor_id: int, station_id: int, campaign_id: int)`** - Get sensor by ID with statistics
- **`sensors.list(campaign_id: int, station_id: int, **kwargs)`** - List sensors for a station with filtering options
- **`sensors.update(sensor_id: int, station_id: int, campaign_id: int, sensor_update: SensorUpdate)`** - Update sensor configuration
- **`sensors.delete(sensor_id: int, station_id: int, campaign_id: int)`** - Delete a sensor
- **`sensors.upload_csv_files(campaign_id: int, station_id: int, sensors_file: str, measurements_file: str, chunk_size: int = 1000)`** - Upload CSV files with chunking support
- **`sensors.force_update_statistics(campaign_id: int, station_id: int)`** - Force recalculation of statistics for all sensors in a station
- **`sensors.force_update_single_sensor_statistics(campaign_id: int, station_id: int, sensor_id: int)`** - Force recalculation of statistics for a specific sensor
#### Measurement Management
- **`measurements.create(campaign_id: int, station_id: int, sensor_id: int, measurement_in: MeasurementIn)`** - Create a new measurement
- **`measurements.list(campaign_id: int, station_id: int, sensor_id: int, **filters)`** - List measurements with filtering options
- **`measurements.get_with_confidence_intervals(campaign_id: int, station_id: int, sensor_id: int, **params)`** - Get aggregated measurements with confidence intervals for visualization
- **`measurements.update(campaign_id: int, station_id: int, sensor_id: int, measurement_id: int, measurement_update: MeasurementUpdate)`** - Update a specific measurement
- **`measurements.delete(campaign_id: int, station_id: int, sensor_id: int)`** - Delete all measurements for a sensor
#### Utilities
- **`authenticate()`** - Test authentication and return status
- **`logout()`** - Logout and invalidate tokens
- **`list_campaigns(limit: int = 10, **kwargs)`** - List campaigns with pagination
- **`list_stations(campaign_id: str, **kwargs)`** - List stations for a campaign
- **`get_campaign(campaign_id: str)`** - Get detailed campaign information
- **`get_station(station_id: str, campaign_id: str)`** - Get detailed station information
### Core Classes
- **`UpstreamClient`** - Main SDK interface with CKAN integration
- **`AuthManager`** - Authentication and token management
- **`CampaignManager`** - Campaign lifecycle management
- **`StationManager`** - Station creation and management
- **`SensorManager`** - Sensor configuration and statistics management
- **`MeasurementManager`** - Individual measurement data operations
- **`CKANIntegration`** - CKAN portal integration and publishing
### Data Models
The SDK uses Pydantic models from the [`upstream-python-api-client`](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client) for type-safe data handling and validation.
**📖 Complete Model Documentation:** [Documentation for Models](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/tree/main?tab=readme-ov-file#documentation-for-models)
**Key Models:**
- **`CampaignsIn`** - [Campaign creation model](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/blob/main/docs/CampaignsIn.md)
- **`StationCreate`** - [Station configuration model](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/blob/main/docs/StationCreate.md)
- **`MeasurementIn`** - [Individual measurement model](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/blob/main/docs/MeasurementIn.md)
- **`AggregatedMeasurement`** - [Statistical measurement aggregation model](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/blob/main/docs/AggregatedMeasurement.md)
**Usage Example:**
```python
from upstream_api_client.models import CampaignsIn, StationCreate, MeasurementIn
from datetime import datetime, timedelta

# See official documentation for complete field specifications
campaign = CampaignsIn(
    name="Environmental Monitoring 2024",
    allocation="TACC-allocation-id",
    # ... see CampaignsIn.md for all fields
)
```
### Exceptions
- **`APIError`** - API-specific errors with detailed messages
- **`ValidationError`** - Data validation and format errors
## Configuration
### Configuration File
```yaml
# config.yaml
upstream:
  username: your_username
  password: your_password
  base_url: https://upstream-dso.tacc.utexas.edu
ckan:
  url: https://ckan.tacc.utexas.edu
  organization: your-organization
  api_key: your_ckan_api_key  # Optional for read-only
  timeout: 30
logging:
  level: INFO
  format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
```
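The SDK's own config-loading behavior is not documented here; one straightforward approach is to parse the file yourself (e.g. with `yaml.safe_load` from PyYAML) and map it onto `UpstreamClient` arguments. `client_kwargs_from_config` below is a hypothetical helper that simply mirrors the layout shown above:

```python
def client_kwargs_from_config(config):
    """Map a parsed config.yaml dict onto UpstreamClient keyword arguments.

    Hypothetical helper, not part of the SDK; keys mirror the
    config.yaml layout shown above.
    """
    upstream = config["upstream"]
    ckan = config.get("ckan", {})
    return {
        "username": upstream["username"],
        "password": upstream["password"],
        "base_url": upstream["base_url"],
        "ckan_url": ckan.get("url"),
        "ckan_organization": ckan.get("organization"),
    }
```

Usage: `client = UpstreamClient(**client_kwargs_from_config(yaml.safe_load(open("config.yaml"))))`.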
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
### Development Setup
```bash
git clone https://github.com/In-For-Disaster-Analytics/upstream-python-sdk.git
cd upstream-python-sdk
pip install -e .[dev]
pre-commit install
```
### Running Tests
```bash
pytest # Run all tests
pytest tests/test_auth.py # Run specific test file
pytest --cov=upstream # Run with coverage
```
## License
This project is licensed under the MIT License - see [LICENSE](LICENSE) file for details.
## Support
- **Documentation**: [https://upstream-python-sdk.readthedocs.io](https://upstream-python-sdk.readthedocs.io)
- **Issues**: [GitHub Issues](https://github.com/In-For-Disaster-Analytics/upstream-python-sdk/issues)
## Citation
If you use this SDK in your research, please cite:
```bibtex
@software{upstream_python_sdk,
  title={Upstream Python SDK: Environmental Sensor Data Integration},
  author={In-For-Disaster-Analytics Team},
  year={2024},
  url={https://github.com/In-For-Disaster-Analytics/upstream-python-sdk},
  version={1.0.0}
}
```
## Related Projects
- **[Upstream Platform](https://github.com/In-For-Disaster-Analytics/upstream-docker)** - Main platform repository
- **[CKAN Integration](https://ckan.tacc.utexas.edu)** - Data portal for published datasets
---
**Built for the environmental research community** 🌍
**Enabling automated, reproducible, and discoverable environmental data workflows**
if campaigns.items:\n campaign = campaigns.items[0]\n stations = client.list_stations(campaign_id=str(campaign.id))\n\n if stations.items:\n station = stations.items[0]\n\n # Upload new sensor data\n result = client.upload_csv_data(\n campaign_id=campaign.id,\n station_id=station.id,\n sensors_file=\"latest_sensors.csv\",\n measurements_file=\"latest_measurements.csv\"\n )\n\n # Publish to CKAN automatically\n publication = client.publish_to_ckan(\n campaign_id=campaign.id,\n station_id=station.id,\n custom_tags=[\"automated\", \"real-time\"]\n )\n\n print(f\"Pipeline completed: {publication['ckan_url']}\")\n\n except Exception as e:\n print(f\"Pipeline error: {e}\")\n # Implement alerting/retry logic\n```\n\n### Extending the SDK with the Underlying API Client\n\nThe Upstream SDK provides high-level convenience methods, but you can access the full OpenAPI-generated client for advanced use cases.\n\n**\ud83d\udcd6 Complete API Documentation:** [Documentation for API Endpoints](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/tree/main?tab=readme-ov-file#documentation-for-api-endpoints)\n\nThe SDK uses Pydantic models from the [`upstream-python-api-client`](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client) for type-safe data handling and validation.\n\n**\ud83d\udcd6 Complete Model Documentation:** [Documentation for Models](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/tree/main?tab=readme-ov-file#documentation-for-models)\n\n```python\nfrom upstream.client import UpstreamClient\nfrom upstream_api_client.api.campaigns_api import CampaignsApi\nfrom upstream_api_client.api.measurements_api import MeasurementsApi\n\n# Initialize the SDK client\nclient = UpstreamClient(username=\"user\", password=\"pass\", base_url=\"https://upstream-dso.tacc.utexas.edu\")\nclient.authenticate()\n\n# Access the underlying API client for advanced operations\napi_client = client.auth_manager.api_client\n\n# Use the 
generated API classes directly\ncampaigns_api = CampaignsApi(api_client)\nmeasurements_api = MeasurementsApi(api_client)\n\n# Example: Use advanced filtering not yet available in the SDK\nresponse = measurements_api.get_measurements_api_v1_campaigns_campaign_id_stations_station_id_sensors_sensor_id_measurements_get(\n campaign_id=campaign_id,\n station_id=station_id,\n sensor_id=sensor_id,\n min_measurement_value=20.0,\n max_measurement_value=30.0,\n start_date=\"2024-01-01T00:00:00\",\n end_date=\"2024-12-31T23:59:59\",\n limit=1000,\n page=1\n)\n\nprint(f\"Advanced filtered measurements: {len(response.items)}\")\n```\n\nThis approach allows you to:\n- Access all available API endpoints\n- Use advanced filtering and pagination options\n- Handle complex data transformations\n- Implement custom error handling\n- Access response metadata and headers\n\n## Use Cases\n\n### \ud83c\udf2a\ufe0f **Disaster Response Networks**\n\n- Hurricane monitoring stations with automated data upload\n- Emergency response sensor deployment\n- Real-time environmental hazard tracking\n\n### \ud83c\udf2c\ufe0f **Environmental Research**\n\n- Long-term air quality monitoring\n- Climate change research networks\n- Urban environmental health studies\n\n### \ud83c\udf0a **Water Monitoring**\n\n- Stream gauge networks\n- Water quality assessment programs\n- Flood monitoring and prediction\n\n### \ud83c\udfed **Industrial Monitoring**\n\n- Emissions monitoring compliance\n- Environmental impact assessment\n- Regulatory reporting automation\n\n## API Reference\n\n### UpstreamClient Methods\n\n#### Campaign Management\n\n- **`create_campaign(campaign_in: CampaignsIn)`** - Create a new monitoring campaign\n- **`get_campaign(campaign_id: str)`** - Get campaign by ID\n- **`list_campaigns(**kwargs)`\\*\\* - List all campaigns\n\n#### Station Management\n\n- **`create_station(campaign_id: str, station_create: StationCreate)`** - Create a new monitoring station\n- **`get_station(station_id: str, 
campaign_id: str)`** - Get station by ID\n- **`list_stations(campaign_id: str, **kwargs)`\\*\\* - List stations for a campaign\n\n#### Data Upload\n\n- **`upload_csv_data(campaign_id: str, station_id: str, sensors_file: str, measurements_file: str)`** - Upload CSV files with comprehensive response\n- **`publish_to_ckan(campaign_id: str, station_id: str, dataset_metadata: dict = None, resource_metadata: dict = None, custom_tags: list = None, **kwargs)`\\*\\* - Publish to CKAN with custom metadata\n\n#### Sensor Management\n\n- **`sensors.get(sensor_id: int, station_id: int, campaign_id: int)`** - Get sensor by ID with statistics\n- **`sensors.list(campaign_id: int, station_id: int, **kwargs)`** - List sensors for a station with filtering options\n- **`sensors.update(sensor_id: int, station_id: int, campaign_id: int, sensor_update: SensorUpdate)`** - Update sensor configuration\n- **`sensors.delete(sensor_id: int, station_id: int, campaign_id: int)`** - Delete a sensor\n- **`sensors.upload_csv_files(campaign_id: int, station_id: int, sensors_file: str, measurements_file: str, chunk_size: int = 1000)`** - Upload CSV files with chunking support\n- **`sensors.force_update_statistics(campaign_id: int, station_id: int)`** - Force recalculation of statistics for all sensors in a station\n- **`sensors.force_update_single_sensor_statistics(campaign_id: int, station_id: int, sensor_id: int)`** - Force recalculation of statistics for a specific sensor\n\n#### Measurement Management\n\n- **`measurements.create(campaign_id: int, station_id: int, sensor_id: int, measurement_in: MeasurementIn)`** - Create a new measurement\n- **`measurements.list(campaign_id: int, station_id: int, sensor_id: int, **filters)`** - List measurements with filtering options\n- **`measurements.get_with_confidence_intervals(campaign_id: int, station_id: int, sensor_id: int, **params)`** - Get aggregated measurements with confidence intervals for visualization\n- **`measurements.update(campaign_id: int, 
station_id: int, sensor_id: int, measurement_id: int, measurement_update: MeasurementUpdate)`** - Update a specific measurement\n- **`measurements.delete(campaign_id: int, station_id: int, sensor_id: int)`** - Delete all measurements for a sensor\n\n#### Utilities\n\n- **`authenticate()`** - Test authentication and return status\n- **`logout()`** - Logout and invalidate tokens\n- **`list_campaigns(limit: int = 10, **kwargs)`\\*\\* - List campaigns with pagination\n- **`list_stations(campaign_id: str, **kwargs)`\\*\\* - List stations for a campaign\n- **`get_campaign(campaign_id: str)`** - Get detailed campaign information\n- **`get_station(station_id: str, campaign_id: str)`** - Get detailed station information\n\n### Core Classes\n\n- **`UpstreamClient`** - Main SDK interface with CKAN integration\n- **`CampaignManager`** - Campaign lifecycle management\n- **`StationManager`** - Station creation and management\n- **`MeasurementManager`** - Individual measurement data operations\n- **`CKANIntegration`** - CKAN portal integration and publishing\n\n### Data Models\n\nThe SDK uses Pydantic models from the [`upstream-python-api-client`](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client) for type-safe data handling and validation.\n\n**\ud83d\udcd6 Complete Model Documentation:** [Documentation for Models](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/tree/main?tab=readme-ov-file#documentation-for-models)\n\n**Key Models:**\n- **`CampaignsIn`** - [Campaign creation model](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/blob/main/docs/CampaignsIn.md)\n- **`StationCreate`** - [Station configuration model](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/blob/main/docs/StationCreate.md)\n- **`MeasurementIn`** - [Individual measurement model](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/blob/main/docs/MeasurementIn.md)\n- **`AggregatedMeasurement`** - 
[Statistical measurement aggregation model](https://github.com/In-For-Disaster-Analytics/upstream-python-api-client/blob/main/docs/AggregatedMeasurement.md)\n\n**Usage Example:**\n```python\nfrom upstream_api_client.models import CampaignsIn, StationCreate, MeasurementIn\nfrom datetime import datetime, timedelta\n\n# See official documentation for complete field specifications\ncampaign = CampaignsIn(\n name=\"Environmental Monitoring 2024\",\n allocation=\"TACC-allocation-id\",\n # ... see CampaignsIn.md for all fields\n)\n```\n\n### Exceptions\n\n- **`APIError`** - API-specific errors with detailed messages\n- **`ValidationError`** - Data validation and format errors\n- **`AuthManager`** - Authentication and token management\n\n## Configuration\n\n### Configuration File\n\n```yaml\n# config.yaml\nupstream:\n username: your_username\n password: your_password\n base_url: https://upstream-dso.tacc.utexas.edu\n\nckan:\n url: https://ckan.tacc.utexas.edu\n organization: your-organization\n api_key: your_ckan_api_key # Optional for read-only\n timeout: 30\n\nlogging:\n level: INFO\n format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'\n```\n\n## Contributing\n\nWe welcome contributions! 
Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.\n\n### Development Setup\n\n```bash\ngit clone https://github.com/In-For-Disaster-Analytics/upstream-python-sdk.git\ncd upstream-python-sdk\npip install -e .[dev]\npre-commit install\n```\n\n### Running Tests\n\n```bash\npytest # Run all tests\npytest tests/test_auth.py # Run specific test file\npytest --cov=upstream # Run with coverage\n```\n\n## License\n\nThis project is licensed under the MIT License - see [LICENSE](LICENSE) file for details.\n\n## Support\n\n- **Documentation**: [https://upstream-python-sdk.readthedocs.io](https://upstream-python-sdk.readthedocs.io)\n- **Issues**: [GitHub Issues](https://github.com/In-For-Disaster-Analytics/upstream-python-sdk/issues)\n\n## Citation\n\nIf you use this SDK in your research, please cite:\n\n```bibtex\n@software{upstream_python_sdk,\n title={Upstream Python SDK: Environmental Sensor Data Integration},\n author={In-For-Disaster-Analytics Team},\n year={2024},\n url={https://github.com/In-For-Disaster-Analytics/upstream-python-sdk},\n version={1.0.0}\n}\n```\n\n## Related Projects\n\n- **[Upstream Platform](https://github.com/In-For-Disaster-Analytics/upstream-docker)** - Main platform repository\n- **[CKAN Integration](https://ckan.tacc.utexas.edu)** - Data portal for published datasets\n\n---\n\n**Built for the environmental research community** \ud83c\udf0d\n**Enabling automated, reproducible, and discoverable environmental data workflows**\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Python SDK for Upstream environmental sensor data platform and CKAN integration",
"version": "1.0.2",
"project_urls": {
"Changelog": "https://github.com/In-For-Disaster-Analytics/upstream-python-sdk/blob/main/CHANGELOG.md",
"Documentation": "https://upstream-python-sdk.readthedocs.io",
"Homepage": "https://github.com/In-For-Disaster-Analytics/upstream-python-sdk",
"Issues": "https://github.com/In-For-Disaster-Analytics/upstream-python-sdk/issues",
"Repository": "https://github.com/In-For-Disaster-Analytics/upstream-python-sdk"
},
"split_keywords": [
"environmental",
" sensors",
" data",
" api",
" ckan",
" upstream"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "e9123fd0a6fe1a06b6e5f9d2b762fbebd7108d3ec9b37edaf9cfdca2fc18b9bd",
"md5": "d9c00949a564cb981f4dd32d1a75aca0",
"sha256": "39fc5d486e60e65dc084ba88cc2679349fa524cd3fed5cea7e5b0f3521597e15"
},
"downloads": -1,
"filename": "upstream_sdk-1.0.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d9c00949a564cb981f4dd32d1a75aca0",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 39847,
"upload_time": "2025-08-02T20:05:18",
"upload_time_iso_8601": "2025-08-02T20:05:18.005143Z",
"url": "https://files.pythonhosted.org/packages/e9/12/3fd0a6fe1a06b6e5f9d2b762fbebd7108d3ec9b37edaf9cfdca2fc18b9bd/upstream_sdk-1.0.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "49b047211b4253dd4da915f0475eba6339995e6fbcffa4024a9e0a6608449f31",
"md5": "af69b16c3290862b41a3f600885c5414",
"sha256": "1e7e341b64406d70ecda4a4d0e36bb717b5e76d264189dbff81fbb19464a4610"
},
"downloads": -1,
"filename": "upstream_sdk-1.0.2.tar.gz",
"has_sig": false,
"md5_digest": "af69b16c3290862b41a3f600885c5414",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 42109,
"upload_time": "2025-08-02T20:05:19",
"upload_time_iso_8601": "2025-08-02T20:05:19.164726Z",
"url": "https://files.pythonhosted.org/packages/49/b0/47211b4253dd4da915f0475eba6339995e6fbcffa4024a9e0a6608449f31/upstream_sdk-1.0.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-02 20:05:19",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "In-For-Disaster-Analytics",
"github_project": "upstream-python-sdk",
"github_not_found": true,
"lcname": "upstream-sdk"
}