# Python Database Utilities
[![PyPI version](https://badge.fury.io/py/pyspdbutils.svg)](https://badge.fury.io/py/pyspdbutils)
[![Python versions](https://img.shields.io/pypi/pyversions/pyspdbutils.svg)](https://pypi.org/project/pyspdbutils/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
A production-grade database utilities package that provides a unified interface for working with multiple database types. Built with enterprise requirements in mind, it includes hash collision detection, automatic schema validation, and robust error handling.
## 📖 Documentation Guide
**👋 New to db_utils? Start here:**
1. **README.md** (this file) - Overview, features, and quick start
2. **[INSTALL.md](INSTALL.md)** - Detailed installation guide for all databases
3. **[check_imports.py](check_imports.py)** - Verify that the database interface imports resolve
4. **[examples.py](examples.py)** - Basic usage examples
5. **[data_approaches_demo.py](data_approaches_demo.py)** - End-to-end data handling patterns
**🔧 For developers:**
- **[SETUP.md](SETUP.md)** - Development environment setup and contribution guide
## Features
- **Universal Database Interface**: Work with 12+ database types using a single API
- **Hash Collision Detection**: Automatic duplicate prevention using configurable hash columns
- **Schema Validation**: Built-in validation for table schemas and data integrity
- **Query Builder**: Type-safe query building with parameter binding
- **Production Ready**: Comprehensive error handling and logging
- **Type Hints**: Full type annotation support for better IDE integration
- **Transaction Management**: Built-in transaction handling with rollback support
## Supported Databases
- SQLite
- PostgreSQL
- MySQL/MariaDB
- Microsoft SQL Server
- Azure SQL Database
- Oracle
- Snowflake
- Databricks
- Amazon Redshift
- IBM DB2
- Teradata
## Installation
### Basic Installation
```bash
pip install pyspdbutils
```
### Database-Specific Dependencies
Install with specific database support:
```bash
# PostgreSQL
pip install pyspdbutils[postgresql]

# MySQL/MariaDB
pip install pyspdbutils[mysql]

# SQL Server/Azure SQL
pip install pyspdbutils[sqlserver]

# Oracle
pip install pyspdbutils[oracle]

# Snowflake
pip install pyspdbutils[snowflake]

# Teradata
pip install pyspdbutils[teradata]

# Databricks
pip install pyspdbutils[databricks]

# All databases
pip install pyspdbutils[all]
```
## Quick Start
### SQLite Example
```python
from db_utils import DBManager, DBConfig
from db_utils.interfaces import SQLiteInterface
# Initialize database interface
interface = SQLiteInterface("example.db")
# Create manager with hash collision detection
manager = DBManager(interface, hash_columns=["id", "email"])
# Create table
schema = {
    "id": "INTEGER PRIMARY KEY",
    "name": "VARCHAR(100)",
    "email": "VARCHAR(255) UNIQUE",
    "created_at": "TIMESTAMP DEFAULT CURRENT_TIMESTAMP"
}
manager.create_table("users", schema)
# Insert data with automatic duplicate detection
user_data = {"id": 1, "name": "John Doe", "email": "john@example.com"}
result = manager.insert("users", user_data)
if result == "duplicate":
    print("User already exists!")
else:
    print("User created successfully!")
# Query data
users = manager.select("users", conditions={"name": "John Doe"})
print(f"Found {len(users)} users")
```
### PostgreSQL Example
```python
from db_utils import DBManager, DBConfig
from db_utils.interfaces import PostgreSQLInterface
# Initialize PostgreSQL interface
interface = PostgreSQLInterface(
    host="localhost",
    port=5432,
    user="username",
    password="password",
    database="mydb"
)
# Or use environment variables with DBConfig
config = DBConfig.from_env("postgresql")
from db_utils.interfaces import SQLAlchemyInterface
interface = SQLAlchemyInterface(config.conn_str)
manager = DBManager(interface)
# Rest of the code is the same...
```
### Using Environment Variables
Create a `.env` file:
```env
POSTGRESQL_HOST=localhost
POSTGRESQL_PORT=5432
POSTGRESQL_USER=myuser
POSTGRESQL_PASSWORD=mypassword
POSTGRESQL_DATABASE=mydatabase

MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASSWORD=password
MYSQL_DATABASE=testdb

SNOWFLAKE_ACCOUNT=your-account
SNOWFLAKE_USER=your-user
SNOWFLAKE_PASSWORD=your-password
SNOWFLAKE_WAREHOUSE=your-warehouse
SNOWFLAKE_DATABASE=your-database
SNOWFLAKE_SCHEMA=your-schema
```
Then use `DBConfig`:
```python
from db_utils import DBConfig
from db_utils.interfaces import SQLAlchemyInterface
# Load configuration from environment
config = DBConfig.from_env("postgresql")
interface = SQLAlchemyInterface(config.connection_string)
```
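Under the hood, `from_env` presumably maps `<DBTYPE>_<FIELD>` environment variables onto configuration fields. The following is a minimal, self-contained sketch of that lookup; `load_db_config` and its field list are illustrative stand-ins, not the actual `DBConfig` internals:

```python
import os
from typing import Dict

def load_db_config(db_type: str) -> Dict[str, str]:
    """Collect settings from <DBTYPE>_<FIELD> environment variables.

    Illustrative only: the real DBConfig.from_env() may read more fields
    and apply type conversion (e.g. port as int).
    """
    fields = ("host", "port", "user", "password", "database")
    prefix = db_type.upper()
    return {
        field: os.environ[f"{prefix}_{field.upper()}"]
        for field in fields
        if f"{prefix}_{field.upper()}" in os.environ
    }
```

With the `.env` values above loaded into the process environment, `load_db_config("postgresql")` would return a dict containing `host`, `port`, `user`, `password`, and `database`.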
## Advanced Usage
### Hash Collision Detection
```python
# Configure hash columns for duplicate detection
manager = DBManager(interface, hash_columns=["email", "phone"])
# Insert will automatically check for duplicates
data = {"name": "Jane Doe", "email": "jane@example.com", "phone": "+1234567890"}
result = manager.insert("users", data, skip_duplicates=True)
if result == "duplicate":
    print("Record with same email/phone already exists")
```
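Conceptually, the duplicate check reduces to hashing the configured columns and comparing against hashes already stored. Here is a self-contained sketch of that idea, with an in-memory `seen` set standing in for hashes persisted in the table (this is not the package's actual implementation):

```python
import hashlib
from typing import Dict, List

def row_hash(row: Dict, hash_columns: List[str]) -> str:
    # Join the configured column values into one stable string, then digest it.
    material = "|".join(str(row.get(col, "")) for col in hash_columns)
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

seen = set()  # stands in for hash values stored alongside the table's rows

def insert_if_new(row: Dict, hash_columns: List[str]) -> str:
    h = row_hash(row, hash_columns)
    if h in seen:
        return "duplicate"
    seen.add(h)
    return "inserted"
```

Because only the hash columns feed the digest, two rows that differ elsewhere but share the same email and phone still collide, which is exactly the duplicate-prevention behavior described above.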
### Transaction Management
```python
# Using context manager for automatic transaction handling
with interface:
    manager.insert("users", user1_data)
    manager.insert("users", user2_data)
    # Automatically commits on success, rolls back on error
```
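The commit-on-success / rollback-on-error behavior suggested above maps naturally onto Python's context-manager protocol. A minimal sketch of how such an interface could be structured (`TransactionalInterface` is a hypothetical class, not the package's real one):

```python
class TransactionalInterface:
    """Sketch of commit-on-success / rollback-on-error via __enter__/__exit__."""

    def __init__(self):
        self.committed = False
        self.rolled_back = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # __exit__ receives the exception (if any) raised inside the block.
        if exc_type is None:
            self.committed = True   # real code would issue COMMIT here
        else:
            self.rolled_back = True  # real code would issue ROLLBACK here
        return False  # False -> re-raise the exception to the caller
```

Returning `False` from `__exit__` is the conventional choice: the transaction is rolled back, but the original exception still propagates so the caller can handle it.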
### Custom Schema Creation
```python
# Create complex table with constraints
schema = {
    "id": "SERIAL PRIMARY KEY",
    "username": "VARCHAR(50) UNIQUE NOT NULL",
    "email": "VARCHAR(100) UNIQUE NOT NULL",
    "password_hash": "VARCHAR(255) NOT NULL",
    "is_active": "BOOLEAN DEFAULT TRUE",
    "created_at": "TIMESTAMP DEFAULT CURRENT_TIMESTAMP",
    "updated_at": "TIMESTAMP DEFAULT CURRENT_TIMESTAMP"
}
manager.create_table("users", schema)
# Add indexes (database-specific)
if hasattr(interface, 'execute_query'):
    interface.execute_query("CREATE INDEX idx_users_email ON users(email)")
```
### Batch Operations
```python
# Bulk insert with duplicate checking
users_data = [
    {"name": "User 1", "email": "user1@example.com"},
    {"name": "User 2", "email": "user2@example.com"},
    {"name": "User 3", "email": "user3@example.com"},
]
inserted_count = 0
duplicate_count = 0
for user_data in users_data:
    result = manager.insert("users", user_data)
    if result == "duplicate":
        duplicate_count += 1
    else:
        inserted_count += 1
print(f"Inserted: {inserted_count}, Duplicates: {duplicate_count}")
```
### Query Building
```python
from db_utils import QueryBuilder
# Custom query building
builder = QueryBuilder("postgresql")
# Build complex SELECT query
query, params = builder.build_query_params(
    "SELECT",
    "users",
    columns=["id", "name", "email"],
    conditions={"is_active": True, "created_at": "2024-01-01"},
    limit=10,
    offset=20
)
results = interface.execute_query(str(query), params, fetch="all")
```
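To make the builder's output concrete, here is a rough sketch of the kind of parameterized statement such a call could assemble. `build_select` is a hypothetical stand-in using `%s`-style placeholders (as PostgreSQL drivers commonly do); the real `QueryBuilder` may emit different placeholder styles per dialect:

```python
from typing import Dict, List, Optional, Tuple

def build_select(table: str, columns: List[str],
                 conditions: Optional[Dict] = None,
                 limit: Optional[int] = None,
                 offset: Optional[int] = None) -> Tuple[str, list]:
    """Assemble a parameterized SELECT; values never go into the SQL string."""
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    params = []
    if conditions:
        clauses = [f"{col} = %s" for col in conditions]
        params.extend(conditions.values())
        sql += " WHERE " + " AND ".join(clauses)
    if limit is not None:
        sql += f" LIMIT {int(limit)}"
    if offset is not None:
        sql += f" OFFSET {int(offset)}"
    return sql, params
```

The key property, which the real builder presumably shares, is that condition values travel in `params` for the driver to bind, not interpolated into the SQL text.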
## Error Handling
The package includes comprehensive error handling:
```python
from db_utils.exception import (
    DBOperationError,
    ValidationError,
    ConnectionError,
    ConfigurationError
)

try:
    manager.insert("users", invalid_data)
except ValidationError as e:
    print(f"Data validation failed: {e}")
except DBOperationError as e:
    print(f"Database operation failed: {e}")
except ConnectionError as e:
    print(f"Database connection failed: {e}")
```
## Configuration
### Supported Configuration Methods
1. **Direct instantiation**:
   ```python
   config = DBConfig("postgresql", host="localhost", user="user", password="pass")
   ```

2. **Environment variables**:
   ```python
   config = DBConfig.from_env("postgresql")
   ```

3. **Mixed approach**:
   ```python
   config = DBConfig("postgresql", host="custom-host")  # Other params from env
   ```
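The mixed approach implies a precedence rule: explicit keyword arguments win, and the environment fills the gaps. A hypothetical sketch of that merge (`resolve_config` and its field list are assumptions, not the actual `DBConfig` internals):

```python
import os
from typing import Dict

def resolve_config(db_type: str, **overrides) -> Dict[str, str]:
    """Explicit kwargs take precedence; <DBTYPE>_<FIELD> env vars are the fallback."""
    fields = ("host", "port", "user", "password", "database")
    prefix = db_type.upper()
    resolved = {}
    for field in fields:
        env_key = f"{prefix}_{field.upper()}"
        if field in overrides:
            resolved[field] = overrides[field]   # explicit argument wins
        elif env_key in os.environ:
            resolved[field] = os.environ[env_key]  # fall back to environment
    return resolved
```

So with `POSTGRESQL_HOST` set in the environment, `resolve_config("postgresql", host="custom-host")` still yields `host="custom-host"` while the remaining fields come from the environment.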
### Database-Specific Configuration
#### Snowflake
```python
config = DBConfig("snowflake",
    account="your-account",
    user="username",
    password="password",
    warehouse="compute_wh",
    database="analytics",
    schema="public",
    role="analyst"
)
```
#### Databricks
```python
config = DBConfig("databricks",
    host="your-workspace.cloud.databricks.com",
    password="your-token",  # Personal access token
    database="/your/database/path"
)
```
## Testing
Run the test suite:
```bash
# Install development dependencies
pip install pyspdbutils[dev]
# Run tests
pytest
# Run with coverage
pytest --cov=db_utils --cov-report=html
```
## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Add tests for your changes
5. Run the test suite (`pytest`)
6. Commit your changes (`git commit -m 'Add amazing feature'`)
7. Push to the branch (`git push origin feature/amazing-feature`)
8. Open a Pull Request
## 📚 Additional Documentation
- **[📦 INSTALL.md](INSTALL.md)** - Complete installation guide for all database types
- **[🔧 SETUP.md](SETUP.md)** - Development setup and contribution guidelines
- **[📝 examples.py](examples.py)** - Basic usage examples and patterns
- **[🚀 advanced_examples_copy.py](advanced_examples_copy.py)** - Advanced features and enterprise usage
- **[📊 data_approaches_demo.py](data_approaches_demo.py)** - Data handling best practices
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Changelog
### Version 1.0.0
- Initial release
- Support for 12+ database types
- Hash collision detection
- Schema validation
- Production-grade error handling
- Comprehensive test suite
## Support
- 📧 Email: debi.rath817@gmail.com
- 🐛 Issues: [GitHub Issues](https://github.com/yourusername/pyspdbutils/issues)
- 📖 Documentation: [GitHub Wiki](https://github.com/yourusername/pyspdbutils/wiki)
## Related Projects
- [SQLAlchemy](https://sqlalchemy.org/) - The Python SQL toolkit
- [Pandas](https://pandas.pydata.org/) - Data analysis and manipulation tool
- [Alembic](https://alembic.sqlalchemy.org/) - Database migration tool