# Abraxos
[PyPI](https://pypi.org/project/abraxos/) · [Documentation](https://abraxos.readthedocs.io/en/latest/) · [License](LICENSE) · [Source](https://github.com/eddiethedean/abraxos)
**Abraxos** is a lightweight Python toolkit for robust, row-aware data processing using Pandas and Pydantic. It helps you build resilient ETL pipelines that gracefully handle errors at the row level.
## ✨ Why Abraxos?
Traditional data pipelines fail completely when they encounter a single bad row. Abraxos changes that:
- 🛡️ **Fault-tolerant by design** - isolate and recover from row-level errors
- 🔍 **Full error visibility** - see exactly which rows failed and why
- 🔄 **Automatic retry logic** - recursive splitting to isolate problem rows
- 📊 **Production-ready** - 118 tests, 92% coverage, type-safe
---
## 🚀 Features
- 📄 **CSV Ingestion with Bad Line Recovery**
  Read CSVs in full or in chunks, automatically capturing malformed lines separately.
- 🔁 **Transform DataFrames Resiliently**
  Apply transformation functions and automatically isolate rows that fail.
- 🧪 **Pydantic-Based Row Validation**
  Validate each row using Pydantic models, separating valid and invalid records.
- 🛢️ **SQL Insertion with Error Splitting**
  Insert DataFrames into SQL databases with automatic retry and chunking for failed rows.
---
## 📦 Installation
```bash
pip install abraxos
```
**With optional dependencies:**
```bash
# For SQL support
pip install abraxos[sql]

# For Pydantic validation
pip install abraxos[validate]

# For development
pip install abraxos[dev]

# Everything
pip install abraxos[all]
```
**Requirements:**
- Python 3.10+
- pandas >= 1.5.0
- numpy >= 1.23.0
- Optional: sqlalchemy >= 2.0.0
- Optional: pydantic >= 2.0.0
---
## 📖 Documentation
Full documentation is available at: [https://abraxos.readthedocs.io](https://abraxos.readthedocs.io)
---
## 🎯 Quick Start
Here are worked examples showing Abraxos in action:
### 🔍 Example 1: Read CSVs with Error Recovery
Abraxos captures malformed lines instead of crashing your pipeline:
```python
from abraxos import read_csv

# Read a CSV that has some malformed lines
result = read_csv("data.csv")

print("Bad lines:", result.bad_lines)
print("\nClean data:")
print(result.dataframe)
```
**Output:**
```
Bad lines: [['TOO', 'MANY', 'COLUMNS', 'HERE']]

Clean data:
   id    name  age
0   1     Joe   28
1   2   Alice   35
2   3  Marcus   40
```
---
### 🧼 Example 2: Transform with Fault Isolation
Apply transformations that automatically isolate problematic rows:
```python
import pandas as pd
from abraxos import transform

df = pd.DataFrame({
    'id': [1, 2, 3],
    'name': [' Joe ', ' Alice ', ' Marcus '],
    'age': [28, 35, 40]
})

def clean_data(df):
    df = df.copy()
    df["name"] = df["name"].str.strip().str.lower()
    return df

result = transform(df, clean_data)
print("Errors:", result.errors)
print("\nSuccess DataFrame:")
print(result.success_df)
```
**Output:**
```
Errors: []

Success DataFrame:
   id    name  age
0   1     joe   28
1   2   alice   35
2   3  marcus   40
```
---
### ⚡ Example 3: Automatic Error Isolation
When transformation fails on some rows, Abraxos automatically isolates them:
```python
import pandas as pd
from abraxos import transform

df = pd.DataFrame({'value': [1, 2, 0, 3, 4]})

def divide_by_value(df):
    df = df.copy()
    if (df['value'] == 0).any():
        raise ValueError('Cannot divide by zero')
    df['result'] = 100 / df['value']
    return df

result = transform(df, divide_by_value)

print(f"Errors encountered: {len(result.errors)}")
print(f"\nSuccessful rows ({len(result.success_df)}):")
print(result.success_df)
print(f"\nFailed rows ({len(result.errored_df)}):")
print(result.errored_df)
```
**Output:**
```
Errors encountered: 1

Successful rows (4):
   value      result
0      1  100.000000
1      2   50.000000
3      3   33.333333
4      4   25.000000

Failed rows (1):
   value
2      0
```
Notice how Abraxos automatically isolated the problematic row (value=0) and processed the rest!
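Under the hood, this retry logic is a divide-and-conquer search: try the whole frame, and on failure split it and retry each half until the failing rows are cornered one by one. Here is a minimal standalone sketch of that idea (illustrative only, not Abraxos's actual internals):
```python
import pandas as pd

def transform_with_isolation(df: pd.DataFrame, fn):
    # Try the whole frame first; if it succeeds, nothing errored.
    try:
        return fn(df), df.iloc[0:0]
    except Exception:
        # A single row that still fails is a confirmed bad row: isolate it.
        if len(df) <= 1:
            return df.iloc[0:0], df
        # Otherwise split in half and retry each half recursively.
        mid = len(df) // 2
        ok_l, bad_l = transform_with_isolation(df.iloc[:mid], fn)
        ok_r, bad_r = transform_with_isolation(df.iloc[mid:], fn)
        return pd.concat([ok_l, ok_r]), pd.concat([bad_l, bad_r])
```
Applied to `divide_by_value` above, this corners row 2 after a few splits while every other row passes through transformed.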
---
### ✅ Example 4: Validate with Pydantic
Validate each row and separate valid from invalid data:
```python
import pandas as pd
from abraxos import validate
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

df = pd.DataFrame({
    'name': ['Joe', 'Alice', 'Marcus'],
    'age': [28, 'invalid', 40]
})

result = validate(df, Person)

print("Valid rows:")
print(result.success_df)
print(f"\nNumber of validation errors: {len(result.errors)}")
print("\nInvalid rows:")
print(result.errored_df)
```
**Output:**
```
Valid rows:
     name age
0     Joe  28
2  Marcus  40

Number of validation errors: 1

Invalid rows:
    name      age
1  Alice  invalid
```
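In practice you would often quarantine the invalid rows and log the reasons. A small follow-on sketch (the file name is illustrative):
```python
# Route invalid rows to a quarantine file for review (name is illustrative)
result.errored_df.to_csv("quarantine.csv", index=False)

# result.errors holds one error object per failure; log them for triage
for error in result.errors:
    print(error)
```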
---
### 🗃️ Example 5: SQL Insertion with Retry Logic
Insert data into SQL with automatic error handling:
```python
import pandas as pd
from abraxos import to_sql
from sqlalchemy import create_engine

engine = create_engine("sqlite:///example.db")

df = pd.DataFrame({
    'name': ['Joe', 'Alice', 'Marcus'],
    'age': [28, 35, 40]
})

result = to_sql(df, "people", engine)

print(f"Successful inserts: {result.success_df.shape[0]}")
print(f"Failed rows: {result.errored_df.shape[0]}")
```
**Output:**
```
Successful inserts: 3
Failed rows: 0
```
---
### 📚 Example 6: Process Large Files in Chunks
Read and process large CSV files efficiently:
```python
from abraxos import read_csv

# Read in chunks of 1000 rows
for chunk_result in read_csv("large_file.csv", chunksize=1000):
    print(f"Processing chunk with {len(chunk_result.dataframe)} rows")
    print(f"Bad lines in this chunk: {len(chunk_result.bad_lines)}")

    # Process the chunk
    # ... your processing logic here
```
**Output (illustrative):**
```
Processing chunk with 1000 rows
Bad lines in this chunk: 0
Processing chunk with 1000 rows
Bad lines in this chunk: 1
Processing chunk with 500 rows
Bad lines in this chunk: 0
```
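Chunked reading composes with the other functions into a streaming ETL that never holds the whole file in memory. A minimal sketch using only the API shown above (the file and table names are illustrative):
```python
from abraxos import read_csv, to_sql
from sqlalchemy import create_engine

engine = create_engine("sqlite:///stream.db")  # illustrative target

total_ok = 0
total_flagged = 0
for chunk_result in read_csv("large_file.csv", chunksize=1000):
    # Load each clean chunk as it arrives
    sql_result = to_sql(chunk_result.dataframe, "events", engine)
    total_ok += len(sql_result.success_df)
    total_flagged += len(sql_result.errored_df) + len(chunk_result.bad_lines)

print(f"Inserted {total_ok} rows; flagged {total_flagged} rows/lines for review")
```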
---
## 🔄 Complete ETL Pipeline Example
Here's a complete example combining multiple features:
```python
import pandas as pd
from abraxos import read_csv, transform, validate, to_sql
from pydantic import BaseModel
from sqlalchemy import create_engine

# 1. Extract: Read CSV with error recovery
csv_result = read_csv("messy_data.csv")
print(f"Captured {len(csv_result.bad_lines)} bad lines")

# 2. Transform: Clean the data
def clean_data(df):
    df = df.copy()
    df['name'] = df['name'].str.strip().str.title()
    df['age'] = pd.to_numeric(df['age'], errors='coerce')
    return df.dropna()

transform_result = transform(csv_result.dataframe, clean_data)
print(f"Transformed {len(transform_result.success_df)} rows successfully")

# 3. Validate: Ensure data quality
class Person(BaseModel):
    name: str
    age: int

validate_result = validate(transform_result.success_df, Person)
print(f"Validated {len(validate_result.success_df)} rows")
print(f"Validation failed for {len(validate_result.errored_df)} rows")

# 4. Load: Insert into database
engine = create_engine("sqlite:///clean_data.db")
load_result = to_sql(validate_result.success_df, "people", engine)
print(f"Loaded {len(load_result.success_df)} rows to database")

# Error artifacts for reporting:
csv_result.bad_lines          # Malformed CSV lines
transform_result.errored_df   # Rows that failed transformation
validate_result.errored_df    # Rows that failed validation
load_result.errored_df        # Rows that failed to insert
```
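To keep those error artifacts for later triage, one option is to write each to disk. A short sketch continuing the example above (the file names are illustrative):
```python
import csv

# bad_lines is a list of field lists, so plain csv.writer handles it
with open("bad_lines.csv", "w", newline="") as f:
    csv.writer(f).writerows(csv_result.bad_lines)

# The failed rows at each stage are DataFrames
transform_result.errored_df.to_csv("transform_errors.csv", index=False)
validate_result.errored_df.to_csv("validation_errors.csv", index=False)
load_result.errored_df.to_csv("load_errors.csv", index=False)
```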
---
## 🏗️ API Reference
### Core Functions
#### `read_csv(path, *, chunksize=None, **kwargs) -> ReadCsvResult | Generator`
Read CSV files with automatic bad line recovery.
**Returns:** `ReadCsvResult(bad_lines, dataframe)`, or a generator of such results when `chunksize` is given.
#### `transform(df, transformer, chunks=2) -> TransformResult`
Apply a transformation function with automatic error isolation.
**Returns:** `TransformResult(errors, errored_df, success_df)`
#### `validate(df, model) -> ValidateResult`
Validate DataFrame rows using a Pydantic model.
**Returns:** `ValidateResult(errors, errored_df, success_df)`
#### `to_sql(df, name, con, *, if_exists='append', chunks=2, **kwargs) -> ToSqlResult`
Insert DataFrame into SQL database with retry logic.
**Returns:** `ToSqlResult(errors, errored_df, success_df)`
### Utility Functions
- `split(df, n=2)` - Split DataFrame into n parts
- `clear(df)` - Create empty DataFrame with same schema
- `to_records(df)` - Convert DataFrame to list of dicts with None for NaN
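A quick sketch of these helpers in use; the expected results are inferred from the descriptions above, so treat the exact shapes as assumptions:
```python
import pandas as pd
from abraxos import split, clear, to_records

df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [1.0, None, 3.0, 4.0]})

parts = split(df, 2)      # presumably two DataFrames of two rows each
template = clear(df)      # empty DataFrame with the same columns
records = to_records(df)  # e.g. [{'a': 1, 'b': 1.0}, {'a': 2, 'b': None}, ...]
```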
---
## 🧪 Testing & Development
Abraxos is thoroughly tested and type-safe:
```bash
# Install with dev dependencies
pip install -e ".[dev]"

# Run tests with coverage (118 tests, 92% coverage)
pytest

# Run type checking
mypy abraxos  # Success: no issues found

# Run linting and formatting
ruff check .  # All checks passed
ruff format .
```
**Test Coverage:**
- 118 tests passing
- 92% code coverage
- All major code paths tested
- Type-safe with mypy
---
## 🤝 Contributing
Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
**Quick checklist:**
- ✅ Add tests for new features
- ✅ Maintain 90%+ coverage
- ✅ Pass all type checks (`mypy abraxos`)
- ✅ Pass all lints (`ruff check .`)
- ✅ Update documentation
---
## 📝 Changelog
See [CHANGELOG.md](CHANGELOG.md) for version history and migration guides.
---
## 📄 License
MIT License ยฉ 2024 Odos Matthews
---
## 🧙‍♂️ Author
Crafted by [Odos Matthews](https://github.com/eddiethedean) to bring resilience and magic to data workflows.
---
## ⭐ Support
If Abraxos helps your project, consider:
- ⭐ Starring the repo
- 🐛 Reporting issues
- 🤝 Contributing improvements
- 📢 Sharing with others
**Happy data processing! 🚀**