rdatacompy 0.1.9

- **Summary**: Lightning-fast dataframe comparison library built in Rust with Python bindings
- **Author**: Juho Jääskeläinen <juho.jaaskelainen95@hotmail.fi>
- **Homepage**: https://github.com/Jassi95/rdatacompy
- **License**: Apache-2.0
- **Requires Python**: >=3.8
- **Keywords**: dataframe, comparison, arrow, pyarrow, rust, data-quality, testing, pandas, spark
- **Uploaded**: 2025-10-27 15:32:18

# RDataCompy

Lightning-fast dataframe comparison library, implemented in Rust with Python bindings.

## Overview

RDataCompy is a high-performance library for comparing dataframes, inspired by Capital One's [datacompy](https://github.com/capitalone/datacompy). Built in Rust and leveraging Apache Arrow, it provides **40+ million cells/second** throughput with comprehensive comparison reports.

### Why RDataCompy?

- 🚀 **Blazing Fast**: 40-46M cells/second (100-1000x faster than Python-based solutions)
- 💾 **Memory Efficient**: Zero-copy operations using Apache Arrow columnar format
- 🎯 **Flexible**: Configurable tolerance for numeric comparisons
- 📊 **Comprehensive**: Detailed reports showing exact differences
- 🔧 **Multi-Format**: Works with PyArrow, Pandas, PySpark, and Polars DataFrames
- 💰 **Decimal Support**: Full DECIMAL(p,s) support with cross-precision compatibility

## Installation

```bash
pip install rdatacompy
```

### Optional Dependencies

```bash
# For PySpark support
pip install rdatacompy[spark]

# For Pandas support
pip install rdatacompy[pandas]

# For Polars support
pip install rdatacompy[polars]

# Install everything
pip install rdatacompy[all]
```

## Quick Start

### Basic Usage (PyArrow)

```python
import pyarrow as pa
from rdatacompy import Compare

# Create sample tables
df1 = pa.table({
    'id': [1, 2, 3, 4],
    'value': [10.0, 20.0, 30.0, 40.0],
    'name': ['Alice', 'Bob', 'Charlie', 'David']
})

df2 = pa.table({
    'id': [1, 2, 3, 5],
    'value': [10.001, 20.0, 30.5, 50.0],
    'name': ['Alice', 'Bob', 'Chuck', 'Eve']
})

# Compare dataframes
comp = Compare(
    df1, 
    df2,
    join_columns=['id'],
    abs_tol=0.01,  # Absolute tolerance for floats
    df1_name='original',
    df2_name='updated'
)

# Print comprehensive report
print(comp.report())
```

**Example Output:**

```
DataComPy Comparison
--------------------

DataFrame Summary
-----------------

DataFrame          Columns       Rows
original                 3          4
updated                  3          4

Column Summary
--------------

Number of columns in common: 3
Number of columns in original but not in updated: 0
Number of columns in updated but not in original: 0

Row Summary
-----------

Matched on: id
Any duplicates on match values: No
Absolute Tolerance: 0.01
Relative Tolerance: 0
Number of rows in common: 3
Number of rows in original but not in updated: 1
Number of rows in updated but not in original: 1

Number of rows with some compared columns unequal: 1
Number of rows with all compared columns equal: 2

Column Comparison
-----------------

Number of columns compared with some values unequal: 2
Number of columns compared with all values equal: 0
Total number of values which compare unequal: 2

Columns with Unequal Values or Types
------------------------------------

Column               original dtype  updated dtype      # Unequal     Max Diff  # Null Diff
value                float64         float64                    1       0.5000            0
name                 string          string                     1          N/A            0

Sample Rows with Unequal Values for 'value'
--------------------------------------------------

id                   value (original)          value (updated)          
3                    30.000000                 30.500000                

Sample Rows with Unequal Values for 'name'
--------------------------------------------------

id                   name (original)           name (updated)           
3                    Charlie                   Chuck                    
```

**Understanding the Report:**

- **DataFrame Summary**: Shows the dimensions of both dataframes being compared
- **Column Summary**: Lists columns that exist in both, and columns unique to each dataframe
- **Row Summary**: 
  - Shows which columns were used for matching (join keys)
  - Number of rows that exist in both dataframes (3 in this example)
  - Rows unique to each dataframe (id=4 only in original, id=5 only in updated)
  - How many matched rows have differences (1 row has differences, 2 are identical)
- **Column Comparison**: Summary of which columns have differences across all matched rows
- **Columns with Unequal Values**: Detailed breakdown per column showing:
  - Data types in each dataframe
  - Number of unequal values
  - Max difference (for numeric columns)
  - Number of null mismatches
- **Sample Rows**: Shows actual examples of differences with **join key values** displayed first (not row index), making it easy to identify exactly which records differ; a programmatic check follows this list
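
Beyond reading the report, the same outcome can be checked programmatically. A minimal sketch, continuing from the comparison above with the `matches()` method (documented under API Methods below):

```python
# Continuing from the comparison above: gate a pipeline on the outcome.
if not comp.matches():
    # With the sample data, id=3 differs in 'value' and 'name' (and each
    # side has one unique row), so matches() returns False.
    raise AssertionError("dataframes differ:\n" + comp.report())
```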

### Using with Pandas

```python
import pandas as pd
from rdatacompy import Compare

df1 = pd.DataFrame({
    'id': [1, 2, 3],
    'amount': [100.50, 200.75, 300.25]
})

df2 = pd.DataFrame({
    'id': [1, 2, 3],
    'amount': [100.51, 200.75, 300.24]
})

# Directly compare Pandas DataFrames (auto-converted to Arrow)
comp = Compare(df1, df2, join_columns=['id'], abs_tol=0.01)
print(comp.report())
```

### Using with PySpark

```python
from pyspark.sql import SparkSession
from rdatacompy import Compare

# For Spark 3.5, enable Arrow for better performance
spark = SparkSession.builder \
    .config("spark.sql.execution.arrow.pyspark.enabled", "true") \
    .getOrCreate()

df1 = spark.createDataFrame([(1, 100), (2, 200)], ['id', 'value'])
df2 = spark.createDataFrame([(1, 100), (2, 201)], ['id', 'value'])

# Directly compare Spark DataFrames (auto-converted to Arrow)
# Works with Spark 3.5+ (via toPandas) and 4.0+ (via toArrow)
comp = Compare(df1, df2, join_columns=['id'])
print(comp.report())
```
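
### Using with Polars

Polars is listed among the supported inputs (`pip install rdatacompy[polars]`). A minimal sketch, assuming Polars DataFrames are auto-converted to Arrow the same way as the Pandas and Spark inputs above:

```python
import polars as pl
from rdatacompy import Compare

df1 = pl.DataFrame({'id': [1, 2, 3], 'score': [0.10, 0.20, 0.30]})
df2 = pl.DataFrame({'id': [1, 2, 3], 'score': [0.10, 0.25, 0.30]})

# Directly compare Polars DataFrames (auto-converted to Arrow)
comp = Compare(df1, df2, join_columns=['id'], abs_tol=0.01)
print(comp.report())
```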

### Decimal Support

```python
from decimal import Decimal
import pyarrow as pa
from rdatacompy import Compare

# Compare DECIMAL columns with different precision/scale
df1 = pa.table({
    'id': [1, 2, 3],
    'price': pa.array([
        Decimal('123.456789012345'),
        Decimal('999.999999999999'),
        Decimal('42.123456789012')
    ], type=pa.decimal128(28, 12))  # High precision
})

df2 = pa.table({
    'id': [1, 2, 3],
    'price': pa.array([
        Decimal('123.456789'),
        Decimal('999.999998'),
        Decimal('42.123457')
    ], type=pa.decimal128(18, 6))  # Lower precision
})

# Compare with tolerance - handles different precision automatically
comp = Compare(df1, df2, join_columns=['id'], abs_tol=0.00001)
print(comp.report())
```

## Features

### Comparison Report

The report includes:
- **DataFrame Summary**: Row and column counts
- **Column Summary**: Common columns, unique to each dataframe
- **Row Summary**: Matched rows, unique rows, duplicates
- **Column Comparison**: Which columns have differences
- **Sample Differences**: Example rows with unequal values
- **Statistics**: Number of differences, max difference, null differences

### API Methods

```python
comp = Compare(df1, df2, join_columns=['id'])

# Get full comparison report
report = comp.report()

# Check if dataframes match
matches = comp.matches()  # Returns bool

# Get common columns
common_cols = comp.intersect_columns()

# Get columns unique to each dataframe
df1_only = comp.df1_unq_columns()
df2_only = comp.df2_unq_columns()
```
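
These accessors compose naturally into automated data checks. A hedged sketch of a reusable guard built on them (the helper `assert_frames_match` is illustrative, not part of the library):

```python
import pyarrow as pa
from rdatacompy import Compare

def assert_frames_match(df1, df2, keys, abs_tol=0.0):
    """Raise with the full report if the tables differ beyond the tolerance."""
    comp = Compare(df1, df2, join_columns=keys, abs_tol=abs_tol)
    if not comp.matches():
        # Surface schema drift up front, then include the detailed report.
        extra = list(comp.df1_unq_columns()) + list(comp.df2_unq_columns())
        raise AssertionError(
            f"Columns unique to one side: {extra}\n{comp.report()}"
        )

# Passes silently: identical tables, exact comparison.
assert_frames_match(
    pa.table({'id': [1, 2], 'v': [1.0, 2.0]}),
    pa.table({'id': [1, 2], 'v': [1.0, 2.0]}),
    keys=['id'],
)
```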

### Supported Data Types

- ✅ Integers: int8, int16, int32, int64, uint8, uint16, uint32, uint64
- ✅ Floats: float32, float64
- ✅ Decimals: decimal128, decimal256 (with cross-precision compatibility)
- ✅ Strings: utf8, large_utf8
- ✅ Booleans
- ✅ Dates: date32, date64
- ✅ Timestamps (with timezone support)

### Cross-Type Compatibility

RDataCompy is designed for real-world data migration scenarios:

```python
from decimal import Decimal
import pyarrow as pa
from rdatacompy import Compare

# Compare different numeric types (int vs float vs decimal)
df1 = pa.table({'id': [1], 'val': pa.array([100], type=pa.int64())})
df2 = pa.table({'id': [1], 'val': pa.array([100.0], type=pa.float64())})
comp = Compare(df1, df2, join_columns=['id'])
comp.matches()  # True - types are compatible!

# Compare different decimal precisions
df1 = pa.table({'id': [1], 'val': pa.array([Decimal('123.45')], type=pa.decimal128(28, 12))})
df2 = pa.table({'id': [1], 'val': pa.array([Decimal('123.45')], type=pa.decimal128(18, 6))})
comp = Compare(df1, df2, join_columns=['id'])
comp.matches()  # True - precision difference handled automatically
```

## Performance

Benchmarked comparing two dataframes of 150,000 rows × 200 columns (58.8M compared data points):

- **Comparison time**: 1.3 seconds
- **Throughput**: 46 million cells/second
- **Memory overhead**: 16 MB (only stores differences)

### vs datacompy

RDataCompy is significantly faster than Python-based solutions (a micro-benchmark sketch follows this list):
- **Columnar processing**: Uses SIMD-optimized Arrow compute kernels
- **Zero-copy**: Works directly on Arrow arrays without data duplication
- **Hash-based joins**: O(n) row matching with no per-row Python overhead
- **No type inference**: Arrow types are known upfront (no runtime checks per column)
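
To get a rough throughput number on your own hardware, here is a hedged micro-benchmark sketch (the shape, the numpy-based data generation, and the two-sided cell-count convention are illustrative, not the setup behind the figures above):

```python
import time
import numpy as np
import pyarrow as pa
from rdatacompy import Compare

rows, cols = 150_000, 50  # smaller than the published benchmark
rng = np.random.default_rng(0)

data = {'id': pa.array(np.arange(rows))}
data.update({f'c{i}': pa.array(rng.random(rows)) for i in range(cols)})
df1 = pa.table(data)
df2 = df1  # identical tables: measures pure comparison throughput

start = time.perf_counter()
comp = Compare(df1, df2, join_columns=['id'])
comp.matches()  # force the comparison in case it is computed lazily
elapsed = time.perf_counter() - start
print(f"{2 * rows * cols / elapsed / 1e6:.1f}M cells/s in {elapsed:.2f}s")
```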

## Development

### Prerequisites

- Rust 1.70+
- Python 3.8+
- maturin

### Building from Source

```bash
# Clone repository
git clone https://github.com/Jassi95/rdatacompy
cd rdatacompy

# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install maturin
pip install maturin

# Build and install in development mode
maturin develop --release

# Run examples
python examples/basic_usage.py
```

### Running Tests

```bash
# Rust tests
cargo test

# Python examples
python examples/test_multi_dataframe_types.py
python examples/test_decimal_types.py
python examples/benchmark_large.py
```

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

Apache-2.0

## Acknowledgments

- Inspired by [Capital One's datacompy](https://github.com/capitalone/datacompy)
- Built with [Apache Arrow](https://arrow.apache.org/) and [PyO3](https://pyo3.rs/)

## Roadmap

See [TODO.md](TODO.md) for planned features and improvements.



            
