# Intrinsic Dimension Analysis with DRR Metrics
[Build Status](https://github.com/andre-motta/dimensionality_reduction_ratio/actions) · [Python 3.11+](https://www.python.org/downloads/) · [License: Unlicense](http://unlicense.org/) · [Code style: black](https://github.com/psf/black)
A professional Python toolkit for estimating the intrinsic dimensionality of datasets and computing Dimensionality Reduction Ratio (DRR) metrics. This implementation is based on the correlation function approach from Levina & Bickel (2005) with enhancements for large-scale dataset processing.
## 🚀 Quick Start
```bash
# Install the package
pip install drr
# Process all datasets from configuration file
drr batch datasets.txt
# Process a single dataset
drr single data/config/Apache_AllMeasurements.csv
# Use custom parameters with debug logging
drr --log-level DEBUG batch datasets.txt --max-samples 5000 --metric euclidean
```
## 📋 Table of Contents
- [Overview](#overview)
- [Features](#features)
- [Installation](#installation)
- [Usage](#usage)
- [Dataset Configuration](#dataset-configuration)
- [Algorithm Details](#algorithm-details)
- [DRR Metrics](#drr-metrics)
- [Results](#results)
- [Directory Structure](#directory-structure)
- [Testing](#testing)
- [Repository](#repository)
## 🔍 Overview
This toolkit implements the **Levina-Bickel correlation function method** for intrinsic dimension estimation, enhanced with:
- **DRR (Dimensionality Reduction Ratio)** metric: `DRR = 1 - (I/R)`
- **Large dataset handling** with intelligent sampling strategies
- **Batch processing** capabilities for multiple datasets
- **Professional logging** and error handling
- **Resume functionality** for interrupted processing jobs
### What is Intrinsic Dimension?
The **intrinsic dimension** of a dataset is the minimum number of parameters needed to represent the data without significant information loss. While a dataset might exist in a high-dimensional space (raw dimension R), its true complexity might be much lower (intrinsic dimension I).
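As a quick, hypothetical illustration (synthetic data, using the convenience function documented in the Python API section later in this README):
```python
import numpy as np
import drr

# 1,000 points embedded in 3-D space (raw dimension R = 3) that are generated
# from only two free parameters, so the intrinsic dimension I should be near 2.
rng = np.random.default_rng(42)
uv = rng.uniform(size=(1000, 2))
points = np.column_stack([uv[:, 0], uv[:, 1], uv[:, 0] + uv[:, 1]])

original_dims, intrinsic_dim, drr_value = drr.estimate_intrinsic_dimension(points.tolist())
print(original_dims, intrinsic_dim, drr_value)  # expect R = 3 and I close to 2
```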
### What is DRR?
**Dimensionality Reduction Ratio (DRR)** quantifies how much dimensionality reduction is possible:
- `DRR = 1 - (I/R)`
- **High DRR (>0.5)**: Significant dimensionality reduction possible
- **Low DRR (<0.3)**: Dataset complexity is close to its raw dimensionality
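For example, the sample result shown later in this README has `R = 43` raw features and an estimated intrinsic dimension of `I = 12`, so `DRR = 1 - 12/43 ≈ 0.721`: roughly 72% of the raw dimensionality appears redundant.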
## ✨ Features
### Core Capabilities
- 🔬 **Intrinsic dimension estimation** using correlation function analysis
- 📊 **DRR metric computation** for dataset complexity analysis
- 🗂️ **Batch processing** of multiple datasets from configuration files
- 📈 **Large dataset optimization** with multi-level sampling
- 🔧 **Resume functionality** for interrupted processing jobs
### Technical Features
- 🏗️ **Professional architecture** with modular design
- 📝 **Comprehensive logging** with configurable levels
- 🛡️ **Robust error handling** and validation
- 🔄 **Progress tracking** and status reporting
- 📊 **CSV results export** with detailed metrics
### Data Processing
- 🧹 **Automatic preprocessing** (categorical encoding, missing value handling)
- 🎯 **Goal variable detection** and removal
- 📏 **Distance metric selection** (L1, L2, Euclidean, Manhattan, Cosine)
- 🔀 **Intelligent sampling** for datasets >50K rows (see the sketch below)
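A rough, hypothetical sketch of what these preprocessing steps amount to (pandas-based, not the package's internal code; the `goal_columns` parameter and thresholds are assumptions for illustration):
```python
import pandas as pd

def preprocess(csv_path, goal_columns=(), max_samples=2000):
    """Illustrative preprocessing: drop goal columns, encode categoricals,
    fill missing values, and subsample very large datasets."""
    df = pd.read_csv(csv_path)
    df = df.drop(columns=[c for c in goal_columns if c in df.columns])  # goal removal
    for col in df.columns:
        if df[col].dtype == object:                     # categorical encoding
            df[col] = pd.factorize(df[col])[0]
    df = df.fillna(df.median(numeric_only=True))        # missing value handling
    if len(df) > max_samples:                           # sampling for large datasets
        df = df.sample(n=max_samples, random_state=0)
    return df.to_numpy(dtype=float)
```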
## 🛠️ Installation
### From PyPI (Recommended)
```bash
# Install the latest stable version
pip install drr
# Install with development dependencies
pip install drr[dev]
# Install with all optional dependencies
pip install drr[all]
```
### From Source
```bash
# Clone the repository
git clone https://github.com/andre-motta/dimensionality_reduction_ratio.git
cd dimensionality_reduction_ratio
# Install in development mode
pip install -e .
# Or install with development dependencies
pip install -e .[dev]
```
### Prerequisites
- Python 3.11+
- pip (Python package installer)
### Verify Installation
```bash
# Test the command-line interface
drr --help
# Or if installed from source
cd src
python -m drr --help
```
### Dependencies
This project uses the following key libraries:
- **Click**: Modern command-line interface framework
- **NumPy**: Numerical computing library
- **Pandas**: Data manipulation and analysis
- **SciPy**: Scientific computing library
- **Matplotlib**: Plotting library
## 📖 Usage
### Command Line Interface
#### Batch Processing
Process multiple datasets from a configuration file:
```bash
drr batch datasets.txt
```
With custom parameters:
```bash
drr --log-level DEBUG batch datasets.txt \
    --max-samples 5000 \
    --metric euclidean \
    --data-root data
```
#### Single Dataset Processing
Process an individual dataset:
```bash
drr single data/config/Apache_AllMeasurements.csv
```
With custom parameters:
```bash
drr single data/config/Apache_AllMeasurements.csv \
    --max-samples 3000 \
    --metric manhattan
```
#### Global Options
- `--log-level`: Logging level (`DEBUG`, `INFO`, `WARNING`, `ERROR`)
- `--log-file`: Optional log file path
#### Batch Command Options
- `datasets_file`: Path to configuration file listing datasets to process
- `--data-root`: Root directory for dataset files (default: `../data`)
- `--max-samples`: Maximum samples for large datasets (default: 2000)
- `--metric`: Distance metric (`l1`, `l2`, `euclidean`, `manhattan`, `cosine`)
#### Single Command Options
- `dataset_path`: Path to the dataset file to process
- `--max-samples`: Maximum samples for large datasets (default: 2000)
- `--metric`: Distance metric (`l1`, `l2`, `euclidean`, `manhattan`, `cosine`)
### Python API
#### Single Dataset Analysis
```python
import drr
# Simple usage with convenience function
data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] # Your dataset
original_dims, intrinsic_dim, drr_value = drr.estimate_intrinsic_dimension(data)
print(f"Raw dimensions: {original_dims}")
print(f"Intrinsic dimension: {intrinsic_dim}")
print(f"DRR: {drr_value:.3f}")
# Advanced usage with classes
estimator = drr.IntrinsicDimensionEstimator(max_samples=2000, distance_metric='euclidean')
processor = drr.DataProcessor()
# Process dataset from file
data, metadata = processor.process_dataset('data/config/Apache_AllMeasurements.csv')
original_dims, intrinsic_dim, drr_value = estimator.estimate(data)
```
#### Batch Processing
```python
import drr
# Initialize batch processor
processor = drr.BatchProcessor(
    results_file="results/my_results.csv",
    max_samples=2000,
    distance_metric='manhattan'
)
# Process all datasets
results = processor.process_datasets_from_file('datasets.txt')
print(f"Processed {results['successful']} datasets successfully")
```
## 📁 Dataset Configuration
The `datasets.txt` file defines which datasets to process using a hierarchical structure:
### Format
```
# Configuration section
config
    Apache_AllMeasurements
    HSMGP_num
    SQL_AllMeasurements

# Classification datasets
classify
    breastcancer
    diabetes
    german

# Software measurement datasets
mvn
    training_set/mvn_training
    test_set/mvn_test
```
### Rules
1. **Section headers** have no indentation
2. **Dataset names** are indented (spaces or tabs)
3. **Comments** start with `#`
4. **File paths** are relative to `data_root` directory
5. **CSV extension** is automatically added (see the parsing sketch below)
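Under these rules, a configuration file like the one above can be resolved into concrete file paths. The following is a small, hypothetical sketch of such a parser (the package's own loader may differ):
```python
from pathlib import Path

def parse_datasets_file(datasets_file, data_root="../data"):
    """Yield (section, csv_path) pairs from a datasets.txt-style file."""
    section = None
    for raw in Path(datasets_file).read_text().splitlines():
        if not raw.strip() or raw.lstrip().startswith("#"):
            continue                              # skip blank lines and comments
        if raw == raw.lstrip():                   # no indentation -> section header
            section = raw.strip()
        elif section is not None:                 # indented -> dataset name
            yield section, Path(data_root) / section / f"{raw.strip()}.csv"

# e.g. the "config" section above would yield data/config/Apache_AllMeasurements.csv
```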
## 🔬 Algorithm Details
### Correlation Function Method
The algorithm estimates intrinsic dimension using the correlation function approach:
1. **Distance Computation**: Calculate pairwise distances between data points
2. **Correlation Function**: Compute `C(r) = 2 * N(r) / (n * (n - 1))`, where `N(r)` is the number of point pairs with distance ≤ r and n is the number of points
3. **Log-Log Analysis**: Fit a linear regression to `log(C(r))` vs `log(r)`
4. **Dimension Estimation**: The slope of the fit approximates the intrinsic dimension (see the sketch below)
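The following is a minimal, self-contained sketch of that procedure using only NumPy and SciPy; it illustrates the math above rather than reproducing the package's internal implementation:
```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(data, n_radii=20):
    """Estimate intrinsic dimension as the slope of log C(r) vs. log r."""
    X = np.asarray(data, dtype=float)
    n = len(X)
    dists = pdist(X)                       # pairwise distances (condensed form)
    dists = dists[dists > 0]               # ignore exact duplicates
    radii = np.logspace(np.log10(dists.min()), np.log10(dists.max()), n_radii)
    # C(r) = 2 * N(r) / (n * (n - 1)), N(r) = number of pairs with distance <= r
    C = np.array([2.0 * np.sum(dists <= r) / (n * (n - 1)) for r in radii])
    mask = C > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope                           # the slope approximates the intrinsic dimension

# 500 points on a 2-D subspace embedded in 5-D; the estimate should be near 2
rng = np.random.default_rng(0)
points = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5))
print(f"Estimated intrinsic dimension: {correlation_dimension(points):.2f}")
```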
## 📊 DRR Metrics
### Understanding DRR Values
**DRR = 1 - (I/R)** where:
- **I**: Intrinsic dimension (estimated)
- **R**: Raw dimension (number of features)
- **DRR**: Dimensionality Reduction Ratio
### Interpretation Guidelines
| DRR Range | Interpretation | Example Dataset Type |
|-----------|----------------|---------------------|
| **0.0 - 0.2** | Low reduction potential | Behavior/performance data |
| **0.2 - 0.4** | Moderate reduction | Mixed datasets |
| **0.4 - 0.6** | Good reduction potential | Configuration data |
| **0.6 - 1.0** | High reduction potential | Highly correlated features |
## 📈 Results
### Sample Output
```
===============================================
RESULTS FOR: Apache_AllMeasurements.csv
===============================================
Original Dimensions (R): 43
Intrinsic Dimension (I): 12
DRR (1 - I/R): 0.721
Data Quality: 72.1% dimensionality reduction
===============================================
```
## 🗂️ Directory Structure
```
dimensionality_reduction_ratio/
├── src/                        # Source code modules
│   ├── main.py                 # Command-line entry point
│   ├── intrinsic_dimension.py  # Core algorithm
│   ├── data_processor.py       # Data preprocessing
│   └── batch_processor.py      # Batch processing
├── config/                     # Configuration files
│   ├── datasets.txt            # Dataset configuration
│   └── test_datasets.txt       # Test configuration
├── data/                       # Dataset files
├── results/                    # Output files
├── logs/                       # Log files
├── examples/                   # Usage examples
│   └── example_usage.py        # API usage examples
└── README.md                   # This documentation
```
## 🧪 Testing
### Validate Installation
```bash
# Test the command-line interface
drr --help
drr batch --help
drr single --help
# Test with sample data
drr single data/optimize/config/SS-A.csv
# Test batch processing (small subset)
drr batch config/test_datasets.txt
```
---
## 🔗 Repository
**GitHub Repository**: https://github.com/andre-motta/dimensionality_reduction_ratio
For questions or support, please open an issue in the repository or contact the maintainers.