lrdbenchmark 2.2.0

- **Home page**: https://github.com/dave2k77/LRDBenchmark
- **Summary**: Comprehensive Long-Range Dependence Benchmarking Framework with Classical, ML, and Neural Network Estimators + 5 Demonstration Notebooks
- **Upload time**: 2025-10-14 09:26:25
- **Author**: Davian R. Chin
- **Requires Python**: >=3.8
- **License**: MIT
- **Keywords**: long-range dependence, hurst parameter, time series analysis, benchmarking, machine learning, neural networks, reproducible research, fractional brownian motion, wavelet analysis, spectral analysis
# LRDBenchmark

A comprehensive, reproducible framework for Long-Range Dependence (LRD) estimation and benchmarking across Classical, Machine Learning, and Neural Network methods.

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)

## 🚀 Features

**Comprehensive Estimator Suite:**
- **8+ Classical Methods**: R/S, DFA, DMA, Higuchi, Periodogram, GPH, Whittle, CWT, and more
- **3 Machine Learning Models**: Random Forest, SVR, Gradient Boosting with optimized hyperparameters  
- **4 Neural Network Architectures**: LSTM, GRU, CNN, Transformer with pre-trained models
- **Generalized Hurst Exponent (GHE)**: Advanced multifractal analysis capabilities

**Robust Heavy-Tail Analysis:**
- α-stable distribution modeling for heavy-tailed time series
- Adaptive preprocessing: standardization, winsorization, log-winsorization, detrending
- Contamination-aware estimation with intelligent fallback mechanisms
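
A minimal sketch of one of these preprocessing steps, winsorization, in plain NumPy (illustrative only, not the internals of the package's `AdaptivePreprocessor`):

```python
import numpy as np

def winsorize(x, lower=0.01, upper=0.99):
    """Clip values outside the given quantile range to tame heavy tails."""
    lo, hi = np.quantile(x, [lower, upper])
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x = rng.standard_cauchy(10_000)   # heavy-tailed sample
xw = winsorize(x)                 # extremes clipped at the 1st/99th percentiles
```

Clipping (rather than removing) extreme observations preserves sample length, which matters for window-based LRD estimators.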

**High-Performance Computing:**
- Intelligent optimization backend with graceful fallbacks: JAX → Numba → NumPy
- GPU acceleration support where available
- Optimized implementations for large-scale analysis
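
The JAX → Numba → NumPy chain can be sketched as a generic import-fallback pattern (this illustrates the idea; it is not the library's actual backend selector):

```python
def select_backend():
    """Return the name and array module of the fastest available backend."""
    try:
        import jax.numpy as backend   # GPU/TPU-capable, if installed
        return "jax", backend
    except ImportError:
        pass
    try:
        import numba                  # JIT-compiled CPU kernels, if installed
        import numpy as backend
        return "numba", backend
    except ImportError:
        import numpy as backend       # always available
        return "numpy", backend

name, xp = select_backend()
```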

**Comprehensive Benchmarking:**
- End-to-end benchmarking scripts with statistical analysis
- Confidence intervals, significance tests, and effect size calculations
- Performance leaderboards and comparative analysis tools
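
As an illustration of the statistics involved, a percentile-bootstrap confidence interval over a set of Hurst estimates takes only a few lines (a generic sketch, not the framework's own statistics module):

```python
import numpy as np

def bootstrap_ci(estimates, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of a set of estimates."""
    rng = np.random.default_rng(seed)
    estimates = np.asarray(estimates, float)
    means = np.array([
        rng.choice(estimates, size=len(estimates), replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# Hypothetical H estimates from repeated trials
lo, hi = bootstrap_ci([0.68, 0.71, 0.69, 0.73, 0.70, 0.72])
```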

**📚 Demonstration Notebooks:**
- **5 Comprehensive Jupyter Notebooks** showcasing all library features
- **Data Generation & Visualization**: All stochastic models with comprehensive plots
- **Estimation & Validation**: All estimator categories with statistical validation
- **Custom Models & Estimators**: Library extensibility and custom implementations
- **Comprehensive Benchmarking**: Full benchmarking system with contamination testing
- **Leaderboard Generation**: Performance rankings and comparative analysis

## 🔧 Quick Start

### Basic Usage

```python
from lrdbenchmark.analysis.temporal.rs.rs_estimator_unified import RSEstimator
from lrdbenchmark.models.data_models.fbm.fbm_model import FractionalBrownianMotion

# Generate synthetic fractional Brownian motion
fbm = FractionalBrownianMotion(H=0.7, sigma=1.0)
x = fbm.generate(n=1000, seed=42)

# Estimate Hurst parameter using R/S analysis
estimator = RSEstimator()
result = estimator.estimate(x)
print(f"Estimated H: {result['hurst_parameter']:.3f}")  # ~0.7
```

### Advanced Benchmarking

```python
from lrdbenchmark.analysis.benchmark import ComprehensiveBenchmark

# Run comprehensive benchmark across multiple estimators
benchmark = ComprehensiveBenchmark()
results = benchmark.run_classical_estimators(
    data_models=['fbm', 'fgn', 'arfima'],
    n_samples=1000,
    n_trials=100
)
benchmark.generate_leaderboard(results)
```

### Heavy-Tail Robustness Analysis

```python
from lrdbenchmark.models.data_models.alpha_stable.alpha_stable_model import AlphaStableModel
from lrdbenchmark.robustness.adaptive_preprocessor import AdaptivePreprocessor

# Generate heavy-tailed α-stable process
alpha_stable = AlphaStableModel(alpha=1.5, beta=0.0, scale=1.0)
x = alpha_stable.generate(n=1000, seed=42)

# Apply adaptive preprocessing for robust estimation
preprocessor = AdaptivePreprocessor()
x_processed = preprocessor.preprocess(x, method='auto')

# Estimate with robust preprocessing
estimator = RSEstimator()
result = estimator.estimate(x_processed)
```

## 📦 Installation

### From PyPI (Recommended)

```bash
pip install lrdbenchmark
```

### Development Installation

```bash
git clone https://github.com/dave2k77/LRDBenchmark.git
cd LRDBenchmark
pip install -e .
```

### Optional Dependencies

For enhanced performance and additional features:

```bash
# GPU acceleration (JAX)
pip install "lrdbenchmark[jax]"

# Documentation building
pip install "lrdbenchmark[docs]"

# Development tools
pip install "lrdbenchmark[dev]"
```

## 📚 Documentation

- **📖 Full Documentation**: [https://lrdbenchmark.readthedocs.io/](https://lrdbenchmark.readthedocs.io/)
- **🚀 Quick Start Guide**: [`docs/quickstart.rst`](docs/quickstart.rst)
- **💡 Examples**: [`docs/examples/`](docs/examples/) and [`examples/`](examples/)
- **🔧 API Reference**: [API Documentation](https://lrdbenchmark.readthedocs.io/en/latest/api/)
- **📓 Demonstration Notebooks**: [`notebooks/`](notebooks/) - 5 comprehensive Jupyter notebooks showcasing all features

## ๐Ÿ—๏ธ Project Structure

```
LRDBenchmark/
├── lrdbenchmark/          # Main package
│   ├── analysis/          # Estimator implementations
│   ├── models/            # Data generation models
│   ├── analytics/         # Performance monitoring
│   └── robustness/        # Heavy-tail robustness tools
├── notebooks/             # Demonstration notebooks (5 comprehensive Jupyter notebooks)
├── scripts/               # Benchmarking and analysis scripts
├── examples/              # Usage examples
├── docs/                  # Documentation
├── tests/                 # Test suite
├── tools/                 # Development utilities
└── config/                # Configuration files
```

## 🛠️ Available Estimators

### Classical Methods
- **R/S Analysis** - Rescaled Range analysis
- **DFA** - Detrended Fluctuation Analysis  
- **DMA** - Detrended Moving Average
- **Higuchi** - Higuchi's fractal dimension method
- **Periodogram** - Periodogram-based estimation
- **GPH** - Geweke and Porter-Hudak estimator
- **Whittle** - Whittle maximum likelihood
- **CWT** - Continuous Wavelet Transform
- **GHE** - Generalized Hurst Exponent
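
To make the first entry concrete, a bare-bones R/S estimator can be written in plain NumPy: compute rescaled ranges over non-overlapping windows of increasing size, then fit the slope of log R/S against log window size. This is an illustrative sketch, independent of the package's `RSEstimator`:

```python
import numpy as np

def rs_hurst(x, min_window=8):
    """Estimate the Hurst exponent via classical rescaled-range (R/S) analysis."""
    x = np.asarray(x, float)
    n = len(x)
    windows = np.unique(np.logspace(
        np.log10(min_window), np.log10(n // 2), 10).astype(int))
    log_w, log_rs = [], []
    for w in windows:
        rs_vals = []
        for start in range(0, n - w + 1, w):   # non-overlapping segments
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())  # cumulative deviation from mean
            r, s = dev.max() - dev.min(), seg.std()
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_w, log_rs, 1)    # H is the log-log slope
    return slope

rng = np.random.default_rng(42)
h = rs_hurst(rng.standard_normal(4096))        # white noise: H near 0.5
```

Note that the classical R/S statistic is biased upward for short series, which is one reason a suite like this also includes DFA and spectral methods.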

### Machine Learning
- **Random Forest** - Ensemble tree-based estimation
- **Support Vector Regression** - SVM-based estimation
- **Gradient Boosting** - Boosted tree estimation

### Neural Networks
- **LSTM** - Long Short-Term Memory networks
- **GRU** - Gated Recurrent Units
- **CNN** - Convolutional Neural Networks
- **Transformer** - Attention-based architectures

## 📓 Demonstration Notebooks

LRDBenchmark includes 5 comprehensive Jupyter notebooks that demonstrate all library features:

### 1. Data Generation and Visualization
**File**: `notebooks/01_data_generation_and_visualisation.ipynb`

Demonstrates all available data models with comprehensive visualizations:
- **FBM/FGN**: Fractional Brownian Motion and Gaussian Noise
- **ARFIMA**: Autoregressive Fractionally Integrated Moving Average
- **MRW**: Multifractal Random Walk
- **Alpha-Stable**: Heavy-tailed distributions
- **Visualizations**: Time series, ACF, PSD, distributions
- **Quality Assessment**: Statistical validation and theoretical properties

### 2. Estimation and Statistical Validation
**File**: `notebooks/02_estimation_and_validation.ipynb`

Covers all estimator categories with statistical validation:
- **Classical**: R/S, DFA, DMA, Higuchi, GPH, Whittle, Periodogram, CWT
- **Machine Learning**: Random Forest, SVR, Gradient Boosting
- **Neural Networks**: CNN, LSTM, GRU, Transformer
- **Statistical Validation**: Confidence intervals, bootstrap methods
- **Performance Comparison**: Accuracy, speed, and reliability analysis

### 3. Custom Models and Estimators
**File**: `notebooks/03_custom_models_and_estimators.ipynb`

Shows how to extend the library with custom components:
- **Custom Data Models**: Fractional Ornstein-Uhlenbeck process
- **Custom Estimators**: Variance-Based Hurst Estimator
- **Library Extensibility**: Base classes and integration patterns
- **Best Practices**: Guidelines for custom implementations

### 4. Comprehensive Benchmarking
**File**: `notebooks/04_comprehensive_benchmarking.ipynb`

Demonstrates the full benchmarking system:
- **Benchmark Types**: Classical, ML, Neural, Comprehensive
- **Contamination Testing**: Noise, outliers, trends, seasonal patterns
- **Performance Metrics**: MAE, execution time, success rate
- **Statistical Analysis**: Confidence intervals and significance tests
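
The contamination scenarios can be mimicked with simple transforms on a clean series (hypothetical one-liners for illustration; the notebook uses the package's own contamination utilities):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(1000)                                # clean baseline

x_noisy    = x + 0.5 * rng.standard_normal(len(x))           # additive noise
x_outliers = x.copy()
idx = rng.choice(len(x), size=10, replace=False)
x_outliers[idx] += rng.choice([-8.0, 8.0], size=10)          # spike outliers
x_trend    = x + 0.002 * np.arange(len(x))                   # linear trend
x_seasonal = x + np.sin(2 * np.pi * np.arange(len(x)) / 50)  # seasonal pattern
```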

### 5. Leaderboard Generation
**File**: `notebooks/05_leaderboard_generation.ipynb`

Shows performance ranking and comparative analysis:
- **Performance Rankings**: Overall and category-wise leaderboards
- **Composite Scoring**: Accuracy, speed, and robustness metrics
- **Visualization**: Performance plots and comparison tables
- **Export Options**: CSV, JSON, LaTeX formats
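
A composite score of this kind can be sketched as a weighted sum of min-max-normalized metrics, inverting those where lower is better (the weights and numbers below are illustrative, not the package's defaults):

```python
import numpy as np

def composite_scores(mae, runtime, success, weights=(0.5, 0.2, 0.3)):
    """Combine accuracy, speed, and robustness into one score (higher is better)."""
    def norm(v, invert=False):
        v = np.asarray(v, float)
        r = (v - v.min()) / (v.max() - v.min() + 1e-12)
        return 1.0 - r if invert else r
    w_acc, w_speed, w_rob = weights
    return (w_acc   * norm(mae, invert=True)      # lower error is better
          + w_speed * norm(runtime, invert=True)  # lower runtime is better
          + w_rob   * norm(success))              # higher success rate is better

scores = composite_scores(
    mae=[0.05, 0.12, 0.08],      # per-estimator metrics (made-up numbers)
    runtime=[0.02, 0.10, 0.05],
    success=[1.00, 0.95, 0.98],
)
```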

### Getting Started with Notebooks

```bash
# Clone the repository
git clone https://github.com/dave2k77/LRDBenchmark.git
cd LRDBenchmark

# Install dependencies
pip install -e .
pip install jupyter matplotlib seaborn

# Start Jupyter
jupyter notebook notebooks/
```

Each notebook is self-contained, well-documented, and provides a complete learning path from basic concepts to advanced applications.

## 🧪 Testing

Run the test suite:

```bash
# Basic tests
python -m pytest tests/

# With coverage
python -m pytest tests/ --cov=lrdbenchmark --cov-report=html
```

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details.

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- Built on the modern Python scientific computing stack
- Leverages JAX for high-performance computing
- Inspired by the need for reproducible LRD analysis
- Community-driven development and validation

## 📞 Support

- **Issues**: [GitHub Issues](https://github.com/dave2k77/LRDBenchmark/issues)
- **Discussions**: [GitHub Discussions](https://github.com/dave2k77/LRDBenchmark/discussions)
- **Documentation**: [ReadTheDocs](https://lrdbenchmark.readthedocs.io/)

---

**Made with ❤️ for the time series analysis community**










            
